Writing

Opinion pieces on physics, AI Safety, and the intersection of the two.

Discerning CERN's Problems: Perspective of an Early Career Researcher

CERN, Europe’s flagship laboratory for particle physics, has reached the most critical point in its 70-year history. Its founding convention has enabled extraordinary success and scale: the world’s largest experiment, 25 member states, and discoveries underpinning multiple Nobel Prizes. But that ambition is now stretching the fabric of practical reality, with CERN going all in on the Future Circular Collider at a capital cost of at least 30 billion CHF and a timeline running all the way to 2100. In this piece I outline the trajectory that led CERN here, and the concerns that the community, especially early-career researchers, must confront.

CERN for AI: First-hand Lessons for AI Safety

AI systems have gone from autocomplete toys to passing professional exams to performing as capable cybersecurity experts to a forecast ‘country of geniuses in a data centre’, all in under a decade. The unprecedented risks associated with rapidly accelerating capability, wide proliferation, and general-purpose application have driven many researchers and resources into the burgeoning field of AI safety. This field faces a coordination problem strikingly similar to the one that led to CERN’s founding in 1954, when nuclear research was too expensive, too complex, and too important for any single European nation. ‘CERN for AI’ is therefore often floated as a necessary pursuit. In this piece, I draw on my direct experience working at CERN to explore what made it successful and, crucially, what pitfalls any such institution must avoid.