Fellows 2024

  • Andrés Campero

    Andrés has a PhD in Artificial Intelligence from the Department of Brain and Cognitive Sciences at MIT. He was founder and CEO of Stateoftheart.ai, wrote a book about Computation and its Evolution, founded RIIAA, an AI conference in Latin America, and worked at Facebook AI Research.

    Project: Exploring the possibility of machine consciousness, and in particular the experience of pain and pleasure, under the umbrella of the Hard Problem of Consciousness.

  • Avyay Casheekar

    Avyay is a student researcher working in Technical AI Safety. They have worked with the Software Systems Lab at Columbia and have spent time on smaller interpretability projects.

    Project: Using smaller aligned models to prevent misuse and jailbreaks in larger models.

  • Michelle Malonza

    Michelle is a researcher at the ILINA Program under the Studies in Transformative AI project, where she has been working on the most promising levers available to Global South countries for ensuring safe AI development. She has a law degree from Strathmore University in Kenya.

    Project: Proposals to address the disinformation and misinformation threats posed by AI-generated content.

  • Matteo Pistillo

    Matteo is a research scholar at the Institute for Law & AI. He worked for five years in cross-border commercial dispute resolution as an associate attorney at Cleary Gottlieb Steen & Hamilton LLP. He holds a J.D., cum laude, from Università di Torino and an LL.M. from Stanford Law School (Fulbright Scholarship).

    Project: Researching the role of compute thresholds in AI governance and the insurability of catastrophic risks.

  • María Paula Mujica

    María has worked for the Colombian government, most recently as a Digital Transformation and AI Policy Manager at the Presidency of Colombia. She also worked for the AI Policy Lab at the Chamber of Commerce. She studied Law and Philosophy in Colombia and holds an LLM from UCL in the UK.

    Project: Exploring Global South policy engagement on climate change as a case study from which to extrapolate strategies for AI engagement.

  • Sienka Dounia

    Sienka is a fellow working on Technical AI Alignment. He previously upskilled in AI alignment during his time as a fellow at the ILINA Program. Before that, he worked as a technical support coordinator at African Development University.

    Project: Exploring Double Descent through the lens of Singular Learning Theory and Developmental Interpretability.

  • Francesca Sheeka

    Francesca is an AI Policy Lead at SaferAI, a research organisation focused on mitigating risks from advanced AI. She worked at the OECD as an AI Policy Analyst, where she led the development of the organisation’s Database of National AI Policies and Strategies and helped launch the OECD Working Party on AI Governance.

    Project: Building International Consensus for Safer AI: Benchmarks for Success.

  • Yiqin Fu

    Yiqin (pronounced ee-ching) is a fifth-year PhD student in political science at Stanford who studies Chinese innovation using quantitative methods, looking specifically at investment and patent data.

    Project: Emergent capabilities and their implications for international security.

  • Mick Yang

    Mick has a BA in Law from Oxford University and 3+ years of experience in go-to-market business strategy and operations, most recently at BCG working on AI deployment. He previously produced award-winning investigative data journalism and helped think tanks make their research more accessible.

    Project: Enforcement levers behind a licensing regime for foundation models, taking into account trends in cybersecurity and open-source software regulation, data efficiency, and smaller-model proliferation.

  • Laura Prieto

    Laura is a senior data scientist at an insurance company, working on ML models for pricing. She is now a fellow at the AI Futures Fellowship, learning about catastrophic risks from AI. She double-majored in industrial engineering and software and computing engineering.

    Project: An experiment to teach LLMs to say ‘I don’t know’ instead of hallucinating.

  • Cecilia Wood

    Cecilia is an economics PhD student at the London School of Economics. Her research focuses on applying economic theory (information theory, mechanism design, decision theory) to AI safety, with a current focus on self-modification of preferences. Her background is in mathematics and philosophy.

    Project: Self-modification incentives when facing an opponent.

  • George Wang

    George is a technical AI researcher working with Timaeus on developmental interpretability. He has a PhD in combinatorics and previously worked with early-stage startups as a software engineer.

    Project: Applying and scaling developmental interpretability tools on transformers to detect phase transitions in the training of language models.

  • Sam Deverett

    Sam is a technical AI researcher currently focused on AI’s role in escalating conflict. His other research interests include cooperative AI and LLM evaluations. Sam has a BA in Data Science from UC Berkeley as well as several years of industry experience in the field.

    Project: Using LLMs to scale deliberative democracy, with a focus on finding the best questions to ask participants in order to surface points of consensus or cruxes of disagreement.