Once or twice a month on weekday evenings at a local pub. Open to all.
Organisers: Frank Morley (data scientist); Robert Henderson (computer scientist, Fivey Software).
Each session we'll discuss a different paper in the field of machine learning, typically covering a foundational concept, a recent application, or an ethical or policy issue. You'll need to read the paper before the session.
Contact e-mail: rob at robjhen dot com
When: Monday 14th April 2025, 6 pm – 8 pm.
Where: The Castle Inn, 36 Castle Street, Cambridge, UK. (Most likely we'll be at one of the large tables upstairs.)
Paper: A. Aguirre. Keep the Future Human: Why and How We Should Close the Gates to AGI and Superintelligence, and What We Should Build Instead. 2025.
We'll discuss this recent essay by Anthony Aguirre, executive director of the Future of Life Institute. Aguirre argues that the current trajectory of AI development poses a dramatic threat to human civilisation, and recommends how we should steer it in a safer direction.
You'll need to read the paper in advance. Ideally please also bring along your own copy of the paper to refer to in the session.
When: Wednesday 30th April 2025, 6 pm – 8 pm.
Where: The Castle Inn, 36 Castle Street, Cambridge, UK. (Most likely we'll be at one of the large tables upstairs.)
Paper: Y. LeCun et al. Deep Learning. In Nature, 521, 436–444, 2015.
This review paper by the three godfathers of AI (LeCun, Bengio, and Hinton) gives a good overview of the field of deep learning as it was in 2015. It serves as a useful introduction to two important techniques that predate transformers: convolutional and recurrent neural networks.
You'll need to read the paper in advance. Ideally please also bring along your own copy of the paper to refer to in the session.
When: Wednesday 25th June 2025, 6 pm – 8 pm.
Where: The Castle Inn, 36 Castle Street, Cambridge, UK. (Most likely we'll be at one of the large tables upstairs.)
Book: N. Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
Many high-profile scientists and AI company executives have stated publicly that superhuman-level AI could pose an existential risk to humanity. But where does this idea of existential risk from AI come from? Largely from the work of a number of philosophers and thinkers, perhaps the most prominent and influential of whom is Nick Bostrom. His seminal book develops a reasoned argument for why the default outcome of creating smarter-than-human AI would be humanity's doom (chapters 6–8), and discusses in what ways doom might be avoided (chapters 9, 10, and 12).
You'll need to read chapters 6–10 and 12 of the book in advance. Please also bring along your own copy of the book to refer to in the session. You can pick up a copy at bookshops for about £12.
Paper: A. Vaswani et al. Attention Is All You Need. In Advances in Neural Information Processing Systems 30, 2017.
Paper: D. E. Rumelhart et al. Learning representations by back-propagating errors. In Nature, 323, 533–536, 1986.
Paper: Y. Chaudhary and J. Penn. Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models. In Harvard Data Science Review, Special Issue 5, 2024.
Paper: D. Guo et al. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. 2025.