To celebrate the launch of the OxAI Safety Hub, we’re running a 4-week lecture series exploring the field of AI Safety.
In our first lecture, Rohin Shah from DeepMind will give us an introduction to AI Alignment:
You’ve probably heard that Elon Musk, Stuart Russell, and Stephen Hawking have warned of dangers posed by AI. What are these risks, and what basis do they have in AI practice? I will first describe the more philosophical argument suggesting that a superintelligent AI system pursuing the wrong goal would lead to an existential catastrophe. Then, I’ll ground this argument in current AI practice, arguing both that it is plausible that we will build superintelligent AI in the coming decades, and that there are plausible mechanisms by which such a system would come to pursue an incorrect goal.