Last Tuesday brought another high-profile open letter warning about AI risks. Signed by many prominent figures including the CEOs of OpenAI and Google DeepMind, the statement is short and not-sweet:
Mitigating the risk of extinction from AI should be a global priority (..)
Unlike the March 2023 open letter by the Future of Life Institute, which called for a 6-month AI slowdown, this one doesn’t prescribe a specific action. Instead, it calls for the “risk of extinction from AI” to be acknowledged as a global concern, in a similar vein to nuclear proliferation and pandemics.
Can we really go “extinct from AI”?
Doomsday conditions
For a catastrophe to occur, two things need to happen:
1. Capability: some system or agent has world-transforming capabilities.
2. Act: the system uses these capabilities in a way that brings about doom (e.g. human extinction, climate disaster, massive infrastructure breakdown).
In the case of nuclear doomsday, the ship has already sailed on Capability (we have enough nuclear weapons to destroy our world many times over), so everything hangs on avoiding the Act.
Act (alignment)
So far, we’ve been able to avoid nuclear annihilation by focusing only on the Act (2.). Could focusing on safeguards work to avoid “AI extinction” too? Indeed, much of the current discussion in the AI community focuses on ensuring that increasingly capable artificial intelligence tools stay under human control and act in ways aligned with human values.
This line of defense has a crucial weakness, though: not all of the AI systems in question will necessarily be under the control of benevolent human “good actors”. Software proliferates easily and fast, as illustrated by the recent blossoming of open LLMs rivalling industry-grade closed systems.
🔮 Prediction
Any powerful technical capability is likely to proliferate and eventually be available to both careless “good actors” and competent “bad actors”. We are unlikely to succeed in restricting any type of AI tool to “good actors” only.
Capability
Instead of relying only on safeguards, we should avoid creating the AI doomsday Capability in the first place. While we may not be able to fully predict and control the behavior of increasingly powerful AI systems (which keep surprising us with emergent capabilities), we do control the interfaces that connect those systems to the real world. And so, it is in our power to insulate automated systems from critical decision-making chains such as weapons systems and critical infrastructure.
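To make “controlling the interfaces” a bit more concrete, here is a minimal sketch of a human-in-the-loop gate: the AI can suggest whatever it likes, but nothing on a critical list reaches the real system without explicit human sign-off. All names here (propose_action, CRITICAL_ACTIONS, execute) are hypothetical; this illustrates the principle, not any particular system.

```python
# Toy human-in-the-loop gate between an AI planner and a critical system.
# Every name below is illustrative, not a real API.

CRITICAL_ACTIONS = {"open_floodgate", "shut_down_grid", "push_firmware_update"}


def propose_action() -> str:
    """Stand-in for whatever action the AI system suggests next."""
    return "open_floodgate"


def human_approves(action: str) -> bool:
    """Explicit human confirmation; here, a simple console prompt."""
    answer = input(f"Approve critical action '{action}'? [y/N] ")
    return answer.strip().lower() == "y"


def execute(action: str) -> None:
    """The only code path allowed to touch the critical system."""
    print(f"Executing: {action}")


def main() -> None:
    action = propose_action()
    if action in CRITICAL_ACTIONS and not human_approves(action):
        print(f"Blocked: '{action}' requires human sign-off.")
        return
    execute(action)


if __name__ == "__main__":
    main()
```

The point is the structure, not the code: the AI only ever proposes, and the approval check is the single path between a proposal and the critical system.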
What will make this harder is that automation can so often be brilliantly useful.
What will make it easier is that insulating automated systems from critical infrastructure doesn’t only protect us from AI doomsday scenarios. An out-of-control AI is in many ways indistinguishable from an extremely capable human attacker. Whatever makes our systems resilient against out-of-control AI will also make them more resilient against cybercriminals today.
In other news
📽️ CBS interview with Geoffrey Hinton, a pioneer of using neural networks for machine learning. One of the best conversations out there about how we got to where we are and what to expect next.
📈 In May we reached 50 subscribers 🥳 ! Thank you for being here :). If you’re reading this and not subscribed, join us for the ride!
Postcard from Tonnerre
Done with travels for a while :). A last photo from a recent bike trip in Burgundy: the Fosse Dionne, in the French town of Tonnerre (population 4k). It covers a cave at least 370 meters deep (video).
Have a great week 💫,
Przemek