One dimension I think needs more air time is the cost-sharing and political salability problem. Even if both sides quietly agree that some kind of AGI risk coordination is necessary, who's going to fund the shared infrastructure, data standards, red-teaming protocols, etc.? And, more importantly, how do you sell that to domestic political actors who see "safety collaboration" as code for "strategic concession"?
The Singapore conference seems like a small miracle in this context. But unless there's a way to make AI safety look like strength, not just caution, to domestic constituencies, I don't see how we scale these diplomatic fragments into anything durable.