How can technological development policy work to curb AI risk? What does the race between safety and technological development look like? How have we used policy to deal with risk in the past? These questions are important to the future of our species, and YOU should take a stab at figuring them out.
Substitute development is an underexplored risk-reduction path: good for you for highlighting it here. If we didn't have effective alternative refrigerants, we'd almost certainly still be using CFCs.
But it seems to me that the most important way the AI risk debate differs from previous such debates, including those over nuclear weapons and ozone destruction, is that it's not based on any direct evidence of harm. Nuclear test ban treaties were passed because we knew from terrible experience how destructive nuclear weapons could be; the Montreal Protocol came through because we had clear physical evidence of the ongoing harms of the ozone hole. All AI alarmists have are thought experiments and science-fiction scenarios. And the argument that we can't wait for direct evidence of harm because by then it will be too late has a very bad history of being used to justify destructive, overreaching preventive actions: think of George W. Bush justifying the disastrous Iraq invasion on the grounds that "we can't let the smoking gun be a mushroom cloud."
This has so much good data.