5/30/2023: AI Extinction Risk!!??

This morning a group of prominent AI researchers issued a short joint statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Beyond that 22-word statement, there are no other details. I wish they had given more rationale for such a significant statement. What does this mean? If we don't deal with the AI risk, we may die? What are we supposed to do?

The pandemic risk was well known before COVID-19, but there was nothing ordinary folks could do about it. It was an abstract idea, and even now it remains abstract. But at least we know how a pandemic could unfold: people get infected by some exotic zombie virus, it spreads fast, and people die. For nuclear war, we can imagine an atomic bomb being dropped, killing hundreds of thousands of people instantly and causing radiation damage to millions more.

But what does an AI extinction event entail? I am sure there is science fiction I could read about this, but could these big shots please elaborate? There are a few possibilities I can think of, but I am not sure whether they count as extinction events or something more like humans becoming slaves to AI: humans living under the rule of AI the way domesticated animals live under ours today. It's unclear whether we would end up as dogs/cats or as chickens/pigs/cows, though. Anyhow, the public deserves more details and explanation if public policy around AI extinction risk is going to be enacted.

Some AI experts like Yann LeCun and Andrew Ng seem dismissive of the idea of AI superintelligence. My counterargument to Dr. LeCun is that once we figure out how to implement dog-level intelligence, superintelligence could follow almost immediately, because it might just be a matter of scaling up the same system, leaving us no time to react. So I don't think it's a good idea to disregard the risk even if the probability is tiny; we need to take it seriously in case it materializes. At the very least, we need to think through the scenarios by which AI could end up in control of humanity. Again, we need more concrete details.
