I listened to an interview this morning on NPR in which an expert from an AI institute in Canada (sorry, I did not catch the institute or his name) claimed that we have entered the danger zone with AI: that AI can be programmed to seek and formulate its own goals, and that the real danger lies in handing it agency, the ability to make changes independently.
The expert cited an example in which a Russian early-warning system signaled an ICBM launch from the United States. The officer in a position to push the button did not, because, he said, it did not feel right. The warning system was in error, and there had been no launch from the US; a machine programmed to respond independently would have sent nuclear missiles at the United States.
Stanislav Petrov: The man who may have saved the world — Stanislav Petrov tells the BBC how a decision he made 30 years ago may have prevented a nuclear war. (www.bbc.com)
A different interview:
Leading experts warn of a risk of extinction from AI
In a recent interview with NPR, Geoffrey Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated.
"I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.
Dan Hendrycks, director of the Center for AI Safety, noted in a Twitter thread that in the immediate future, AI poses urgent risks of "systemic bias, misinformation, malicious use, cyberattacks, and weaponization."