Physicist Stephen Hawking Warns Artificial Intelligence Could Destroy Us

At the opening ceremony of the 2017 Web Summit in Lisbon on Monday, Stephen Hawking said artificial intelligence could be the greatest advance in human history, or it could completely destroy us.

“The rise of AI could be the worst or the best thing that has happened for humanity. AI could develop a will of its own,” said physicist Stephen Hawking.

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy,” Hawking added.

“Perhaps we should all stop for a moment and focus not only on making our AI better and more successful, but also on the benefit of humanity,” Hawking said.

Who Gets to Determine?

Excuse me for asking, but who gets to determine, in advance, what is or what isn't a "benefit of humanity"?

Is it me, you, Goldman Sachs, President Trump, the EU, China, ISIS, or AI itself?

I can partially answer that question. It sure isn't me.

Mike "Mish" Shedlock

Comments (26)
Seenitallbefore

So Hawking has been designated humanity's guru. Yeah, he has a high IQ, but he hasn't demonstrated a whole lot of common sense to me. This is at best a sick recluse who pores over data and thinks all day about who knows what. Sure doesn't make him any wiser than the rest of us. Wisdom and intelligence do not go hand in hand.

wootendw

"Excuse me for asking, but who gets to determine, in advance, what is or what isn't a 'benefit of humanity'?" That's the trillion dollar question that no one answers, because those who live by it - government officials - always think they know the answer.

That being said, I am concerned about robots that can kill, that is, army or police robots programmed to neutralize a threat. It seems to me that if a robot can be designed to kill a threatening adversary, why not make it just a little more complex so that it can catch, rather than kill, the culprit?

xil

I expect killing is much more efficient.

Brother

I think our brains are thinking a little too far into the future. I have never seen an AI robot around town, let alone a police or fire version. Today's self-driving cars can drive themselves for like 10 seconds before failing, and who would put a missile in AI auto mode?

NewGlobalStrategy

It's actually nobody - the universe evolves in its own way, and AI is coming whether we like it or not. However, greater acceptance that free will is a delusion might spread a bit more humility around the place.