Astrophysicist Stephen Hawking believed that if humanity came to rely too much on machines, we'd be headed straight for destruction.
Is artificial intelligence a good thing or a bad thing? According to Stephen Hawking, the eminent British astrophysicist who died on 14 March 2018, artificial intelligence is not necessarily a source of progress. In fact, he believed it could be quite the opposite...
Asked about it on many occasions in his final years, Hawking shared his fear that 'the development of full artificial intelligence could end humanity,' as he put it in a 2014 interview with the BBC.
If humans ever fully developed artificial intelligence, he argued, it would take off on its own and redesign itself at an ever-increasing rate. Humans, limited by slow biological evolution, could not compete and would be superseded.
Artificial intelligence has 'enormous potential'
A few months after this interview, the scientist, along with Tesla's Elon Musk and others, signed an open letter calling for a ban on 'killer robots.' The letter warned of an arms race in powerful new weapons capable of detecting and striking targets without human intervention.
Stephen Hawking was also increasingly concerned about AI falling into the hands of terrorists or dictators. At the Web Summit in Lisbon, he voiced this fear once more, saying that 'the rise of AI could be the worst or the best thing that has happened for humanity.'
But Stephen Hawking's take on artificial intelligence was not all doom and gloom. In 2016, at the inauguration of a research centre at Cambridge University, he said:
AI may be the most important event in the history of our civilization.
In fact, Hawking believed that, used properly, artificial intelligence has 'enormous potential.' According to the astrophysicist, new technologies could make it possible for us to repair the damage mankind has done to nature, and to eradicate disease and poverty.