Artificial intelligence experts pledge to protect humanity from machines
Washington: Artificial intelligence experts all around the world are signing an open letter, pledging to protect mankind from machines.
Though a machine capable of enslaving humankind is likely still decades away, the Future of Life Institute put forth the open letter to ensure that progress in the field does not grow out of control, CNet reported.
The signatories include the co-founders of DeepMind, the British AI company purchased by Google in January 2014, MIT professors, and experts at some of technology's biggest corporations, including members of IBM's Watson supercomputer team and Microsoft Research.
The letter's summary reads: "Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls."
While the three most immediate concerns are areas such as machine ethics, self-driving cars, and autonomous weapons systems, the long-term goal is to stop treating fictional dystopias as pure fantasy and address the possibility that artificial intelligence could someday start acting against its programming.
The main aim of the Future of Life Institute, a volunteer-run research organization, is mitigating the potential risks of human-level artificial intelligence that could then advance exponentially. It was founded by Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.
SpaceX and Tesla CEO Elon Musk, who sits on the institute's board of directors, has likened developing artificial intelligence to "summoning the demon" and has said there should be some regulatory oversight just to make sure that "we don't do something very foolish."
In May 2014, renowned physicist Stephen Hawking co-wrote an article for The Independent, alongside Future of Life Institute members Tegmark, Stuart Russell and Frank Wilczek, warning that "one can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."