ARTIFICIAL INTELLIGENCE – UNWARRANTED
Elon Musk – co-founder of PayPal, founder of SpaceX, CEO of Tesla, and the 99th richest man in the world – is a man who seems to be known by everyone around the globe. An undoubted genius, a visionary and a brilliant marketer.
All of which makes his ridiculous views on Artificial Intelligence more puzzling.
Musk initially made headlines last year for his claim that there is a high probability the world as we know it is a simulation. These musings, while seemingly bizarre at first glance, seem valid upon further reflection. The further you dig, the more you believe his words; at some point it seems so painfully obvious that he is right, you wonder why you never thought of it before – and that is precisely the genius of a man who managed to make the US government invest in electric cars during the depths of a recession.
The AI of today is used everywhere, from Google’s search engine to Facebook’s facial recognition software. It is, however, a far cry from the Terminator-style technology Musk would have us believe exists. Technology is advancing rapidly, and nobody can say exactly how fast or how far it will go – but we can say with reasonable confidence that it will never cross that threshold; at the very least, there is no imminent threat of it doing so.
The idea we have about Artificial Intelligence is very different from the AI that exists in real life – the intelligence is not actual intelligence, per se. Nothing can make decisions on its own; it is all about what is in the code. Machines will not decide to wipe out the human race and proceed to do so, simply because it is not in the code. Imparting machines with even a little intelligence is extremely difficult; building machines with independent decision-making capacity on that scale is almost impossible. As Professor Toby Walsh of the University of New South Wales argues, we are certain to run into some fundamental or engineering limits before that, citing examples such as the death of Moore’s law and the impossibility of accelerating beyond the speed of light.
Sure, there exist scenarios where AI may prove to be dangerous. Automated war machines targeting people are not likely to stop until out of ammunition (or people). Technology is steadily putting people out of work. All of that can be controlled by appropriate legislation, and none of it justifies Musk snapping up robotics and machine-learning companies to ‘keep an eye on safe Intelligence development.’ Neither does it support his stance that technology could be the catalyst of World War 3, and that even North Korea is less of a threat (from an American point of view, of course). A war between the United States and North Korea would lead to the loss of millions of lives, while the advance of artificial intelligence would lead to, at most, a machine that can beat players at complex board games a little faster.
This hype and fear-mongering does little except possibly impede development, something most Silicon Valley executives have acknowledged. Of course, some form of caution is needed, but mostly to stop large companies from growing even larger and establishing a complete monopoly over the sector.
To a reporter’s question about having a kill switch for a hypothetical sentient murderous machine, Musk replied: “I am not sure I would want to be the one holding the kill switch for some super powered A.I. because you would be the first thing it kills”. With fears this outlandish, it is little wonder his theory faces skeptics at every turn. In an era where printers do not work half the time, the rise of malevolent technology seems distant, if achievable at all.