Humanity's future nightmare is not evil robots, it's unemployment

We put our questions about robots and the future to Prof. Dr. Levent Akın, Prof. Dr. Ethem Alpaydın, and Prof. Dr. Cem Say of the Boğaziçi University Computer Engineering department, experts known for their work on AI.

AI is one of the most important discussion topics of our time; not a day goes by without a new debate about it. The scenarios on offer are completely at odds, ranging from AI easing people's lives to evil robots taking over the world as our wicked overlords. Clashes between those who believe robots will give us a better future and those who believe AI will be our doom have become a regular occurrence.

Prompted by all these discussions, we asked scientists working on AI at Boğaziçi University for their opinions. Prof. Dr. Levent Akın, Prof. Dr. Ethem Alpaydın, and Prof. Dr. Cem Say, scholars from the Boğaziçi University Computer Engineering department, answered our questions in light of the recent debates.

The arguments we have heard recently in the national and foreign press often amount to highly speculative opinions on how AI could be dangerous and evil to humanity. Most recently, we read stories about Facebook pulling the plug on an AI system after two of the project's AIs created their own special language to communicate with each other. First of all, what is your opinion: is AI dangerous?

Levent Akın: First and foremost, it should be noted that this story is fabricated. What really happened is this: scientists were working on an AI program that could haggle with humans. To speed up learning, they had two AI programs haggle with each other in English, rewarding the programs' achievements to encourage improvement. However, following the grammar rules of English was not one of the rewarded goals, so the programs started using words in ways humans would not. The scientists fixed this by making proper English a requirement of the haggling process, since the programs will eventually be used with humans in English. So the project was actually a success, and no one pulled the plug on anything. Artificial intelligence is not a threat on its own; danger arises only when it is used with evil intentions, just as with any other tool. One way to control this is to build a higher-level ethical system into AI systems, so that before acting on an input the AI first checks whether proceeding would be ethical.
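To make the mechanism concrete, here is a minimal, purely illustrative Python sketch; the function names and the simple reward formula are our own assumptions, not the actual Facebook code. It shows how rewarding only the outcome of the haggling leaves the language free to drift, while adding a term for staying close to natural English keeps the agents intelligible:

    # Hypothetical sketch of reward shaping in a negotiation agent.
    def task_only_reward(deal_value):
        # Reward only the negotiated outcome: nothing stops the agent
        # from drifting into degenerate, non-human phrasing.
        return deal_value

    def shaped_reward(deal_value, english_likelihood, weight=0.5):
        # english_likelihood: how probable a language model finds the
        # utterance as natural English. Rewarding it alongside the deal
        # keeps the agents "speaking English" while still optimizing.
        return deal_value + weight * english_likelihood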

Ethem Alpaydın: In my opinion, this whole thing was blown out of proportion. I did not see this piece of news on the scientific AI mailing lists I subscribe to. The general media exaggerated the story and made it seem as if an AI were out of control, purely for sensation. To my knowledge, the programs were shut down because of errors.

Cem Say: Facebook didn't pull any plugs. Since they forgot to code the rule about speaking English into the program, it gradually drifted away from English grammar. When the scientists noticed this, they stopped it and trained the program again, this time also rewarding it for using proper English. There was never any danger whatsoever. Next to the possible economic repercussions of AI, such as human unemployment, I believe the risk of an existential threat to humanity in the near future is negligibly small.

To give an example, had scientists and researchers been able to predict the environmental repercussions of the Industrial Revolution, there would have been no revolution. Indeed, Elon Musk (CEO of Tesla Motors) has argued that AI is an existential risk to humanity and our civilization, and that we do not yet know how to prevent destruction caused by robots. Facebook's CEO Mark Zuckerberg, on the other hand, disagrees with Musk. What can we infer from this debate without getting carried away by disaster scenarios or being overly optimistic?

Levent Akın: Along with its desired effects, every new technology brings side effects, some very dangerous, others less so. The important thing is to weigh the pros and cons and decide whether the technology is worth using. In this debate, however, it is not even certain that both sides mean the same thing by AI. When Elon Musk talks about AI, he is most probably talking about human-level AI systems, which cannot be built with the techniques we use today. Today, AI is used as a tool, and every application has a clear purpose; an AI system cannot do anything other than what it was designed for. A program built to play chess cannot be used to diagnose patients in medicine, though it may play chess poorly if badly designed or coded. In my opinion, the biggest threat, the worst-case scenario stemming from AI, is possible widespread unemployment in the future. Nevertheless, this can be prevented with proper planning. The evil robot overlords we see in apocalyptic movies are not possible with our current technology. What matters is making informed decisions that will benefit society.

Ethem Alpaydın: When the CEOs of two publicly traded multi-billion-dollar companies argue, the debate is not always scientifically meaningful. I think both of these businessmen are commenting on the subject from their own companies' points of view. We are still very far from a time when robots could destroy humanity; there is no reason whatsoever to fear such a thing. Even so, it is evident that technological advances in various areas will make many professions disappear, partly but not solely because of AI. This might cause unprecedented unemployment, which in turn could bring calamities such as famine and mass migration. That is what we should truly fear and try to prevent; expanding lifelong education programs might be one solution. In other words, we face more important risks than evil robots, and we should focus on real problems rather than such sci-fi scenarios.

Cem Say: The only thing we can infer from this debate is that at least one of the owners of the two giant companies shaping AI development is badly wrong about its potential. And I do not think we would have given up on the Industrial Revolution had we known its current repercussions, because it has innumerable advantages as well.

Prominent technology companies such as Google, Facebook, Amazon, IBM, and Microsoft have signed a partnership agreement to ensure that AI is used only to benefit society. The US government also published standardized rules for self-driving cars in 2016. There is a consensus among companies and governments on how to control AI, but the main problem remains how to actually implement these regulations. Who will be responsible when a robot hurts a human being: the company that developed the robot, the software developer who coded it, or the designer who designed it? Will we see new professions such as AI police or AI judges in the near future?

Levent Akın: There are numerous studies around the world focusing on AI law. One of the most important is RoboLaw, a research project conducted a few years ago in Europe; its proposals were presented to the European Parliament to serve as a reference when drafting laws on AI. Robots at today's level of capability are not regarded as autonomous persons, so they are not held responsible for crimes; the culprit is identified by examining the circumstances. Decisions will be made case by case, which means that in some cases the company will be held responsible, while in others it will be the user or operator. Judges and lawyers specializing in AI law are therefore highly likely, though I do not find AI police probable. However, if new control mechanisms for AI are introduced in the future, I think a new profession supervising AI is likely to emerge.

Ethem Alpaydın: AI systems bearing their own legal or ethical responsibility seems a distant possibility to me. For now, both the hardware and the software of an AI system are developed and built by a company, and the responsibility belongs to that company. Systems that make decisions automatically already exist in our lives, such as autopilots in planes; if anything goes wrong with them, the developing company takes responsibility. Even so, I am sure that when self-driving cars, for example, come into common use, driving tests for people will become much more difficult. I am also sure that, despite occasional problems, there will be fewer traffic accidents and transportation will become faster, easier, and more cost-efficient.

Cem Say: It won't be much different from how responsibility for an elevator accident is apportioned today. An AI system can work as a judge or a police officer, but can it determine whether it is itself guilty of a crime it has committed? I don't think we are ready for that, at least not in the short term.

Self-driving cars are the first thing that comes to mind when considering the dangers of AI. Indeed, the fact that such cars are easily "hackable" has become a major security problem, so much so that there are YouTube videos showing how to hack self-driving cars. With this in mind, where do you think the boundaries of AI should lie?

Levent Akın: In today's world, any device connected to a network is susceptible to hacking. Recently, millions of devices such as cameras, ovens, and fridges were hacked and used to attack various websites. Therefore, products with AI should be designed with safeguards that make hacking impossible, or that at least prevent the hacker from taking dangerous actions if the device is compromised. This does not necessarily limit AI. After all, AI devices, robots, and programs will be used by humans; if there are to be limitations, they will concern the areas in which these systems may be used.

Ethem Alpaydın: A car being hacked is not an AI problem; it is a security problem. Computers tasked with far more critical activities than self-driving cars are hacked from time to time, but we don't just give up and stop using them; we make them safer. Our cars might be stolen, for example, but that possibility does not stop us from using cars either.

Cem Say: They should be made intelligent enough not to get hacked. In the future there might be a new official body that strictly regulates the software of crucial systems like self-driving cars.

Do you have anything else to add?

Levent Akın: I don't.

Cem Say: There is no use fearing the inevitable!

Ethem Alpaydın: It also matters from where you approach the AI discussion. Perhaps, in Turkey, we should stop copying and imitating the debates in other countries and start thinking about how to make the best use of AI under our own conditions. In Japan, for example, where birth rates are quite low, it makes sense to build robots with arms and legs that can replace humans in the workplace. In Turkey, however, where unemployment is already high, our aim should not be to build robots that replace humans but to use AI in ways that enhance the efficiency and qualifications of workers. To give an example, software that translates between languages in real time would serve people who do not speak a foreign language better than "tourist guide robots".

Translation: Elif Sarmış