It Is Time for a Robotics Law (December 26, 2016)

Developments in robotics technology have given rise to a variety of discussions on the potential social, economic and cultural implications of robotics. Potential ethical problems and the lack of a legal framework are fueling alarming scenarios about robots. Should we fear artificial intelligence, or are we worrying needlessly? Will it be possible to create a new legal framework and a system of ethics to parallel the advances in robotics technology?

We talked with Professor Cem Say, Professor Levent Akın, and Professor Ethem Alpaydın of Boğaziçi University Computer Engineering Department, and Burçak Ünsal, who teaches Cyber Law at BU Department of Management and Information Systems and is a member of BU Information Systems Research Center.

Robotizing sectors:  Chronic unemployment or new job opportunities?

Robots affected the presidential election in the United States last November. Robots wrote news briefs, replied to tweets, inflamed discussions, and influenced voters. Athletic footwear giant Adidas uses robots to manufacture shoes. In Japan, robotics technology has gained a place in many areas of life, from caring for the elderly to service at restaurants. All these developments imply that robots are seriously competing with humans today. Could these developments be the precursor of chronic unemployment in the future?

Prof. Cem Say: Any task that can be defined or specified can be assigned to robots. And when we add a learning process, robots can also perform many tasks that require “creativity”, such as writing a newspaper column in the specific style of a given columnist, or making a film. Nevertheless, in order for a job to be completely transferred from humans to robots, the robots must provide significant advantages. We can say, therefore, that the future composition of the world economy will determine which jobs will remain in the human domain, and in which parts of the world.

Prof. Levent Akın:   Contradictory reports are being published on this issue.  If we look at the effects of the development of automation on employment throughout history, we will see that initially some jobs disappear, leading to massive unemployment; yet at the same time new technologies create new fields of work and unemployment decreases instead of increasing.

For instance, in the 19th century there were no airplane pilots or computer programmers.  Some researchers are predicting a similar trend for these new technologies.  People in some sectors will lose their jobs because of artificial intelligence and robotics technologies, but new job opportunities will emerge.  Among such new job areas are “training robots” or “implementing artificial intelligence.”

Unemployment, or a utopia where we will not have to work?

McKinsey Global Institute recently published a report on the potential for automating certain jobs in the US market using existing technologies. According to the report, the fact that it is technically possible to automate a job does not necessarily mean its automation will be successful; many economic and social factors have to be taken into account. The report also states that the jobs with the highest automation potential are physical jobs performed in structured environments, such as welding, assembly-line work, food service or packaging, as well as data processing and data collection. Jobs with a medium automation potential include physical work in dynamic environments such as construction, forestry, or animal husbandry. Jobs that require expertise, management jobs, and those that involve decision making, planning, or creativity are the least likely to be automated. In the retail sector, 53% of activities are susceptible to automation; in the finance sector the rate is 43%.

Other researchers argue that the present situation is very different from the historical development of automation and that it will ultimately lead to unemployment.  Such a development may result in a final global economic crisis; or it may lead to a utopia where people engage in science and arts, but do not have to work.

As self-driving vehicles, drones and personal assistants like Siri develop, the demand for human labor will decline

Burçak Ünsal: Digital media and netbots played a part in the US presidential election in November 2016. Given our own political climate, with its trolls, fake accounts and so forth, we are already familiar with this. Besides, the first person to effectively use digital media and technology for fundraising and communication in a political campaign was Obama. Those who follow the sector know very well the role digital communication and crowdfunding played in Obama’s success.

Artificial intelligence, robotics technologies and the Internet of Things (IoT) have unlimited application areas in the world; indeed, their effective use goes back to the 1980s. In every field, from diagnostics and therapy to energy, defense (war), education, the arts, production, services, transportation, and clothing, the “artificial intelligence + robotics technologies + IoT” formula is as indispensable as the “water + salt + heat” formula is in cooking.

Currently, sophisticated devices are manufactured on assembly lines with minimal or no human intervention: cars, computers, chips, durable and nondurable consumer goods, food, and so on. Going back to your question, domestic robots and service robots are already replacing humans. In the morning, when you enter a bathroom equipped with high-technology sensors and systems, you will have a physical checkup without going to the hospital: the system will analyze your body temperature, saliva, urine, sweat, weight and pulse against your previously stored medical data and determine the medication you need to take as well as the dosage. You will also be able to receive preventive medicine and treatment, often without a physician present. Numerous armies around the world are officially working on war robots, weapons technologies, and even the regeneration of human organs. I haven’t even mentioned self-driving vehicles, drones or personal assistants like Siri yet.

It would be too presumptuous to claim that the need for human labor will disappear entirely, but we can say that it will certainly decrease substantially. This is not just a prophecy.

“We will learn a new profession every 20 years”

Prof. Ethem Alpaydın:  As robots become smarter, humans will need to be even smarter. For instance, with self-driving automobiles, the profession of “driving” will become obsolete.  Instead of complex, repetitive jobs, people will have to seek more creative jobs to earn a living; this will mean more engagement in science and longer education periods. As the world is changing at a rapid rate, we will not be able to stay in the same profession that we learned in our twenties until we retire. I think in the next 15 or 20 years all automobiles will be self-driving; in other words, people who are earning their living as professional drivers today (and I think there are a few thousand such people in our country) will have to learn a new profession in 15-20 years, and the state will have to start training programs to that end.  We will probably have to learn a new profession every 20 years.

“The right to privacy must be taken into account in designing robots”

The rapid incorporation of robots into our lives has led to various discussions on many different issues such as security, social life, and law.  Experts are in full agreement that robots should be equipped with ethical rules. How can we develop ethical rules pertaining to robots? Personal assistant robots employed in elderly care or self-driving vehicles raise many questions.  What must an ideal legal and ethical regulation comprise?

Prof. Cem Say: This issue has been on the agenda ever since science fiction writer Isaac Asimov’s “Laws of Robotics”, the fundamental principles to be embedded in robots’ brains that robots could not violate under any circumstances. It would be very difficult to agree on such principles; however, an international team of lawyers, philosophers, and engineers could draw up a list of rules and develop them into an international law. The problem is that these rules can be violated.

Care robots must first meet the expectations we already have of human caregivers for the elderly. The interesting point is that robots also possess some additional capabilities. For example, a caregiver robot can record all sound and video within the facility; for technical reasons, it may even have to send the recorded data to a center. At this point, data security and privacy issues emerge.

Prof. Levent Akın: Studies on ethics have been going on for a long time all around the world. There are a few EU projects on the topic. A draft prepared in South Korea describes the principles guiding not only humans but also robots. A few months ago a standard was published in the United Kingdom: BS 8611, “Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems”. The standard is intended to provide guidelines on identifying potential ethical hazards of robot applications and means of eliminating or reducing those risks, and to offer information on safe design, protective measures, and the design and various applications of robots.

Some of the socio-ethical principles to be observed include the following: it should be possible to identify who is responsible for a given robot or its behavior; robot design should take the privacy of personal life into account; in any situation, the final decision-maker must be a human; and employment and environmental factors should be taken into consideration.

Prof. Ethem Alpaydın: Human-like robots are a very recent topic, but machinery that affects human life has existed for a long time. Airplanes have autopilot systems; factories have automatic control systems; many auto parts contain pre-programmed components (for example, anti-lock braking systems, ABS). So this is not a very recent phenomenon. When there is a problem or malfunction in any of those systems, the responsible party is the manufacturing firm. That will be the case for robots as well, because robots will also be machinery manufactured to perform certain tasks; perhaps they will be more flexible and more capable, but they will still be machines designed by humans. Robots that constitute a different species, live with us, and have individual rights and responsibilities will remain in the realm of science fiction for a long time to come. The same legal process that applies when a plane crashes because of defective manufacturing will also be followed when a self-driving automobile is involved in an accident.

Legislation needs to regulate this new reality

Burçak Ünsal: It is algorithms, analytic calculation systems, and communication systems that make machines intelligent. This technology is a new wave in the endless motion of the same sea. If there is a technological development that will fundamentally alter every aspect of life, and if it is developing at a rapid pace, failing to draft corresponding legislation is unthinkable.

Without a legislative instrument, we cannot establish or distribute the responsibilities and rights of the “producer”, “service provider”, “individual”, “society”, or “state”, whose interests conflict; we cannot regulate the public incentives and private investments needed for the development of this technology or impose taxes; we cannot take measures to prevent potential material and nonmaterial damages; and we cannot compensate for damages incurred.

We cannot legislate a phenomenon or situation that we cannot define. The main difficulty here is not in defining “artificial”; it is in defining “intelligence”. In order to make and discuss this definition, and to regulate the conflicting interests fairly and in a manner conducive to further development, lawyers, ministries, regulatory institutions like the Information and Communication Technologies Authority (ICTA), and the relevant commissions of the lawmaker (the parliament) should, at the very least, have adequate technical training and capability, follow developments closely, and possess a sophisticated philosophical approach.

By “artificial intelligence” we do not mean the intelligence of a human or of a natural organism (leaving aside the discussions on modified biological material); we mean an unnatural, “currently” non-living system that performs an action by calculating over data and conditions, making a deliberate choice in order to reach a certain aim: in other words, a “will”.

We need to be able to differentiate the concept of intelligence, and indirectly the concept of “will”, from human intelligence; we have failed to do that legally and philosophically. And that failure is nothing new; humans have not managed to do it in their approach to other intelligent organisms either.

We now have a reality created by humans, which can do what humans cannot, and do better and more efficiently what they can; this reality needs to be legislated.

The scientists who developed this technology were also the first to address its legal regulation; they conducted the original experiments and produced the first scientific publications on legal informatics, a subfield of AI concerned with applying AI to legal reasoning. One of the first articles on the legal regulation of artificial intelligence was published by Buchanan and Headrick in the Stanford Law Review in 1970 [1]. The results of the TAXMAN system, developed shortly afterwards, were discussed in Thorne McCarty’s article “Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning”, published in the Harvard Law Review [2].

Since the 1970s, both technology and law have developed at a pace that is hard to follow. Today, issues such as the responsibilities of manufacturers and users, consumer rights, security, incentives, frequency/numbering allocations (concession contracts), and the application areas and limits of artificial intelligence, robotics technologies, human cloning, and IoT are discussed and defined within the legal system [3]. In the USA [4] in particular, as well as in the European Union [5], Japan [6], and other technology-producing countries, these issues are brought before lawmakers and legislated [8].

In May 2014, the European Union completed the two-year, 1.5-million-euro RoboLaw project and developed its fundamental principles [9]. Japan’s government has officially made robotics and other advanced technologies, together with the legal regulations it has developed around them, part of its national development strategy; it aims to become a leader in the field through its original work on both the technology and its legal infrastructure [10]. In the United States, there is a wealth of highly esteemed publications and discussions on the issue [11].

Legal regulations in the USA and the EU

The situation is no different in life sciences, an inseparable part of high technology. In 1997, a baby with not two but three biological parents was born in the US. The birth was made possible by an in-vitro fertilization technique called “cytoplasmic transfer”, which addresses a genetic mitochondrial disease of the biological mother: cytoplasm from the egg of a healthy donor is injected into an egg cell of the biological mother, making in-vitro fertilization of the mother’s egg possible. The technique was banned in the US in 2001 on moral and legal grounds, and because of the risks involved. The United Kingdom, on the other hand, continued to apply the technique and established a legal framework for it in 2015. Britain thus gained a competitive advantage in this kind of scientific research and its funding. Human cloning for reproductive purposes has been banned by law in 70 countries. However, human cell cloning for treatment purposes in regenerative medicine and stem cell research is legally possible in almost all of them.

The EU Charter of Fundamental Rights and an additional protocol to the Council of Europe’s Convention on Human Rights and Biomedicine ban human cloning. Although only three countries (Greece, Spain, and Portugal) have integrated this ban into their own laws, the Charter of Fundamental Rights is binding for all member states under the Treaty of Lisbon.

Federal funding for human cell cloning studies was banned in the USA in 2010, putting an end to state-supported research on the issue. Private universities and institutions, on the other hand, have continued their research with their own funds. A federal law banning the practice was never passed, but 15 states have passed laws banning human cloning for reproductive purposes.

The United Nations could not reach a consensus on the issue, since some members demanded a ban on human cloning for all purposes, including treatment. However, in 2005 the UN adopted the nonbinding Declaration on Human Cloning, calling on states to prohibit all forms of human cloning that are incompatible with human dignity.

Fully autonomous war power and robotics technologies in the defense industry

Robotics technologies are most frequently used in the defense sector. In 2012, Human Rights Watch published a report titled “Losing Humanity: The Case against Killer Robots”, calling on countries to ban the development, manufacture and use of fully autonomous (unmanned) weapons. Can limiting R&D work in the defense sector have a negative impact on robotics technologies?

Prof. Cem Say:  Such bans serve the purposes of only those who issue the ban.  So even if the ban is issued, I don’t think it will be implemented.

Prof. Levent Akın: At this point, R&D work outside the defense industry is advancing at a rapid pace; technologies and applications that we regarded as science fiction until quite recently are actually in use today. I don’t think that limiting research on fully autonomous weapons will negatively affect studies in civilian areas of application.

Prof. Ethem Alpaydın:  The rapid advances in electronics and the computer sector in the last seventy years are due partly to space research, but particularly to the defense industry.  For example, ARPANET, a network considered to be the ancestor of the Internet, was established by the Advanced Research Projects Agency (ARPA) of the US Department of Defense.  Robotics and related technologies are also supported by the defense industry today.  I don’t believe a ban of this sort will work; if human beings want to wage war, they will do so using whatever they have.

“It is unfair to consider only the hazards of robots”

What kind of regulations do we have on the world agenda at present?  Where does Turkey stand in these discussions?

Prof. Cem Say:  As far as I know, the world is at the “we need to talk” stage.  I have not heard of any studies on this topic in Turkey.

Prof. Levent Akın: In the European Union, several studies were conducted on developing legal principles for robotics applications within the framework of the RoboLaw project, and a report covering issues to be considered in drafting legislation was presented to the EU Parliament. The report includes regulations for general healthcare, security, the environment, and the protection of personal information. In the US, there has been some work on the legal framework, but the country has fallen behind. In Turkey, there is no work that I know of on the subject.

Prof. Ethem Alpaydın:  An increasing number of people have been discussing the hazards of artificial intelligence lately; I don’t think such hazards are imminent.  For the generation that grew up watching Terminator films, the possibility seems attractive but I think it is exaggerated.  We will not wake up one morning and find ourselves surrounded with robots that are as intelligent as or more intelligent than humans.  Like every technology, artificial intelligence will advance in small steps; more intelligent systems will make our lives easier.  So it is unfair to forget about the positive sides and focus on hazards.  To my knowledge, this topic is not discussed in Turkey yet.

Burçak Ünsal:   Ever since the early 2000s, even during the current state of emergency, regulations have been prepared and funding has been provided for establishing Technoparks to encourage and finance R&D activities, setting up incubation centers and accelerators, supporting small enterprises and investor ecosystems (Business Angels of Turkey), and making electronic trade and the digital area more beneficial for society.  

Planning for a knowledge society has been going on for years. However, in terms of the timeliness and efficiency of implementation, the results are not where we want them to be. There are people like us in Turkey, professionals and academics, who follow developments on the issue very closely and offer proposals for legal definitions and regulations so as to create a basis for discussion. Unfortunately, it is hard to say that there are any effective, significant studies that might pave the way for our universities, entrepreneurs and investors to catch up with the technological revolution. I would like to take this opportunity to make a call, one more time, for such studies and applications.

News:  Ö. Duygu Durgun, Gökçe Büyükbayrak (Office of Corporate Communications)

References:

[1] https://www.jstor.org/stable/1227753?seq=1#page_scan_tab_contents

[2] http://www.cs.rutgers.edu/~mccarty/research/hlr77.pdf

[3] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2609777

[4] https://artificialintelligencenow.com/media/documents/AINowSummaryReport_3_RpmwKHu.pdf

[5] http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDO...

[6] http://www.kl.i.is.nagoya-u.ac.jp/jurisin2016/

[7] https://cs.stanford.edu/people/eroberts/cs181/projects/201011/ComputersMakingDecisions/regulation/i...

[8] http://apps.americanbar.org/dch/committee.cfm?com=ST248008

[9] http://www.robolaw.eu/

[10] http://link.springer.com/article/10.1007/s00146-015-0628-1

[11] https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=235051