Press Release


Dr. Gerald Stokes, DTS Chair, wrote a column for the Maeil Business Newspaper …

Press Release | 2020-07-20 11:49

Autonomy?

 Written by Dr. Gerald Stokes, Chair of the Department of Technology and Society at SUNY Korea


Interest in artificial intelligence (AI), around the world and here in Korea, is justifiably high. The exponential growth in computing power over the past six or more decades has been extraordinary. The prospect of a machine as intelligent as a human, with an even larger memory, complete access to the internet, and near-perfect reliability, is compelling, yet …


The study of AI has taken several paths. One path has been to identify tasks a computer might perform that demonstrate intelligence. This has produced some interesting accomplishments: AI has conquered its human competitors in games like chess and Go. These games are considered among the most complex humans play and have long been regarded as demonstrations of intellectual prowess, if not intelligence.


Another path has been to develop “expert systems,” codifying the decision processes of skilled human practitioners into a set of rules that can be faithfully and infallibly executed. More recently, with “big data,” we can train algorithms to make decisions as we would, capturing our implicit intelligence as embodied in past decisions. The goal of these efforts seems to be to create an intellectual peer that can act independently on our behalf. We already have robots that carry out repetitive manufacturing tasks, replacing humans on assembly lines. Why not take on increasingly complicated tasks like medical diagnosis or driving a car?
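To make the two paths concrete, here is a minimal sketch, entirely my own illustration rather than anything from the column: a hand-codified rule in the style of a classic expert system, beside a crude learner that copies past human decisions. The loan scenario, the 40% threshold, and every name and number below are hypothetical.

# Path 1: an "expert system" -- a rule hand-codified from a human expert.
def expert_rule(income, debt):
    # Hypothetical rule of thumb: approve when debt is under 40% of income.
    return debt / income < 0.40

# Path 2: "big data" -- learn from past human decisions, capturing the
# implicit intelligence embodied in them (here, a crude nearest-neighbor).
past_cases = [(50_000, 10_000, True),   # (income, debt, did a human approve?)
              (40_000, 30_000, False),
              (90_000, 20_000, True)]

def learned_rule(income, debt, history=past_cases):
    # Copy the decision humans made on the most similar past case.
    nearest = min(history, key=lambda c: abs(c[1] / c[0] - debt / income))
    return nearest[2]

print(expert_rule(60_000, 15_000))   # True: the codified rule approves
print(learned_rule(60_000, 15_000))  # True: past humans approved similar cases

Either way, the machine reproduces decisions a human once made; neither sketch decides anything on its own behalf.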


The drive to create an autonomous intelligence seems to be a core aspiration of AI research. This effort comes with a nagging fear that perhaps someday this “intelligence” might begin acting on its own behalf. The root of this dilemma, or set of concerns, is the conflation of artificial intelligence with autonomy. This conflation breeds a whole host of legitimate concerns about robotic self-awareness, ethics for and by robots, and ultimately robotic rights. These are all intriguing topics … but should autonomy be the aspiration for this important field of research?

 

Rather than asking “What is intelligence?” we might ask “When are humans the most intelligent?” My answer is simple: it is when they are acting collectively and cooperatively. One can easily ask, “What about mobs?” or other dysfunctional groups. Noting that there are also dysfunctional geniuses, I contend the idea is nonetheless worth exploring. This could be viewed as an extension of the “Wisdom of the Crowd” concept introduced and discussed by James Surowiecki. But the nature of the partnership is different.
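Surowiecki’s observation itself is easy to demonstrate numerically. In the toy sketch below, which is my own illustration with entirely hypothetical numbers, the average of many independent, noisy guesses lands far closer to the truth than a typical individual guess does:

# Toy "Wisdom of the Crowd" demonstration: aggregate many independent,
# noisy estimates and compare the crowd's error with an individual's.
import random

TRUE_VALUE = 100.0
guesses = [random.gauss(TRUE_VALUE, 20.0) for _ in range(1_000)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
typical_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd error:              {crowd_error:.2f}")               # typically under 1
print(f"typical individual error: {typical_individual_error:.2f}")  # typically ~16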

 

In partnership with a human or humans, AI is more of an ancillary intelligence (A*I) than an artificial intelligence (AI). This also recognizes that AI, or A*I, derives great power from all the connections it has to the assets that can be found on the internet. It too brings a crowd to the table of analysis and decision making.


More critically, it clarifies the roles in the relationship. The human is the decider, not the autonomous AI. The human is advised by the A*I, and the decision is better for it. However, it is still the human’s decision, not that of some entity that cannot take responsibility. Notice that I have dropped the term “robot” and am speaking only of an AI or an A*I, independent of whether either could carry out physical actions in the real world.
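In software terms, this division of roles resembles what engineers call a human-in-the-loop pattern: the A*I proposes, and only the human disposes. Here is a minimal sketch under that reading; the names, the risk score, and the threshold are all hypothetical inventions of mine, not anything from the column:

# Human-in-the-loop sketch: the A*I only advises; the human decides.
def ancillary_advice(case):
    # Stand-in for an A*I: return a recommendation plus its reasoning.
    recommendation = "approve" if case["risk_score"] < 0.3 else "decline"
    return recommendation, f"risk score is {case['risk_score']:.2f}"

def human_decides(case):
    recommendation, reason = ancillary_advice(case)
    print(f"A*I suggests '{recommendation}' because {reason}")
    # input() blocks until a person answers: the decision of record is
    # always human-made, so responsibility never transfers to the machine.
    return input("Your decision (approve/decline): ").strip()

decision = human_decides({"risk_score": 0.21})
print(f"Decision of record (human-owned): {decision}")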


This transformation to A*I is already present in many of the applications we see now around the world. They are embodied in many everyday things like Apple’s Siri, vehicle GPS systems, IBM’s Watson, and an impressive set of “assistants” such as Samsung’s Bixby. They are just the beginning. As interfaces, voice and facial recognition, sensor integration, and 5G-enabled connectivity improve, the possibilities are staggering. Situational awareness will be heightened in a way that will change every business and social interaction. Your A*I does not forget a face, a meeting, or the details of a contract or transaction. It could improve human interaction in ways hard to imagine. It will cause us to rethink education, privacy, and our presence in the world.


In education, we were traumatized by the invention of the electronic calculator; now we teach how to use it. What about our personal A*I? Will we train students on how to use it? Starting at what age? And what of privacy? Each personal A*I shares assets with others. Will there be conversations amongst our A*Is without human intervention? Very tempting, and probable, it seems.


The temptation will show up elsewhere. Our faith in the A*I builds confidence in the autonomous AI it could become. My car does a good job of lane detection. It makes sure I do not follow too closely and knows the way to my destination. Why can’t I just let it do the driving? We must be cautious as we allow our A*I to evolve into an AI in the real world. The transition cannot be construed as a transfer of responsibility.


In contemplating the rise of the robots, the science fiction author Isaac Asimov formulated three laws of robotics. “(1) A robot may not injure a human being, or through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”


Perhaps there should be a fourth law that states: Every robot must have a human being that is responsible and accountable for its actions. This is the necessary condition for a transition from A*I to AI. I think the appropriateness of this is clear for driverless cars and anthropomorphic robots, but it has deeper ramifications. What of the “robots” that review immigration records, approve loan applications, conduct police investigations, or make medical diagnoses? These are clearly areas for our A*I partner to work with us, but are these decisions appropriate for autonomous AI decision making?
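Read in engineering terms, the proposed fourth law amounts to a deployment constraint: a system refuses to act at all unless an accountable human is registered for it. The sketch below is my own reading of that idea; every class, field, and name in it is hypothetical:

# The proposed "fourth law" as a deployment constraint: no autonomous
# action without a registered, accountable human being.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponsibleParty:
    name: str
    contact: str

class AutonomousSystem:
    def __init__(self, task: str, responsible: Optional[ResponsibleParty] = None):
        self.task = task
        self.responsible = responsible

    def act(self) -> None:
        # Fourth law: every robot must have an accountable human.
        if self.responsible is None:
            raise PermissionError(f"{self.task}: no accountable human registered")
        print(f"{self.task} acting; accountable human: {self.responsible.name}")

loan_reviewer = AutonomousSystem("loan-application review")
# loan_reviewer.act()  # would raise PermissionError at this point
loan_reviewer.responsible = ResponsibleParty("J. Kim", "j.kim@example.org")
loan_reviewer.act()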


Our growing faith in and dependence on our A*I friends and partners, whether humanoid or just an analytic computer, can make us lazy. We may be tempted to default on our role as the accountable decision maker. We must either understand and accept our ultimate responsibility or resist the temptation to so easily grant autonomy.


Please click here to learn more about the Department of Technology and Society

Read the original article on Maeil Business (Korean)

 
