Artificial Intelligence: Trick or Treat?


Artificial Intelligence (AI), about which futurologists and science-fiction writers have warned us for at least the last 50 years, is here to stay, whether we want it or not. Scientists, researchers and writers have long forecast machines capable of acting intelligently, nearly on the human level. Everyone has seen at least one movie, or read at least one book, about such machines.

It is also a sphere where everyone has an opinion. Some admit that AI already exists; others deny it. Who is right and who is wrong? What should the definition of AI include? AI can be described as the ability of machines (computers) to perform the intellectual functions of humans. But what counts as a human intellectual function? These questions are still without answers.

We are like babies dipping a foot into a deep ocean: we know how attractive it is, but also how frightening, because everything in it is so huge and unknown. How should we act in this situation, and what strategy should we choose: shall we learn to swim, or simply follow the currents?

The fourth revolution and society
Many call the coming changes the “fourth revolution” (Helfstein S., Manley I.P., 2018): one that will change the way businesses, industries and markets work and exist. Indeed, the potential of AI is huge and everyone is already using it, even though its development dates back to the 1950s-60s, when the first prototypes appeared (ELIZA, a forerunner of today's Siri; Shakey, a forerunner of robots like Nadine).

Researchers are often divided into two groups based on how fast they expect AI to develop. I will call these groups the Optimists (Templeton B. (2016), Hawking S., Musk E. (2017)), who suppose that AI will develop at an ever-higher tempo, and the Pessimists (Helfstein S., Manley I.P. (2018), Etzioni O. (2014)), who think the expected tempo of AI's progress is overestimated.

Fear of new technologies exists, but it has been characteristic of every industrial revolution, whenever innovations changed society's landscape and structure. Everything unknown, or that cannot be understood or explained, can become a source of fear: fear of instability, of losing one's place in society, of having to define oneself as an individual and establish a status in society all over again. So this fear is not new; such alterations in society come now and then, and recently more often than before.

Automation is changing the landscape for businesses as some tasks are outsourced to AI. Many functions are already performed by machines, but at a larger scale whole sets of professions are at risk of extinction. The labour market will therefore be disrupted, though the scale and direction of that disruption are not easy to forecast. Society must develop new ways of employment and wealth distribution.

On the other hand, we may not need to worry that humans will run out of jobs when AI comes into action: Morgan Stanley researchers forecast a decline in the working-age population by 2050 (Helfstein S., Manley I.P., 2018). In this situation, AI will help cover the gap for workers in many industries.


Deep Blue was the first computer to exceed a human intellectually in its own domain, defeating the world chess champion. At the time, it was a shocking event, but now it does not seem extraordinary. More than that, Deep Blue is so outdated that it found its final resting place in the Computer History Museum's exhibition. Any smartphone now exceeds us at some intellectual tasks, and we still use them.

Directions in AI development
We are used to seeing AI as an “almighty” technology in the movies, but in reality it is far from that and is still in the starting phase of its development.

AI is still a concept in flux, as we do not know in which direction it will develop most: will it remain a machine, or will it acquire human-like behavior? The direction is not defined yet, and we may see several different scenarios unfold, because no single technology defines AI development today. A whole set of possibilities exists for different purposes. The trajectory of development is non-linear, and it is interesting, I would even say exciting, to watch where it goes.

Markoff (2015), in his research on the history of automation, points out that machines can simultaneously augment and displace humans. He raises the question that interests everyone: will machines help us, or will they destroy or replace us? While describing his worries, he also finds positive uses of AI: as helpers, or as carers for the “baby-boomer” generation.

Whether we want it or not, the day when we have to live alongside autonomous computers will come sooner or later, and probably sooner than we expect. Debate continues about the form these computers will take: a chip inside a real human body (intelligence augmentation), or a truly autonomous artificial intelligence, a kind of standalone computer. Will it be easier for humans to accept losing to such “super-humans”, or to a computer? Will any sphere be left where the human brain is better?

Elon Musk started Neuralink in 2016, a startup built on the concept of linking the human brain directly to computers and other electronic devices via cybernetic implants, in order to increase human intelligence and make AI a part of the human brain. The details are still unclear and it remains more idea than reality, but Musk began funding it at the end of 2017, without any active fundraising or outside investment. Time will show whether the project becomes reality or stays a draft. We should remember that the human brain is the most complex thing we know, and how it works is still an enigma for scientists.

It is obvious that with the appearance of such creatures, the whole basis of society must change to adapt to a new reality. Little that existed before will work unchanged: laws will have to change, as will communication and aesthetics. There is still hope that the sphere of ethics will remain the same, although it too is in the risk group. Do humans want to put themselves on the same level as God by creating life? Can it even be called life? If such an AI is destroyed by a human, will it be considered a sin? Will working on such software be considered interference with an independent mind? And so on.

AI and ethical dilemmas


Engineers who construct these AIs, or write their code, prefer to keep themselves away from these ethical problems. As Brad Templeton (2016) noted in an interview, let the state deal with the laws and all problems of that kind. Yet even the well-known technology guru Elon Musk (2017) suggested that it might be necessary to regulate the development of AI, and earlier called AI the most serious threat to the survival of the human race (2014).

Even when AI comes under state regulation, engineers should always remember that they carry social and ethical responsibilities while working on it. AI, in the end, is still a technology, and should be regulated like any other technology that poses risks to humans and is a matter of human safety.

We will have more and more machines and examples of AI around us, so the question is how to use them to improve our lives, not to surrender human freedom and the right to control our own lives.

I guess nobody wants to find themselves in the middle of a Terminator movie, although modern robots can already do most of what the robots in Terminator were doing, apart from being fully autonomous and making their own decisions. They can climb, walk upstairs, operate instruments, open doors and use weapons, and self-driving cars exist as well. So, are we there already? We can hardly even use the word “yet” in this case.

Another risk: what if there is a mistake in the code and the system does not work as it was planned to? What if the mistake is found too late, when the damage is already done? These questions may be over-cautious, but they have to be answered before we enter the high-risk sphere of developing AI.

The software we use every day collects all types of data about us: our behavior, habits, biometric features (fingerprints, voice, appearance, movements), and so on. All the “assistants” (Siri, Alexa, Alisa and others) are built on and animated by AI, and make decisions based on the information they collect about us.


Everyone has experienced what is called targeted advertising: personalized product recommendations and tailored marketing. We meet it every time we open a browser or a social network, or receive a commercial email. It seems odd, but an AI with such vast capabilities is put to work on making us spend more money.

Competitive advantages of being human?


We are still at an early stage of AI development and research. Even so, AI is developing fast, and its technologies already help with smart homes, medical services, crime prevention and banking. Even loan decisions are made with the help of AI, which analyses all types of data about a person (bank history, credit history, social networks, consumer behavior, what and how a person fills in on various online forms, etc.) and makes a decision based on this information. The system is highly complex, and the algorithms learn as they work. The negative side is that it is not always possible to see why a particular decision was made.
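To make that opacity point concrete, here is a hypothetical, heavily simplified sketch of such a scoring model: a tiny logistic regression trained on synthetic “applicants”. The features (income, missed payments), the data and all numbers are invented for illustration; a real bank's model would use far more data and a far less transparent algorithm.

```python
import math
import random

# Illustrative sketch only: feature names, synthetic data and the model
# are assumptions for this example, not any bank's real system.
random.seed(0)

def make_applicant():
    """Generate one synthetic applicant and whether they repaid."""
    income = random.uniform(20_000, 120_000)   # yearly income
    missed = random.randint(0, 5)              # missed payments
    # Synthetic "ground truth": repayment is likelier with higher
    # income and fewer missed payments.
    repaid = 1 if income / 100_000 - 0.3 * missed > 0.2 else 0
    return [income / 100_000, float(missed)], repaid

data = [make_applicant() for _ in range(1000)]

# Train a tiny logistic-regression model with plain gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# With only two weights, the model's "reasoning" is readable:
# a positive weight raises approval odds, a negative one lowers them.
print("income weight:", round(w[0], 2))
print("missed-payments weight:", round(w[1], 2))
```

Here the income weight comes out positive and the missed-payments weight negative, so a human can still read off why an applicant was refused. Scale this up to millions of weights in a modern learning system and that readability disappears, which is exactly the problem of not seeing why a particular decision was made.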

Arne Krokan (2017) considers AI to be, so far, mere knowledge: not yet able to deal with and fully understand human nature, with all its weaknesses, from a purely logical and scientific point of view.

So we still have competitive advantages: intuition, feelings and faith. Intuition is based on data, of course, and AI uses even larger amounts of data, but a human can work with it more effectively in some ways. Our intuition still cannot be described by code or numbers. Our decisions are based not only on the knowledge we have, but also on our culture and cultural codes, our experience, and our perception of the society and world around us. It sits at our fingertips, yet even for us it is not easy to describe or understand. Luckily, it is also not easy to reproduce with AI.

So, trick or treat?

We have control over technology as long as it performs under the rules, laws and values created by people. Once we give up those functions, we will lose control and become inferior to AI. Oren Etzioni (2016), though, reassures us that we will still be able to control the machines in the near future. For now, AI remains just a tool for performing tasks, and it requires input and orders from a human.

Viktor Mayer-Schönberger argues in an interview (2016) that AI and its algorithms cannot have ethics and cannot be judged from an ethical point of view. Instead, we should focus on the ethics of how data and AI are used.

Another problem: we have to be responsible and very careful that the technology ends up in the hands of people with decent intentions and values, and there is no guarantee of that. If someone creates a technology or an algorithm, they must take responsibility for it. Easy to say, almost impossible to fulfil. Nuclear power was created with only good intentions; the same can even be said of weapons, which can be created merely to protect one's homeland. What could possibly go wrong? But it did.

In the meantime, the biggest IT companies continue working on AI, even while admitting that it poses a high risk to people and society. Some states have started working on AI regulation, but it moves slowly enough to lose pace with the possibilities and capacities that AI gains every day. It is still a question whether states will manage to bring it under control, because most transnational corporations do not belong to any one state.

Therefore, we should not “jump into the turbid waters” of AI, but rather develop it under control from both state and society, in order to reduce the risks and foresee possible consequences.

What do you think about the prospects of AI? Do you see it as a threat, or as a salvation? Comments are welcome!

Bibliography

Breland A., (2017), Elon Musk: We need to regulate AI before ‘it’s too late’, http://thehill.com/policy/technology/342345-elon-musk-we-need-to-regulate-ai-before-its-too-late

Quick D. (2016), Interview: Singularity University’s Brad Templeton on the future of “robocars”, https://newatlas.com/autonomous-cars-future-brad-templeton-singularity-university/46647/

Davenport T.H., Ronanki R. (2018), Artificial Intelligence for the Real World, http://www.managementissues.com/index.php/organisatietools/83-organisatietools/994-artificial-intelligence-for-the-real-world

Etzioni, O. (2017), How to Regulate Artificial Intelligence, The New York Times, https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html

Etzioni, O. (2014), It’s time to intelligently discuss Artificial Intelligence, https://medium.com/backchannel/ai-wont-exterminate-us-it-will-empower-us-5b7224735bf3

Helfstein S., Manley I.P. (2018), AlphaCurrents: Artificial Intelligence and Automation: Fourth Industrial Revolution, https://pwm.morganstanley.com/therichmangroup/mediahandler/media/135091/Alpha%20Currents%20_%20AI%20and%20the%20Fourth%20Industrial%20Revolution.pdf

Gibbs S. (2014), Elon Musk: artificial intelligence is our biggest existential threat, The Guardian, https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat

Knapton S. (2016), Meet Nadine, the world’s most human-like robot, https://www.telegraph.co.uk/science/2016/03/12/meet-nadine-the-worlds-most-human-like-robot/

Krokan A. (2017), Kunstig intelligens og naturlig dumhet [Artificial intelligence and natural stupidity], http://www.krokan.com/arne/2017/06/28/kunstig-intelligens-og-naturlig-dumhet/

Markoff J. (2015), Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, HarperCollins Publishers, New York, NY, USA, ISBN: 9780062266699

Nilsson N.J. (2009), Artificial Intelligence: A New Synthesis, Morgan Kaufmann Publishers, Inc., San Francisco, California, ISBN 1-55860-535-5

Salecha, M. (2016), Story of ELIZA, the first chatbot developed in 1966, https://analyticsindiamag.com/story-eliza-first-chatbot-developed-1966/

Templeton B., If we are lucky, our pets may keep us as pets, https://www.templetons.com/brad/apes.html

Zicari R.V. (2016), Civility in the Age of Artificial Intelligence, Vital Speeches of the Day, January 2016, pp. 8-12, http://www.odbms.org/2016/02/civility-in-the-age-of-artificial-intelligence/

Zicari R.V. (2016), On Artificial Intelligence and Society. Interview with Oren Etzioni, http://www.odbms.org/blog/2016/01/on-artificial-intelligence-and-society-interview-with-oren-etzioni/

Zicari R.V. (2016), On Big Data and Society. Interview with Viktor Mayer-Schönberger, https://medium.com/backchannel/ai-wont-exterminate-us-it-will-empower-us-5b7224735bf3
