AI: The Future of Human-Like Emotions


Added on 2019/09/30


Future of Emotions in Artificial Intelligence
Student: FILL STUDENT'S NAME
Faculty: FILL FACULTY'S NAME
Course: FILL COURSE TITLE
Date: FILL DATE

Research Statement
Does artificial intelligence feel emotions, and where are we heading?
Introduction
Emotions are what make us human. When we attempt to give machines the power to
think like us, is it any wonder that we would also like them to feel like us? After all, without
emotions, a man is merely an automaton. Have we been able to create an automaton that feels
as a human does? Will we be able to?
In this research, I will explore human emotions and intelligence, discuss our desire for
machines to feel emotions and react to them, examine the issues standing in the way of
progress, and finally extrapolate the future of emotions in AI.
For this research I will also restrict myself to Weak AI, which concerns itself with solving a
well-defined, narrow set of problems, e.g. automatically scheduling a user's goals into available
time slots in the Google Calendar app (available on the web, Android and iOS). Strong AI, or
general AI, is out of scope for this discussion.
An Ancient Wish to Forge The Gods
The idea of AI began in ancient times with the human desire to create something in our own
image, something that can think, act and feel like us. Ever since, this desire has found an outlet
in the myths, legends, stories, speculations, art, literature and technology of the times
(McCorduck, 2004).
For example, mechanical men and artificial beings appear in Greek myths, such as the golden
robots of Hephaestus, built to serve their masters intelligently, and Pygmalion's Galatea, a
marble sculpture brought to life by a goddess. In the Middle Ages, there were rumors of secret
mystical or alchemical means of placing mind into matter. By the 19th century, ideas about
artificial men and thinking machines were being developed in fiction, as in the original
Frankenstein (McCorduck, 2004).
The news has not always been good for AI, either: the field suffered major setbacks, now
referred to as AI Winters, during which financial funding and researcher interest dropped.
Fortunately, these were short-lived, and it is my personal opinion that so many individuals,
companies and governments now fund AI, directly or indirectly, that there will never be another
AI Winter.
Describing Human Intelligence
Behind all of this, in every era and civilization, we can see an assumption that human
intelligence "can be so precisely described that a machine can be made to simulate it"
(McCarthy, Minsky, Rochester, & Shannon, 2006). But do we know everything about human
intelligence? Is it a standalone feature of ours, or do emotions, attitudes, moods and memories
interact and interfere with it?
Human intelligence can be defined as the mental quality that consists of the abilities to learn
from experience, adapt to new situations, understand and handle abstract concepts, and use
knowledge to manipulate one's environment ("human intelligence | psychology", 2016). This
general definition is satisfactory as long as the discussion stays among humans. When we want
to explain intelligence to a machine, when we are bound to work in terms of well-defined inputs
and outputs, when we desire to create artificial intelligence, we need to understand it ourselves,
and this raises philosophical arguments about the nature of the mind and the ethics of creating
artificial beings endowed with human-like intelligence (McCorduck, 2004).
Nonetheless, legends, myths and fiction are not bound by the determinism of inputs and
outputs, and freely contain sentient creatures and automatons that think, talk and act as a
human would. Actually creating something like this, however, does require an elaborate
specification of requirements.
Role of Emotions in Intelligence
Emotions and intelligence were once considered distinct, but we are now finding that emotions
make our thinking possible (McCarthy, Minsky, Rochester, & Shannon, 2006) and that it is
probably counterproductive to try to separate them (Pessoa, 2009).
Isolated from all external stimuli and forced to make a rational decision, we are fully capable
of coldly calculating multiple options, running them through in our minds and then choosing an
optimal path with maximum benefit and minimum loss. However, decisions are rarely taken so
coldly, and rarely are we so detached from the process and the outcome. We usually make
decisions with emotions and then justify them with logic (Takahashi, 2013).
Our propensity for making decisions with emotions gives rise to the life advice that we should
never make a decision when we are angry, and never make a promise when we are happy
("Don't promise when you are happy, don't reply when you are angry, and don't decide when
you are sad. - Tiny Buddha", 2016). It must have happened to you as well: you took a decision
when you were angry, and later, in hindsight, you realized you made the wrong decision and
that, given the chance, you would do something else ("Article: Emotional Intelligence Impacts
Decision-Making - Jason Kleid: Changing Lives » Optimizing Performance", 2016).
We also have anatomical support for this everyday human experience. The human brain is
tightly packed and dense, with short distances between synapses, so if some portions of the
brain are relatively far apart, it indicates a slight disconnect in their roles. The 'thinking brain',
the neocortex, sits farther from the visceral 'feeling brain', the amygdala (Pessoa, 2009).
Further, information reaching the brain through the eyes and ears is sent first to the amygdala
and only then to the neocortex. The emotional brain thus gets the information first, and if
conditions and the person's temperament warrant it, the person reacts even before the thinking
part of the brain receives the incoming information (Gabriel, 2000).
Why Do We Care About Emotions in AI
This brings us to the point of our discussion: emotions in AI. Why do we care? Why are 'dumb'
machines, which follow rules, no matter how intricate and vast, and serve our purpose, not
sufficient?
As we noted above, we usually make decisions with emotions and then justify them with logic
(Takahashi, 2013). It is therefore logical that when we attempt to create sentient beings in our
likeness, they should carry over our emotional tendencies as well, for better or for worse.
Moreover, an unemotional system will always follow the path of maximum benefit and least
loss, without any consideration for the human aspect of the decision. This makes it completely
predictable, and possibly counterproductive to the mission at hand.
Being predictable also makes the system more liable to be manipulated, just as with us
humans. If we know that a system is fully focused on optimal results with no emotional input,
we can manipulate its inputs to get the desired outputs with near certainty. After all, it requires
an emotional capacity to judge that we are being tricked and that we are better off not taking a
decision which, on the face of it, looks optimal.
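The manipulation argument can be illustrated with a deliberately simple sketch: an agent that always maximizes a stated payoff can be steered with certainty by anyone who controls the stated payoffs. All option names and numbers here are hypothetical:

```python
def choose(options):
    """A purely 'rational' agent: always pick the option with the highest
    stated payoff, never questioning where the numbers came from."""
    return max(options, key=lambda o: o["payoff"])

honest = [{"name": "safe deal", "payoff": 10},
          {"name": "fair deal", "payoff": 12}]

# An adversary who knows the rule just advertises an inflated payoff;
# the agent follows it with complete predictability.
manipulated = honest + [{"name": "trap", "payoff": 99}]

print(choose(honest)["name"])       # fair deal
print(choose(manipulated)["name"])  # trap
```

A human negotiator might feel suspicion at an offer that looks too good to be true; this agent has no such capacity, which is precisely the point made above.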
As an example of being counterproductive, consider a smart missile which, en route to its
assigned target, decides that for maximum benefit it would be optimal to target an area of
dense human population. It might be that the military only wants to perform a show of strength
with minimum human loss to the enemy, which is why the missile was assigned to a sparsely
populated hillside. After all, without emotions a man is just a statistic to include in the kill
count, and such demonization or dehumanization (removing the emotional part) of the enemy is
a standard technique in war (Dower, 2009).
Another advantage of emotional capability is a better user experience, which would be
essential for the integration of future robots into human society (Samsonovich, 2012). If an
artificial intelligence is able to judge its user's state of mind and alter its responses accordingly,
it will be more readily accepted, and the user experience will be less jarring and more pleasant.
For
example, Clippit, the paper-clip helper in older versions of Microsoft Word, is intelligent at
observing our actions, predicting what we want to do and offering guidance, but it has zero
emotional skill. No wonder Clippit was almost universally hated. Even a dog has more emotional
skill than Clippit: when you yell at your dog to get down from the sofa, it does so with drooped
ears and submissive body language, knowing something is wrong, rather than doing Clippit's
happy dance at having interrupted your workflow (Picard, 2004).
One specific example is emotionally capable machines that help the elderly, the disabled or
people with special needs. For these people, an AI that reacts to them and is empathetic
towards them will be a welcome change.
We note here that our attachment to an artificial intelligence's reactions or cold calculations
does not make it more or less sentient, intelligent or alive. We also get attached to inanimate
things like a valuable watch, a precious porcelain figure or an everyday item with sentimental
value (Lungarella, 2007). For our discussion, we require the artificial intelligence to actually feel
the happiness, the sadness, the anger and all the other emotions.
Thus, if we would like to have artificial intelligence that almost equals us, to which we can
hand over higher responsibilities, and which we want to trust and want to please, we require
emotions in it.
Issues in Realizing Emotions in AI
For half a century, artificial intelligence researchers have focused on giving machines linguistic
and mathematical-logical reasoning abilities, modeled after the classic linguistic and
mathematical-logical intelligences (Picard, 2004).
Since its beginning, the cognitive revolution was guided by a metaphor: the mind is like a
computer, and humans are a set of software programs running on three pounds of neural
hardware - our brain. Cognitive psychologists were interested in this software. The computer
metaphor helped stimulate some crucial scientific breakthroughs: it led to the birth of artificial
intelligence and helped make our inner life a subject suitable for science. But the metaphor was
misleading in at least one crucial respect: computers don't have feelings. Feelings did not fit
into the preferred language of thought, and because our emotions were not reducible to bits of
information or logical structures, cognitive psychologists diminished their importance
("Emotion | AITopics", 2016). Thus we lag behind as far as emotions in AI are concerned. But
the focus is now shifting to making machines react to the emotions of their users and actually
feel emotions themselves.
As we have seen in our discussion, people all over the world are working to bring artificial
intelligence up to the level of human experience in terms of emotions. However, we see some
issues here - issues which are not primarily computational roadblocks, but fundamental
differences between a man and a machine. There are some things which an artificial
intelligence is simply unable to experience.
In the movie The Matrix, Agent Smith, a sentient software program designed to guard the
sanctity of the complete system, says to a human who is plugged into the software (emphasis
mine):
"Did you know that the first Matrix was designed to be a perfect human world? Where none
suffered, where everyone would be happy. It was a disaster. No one would accept the program.
Entire crops were lost.
Some believed we lacked the programming language to describe your perfect world. But I
believe that, as a species, human beings define their reality through suffering and misery.
The perfect world was a dream that your primitive cerebrum kept trying to wake up from.
Which is why the Matrix was redesigned to this: the peak of your civilization."
("Agent Smith (Character)", 2016)
We can see that some AI will be able to feel certain emotions, like joy, satisfaction and
contentment on finding a solution to a problem. Other emotions, like disappointment, sadness,
surprise, fear, anger, resentment, friendship and appreciation for beauty, art, values and
morals, would also be truly felt by the AI.
However, other emotions will not be wholly or faithfully experienced by an AI, even one with a
sensing robotic body, beyond mere implanted simulation: hunger, thirst, drunkenness,
gastronomic enjoyment, the various feelings of sickness (such as nausea, indigestion, motion
sickness and sea sickness), sexual love, attachment, jealousy, maternal or paternal instincts
towards one's own offspring, fatigue, sleepiness, irritability, and dreams and their associated
creativity.
Why? With all the hardware, software and human intelligence at our disposal, we are unable
to make AI feel the complete range of emotions as we do. The reason is not the programming
language, as in The Matrix, but the non-biological nature of machines and software. They may
be made to express these emotions by implanting them, but unlike the ones discussed before,
they may not be true feelings.
We do see a solution, one which may not be feasible in the near future, but at least we know
what is preventing us and how to overcome it, though the solution changes the core ideology of
the field. Since so many of our emotions are born in our bodily experiences, the only way to
create these feelings organically for an AI would be to reproduce a biological body for the AI
machine, one which will grow, age and mature. But then we are no longer talking about AI, but
about genetically engineering a new living being ("Could a machine feel human-like
emotions?", 2016).
Future
The future of AI is promising. We have not forgotten the AI Winters that fell on the AI
ecosystem, but we have reason to look up to the sky and hold each other's hands as we use the
increasing power of hardware, software and the best minds of humanity to keep attempting to
create someone in our likeness.
The inclusion of AI in our society and our lives will not be abrupt but gradual - gradual enough
to go unnoticed, until we are already dependent on AI handling that part of our daily lives. For
example, Google's secondary email app Inbox (available on the web, Android and iOS) uses AI
to sort incoming email into bundles, and notifies the user explicitly only when it calculates that
an email is important enough to disturb them.
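The bundling idea can be sketched in a few lines. The bundle names, keyword lists and importance rule below are invented for illustration; Inbox's real classifier was far more sophisticated than keyword matching:

```python
# Hypothetical bundles and keywords; not Inbox's actual categories.
BUNDLES = {
    "Travel": {"flight", "itinerary", "hotel"},
    "Finance": {"invoice", "payment", "statement"},
    "Promos": {"sale", "discount", "offer"},
}
IMPORTANT = {"urgent", "deadline", "boss"}

def bundle_and_flag(subject):
    """Assign a subject line to the bundle sharing the most keywords with
    it, and decide whether the user should be notified explicitly."""
    words = set(subject.lower().split())
    best = max(BUNDLES, key=lambda b: len(BUNDLES[b] & words))
    if not BUNDLES[best] & words:
        best = "Inbox"  # nothing matched; leave it unbundled
    return best, bool(IMPORTANT & words)

print(bundle_and_flag("Your flight itinerary"))            # ('Travel', False)
print(bundle_and_flag("Urgent invoice payment deadline"))  # ('Finance', True)
```

Even this toy version shows the shape of the decision: most email is silently filed away, and the user is disturbed only when a rule fires, which is exactly the kind of quiet dependence described above.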
On a grander scale, its use will increase in the military, e.g. the USA is using AI to monitor
mobile calls, call durations and location patterns to distinguish terrorists from ordinary citizens
of Pakistan. Once the AI identifies suspects with sufficient confidence, unmanned drones kill the
targets (Robbins, 2016).
I also see an ever-decreasing likelihood of an AI Winter happening again, as the web of AI is
too wide and too deep, and financial support is coming from almost every individual and every
government in the developed world. For example, when a user pays for a music app that
analyses locally stored songs, groups them by beat, tempo and passion, and allows the user to
create playlists on the fly, he is contributing to AI. Governments are too interested in outdoing
their rivals, and, the human losses in conflicts aside, it is all good news for the AI community.
Conclusion
What began, and continues, as man's desire to create someone in his own image is "an ancient
wish to forge the gods" (McCorduck, 2004). It began with stories and myths of beings with
human-like abilities, intelligence and emotions. The foundation was laid further when humans
tried to describe the process of human thinking as the mechanical manipulation of symbols.
This led to progress, but at the cost of emotions in AI. Having achieved considerable success in
logical intelligence, researchers now focus on the exciting challenges and rewards of
emotionally capable AI.

AI is everywhere, whether we feel it or assume it is just another service we use as we go on
with our lives: from the Google search a professor runs to check whether his students are
plagiarizing (which involves the AI capability to search through an index of billions of pages), to
a sleep app on your mobile (which listens to the sounds the user makes throughout the night,
estimates a "light sleep period" and gently wakes the user), to government spying, military
tactics and more.
Now let us take all of this, which is itself progressing, and put emotional capabilities on top of
it - that is the future, and that is where man and machine want to be. We already check our
mobiles 150 times a day ("Americans Check Their Cell Phones 150 Times a Day", 2015), and the
effect on us is similar to interacting with a person: the chemicals released in the brain, the cells
that fire - it is all the same. So let us take it further and bring them closer to us, in all respects,
just as our ancient fathers imagined.
References
McCorduck, P. (2004). Machines who think. Natick, Mass.: A.K. Peters.
McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. (2006). A Proposal for the Dartmouth
Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12.
http://dx.doi.org/10.1609/aimag.v27i4.1904
Takahashi, H. (2013). Molecular neuroimaging of emotional decision-making. Neuroscience
Research, 75(4). http://dx.doi.org/10.1016/j.neures.2013.01.011
Pessoa, L. (2009). Cognition and emotion. Scholarpedia, 4(1), 4567.
http://dx.doi.org/10.4249/scholarpedia.4567
Don't promise when you are happy, don't reply when you are angry, and don't decide when you
are sad. - Tiny Buddha. (2016). Tiny Buddha. Retrieved 16 July 2016, from
http://tinybuddha.com/wisdom-quotes/dont-promise-when-you-are-happy-do-not-reply-
when-you-are-angry-and-dont-decide-when-you-are-sad/
Article: Emotional Intelligence Impacts Decision-Making - Jason Kleid: Changing Lives »
Optimizing Performance. (2016). Jasonkleid.com. Retrieved 16 July 2016, from
http://www.jasonkleid.com/articles/emotional-intelligence-leads-to-good-decision-making/
Gabriel, G. (2000). What is Emotional Intelligence? - Brain Connection. Brain Connection.
Retrieved 16 July 2016, from http://brainconnection.brainhq.com/2000/05/26/what-is-
emotional-intelligence/
Dower, N. (2009). The ethics of war and peace (p. 91). Cambridge, UK: Polity.
Lungarella, M. (2007). 50 years of artificial intelligence (p. 297). Berlin: Springer.
Picard, R. W. (2004). Toward Machines with Emotional Intelligence. In ICINCO (Invited Speakers)
(pp. 2-4).
Samsonovich, A. V. (2012, July). An Approach to Building Emotional Intelligence in Artifacts. In
Workshops at the Twenty-Sixth AAAI Conference on Artificial Intelligence.
Agent Smith (Character). (2016). IMDb. Retrieved 16 July 2016, from
http://www.imdb.com/character/ch0000745/quotes
Could a machine feel human-like emotions? (2016). Life 2.0. Retrieved 16 July 2016, from
http://www.vitamodularis.org/articles/could_a_machine_feel_human-like_emotions.shtml
Robbins, M. (2016). Has a rampaging AI algorithm really killed thousands in Pakistan?. the
Guardian. Retrieved 16 July 2016, from https://www.theguardian.com/science/the-lay-
scientist/2016/feb/18/has-a-rampaging-ai-algorithm-really-killed-thousands-in-pakistan
Emotion | AITopics. (2016). Aitopics.org. Retrieved 16 July 2016, from
http://aitopics.org/topic/emotion
human intelligence | psychology. (2016). Encyclopedia Britannica. Retrieved 16 July 2016, from
https://www.britannica.com/topic/human-intelligence-psychology
Americans Check Their Cell Phones 150 Times a Day. (2015). Text Request | Online Texting &
Brand Engagement Content. Retrieved 16 July 2016, from
https://www.textrequest.com/blog/americans-check-their-cell-phones-150-times-a-day/