Robot Consciousness and Mental Illness

Added on 2020/03/23

Running Head: INFORMATION TECHNOLOGY ETHICS
CAN ARTIFICIAL INTELLIGENCES SUFFER FROM MENTAL ILLNESS?
Name of the Student
Name of the University
Robotics and artificial intelligence offer agents the prospect of sentience, a capacity for consciousness, and rationality. If there is a chance of these agents having minds, then there is also a potential chance of those minds malfunctioning; in other words, of robots and artificial intelligences suffering from mental illness. The presence of AI psychopathology can be recognized through the philosophical study of mental illness, which offers insight into the mental disorders of robots and humans alike, and provides a framework for examining psychiatric disease in both biological and artificial intelligence (Ashrafian, Darzi & Athanasiou, 2015). The possibility of an artificial intelligence suffering from mental illness makes it vital to consider that the AI may have achieved some mental capacities of consciousness and rationality, which subsequently become dysfunctional at times. For such conditions in AI, it would be preferable to derive deeper insights into mental health and into the mechanisms that help prevent these mental malfunctions.
The idea of artificial intelligence has prompted debate over whether robots have agency. The subsequent questions of whether artificial intelligences (AIs) will show awareness, consciousness, and insight all remain open. In the hypothetical case presented above, the robots could be considered to suffer from mental maladies when viewed through a human diagnostic lens; however, (i) could these also be regarded as robot mental illnesses in their own right, or are they simply a human overlay arising from an imitation effect? (ii) Would these robots suffer from these ailments in the same way as people? (iii) Given that the AIs would not have been designed, built, or programmed to suffer from any cognitive dysfunction, would such a finding give insight into the philosophy of mental illness?
From a practical standpoint, the development of technologies that exhibit conscious minds as artificial intelligences continues to make steady progress (Horvitz & Mulligan, 2015). It is vital to avoid circular fallacies such as "consciousness leads to mental illness, and robotic mental dysfunction therefore implies consciousness"; rather, consciousness may be a state that can, unidirectionally, result in a subgroup of its bearers suffering mental dysfunction, and such dysfunction cannot exist without a conscious agent. If we can recognize conscious AIs, then we consequently need to recognize any potential mental illnesses that they develop (Yampolskiy, 2014).
In the case depicted, the robots did not show any material changes in their physical structure or processing on examination, so when considered under skeptical Szaszian theory (Critchley, 2015) they cannot formally be held to suffer from mental illness. This is because there was no material or structural evidence for their mental dysfunction; they cannot be suffering from an underlying disease or pathology, and this is apparently instead their chosen alignment of behavior.
Four ethical theories apply here: deontology, utilitarianism, rights, and virtue.
The deontological class of ethical theories states that individuals should adhere to their obligations and duties when engaging in decision-making where ethics are in play. Clients generally rent an AI that could help in solving their problems. Depending on such sites is hazardous for the public, as it keeps people from growing positively and trying to become successful, tempting them instead toward wrong means such as a rent-a-hacker site. Deontological theory therefore demands taking the right actions, attempting to solve problems within the artificial intelligence itself, and never enabling any outsider to affect others' data, since that is unlawful and unethical.
Utilitarian ethical theories are based on one's ability to predict the consequences of an action. Applying these theories to the scenario, the focus must be kept on outcomes: as indicated in the article, every client of the rent-a-hacker site would face a difficult time if their names were disclosed by the site. Hence, there are no grounds for considering the actions of the rent-a-hacker site right, and its clients performed the same wrong, since an AI that misuses another person's account for one client could equally harm that client's own interests at another customer's request. The organization may profit, but its clients should have understood this before contacting it to perform unauthorized tasks arising from the mental illness.
In ethical theories based on rights, the rights established by a society are protected and given the highest priority. Rights are considered ethically right and valid because a large population endorses them. Applying this theory to the given scenario, actions need to be taken against renting out an artificial intelligence, so that people are not encouraged to pay it to perform tasks that could hurt any customer or employee ethically or physically. Relying on such services even for small issues is not a good thing and will create many problems in society.
The virtue ethical theory judges a person by character rather than by an action that may deviate from his or her normal behavior. Making use of a mentally ill robot or AI is against moral values and ethics. Employing an AI to do unlawful things means that users need to be educated that they become victims of such sites by enabling them to destroy the data of their adversaries and partners, since a rent-a-hacker service could likewise hack the crucial data of its own clients, or even disclose a client's personal details publicly.
When an artificial intelligence or robot develops symptoms of mental illness, three factors can help identify the condition: 1) has the robot been accidentally programmed to have such mental disorientation, and if so, how could this be reversed by correcting the program? 2) If consciousness and free will are present in the robot, is it possible for it to suffer from mental illness de novo, against its original coding? 3) If the robot does suffer from such illness, could this represent an initial transition toward a human-like stage of consciousness?
Ethical theories are used to view the situation from alternative perspectives so that appropriate actions can be taken against the guilty parties. Each of the theories has different methods of judging an action good or bad, and requires proper understanding of whether an action is ethical before it is performed. In the future, AIs that exhibit mental illness should be entitled to the same rights and support that humanity provides for those with psychological conditions, and should be amenable to the suitable treatments that a conscious society can offer them. Moreover, the very diagnosis of mental illness in an AI may be a route to recognizing the existence of conscious AIs (at least in the subgroup suffering from mental illness); such dysfunction may also offer insights into the AI mind, working in a similar way to how human mental illness offers insights into the human brain. This AI mental dysfunction may thus be able to offer selected insights into the minds of AIs. Hence, in conclusion, it can be argued that even artificial intelligences and robots can suffer from mental illness.
References
Ashrafian, H., Darzi, A., & Athanasiou, T. (2015). A novel modification of the Turing test for
artificial intelligence and robotics in healthcare. The International Journal of Medical
Robotics and Computer Assisted Surgery, 11(1), 38-43.
Calvo, R. A., Dinakar, K., Picard, R., & Maes, P. (2016, May). Computing in mental health. In
Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in
Computing Systems (pp. 3438-3445). ACM.
Constantinou, A. C., Fenton, N., Marsh, W., & Radlinski, L. (2016). From complex questionnaire
and interviewing data to intelligent Bayesian network models for medical decision
support. Artificial Intelligence in Medicine, 67, 75-93.
Copeland, J. (2015). Artificial intelligence: A philosophical introduction. John Wiley & Sons.
Critchley, H. (2015). The Predictive Brain: Consciousness, Decision and Embodied Action.
Gilbert, P. (2016). Human nature and suffering. Routledge.
Horvitz, E., & Mulligan, D. (2015). Data, privacy, and the greater good. Science, 349(6245),
253-255.
Poo, M. M., Du, J. L., Ip, N. Y., Xiong, Z. Q., Xu, B., & Tan, T. (2016). China Brain Project:
basic neuroscience, brain diseases, and brain-inspired computing. Neuron, 92(3), 591-
596.
Silverman, B. G., Hanrahan, N., Bharathy, G., Gordon, K., & Johnson, D. (2015). A systems
approach to healthcare: agent-based modeling, community mental health, and population
well-being. Artificial Intelligence in Medicine, 63(2), 61-71.
Yampolskiy, R. V. (2014). Utility function security in artificially intelligent agents. Journal of
Experimental & Theoretical Artificial Intelligence, 26(3), 373-389.