Running head: ETHICS AND CODE
Ethics and Code
Name of the Student:
Name of the University:
Author Note:
Ethical question: what if human beings fail to get jobs due to the advent of artificial intelligence and machines?
A) Three virtues relevant to the question:
Self-regarding virtues
Traits such as courage, temperance and prudence, which are regarded as self-regarding virtues, can be eroded when human beings lose their jobs to artificial intelligence. The question engages self-regarding virtue because the displacement of workers threatens the welfare of others (Bostrom and Yudkowsky 2014).
Intellectual and Moral Virtue:
Virtues acquired by human beings are classified as either intellectual or moral, and the two kinds are independent of each other.
Positive and Negative Virtues:
This classification is based on the distinct courses of action that these virtues motivate, resulting in deeds that are either positive or negative.
B) Morality
Artificial intelligence involves the optimization of logistics and research through a series of technologically advanced machines that produce better work with fewer blunders. The morality of adopting artificial intelligence at the cost of human labor is debatable, since it takes jobs away from human workers and disrupts the hierarchy of labor (Pistono and Yampolskiy 2016). In a globalized society, physical work no longer requires human involvement, and human beings are employed only for cognitive activities. Artificial intelligence systems, however, are not in a position to distinguish between positive and negative virtues, and their
actions would be based solely on data interpretation and instructions. They would therefore be unable to apply positive and negative virtues while executing a series of actions (Huber, Weiss and Rauhala 2016). AI would assume a powerful and disruptive role without the ethical restraint that virtues provide. Unless an industry manufactures robots capable of encoding ethical considerations or virtues into their machine learning algorithms, those systems will be unable to ground their decisions and work output in ethics. Many research analysts have suggested that AI could learn from the human practice of weighing right and wrong against one's virtues, but that would amount to further confusion (Russell, Dewey and Tegmark 2015). Using AI in this way is morally wrong because its products would be artificial components, predominantly action-centered and devoid of ethical or virtue considerations. In moral terms, the AI would produce results without taking the surrounding environment and cultural norms into account.
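The gap described above can be made concrete with a small sketch. The following is a minimal, hypothetical illustration, not any established API; the Action structure and its fields are invented for this example. It contrasts a purely optimizing decision rule with one that encodes an ethical consideration as a hard constraint:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_utility: float  # what a purely optimizing system maximizes
    harms_human: bool        # an ethical attribute encoded by the designer

def purely_optimizing_choice(actions):
    # Decisions "solely based on data interpretation and instructions":
    # pick whatever maximizes expected utility, virtues notwithstanding.
    return max(actions, key=lambda a: a.expected_utility)

def ethically_constrained_choice(actions):
    # The alternative the paragraph calls for: ethical considerations
    # encoded as a hard filter on the decision procedure.
    permissible = [a for a in actions if not a.harms_human]
    if not permissible:
        return None  # refuse to act rather than pick a harmful option
    return max(permissible, key=lambda a: a.expected_utility)

candidates = [
    Action("automate the entire workforce", 0.9, harms_human=True),
    Action("augment human workers", 0.7, harms_human=False),
]
print(purely_optimizing_choice(candidates).name)      # automate the entire workforce
print(ethically_constrained_choice(candidates).name)  # augment human workers
```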
C) Immanuel Kant's categorical imperative directs one to act towards others only on maxims that could hold at a universal level (Russell, Dewey and Tegmark 2015). An act must be undertaken because it is admirable in itself, not simply in pursuit of selfish, self-serving goals. In short, the imperative rests on the universality of moral laws that apply to every person. In the moral quest to perfect oneself, motives should never be based on selfish interests; they should instead be inclined towards rational considerations.
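As a schematic illustration only, the universalizability test described above can be rendered as a checklist; the maxim labels and predicates below are placeholders, since Kant's test is of course not mechanically decidable:

```python
def passes_categorical_imperative(maxim: str,
                                  coherent_if_universalized: bool,
                                  respects_persons_as_ends: bool) -> bool:
    # Formula of Universal Law: a maxim fails if it self-destructs when
    # everyone adopts it (universal false promising destroys promising).
    if not coherent_if_universalized:
        return False
    # Formula of Humanity: persons must be treated as ends in themselves,
    # never merely as means.
    return respects_persons_as_ends

print(passes_categorical_imperative("lie to obtain a loan", False, False))  # False
print(passes_categorical_imperative("help others in need", True, True))     # True
```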
i. One specific rule for working with AI is that it must be designed according to Asimov's Laws of Robotics, under which no human being may be harmed by a robot and robots must obey human orders, so as to preserve positive virtues (a toy sketch of such laws encoded as ordered constraints appears after this list).
ii. One general rule for working with AI is to prototype it in the manner of a real person rather than as an algorithm or a dataset. This rule involves making use of
participants' raw data. Automated systems and humanoid prototypes are being created with the aim of bridging the gap between machines and human beings.
III. Yes, it is self-contradictory, because AI was invented primarily to take the position of a human being, replacing emotions and virtues and producing output based solely on action and rationale. The rule would therefore result in a severely unethical exploitation of human capacities, negating human capabilities while enhancing the robot's usefulness at the cost of human labor (Mittelstadt et al. 2016).
IV. Yes, it is clearly at odds with Kant's practical imperative, which states that humanity should never be treated merely as a means and that moral and practical action should triumph over mere cognition. The rule makes arbitrary use of human will and is therefore at odds with the imperative.
V. Yes. The original purpose of AI is to achieve human-like reasoning at the cost of ethical consideration, for the sake of decision making and execution faster than human beings can manage. The rule is therefore contradictory, since it amounts to an amalgamation of human elements with improved intelligence intended to replace the human labor force. Simulating human emotions and ethical considerations in such systems would be unfair.
VI. No, because according to Kant's theory of ethics the rightness or wrongness of an action is independent of the outcome it would fetch (Kant 2017). Actions should be judged not by the ends they achieve but by whether or not they fulfill our duty. The moral duties that fall under the categorical imperative revolve around a universal maxim on which everybody should act.
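As referenced under rule (i), the following is a toy sketch, under invented predicates and an invented Order structure, of how Asimov-style laws can be encoded as ordered hard constraints that are checked before any order is executed:

```python
from dataclasses import dataclass

@dataclass
class Order:
    description: str
    would_harm_human: bool
    endangers_robot: bool

def may_execute(order: Order) -> bool:
    # First Law (highest priority): a robot may not injure a human being.
    if order.would_harm_human:
        return False
    # Second Law: a robot must obey human orders unless that conflicts
    # with the First Law; reaching this point means it does not.
    # Third Law: self-preservation yields to obedience, so an order that
    # merely endangers the robot is still executed.
    return True

print(may_execute(Order("move the crate", False, False)))        # True
print(may_execute(Order("push the worker aside", True, False)))  # False
print(may_execute(Order("enter the fire zone", False, True)))    # True
```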
From the above discussion it can be concluded that implementing AI fails when judged against the backdrop of the categorical imperative and Kantian ethics, which place human beings at the center of the framework of values and norms and set human dignity on a pedestal above other moral values. Kant is not inclined towards weapons or shrewd means for the purpose of achieving greater value. The implementation of AI would clearly contradict Kant's insistence that human beings be treated as ends, not as a medium of exploitation for achieving personal gain and desires. The entire AI revolution is morally lacking; it strives only to achieve the best output. Furthermore, AI is often deployed through biased algorithms, frequently along gender lines, accentuating already existing sexist worldviews. There are also numerous unethical implications of employing AI, for instance thousands of human beings losing their daily bread and the perpetuation of the wealth inequality that these machines generate. The natural end result will be revenues draining away to a few people and a widening of the gap between rich and poor. These are some of the ethical reasons why implementing AI as a substitute for the human workforce is the worst choice.
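The biased-algorithm concern above can be illustrated with a minimal, fabricated audit; the applicant data and the rule-based "model" below are invented for illustration, and show how a selection-rate (demographic parity) comparison exposes gender-skewed outcomes:

```python
# Fabricated sample: tenure happens to correlate with gender here, so a
# tenure-based screen produces gender-skewed outcomes.
applicants = [
    {"gender": "F", "years_experience": 3},
    {"gender": "F", "years_experience": 4},
    {"gender": "M", "years_experience": 6},
    {"gender": "M", "years_experience": 7},
]

def screening_model(applicant):
    # A deliberately biased toy rule standing in for an opaque algorithm.
    return applicant["years_experience"] >= 5

def selection_rate(group):
    members = [a for a in applicants if a["gender"] == group]
    selected = [a for a in members if screening_model(a)]
    return len(selected) / len(members)

# Demographic parity compares selection rates across groups; a large gap
# is the algorithmic bias the conclusion warns about.
print(selection_rate("F"), selection_rate("M"))  # 0.0 1.0
```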
Reference List:
Bostrom, N. and Yudkowsky, E., 2014. The ethics of artificial intelligence. The
Cambridge handbook of artificial intelligence, pp.316-334.
Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei, N. and Walsh, T., 2017. Ethical
considerations in artificial intelligence courses. arXiv preprint arXiv:1701.07769.
Guyer, P., 2014. Kant. Routledge.
Huber, A., Weiss, A. and Rauhala, M., 2016, March. The Ethical Risk of Attachment:
How to Identify, Investigate and Predict Potential Ethical Risks in the Development of
Social Companion Robots. In The Eleventh ACM/IEEE International Conference on
Human Robot Interaction (pp. 367-374). IEEE Press.
Kant, I., 2017. Kant: The metaphysics of morals. Cambridge University Press.
Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L., 2016. The ethics of
algorithms: Mapping the debate. Big Data & Society, 3(2), p.2053951716679679.
Pistono, F. and Yampolskiy, R.V., 2016. Unethical research: How to create a malevolent
artificial intelligence. arXiv preprint arXiv:1605.02817.
Russell, S., Dewey, D. and Tegmark, M., 2015. Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), pp.105-114.