AI Ethics and Safety Testing


Assignment
[Student Name Here]
[Institution’s Name Here]
[Professor’s Name Here]
[Date Here]

Table of Contents
Introduction
The ethical dilemma
Ethical response based on the consequences
Application of principles (Consultant duties)
Response of a caring person
Relevant ACS codes of professional conduct
Defence for the AI consultant
Conclusion (summary)
References
Introduction
Technology has long pointed towards automation, increasingly through virtual systems that reduce or replace the need for human participation. Artificial intelligence is the technology driving this automation, through which systems gradually become independent of human control. While this outcome is favourable for the efficiency of daily operations, it raises serious ethical concerns. For one, most operations conducted today, irrespective of the field, require human judgement to protect people's safety. Furthermore, daily activities call on qualities such as empathy and self-awareness, attributes that programmable systems (AIs) do not possess (Bostrom & Yudkowsky, 2011). A similar problem is exhibited in this paper, where a company in the automotive industry faces a serious ethical dilemma over deploying an insufficiently tested AI-driven vehicle. This short report analyses the case study from an ethical perspective.
The ethical dilemma
The case study concerns an electric car manufacturer that has spent the last few years developing a self-driving car. The company believes it has collected enough data to support the deployment process. However, John, an AI consultant with the company, feels that more tests are needed for the AI system to handle certain scenarios, particularly those involving accidents. In his view, the machine learning process requires more time to perfect the vehicle's responses, especially in life-and-death situations, an eventuality that any vehicle on the road is likely to face. The company opposes these sentiments because it risks losing the market to competitors who may have developed a similar vehicle using the same technology. It therefore proposes that the extra tests suggested by John be conducted after the car is launched. John must thus decide whether to sign off on the project and risk the lives of the users or stand
up against the company's wishes. In essence, he faces a battle between the growth of his career and the greater common good.
Ethical response based on the consequences
According to Burton et al. (2017), autonomous cars will in future serve as functioning members of the community, holding responsibilities equal to those of humans. These machines will have to choose between what is good and what is bad, a great undertaking for a system lacking a conscious mind. The developed vehicle will therefore at some point face a life-and-death dilemma which, based on the existing results, requires further research and simulation. If the car is launched prematurely, both the company and the customers may face serious consequences (Burton et al., 2017).
For the company, and more so for the AI consultant (John), the professional code of conduct would be on the line, since he would have signed off on a project lacking the necessary accreditation. Moreover, his reputation as an AI consultant would be lost, as he would bear the blame for any resulting incidents. The customers of the newly developed vehicles would face the gravest consequences, having entrusted their lives to a substandard system. In the event of an accident they could lose their lives, an outcome that would expose the consultant to further consequences, including possible criminal charges for endangering people's lives (Bostrom, 2011).
Application of principles (Consultant duties)
Two major ethical issues are outlined in this case study: first, misleading leadership on the part of the company's management; second, the dilemma of choosing between the common good of the people and the growth of the consultant's career. John, the AI consultant, holds both a professional and a moral obligation to ensure that the outcomes of the project are favourable to the users. In fact, his actions, whether intentional or not, are considered ethical if they promote the greater good of the people (the utility principle). Therefore, at a fundamental
level, he must protect the end users, as they stand to lose the most from the overall outcome (Mill, 2012).
In addition, John must conduct himself in accordance with the code of conduct, which requires him to apply his professional skills accurately and diligently. In this instance, that means giving a truthful opinion on the likely outcomes of the project. He must stand by the existing results, which show that deploying an insufficiently tested system onto the roads would risk the lives of the motorists involved. In all, his conscience should propel him to preserve the greater good rather than his career (Mill, 2012).
Response of a caring person
In general, a caring person is associated with heartfelt actions that promote the greater good of the people regardless of the circumstances. Furthermore, caring people are associated with empathy, the capacity to understand situations from other people's perspectives (Cottingham, 2010). Therefore, if faced with the ethical dilemma at hand, a caring person's actions would be directed towards protecting the end user. For one, they would stand and fight for the extra tests needed by the vehicle's AI system.
Furthermore, they would protest against the company's recommendation by outlining the risks involved, regardless of the opposition they faced. If this approach failed, a caring person would seek external help to try to stop the process, as the collective consequences greatly outweigh the individual ones (losing a job and a career). In essence, the situation at hand could push a dedicated professional to become a whistleblower on the internal proceedings of the organization and its products.
Relevant ACS codes of professional conduct
The elements outlined in the ACS Code of Professional Conduct are a set of guidelines for ICT professionals, who in their duties must honour and respect the interests of end users. Furthermore, the code guides its members in responding to ethical
dilemmas, as it stipulates the actions that should be taken in a professional environment. With respect to John's situation, he as a professional consultant should offer the best possible solution to the users. However, this solution should be grounded in the values outlined by the ACS, as they seek to protect the common good.
a. Honesty – John should be true to his profession and to the customers who indirectly trust his judgement and the decisions he makes. This value calls on him to be an honourable professional whose information, skills and conduct are grounded in honesty.
b. Public interest – the code stipulates that a professional must place the common good first and consider all the people affected by their decisions. Here, the end users may lose their lives, while the company itself may be exposed to criminal litigation.
c. Competence – a professional should carry out his or her duties competently within the mandate given by the stakeholders. The customer is the most important stakeholder in this case.
d. Professionalism – a value that requires experts to uphold the code of conduct by aspiring to be better professionals. This requirement starts with protecting the greater good of the people (ACS, 2014).
Defence for the AI consultant
As an AI consultant, John cannot perform the extra tests on his own; he requires additional resources to execute them. These resources include testing material and equipment, which would add further expense to the project. The tests would also require additional time, a resource the company currently lacks because its competitors are in the process of launching their own products. From the company's standpoint, then, John may also be considering the greater good of the organization and its employees, who could lose their positions if the project fails. The project could fail if the company loses its market
share to its competitors and thus lacks the funds needed to recover the resources invested in the development process (Wah, 2008).
In addition, consider the individual himself, who as a consultant must honour his contractual obligations to the hiring company. If he signed off on the project, he would be fulfilling his role as requested by the employer, a stakeholder whom, under his contract, he is expected to serve. Moreover, he also seeks to advance his career and thus improve his life and that of his family.
Conclusion (summary)
Despite the position held by the consultant, his focus should be on the greater good of the end user, who lacks the technical ability to understand the overall operation of the developed system. The end users (customers) inherently place their trust in the company, which must therefore conduct its duties in an ethical manner. The AI consultant should side with the common good and stand by his opinion that extra safety tests are needed. The company may lose market share, but that outcome is far less consequential than the loss of life. Furthermore, the consultant faces a personal battle between self-preservation (growing his career) and his professional conduct. In this dilemma, he should give precedence to professional conduct by following the codes of ethics, which in this case align with the common good.

References
ACS. (2014). ACS Code of Professional Conduct. Professional Standards Board, Australian Computer Society. Retrieved 22 August 2017, from https://www.acs.org.au/content/dam/acs/acs-documents/ACS%20Code-of-Professional-Conduct_v2.1.pdf
Bostrom, N. (2011). Ethical issues in advanced artificial intelligence. Retrieved 22 August 2017, from http://www.nickbostrom.com/ethics/ai.pdf
Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. Cambridge Handbook of Artificial Intelligence. Retrieved 24 April 2017, from http://www.nickbostrom.com/ethics/artificial-intelligence.pdf
Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei, N., & Walsh, T. (2017). Ethical considerations in artificial intelligence courses. Retrieved 24 April 2017, from https://arxiv.org/pdf/1701.07769.pdf
Cottingham, J. (2010). Empathy and ethics. Abstracta (special issue). Retrieved 22 August 2017.
Mill. (2012). The principle of utility determines the rightness of acts (or rules of action) by their effect on the total happiness. Retrieved 22 August 2017, from http://faculty.philosophy.umd.edu/PGreenspan/Crs/MILL.pdf
Wah, B. (2008). Ethics and professional responsibility in computing. Wiley Encyclopaedia of Computer Science. Retrieved 22 August 2017, from https://www.ideals.illinois.edu/bitstream/handle/2142/12247/ecse909.pdf?sequence=2