Ethics in Artificial Intelligence Systems


Running head: ETHICS IN ARTIFICIAL INTELLIGENCE SYSTEMS
Ethics in artificial intelligence systems
Name:
Institution:

Introduction
The widespread corporate adoption of artificial intelligence (AI) illustrates how crucial the technology has become. Its dependability makes it an effective tool across numerous industries worldwide, owing to its advantages over routine human work. AI offers a broad range of benefits and a narrow range of shortcomings, many of which can be overcome as the technology advances and improves. The technology has been received positively, making it a powerful tool within organisations; yet with little legal regulation over its use, it raises societal and ethical debate about how that use should be controlled. AI refers to a family of long-running research projects aimed at creating information technologies that exhibit problem-solving ability and human-level intelligence. Alan Turing is well known for outlining the field that would later be called artificial intelligence in his influential 1950 paper "Computing Machinery and Intelligence." This work examines the question of ethical governance for artificial intelligence systems (AIS) (Woodhouse & Sarewitz, 2007). The paper outlines a roadmap that connects ethics, regulatory standards, responsible research and innovation, and public engagement as concepts to guide ethical governance in AI, and it argues that ethical governance is needed to build public trust in AI.
Artificial intelligence can produce machines that communicate with human beings in clear, active ways, as illustrated by Amazon Alexa, Apple Siri and OK Google, alongside the numerous systems that corporations employ to automate customer service (Stilgoe, Owen & Macnaghten, 2013). These services are still some way from the kinds of unscripted conversation humans have with one another, but such fluency is not necessary for the technologies to have ethical effects worth evaluating. Moreover, there are many other uses of AI techniques: almost all information technologies, including robotics, computer games, malware filtering and data mining, use AI programming approaches. Governments and professional associations are therefore now adopting ethical guidelines and regulations to help shape this vital technology; one prominent example is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (IEEE, 2018).
Information technologies have not been content to stay in the virtual realm of software. They now interact directly with human beings through robotic applications (Hope & Moehler, 2015). Robotics is still an advancing technology, yet it has already produced applications with significant moral effects: medical, military, personal and even sex robotics are just some of the existing applications with ethical implications (Anderson and Anderson 2011).
Warfare has been a critical driving force behind technological improvement. The military takes a significant part in studying novel technologies, and its work has produced considerable progress in other fields. Many prominent inventions, such as the internet, emerged from military-funded research in which weapons development was the driving force. The computer itself was initially developed to break enemy codes and compute missile trajectories. With this in mind, the military has been, and will remain, a driving force behind AI (Voegtlin & Scherer, 2017).
Many important contributions exist in the developing fields of robot ethics and machine morality. For instance, in Wallach and Allen's book Moral Machines: Teaching Robots Right from Wrong, the authors present concepts for the design and programming of machines that can effectively reason about ethical questions, together with examples from robotics where experts are attempting to build devices that act in an ethically sound manner. The introduction of fully and semi-autonomous machines (machines deciding with little or no human intervention) into everyday life will not be simple. To date, Wallach (2011) has contributed to the debate on how such ideas can help shape public policy on the application and regulation of robotics.
Military technology has proved to be one of the most morally challenging robotic applications. Currently, such machines are largely remotely controlled or semi-autonomous, but over time they are likely to become more self-directed as modern combat demands (Vasen, 2017). In the first decades of twenty-first-century conflict, robotic weaponry has been involved in many killings of both noncombatants and combatants, which is itself a moral matter. Several ethicists give only cautious approval to automated combat, on the condition that the technology is applied to improve ethical conduct in war, for example by minimising military and civilian casualties or by helping warfighters follow legal and ethical codes of conduct in combat (Sarewitz, 2015).
Over the past few years, the private sector has made considerable investments in the development of artificial intelligence with autonomous capabilities that can interact with humans to carry out tasks in the household, leisure, work, social care, healthcare and education. This progress potentially offers enormous societal gains: such systems can reduce human effort, save time and lower costs. They can also improve wellbeing through the delivery of dependable care for the elderly (Blok, Hoffmans & Wubben, 2015).
Societal attitudes towards novel intelligent machines are typically optimistic. However, concerns have been raised about the reckless application and possibly detrimental effects of artificial intelligence. These concerns are generally embedded in a social rhetoric that tends to frame AI ubiquity as inevitable. At the same time, they may be well founded and genuine, as many fears concern the impact on occupations and mass redundancy. We understand that there is no single means of generating trust, but we also know from experience that a technology is trusted if it brings benefits and is safe and properly regulated (Marris, 2015). Creating trust in AI will require a range of techniques, from those at the level of individual systems and application domains to those at an institutional level. This paper argues that one essential, but not by itself sufficient, element of building trust in AI is ethical governance, which can be defined as the set of routines, processes, values and cultures designed to ensure the highest standards of behaviour (Sarewitz, 2015).
There is broad agreement across existing sets of principles, notably that AI should be free of deception and bias, should do no harm, and should respect human rights and freedoms, including privacy and dignity, while supporting wellbeing. Finally, dependability and transparency ensure that responsibility and accountability remain with human operators. Ethical codes and standards both fit within the broader framework of responsible research and innovation (RI) (Burget, Bardone & Pedaste, 2017). RI initiatives across academia, policy and regulation developed a decade ago, aiming to identify and address the risks and uncertainties linked with new areas of science. RI thus proposes a new routine for research governance: its purpose is to ensure that research and innovation serve the societal interest by combining techniques for more open decision-making with the broad participation of stakeholders who may be directly affected by the introduction of new technologies (de Saille & Medvecky, 2016). Responsible innovation underpins and informs ethics and standards; importantly, ethical governance is an essential pillar of RI, and RI also connects with ethics through, for example, public engagement (Blok, Hoffmans & Wubben, 2015). Another critical element of RI is the ability to transparently and systematically evaluate and compare system capabilities, frequently against standardised benchmarks.
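The kind of transparent, benchmark-based comparison described above can be illustrated with a small sketch. The two "systems", the benchmark cases and the expected answers below are all invented for illustration; the point is that keeping per-case results, rather than reporting only an aggregate score, is what makes the comparison auditable.

```python
# Hypothetical sketch: comparing two systems on a shared, standardised
# benchmark. All systems and cases are invented for illustration.

benchmark = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
    {"input": "3 * 7", "expected": "21"},
]

def system_a(query):
    # A toy "AI system": a fixed lookup table standing in for a real model.
    answers = {"2 + 2": "4", "capital of France": "Paris", "3 * 7": "20"}
    return answers.get(query, "unknown")

def system_b(query):
    answers = {"2 + 2": "4", "capital of France": "Lyon", "3 * 7": "21"}
    return answers.get(query, "unknown")

def evaluate(system, cases):
    """Score a system on every benchmark case, keeping per-case results
    so the evaluation can be audited, not just a bare aggregate number."""
    results = [system(c["input"]) == c["expected"] for c in cases]
    return {"per_case": results, "accuracy": sum(results) / len(results)}

for name, system in [("A", system_a), ("B", system_b)]:
    report = evaluate(system, benchmark)
    print(name, report["accuracy"], report["per_case"])
```

Note that both toy systems reach the same aggregate accuracy while failing on different cases, which is precisely why transparent per-case reporting matters for comparing capabilities.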
It is well known that there are public fears about AI. Many of these worries are certainly misplaced, triggered by media hype, but several are grounded in genuine apprehension over how the technology may affect, for instance, privacy or jobs (Leach & Scoones, 2006). It is apparent that public trust in AI cannot simply be presumed; to presume it could provoke the kind of public backlash against new technology witnessed in the 1990s with genetically modified foods in Europe (Helliwell, Hartley, Pearce & O'Neill, 2017). Proactive measures to build public trust are therefore required, for instance the formation of a machine intelligence commission that would lead public debate, identify risks and make appropriate recommendations.
Work by the British Standards Institution technical subcommittee on robotic devices led to the publication of BS 8611, which gives guidance on the identification of potential ethical harms and offers guidelines on safe design, protective measures, and information for the design and application of robots (Bringsjord et al., 2011). BS 8611 articulates a wide range of ethical hazards and their mitigation, spanning commercial, environmental, application and societal hazards, and offers designers ways to evaluate and reduce the risks associated with them. The societal hazards include deception, unemployment, loss of trust, loss of privacy and confidentiality, and addiction (Marris, 2015).
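The hazard-identification-and-mitigation process described above can be pictured as a simple ethical risk register. The sketch below is hypothetical: the specific hazards, the 1-to-5 severity scale and the mitigations are illustrative inventions, not taken from the BS 8611 text.

```python
# Hypothetical sketch of a BS 8611-style ethical risk register: each entry
# records a hazard, its category, an assumed severity, and a mitigation.
from dataclasses import dataclass

@dataclass
class EthicalRisk:
    hazard: str
    category: str   # e.g. "societal", "commercial", "environmental", "application"
    severity: int   # 1 (low) .. 5 (high); an assumed scale for illustration
    mitigation: str

register = [
    EthicalRisk("deception of vulnerable users", "societal", 4,
                "make the robot's machine nature explicit to users"),
    EthicalRisk("loss of privacy", "societal", 5,
                "minimise data collection; keep data on the device"),
    EthicalRisk("user over-reliance or addiction", "societal", 3,
                "design for periodic human review of usage"),
]

def highest_priority(risks, category):
    """Return the risks in a category, most severe first, so designers
    address the largest ethical hazards before the smaller ones."""
    in_category = [r for r in risks if r.category == category]
    return sorted(in_category, key=lambda r: r.severity, reverse=True)

for risk in highest_priority(register, "societal"):
    print(risk.severity, risk.hazard, "->", risk.mitigation)
```

The design choice here mirrors the standard's intent: hazards are made explicit, categorised and paired with mitigations, so that risk reduction is a documented, reviewable step of the design process rather than an afterthought.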
Initial efforts in regulation and ethics concentrated on robotics; only more recently has the focus shifted towards AI ethics. Because robots are physical artefacts, they are more easily delineated, and therefore regulated, than cloud-based AIs. The already widespread use of AI strongly suggests that greater resolve should be directed towards deliberating on its ethical and societal impact, including its regulation and governance (Bringsjord et al., 2011).
This paper is chiefly concerned with automation and AI. Inevitably, however, near-future autonomous systems, most notably driverless vehicles, are by default ethical agents. It is apparent that assistive technologies and driverless vehicles make choices with ethical consequences, even when those technologies have not been designed to explicitly embed ethical values and to regulate their choices with respect to those values. Arguably, all autonomous systems implicitly reflect the interests of their creators and designers or, even more worryingly, of their training data sets (Marris, 2015).
In general, a technology is trusted if it brings benefits while also being safe and properly regulated, and if, when accidents occur, they are subject to robust investigation. For example, one of the reasons we trust aircraft is that we know they belong to a highly regulated industry with an outstanding safety record. Commercial aircraft safety rests not merely on good design but on strict safety certification processes and, when things do go wrong, on robust, publicly visible processes of air accident investigation. For this reason it has been proposed that some classes of robot ought to be regulated by a body analogous to the Civil Aviation Authority (CAA) (Anderson and Anderson 2011). It is also worth noting that air accident investigation is a social routine of restoration that needs to be perceived as robust and impartial, and which acts as a kind of closure so that the aviation trade does not acquire a permanent stain on the public consciousness. A similar role is anticipated for the investigation of robot accidents (Bringsjord et al., 2011).
Conclusion
In this work it has been argued that, while there is no shortage of comprehensive ethical principles for AI, there is little evidence that those guidelines have yet been translated into practice. Ethical practice begins with individuals and with developing professional codes of moral conduct. People, however, need to be supported by a robust institutional framework and principled leadership. Regulation requires regulatory bodies, combined with public engagement, to provide transparency and confidence in the rigour of organisational processes. All of this supports the process of building public trust.
The debate about the social impact of creating intelligent machines has involved many people and organisations over the past few years. Since the original science fiction narratives, predictions and speculations have become reality, and there is no reason to assume that AI and robots will not proliferate. Human beings are currently living through a golden era of technology with no limit in view. Although the ethical and moral implications of AI are obvious, one can argue that many people and nations live in poverty without work, and that there is therefore no reason to create mechanical labourers that can reason. On the other hand, others contend that society cannot advance without the assistance of machines that can think; and as society grows more complex, people will increasingly place their trust in such machines, much as they trust institutions such as schools, businesses and governments. At a broader level, opinions also vary about what machines should look like and how intelligent we should make them. There are no perfect answers to these questions yet: we do not even agree on what precisely constitutes intelligence, and already we are producing artificial versions of it. When it comes to machines, one of the most frequently asked questions is whether there are moral and ethical obligations in creating robot workforces, and again there is no clear answer. On investigation, it is true that robot labour takes jobs from the human workforce, but those jobs are usually monotonous, tedious and often dangerous for human employees.
References
Blok, V., Hoffmans, L., & Wubben, E. F. (2015). Stakeholder engagement for responsible
innovation in the private sector: Critical issues and management practices. Journal on
Chain and Network Science, 15(2), 147-164. [Online]. Retrieved 25 March, 2019 from:
https://www.wageningenacademic.com/doi/abs/10.3920/JCNS2015.x003
Bringsjord, S., Taylor, J., Wojtowicz, R., Arkoudas, K., van Heuvlen, B., Anderson, M., &
Anderson, S. (2011). Piagetian roboethics via category theory: Moving beyond mere
formal operations to engineer robots whose decisions are guaranteed to be ethically
correct. Machine ethics, 361-374. [Online]. Retrieved 25 March, 2019 from:
https://books.google.com/books?hl=en&lr=&id=N4IF2p4w7uwC&oi=fnd&pg=PA361&dq=Anderson,+M.+and+S.+L.+Anderson+(eds.),+2011,+Machine+Ethics,+Cambridge:+Cambridge+University+Press.&ots=5XZTsni0Kr&sig=JsbWmhLFZXbVm-65mfdh1sqdBtM
Burget, M., Bardone, E., & Pedaste, M. (2017). Definitions and conceptual dimensions of
responsible research and innovation: A literature review. Science and Engineering
Ethics, 23(1), 1-19. [Online]. Retrieved 25 March, 2019 from:
https://link.springer.com/article/10.1007/s11948-016-9782-1
de Saille, S., & Medvecky, F. (2016). Innovation for a steady state: A case for responsible
stagnation. Economy and Society, 45(1), 1-23. [Online]. Retrieved 25 March, 2019 from:
https://www.tandfonline.com/doi/abs/10.1080/03085147.2016.1143727
Helliwell, R., Hartley, S., Pearce, W., & O'Neill, L. (2017). Why are NGOs sceptical of genome
editing? NGOs' opposition to agricultural biotechnologies is rooted in scepticism about
the framing of problems and solutions, rather than just emotion and dogma. EMBO
Reports, 18(12), 2090-2093. [Online]. Retrieved 25 March, 2019 from:
http://embor.embopress.org/content/18/12/2090.abstract
Hope, A., & Moehler, R. (2015). Responsible business model innovation: Reconceptualising the
role of business in society. [Online]. Retrieved 25 March, 2019 from:
http://nrl.northumbria.ac.uk/22974/
IEEE. (2018). Ethically aligned design: The IEEE Global Initiative on Ethics of Autonomous
and Intelligent Systems. IEEE. [Online]. Retrieved 25 March, 2019 from:
https://ethicsinaction.ieee.org/
Leach, M., & Scoones, I. (2006). The slow race: Making science and technology work for the
poor. London: Demos. [Online]. Retrieved 25 March, 2019 from:
https://opendocs.ids.ac.uk/opendocs/bitstream/handle/123456789/12419/leachetal_2006_slow.pdf?sequence=1
Marris, C. (2015). The construction of imaginaries of the public as a threat to synthetic
biology. Science as Culture, 24(1), 83-98. [Online]. Retrieved 25 March, 2019 from:
https://www.tandfonline.com/doi/full/10.1080/09505431.2014.986320
Sarewitz, D. (2015). CRISPR: Science can't solve it. Nature News, 522(7557), 413. [Online].
Retrieved 25 March, 2019 from: https://www.nature.com/articles/522413a

Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible
innovation. Research Policy, 42(9), 1568-1580. [Online]. Retrieved 25 March, 2019
from: https://www.sciencedirect.com/science/article/pii/S0048733313000930
Vasen, F. (2017). Responsible innovation in developing countries: an enlarged agenda.
In Responsible Innovation 3 (pp. 93-109). Springer, Cham. [Online]. Retrieved 25 March,
2019 from: https://link.springer.com/chapter/10.1007/978-3-319-64834-7_6
Voegtlin, C., & Scherer, A. G. (2017). Responsible innovation and the innovation of
responsibility: Governing sustainable development in a globalized world. Journal of
Business Ethics, 143(2), 227-243. [Online]. Retrieved 25 March, 2019 from:
https://link.springer.com/article/10.1007/s10551-015-2769-z
Wallach, W. (2011). From robots to techno sapiens: Ethics, law and public policy in the
development of robotics and neurotechnologies. Law, Innovation and Technology, 3(2),
185-207. [Online]. Retrieved 25 March, 2019 from:
https://www.tandfonline.com/doi/pdf/10.5235/175799611798204888
Woodhouse, E., & Sarewitz, D. (2007). Science policies for reducing societal inequities. Science
and Public Policy, 34(2), 139-150. [Online]. Retrieved 25 March, 2019 from:
https://academic.oup.com/spp/article-abstract/34/2/139/1689094