Google's march to the business of war must be stopped

Added on  2023/06/11


Comparative Business Ethics and Social Responsibility

Contents
Introduction
Summary of the arguments and background
Major ethical issues raised in the article
Ethical decision
Personal Ethical Decision and Moral Philosophy
Conclusion
References
Appendix
Article Title: Google's march to the business of war must be stopped
Name of Newspaper: The Guardian
Introduction
Google has been a pioneer in information technologies for many years (Google, 2018). Recently, a new business deal between the US Defense Department and Google has come to light, under which Google is going to provide artificial intelligence (AI) technology to the defense department to power its Project Maven for drone surveillance and target attacks (The Guardian, 2018). Google will also provide experts and engineers for the purpose.
The ethical question arising from this business decision is whether a multinational company with access to the personal data of billions of people around the world should be part of the military operations of any one country (The Guardian, 2018). The question concerns the moral correctness of the company's decision to use its state-of-the-art artificial intelligence and cloud computing technology, and its experts and engineers, for the military offensive of a single country (The Guardian, 2018). A large section of Google's employees, the media and academics are against this business decision (Gray, 2018). Therefore, the arguments given in the article are explained and an ethical decision is sought in the subsequent analysis.
Summary of the arguments and background
The article strongly supports termination of the contract under which Google is going to supply artificial intelligence technology to the US Defense Department (Gray, 2018). Project Maven is a military operation of the US in which it intends to use artificial intelligence for drone surveillance capable of capturing images of entire towns (The New York Times, 2018). There are concerns that the operation may also use AI for attacking targets at remote locations.
Google employees and a section of the media point out that Google holds a huge database about people around the world (The Guardian, 2018). It has long organised people's personal data, such as their personal email, calendars, location data and private photos (Google, 2018). It therefore has extensive access to people's personal lives, including their location (Google, 2018). Hence, Google has the responsibility to protect this data.
Some who support this business decision say that earlier advances in computing technology were funded by the military; hence, it is not wrong for companies to help the military (The Guardian, 2018). However, this argument does not hold up against Google compromising or sharing its data through artificial intelligence for statistically targeted surveillance and killings (The Guardian, 2018). The project is aimed at rooting out terrorism and anti-social forces; however, it can also compromise civilians' data (The New York Times, 2018). On the other hand, terminating the contract may entail a heavy penalty for Google. It could suffer huge financial losses (Gray, 2018). Additionally, it could be taken to court for breach of contract.
Major ethical issues raised in the article
Corporate governance consists of the policies, rules and procedures which direct an organisation (Tricker, 2012). Corporate social responsibility is an initiative by an organisation to take responsibility for the effects of its actions on the environment and people (Mason & Simmons, 2014). Through corporate social responsibility a firm tries to conduct its functions ethically and discharge its work in a socially responsible manner (Schwartz, 2011). Google's collaboration with the defense department can undermine both its corporate governance and its corporate social responsibility.
The major ethical issue raised in the article is whether an international company like Google, which has access to massive amounts of personal data, should join hands in the defense offensive of a single country (The Guardian, 2018). This action by Google is not in line with its privacy policy or with its corporate social responsibility to the environment in which it operates, which includes the whole globe. Google states that it is dedicated to the privacy of its customers and that their data will not be misused (Google, 2018). However, through this contract that personal data is on the verge of being compromised, because it will be fed into AI techniques for drone surveillance (The Guardian, 2018). Therefore, Google's decision is not in conformity with its corporate social responsibility policy.
Another ethical issue raised in the article is whether a company like Google, with operations around the world, should involve itself in using its AI technology to operate autonomous cameras and weapons (The Guardian, 2018). Google's AI chief recently said that he opposed the use of AI for making autonomous weapons. This contract can also be deemed unethical because there is a move
at the United Nations to ban the use of autonomous weapons (Bergen, 2018). Many Google employees oppose this contract, as they find working for the defense of one country immoral (The Mediapost, 2018). It would be morally wrong and disrespectful of Google to force its employees to do something which they deem wrong.
Ethical decision
The most ethical decision in this case is for Google to terminate the contract. Google has not yet taken any decision on the demand for termination of the contract (The Guardian, 2018), because of its financial and political implications. However, employees have shown their resentment through a petition to the CEO (The New York Times, 2018). A dozen employees have decided to leave the company, as they deem working on warfare technology morally incorrect (The Guardian, 2018). Google has not yet made any official announcement of its stand, but the company has stated that the contract is only a test and that non-classified images will be used (Gray, 2018). This is a very high-profile contract and it is not easy for Google to take a hurried decision.
However, the employees' concern and their decision to oppose the contract are correct, as Google is an international company with access to an extremely large amount of personal information about its users (The Guardian, 2018). Additionally, a global firm like Google should not involve itself in actions which may lead to breaches of privacy and even killings (The Guardian, 2018). The company should not involve itself in warfare and should take a decision to cancel this project.
There have been instances in the past where companies had to recall their products or abandon ambitious projects due to ethical issues involving human life (Stanwick, 2013). Samsung had to announce a mass recall of its ambitious Note 7 smartphone on account of a battery-fire issue that endangered human life (D'Souza, 2018). Volkswagen had to recall around 281 thousand cars due to a faulty fuel pipe which posed a threat to human life (D'Souza, 2018). In a similar case, Kraft Foods had to recall 6.5 million boxes of macaroni and cheese when it was found that some of these boxes contained metal (D'Souza, 2018). These companies recalled their products on ethical grounds because the products endangered human life. They faced heavy financial losses due to these recalls; however, as a CSR initiative, they did so (D'Souza, 2018). Similarly, Google should not indulge in an activity which could possibly endanger human
life on a mass scale. The firm should not indulge in warfare techniques, as it is a consumer services company.
Personal Ethical Decision and Moral Philosophy
Moral philosophy is the study of ethics and of the psychology behind it (Wolff, 2018). It tries to understand what conduct is moral and ethical. A person's ethical decision-making process depends on his own moral philosophy of right and wrong (Ferell, et al., 2012). Various theorists have propounded seven schools of moral philosophy based on people's sense of right and wrong.
Teleology considers an action morally right if it produces a desired result or acceptable behaviour (Ferell, et al., 2012). Egoism considers an act which maximises self-interest as acceptable. Utilitarianism, on the other hand, considers acceptable behaviour to be that which maximises utility for all concerned. Deontology, by contrast, is a normative philosophy which judges right and wrong on the basis of people's rights (Ferell, et al., 2012). That is, if an action hinders a person's rights, such as the right to speak or the right to live, then that action is morally incorrect.
The relativist school of thought holds that there is no absolute right or wrong (Ferell, et al., 2012). People of this persuasion judge ethical behaviour subjectively, based on individual principles. Such people try to evaluate all views regarding a particular action and then arrive at a conclusion (Ferell, et al., 2012). If a consensus cannot be reached, then such a person would look to industry practices and to a solution which conforms to the greatest number of views (Ferell, et al., 2012).
Virtue ethics is another moral philosophy, which holds that good or bad is what is conventionally believed, along with what a mature person of good moral character considers right (Ferell, et al., 2012). Virtues like honesty, truthfulness and trust are considered good in a situation. The moral philosophy of justice advocates fairness as the measure of right and wrong (Ferell, et al., 2012). It speaks of distributive justice, procedural justice and interactional justice; in other words, fair treatment in distribution, processes and interactions.
If I were given the power to make a decision in the given case of Google and the business of war, I would decide on the basis of the deontology school of thought. This school of thought describes
moral behaviour as that which protects people's rights and adheres to moral and social duties (Ferell, et al., 2012). Google is a global company with billions of customers around the world. Additionally, Google has access to the personal data of its customers (Google, 2018). In this light, if Google shares any information with the department of defense, it would be a breach of the right to privacy of these people.
Additionally, helping the military with AI for surveillance cameras is another breach of the right to privacy. Furthermore, as a global company, Google has the responsibility of being truthful and honest with all its customers. By joining hands with one country's military, the company is forsaking its duty of honesty towards its global customers, against whom this AI technology might be used. There are also fears that AI-enabled autonomous weapons may be used for attacking targets (The Guardian, 2018). This hinders people's right to live. Therefore, Google should terminate its contract with the department of defense and adopt a policy of never using its technology for warfare.
Conclusion
A multinational company with a global customer base and access to vast amounts of personal data must refrain from letting its resources be used for purposes such as military offensives. A multinational company has a responsibility to its global customers and must treat them all equally. Hence, Google should refrain from using its technology for warfare purposes.
References
Bergen, M., (2018) Inside Google, a Debate Rages: Should It Sell Artificial Intelligence to the Military?. Bloomberg, 14 May. Available at: https://www.bloomberg.com/news/articles/2018-05-14/inside-google-a-debate-rages-should-it-sell-artificial-intelligence-to-the-military.
D'Souza, M., (2018) 10 times companies had to recall their products. [Online]
Available at: http://edgardaily.com/en/life/2016/10-times-companies-had-to-recall-their-products-33189
[Accessed 24 May 2018].
Ferell, O., Fraedrich, J. & Ferell (2012) Business Ethics: Ethical Decision Making & Cases. New York: Cengage Learning.
Google, (2018) Our Company. [Online]
Available at: https://www.google.com/about/
Gray, S., (2018) Report: Google Employees Resigning Over Controversial Pentagon Contract.
Fortune, 14 May.
Mason, C. & Simmons, J., (2014) Embedding Corporate Social Responsibility in Corporate
Governance: A Stakeholder Systems Approach. Journal of Business Ethics, 119(77), pp. 0615-
1699.
Schwartz, M. S., (2011) Corporate Social Responsibility: An Ethical Approach. Calgary: Broadview Press.
Stanwick, P., (2013) Understanding Business Ethics. London: SAGE Publications.
The New York Times, (2018) ‘The Business of War’: Google Employees Protest Work for the Pentagon. The New York Times, 4 April.
The Guardian, (2018) Google's march to the business of war must be stopped. The Guardian, 15 May. Available at: https://www.theguardian.com/commentisfree/2018/may/16/google-business-war-project-maven.
The Mediapost, (2018) Google Employees Oppose Pentagon Partnership. The MediaPost, 4
April.
Tricker, B., (2012) Corporate Governance: Principles, Policies and Practices. Oxford: Oxford
University Press.
Wolff, J., (2018) Understanding Business Ethics. New York City: W. W. Norton & Company.
Appendix
Article on ethical Issue
Google's march to the business of war must be stopped
We stand with thousands of Google employees, demanding an end to its contract with the US
Department of Defense
Should Google, a global company with intimate access to the lives of billions, use its technology
to bolster one country’s military dominance? Should it use its state of the art artificial
intelligence technologies, its best engineers, its cloud computing services, and the vast personal
data that it collects to contribute to programs that advance the development of autonomous
weapons? Should it proceed despite moral and ethical opposition by several thousand of its own
employees?
Gizmodo reported this week that more than a dozen Google employees have resigned over
Google providing AI support to a Pentagon drone program called Project Maven, which aims to
improve the ability of drones to identify humans. This follows a public letter, signed by 3,100-
plus Google employees who say that Google should not be in the business of war.
We agree with and support those employees and we are joined by more than 700 academic
researchers who study digital technologies. We support their demand that Google terminates its
contract with the US Department of Defense (DoD), that the company commit not to weaponize
the personal data they collect, or support the development of autonomous weapons. We also urge
their executives to join other artificial intelligence (AI) and robotics researchers and technology
executives in supporting an international treaty to prohibit autonomous weapon systems.
Google has long sought to organize and enhance the usefulness of the world’s information, and
along the way it has taken responsibility for collecting our most intimate information, from our
personal correspondence to our calendars, to our location data, to our private photos. Being
entrusted with such personal information comes with the responsibility to protect it, and to use it
carefully, in ways that respect the global makeup of those who contribute these records of their
lives.
Given this grave responsibility, news of Google’s involvement in the defense department’s
Project Maven alarmed many of us who study digital technologies. Maven is a US military
program that applies AI to drone surveillance videos for the purpose of detecting “objects of
interest”, which are flagged for human analysts. Google is providing not only AI technologies
(potentially built in part on the personal data that Google collects), but also engineers and
expertise to the DoD. Maven is already being used “in the Middle East” and the project is slated
to expand by next summer, eventually being used on blanket surveillance footage from “a sophisticated, hi-tech series of cameras … that can view entire towns”.
Reports on Project Maven currently emphasize the role of human analysts, but the DoD’s
ambitions are clear. These technologies are poised to automate the process of identifying targets,
including people, and directing weapons to attack them. Defense One reports that the DoD
already plans to install image analysis technologies onboard the drones themselves, including
armed drones. From there, it is only a short step to autonomous drones authorized to kill without
human supervision or meaningful human control. We already lack sufficient oversight and
accountability for US drone operations. It’s unlikely that we would know when the US military
takes Maven across the threshold from image analysis assistance to fully autonomous drone
strikes.
Even without automated targeting, the US drone program has been extremely controversial, with
many arguing that targeted killings violate US and international law. Targeted killings include
“personality strikes”, on known individuals named on “kill lists”, and “signature strikes” based
on “pattern-of-life analysis”, which target people based only on their appearance and behavior in
surveillance imagery. As a result, not only are bystanders frequently killed in strikes, but social
gatherings of civilians, such as weddings, are sometimes directly targeted. “Every independent
investigation of the [drone] strikes,” the New York Times reported in 2013, “has found far more
civilian casualties than administration officials admit”.
The fact that military funding supported the early development of computing technology does
not mean that it must determine the field’s future, particularly given the current power of the tech
industry. With Project Maven, Google joins hands with the arguably illegal US drone program,
and advances the immoral practice of statistically and algorithmically targeted killings. Google, a
global company, has aligned itself with a single nation’s military, developing a technology that
could potentially put its users, and their neighbors, at grave risk.
We are at a critical moment. Two months ago, Stanford professor and Google Cloud AI director
Fei-Fei Li wrote an op-ed in the New York Times titled How to Make AI That’s Good For
People. We call on Google’s leadership to live up to its ethical responsibilities by listening to
people who challenge Google to expand their definition of “good”. We call on Google to expand
its definition of “people” to include those already subjected to illegal drone strikes and data
surveillance.
This week, in response to a question at the I/O developer conference, Google AI chief Jeff Dean
stated that he opposes using AI to build autonomous weapons. We call on Google to support
ongoing international efforts at the United Nations to ban the development and use of
autonomous weapons. We call on Google to respect employees’ right to refuse work they find
immoral or unethical. Google’s employees asked their company to leave money on the table and
stay true to its words: “Don’t be evil.” Nowhere is this more urgent than in deciding whether to
build systems that decide who lives and who dies.
We ask Google to:
Terminate its Project Maven contract with the DoD.
Commit not to develop military technologies, nor to allow the personal data it has
collected to be used for military operations.
Pledge to neither participate in nor support the development, manufacture, trade or use of
autonomous weapons; and to support efforts to ban autonomous weapons.
https://www.theguardian.com/commentisfree/2018/may/16/google-business-war-project-maven