Gender Bias and Artificial Intelligence
Name of the student:
Name of the university:
Author Note
Systems thinking is critical in developing solutions to sustainability challenges
Here, gender bias is treated as a sustainability challenge and examined through systems thinking, with the issues raised by artificial intelligence as the focus. Given the rapid development of artificial intelligence, biased information can distort the various predictions that machines make. Any dataset of past human decisions carries bias within it, whether those decisions concern hiring, medical diagnosis, grading students' exams or approving loans. Likewise, anything expressed in text, voice or images requires information processing and can be influenced by racial, gender and cultural biases (Caliskan, Bryson and Narayanan 2017). The "wicked problem" is that although AI has the potential to make decisions in a more effective and less biased way, it can never truly start from a clean slate. AI is only as useful as the data that powers it, and its quality depends on how its creators program it to think, learn and decide. For this reason, AI can inherit and amplify its creators' biases: developers are frequently unaware of the biases they introduce, or the AI is trained on data that is itself biased. The outcomes of these technologies are life-altering (Buolamwini and Gebru 2018). Existing gaps in workplaces, including gaps in promoting and hiring women, can widen when such biases are unintentionally written into AI code or when the AI learns to discriminate.
The vital terms and ideas involved in this area of concern are discussed hereafter. Artificial intelligence is proving disruptive in every sector of life, including how well a business can seek out talent. Organizations are well aware of the return on investment that comes from finding the right person for a suitable task. Yet women are often viewed more negatively than men, a perception attributed to granular behavioural differences between men and women.
Large-scale meta-analyses, on the other hand, have shown that women hold a clear advantage when it comes to soft skills, which predisposes them to become more effective leaders; they tend to adopt a more effective leadership style than men. Moreover, when leaders are chosen on the basis of self-awareness, coachability, integrity and emotional intelligence, more of those leaders turn out to be women than men.
The primary purpose of the following study is to evaluate the gender bias arising in the area of artificial intelligence. The plan of the essay is as follows. The study is built around various sources that bear on the discussion; their analysis is presented critically and both sides of the argument are developed, with examples provided where needed. Rather than merely reporting, the published work is summarized, explained, assessed and evaluated. Ultimately, the study addresses the primary concern: to what extent can artificial intelligence give rise to gender bias, and to what extent can it do away with it?
Discussion on gender bias and artificial intelligence:
To understand the scenario, the way AI can serve as an equalizer for bias is assessed first, drawing on instances of artificial intelligence improving human processes. Next, the bad news, that bias in AI is a barrier to inclusion, is examined through instances of AI bias, along with how AI creators can become more diverse. Further, the questions to consider are highlighted. Lastly, some AI consortiums, research teams and start-ups are presented (Osoba and Welser IV 2017).
The argument regarding AI serving as the equalizer:
AI can serve as an equalizer because it takes decisions away from people, who are naturally subject to their own unconscious biases, and instead makes predictions with algorithms based on the data.
These algorithms can improve decision-making processes ranging from loan applications to getting hired for a job (Levendowski 2017).
Instances of AI developing human processes:
Well-designed algorithms can identify bias and permit a more accurate evaluation of candidate characteristics. A few organizations are building tools that limit bias by analyzing applications on the basis of abilities, skills and specific data. Monitoring these AI tools helps assure that bias never creeps in. There are also tools that scan applicant-tracking systems and other career sites to find candidates and that remove names from the overall program in order to decrease bias (Zhao et al. 2017). In addition, some tools can obscure a candidate's appearance and voice while the interview process goes on, which diminishes the potential for bias.
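To make the idea of name removal and blind screening concrete, the following sketch shows one way such a redaction step might look. It is only an illustration of the general technique described above, not the implementation of any particular vendor's tool; the field names, word list and regular expression are assumptions chosen for the example.

```python
import re

# Illustrative only: a tiny redaction pass that strips fields and terms which
# could reveal a candidate's gender before an application is scored.
# The field names and word list below are assumptions for this sketch.
GENDERED_TERMS = re.compile(
    r"\b(he|she|him|her|his|hers|mr|mrs|ms|male|female|man|woman)\b",
    re.IGNORECASE,
)

def redact_application(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed
    and gendered terms masked in the free-text sections."""
    redacted = dict(application)
    # Drop fields that directly identify the candidate.
    for field in ("name", "photo_url", "gender", "title"):
        redacted.pop(field, None)
    # Mask gendered words in free text such as the resume or cover letter.
    for field in ("resume_text", "cover_letter"):
        if field in redacted:
            redacted[field] = GENDERED_TERMS.sub("[REDACTED]", redacted[field])
    return redacted

if __name__ == "__main__":
    sample = {
        "name": "Jane Example",
        "gender": "female",
        "resume_text": "She led a team of five engineers and shipped two products.",
        "skills": ["python", "project management"],
    }
    print(redact_application(sample))
    # e.g. {'resume_text': '[REDACTED] led a team ...', 'skills': [...]}
```

Redaction of this kind removes only explicit signals; indirect proxies for gender can remain in the text, which is one reason the ongoing monitoring discussed later is still needed.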
The argument regarding AI bias is a barrier to inclusion:
The quality of AI depends on the way its creators program it to act, learn, decide and think. Because of this, AI can inherit and amplify its creators' biases, of which the creators themselves are often unaware; alternatively, the AI may simply be trained on biased data (Flekova et al. 2016).
Instances of bias in AI for the above situation:
In one case, an employer advertised a job opening in a male-dominated sector through a social media platform, and the platform's ad algorithm pushed the job to men in order to maximize returns in the number and quality of applicants. Another technology business spent many months developing an AI hiring tool by feeding it the resumes of its top-level candidates; the AI's function was to review candidates' resumes and then recommend the most promising ones (Yapo and Weiss 2018). Since the industry is dominated by men, most of the resumes used to teach the AI came from men.
As a result, the AI learned to discriminate against the very women it was supposed to recommend. Besides this, AI face-assessment programs have displayed racial and gender bias: they showed the lowest error rates when identifying the gender of light-skinned men (Leavy 2018) and the highest error rates when identifying the gender of women with dark complexions. Further, although voice-activated innovations in cars could help resolve distracted driving, various vehicle systems have been tone-deaf to women's voices and have had difficulty identifying foreign-language accents (Castro and New 2016).
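The facial-analysis disparities described above are typically uncovered by a disaggregated evaluation, that is, by computing error rates separately for each demographic subgroup rather than reporting a single overall accuracy, in the spirit of the evaluations reported by Buolamwini and Gebru (2018). The sketch below illustrates that idea under the assumption that each prediction record carries a subgroup label; the field names and toy data are invented for the illustration and do not come from the studies cited.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic subgroup.

    Each record is a dict with 'group' (e.g. 'darker-skinned female'),
    'true_label' and 'predicted_label'. These field names are assumptions
    made for this sketch.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted_label"] != r["true_label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

if __name__ == "__main__":
    # Invented toy data: a single overall accuracy figure would hide the gap
    # between the two subgroups shown here.
    records = (
        [{"group": "lighter-skinned male", "true_label": "male", "predicted_label": "male"}] * 98
        + [{"group": "lighter-skinned male", "true_label": "male", "predicted_label": "female"}] * 2
        + [{"group": "darker-skinned female", "true_label": "female", "predicted_label": "female"}] * 70
        + [{"group": "darker-skinned female", "true_label": "female", "predicted_label": "male"}] * 30
    )
    for group, rate in error_rates_by_group(records).items():
        print(f"{group}: {rate:.0%} error rate")
```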
The solution to the concern of gender bias in AI:
AI is not inherently objective. The technology and its algorithms reflect their creators' biases, and even systems that are unbiased at inception can learn the biases of their human trainers over time. AI therefore needs to be deliberately programmed, audited, monitored and reviewed to assure that it is not biased and does not become biased through its data and algorithms. Including more women, and more diverse kinds of workers with technical expertise generally, is one method of decreasing bias (Savulescu and Maslen 2015). Bringing in extra viewpoints and more safeguards against failure makes the creation and training of AI reflect inclusive and diverse societies more accurately. Greater diversity can also reduce groupthink and improve a team's decision-making, since it leverages a wider variety of viewpoints to reach quicker and more detailed decisions. Homogeneous AI teams and researchers, by contrast, may not pay close enough attention to notice when bias creeps in, including when it affects how the AI is trained and created (Hacker 2018).
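Part of the auditing and monitoring described above can be automated as a routine check on a system's outputs. As one possible illustration, the sketch below compares selection rates by gender in an AI hiring tool's recommendations and flags a large gap, using the commonly cited four-fifths (80%) rule of thumb as the threshold; the data structure and the choice of threshold are assumptions for the example rather than a prescribed standard.

```python
def selection_rates(decisions):
    """decisions: list of (gender, was_recommended) pairs.
    Returns the share of candidates recommended within each gender."""
    counts, selected = {}, {}
    for gender, recommended in decisions:
        counts[gender] = counts.get(gender, 0) + 1
        selected[gender] = selected.get(gender, 0) + int(recommended)
    return {g: selected[g] / counts[g] for g in counts}

def adverse_impact_flag(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    flagged = {g: r for g, r in rates.items() if highest > 0 and r < threshold * highest}
    return rates, flagged

if __name__ == "__main__":
    # Invented monitoring data for one review period.
    decisions = (
        [("male", True)] * 60 + [("male", False)] * 40
        + [("female", True)] * 30 + [("female", False)] * 70
    )
    rates, flagged = adverse_impact_flag(decisions)
    print("selection rates:", rates)
    print("groups below the threshold:", flagged)  # here: {'female': 0.3}
```

Running a check like this on every review period is one way of making the "routine monitoring" asked about in the questions that follow concrete, rather than leaving it to ad hoc inspection.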
Various questions to be considered here:
What strategies can diversify the talent pool in order to create a diverse workforce, with staff drawn from multiple backgrounds, languages, worldviews and perspectives?
What programs can be instituted to raise awareness of unconscious bias and of the ways to combat it, both in the human workforce and in artificial intelligence systems? How can the business assure that its hiring and talent-management AI systems are free from any bias?
What processes are in place to assure that the business routinely monitors its algorithms for bias? How quickly can bias be addressed once it is found to be creeping into the processes, actions and decisions of AI?
What steps should be taken to develop an inclusive workplace where people feel secure in speaking up, and how can the business culture value accountability and respect? Is it simpler to catch AI bias when the humans interacting with the tools understand them, or are complex existing systems left in place despite adverse repercussions?
What best practices, ethical considerations and regulations around vigilance against perpetuating bias would make the business a champion in the AI space?
Some startups, research groups and AI consortiums:
Open AI:
This nonprofit conducts research directed towards the path of safe AI.
Equal AI:
It concentrates on determining and eradicating bias, addressing gender disparities in hiring and in tech education.
It is understood from the above study that artificial intelligence increasingly affects the behaviour and opinions of daily life. Nonetheless, the over-representation of men in developing these innovations could undo decades of advances in gender equality. Over time, human beings have developed critical theories to make informed decisions and to avoid relying solely on personal experience. Machine intelligence, by contrast, learns mainly by observing the information it is presented with. Although a machine's ability to process tremendous amounts of information might help address this, that data is laden with stereotypical ideas about gender, so applying the technology can perpetuate bias. Some current studies search for ways of eradicating bias from learning algorithms while ignoring decades of research on how the ideology of gender is embedded in language. Awareness of that research, and incorporating it into approaches to machine learning from text, helps secure the creation of unbiased algorithms (Levendowski 2018).
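The point about gender ideology being embedded in language can be made concrete with word embeddings, the representations that machine-learning systems commonly learn from text. The sketch below computes a simple association score, the difference in cosine similarity between a target word and a pair of gendered anchor words, in the general spirit of the association tests reported by Caliskan, Bryson and Narayanan (2017), though it is a simplified illustration rather than their exact method. The tiny hand-made vectors are stand-ins; a real analysis would use embeddings trained on a large corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def gender_association(word_vec, she_vec, he_vec):
    """Positive values lean towards 'she', negative towards 'he'."""
    return cosine(word_vec, she_vec) - cosine(word_vec, he_vec)

if __name__ == "__main__":
    # Toy 3-dimensional vectors invented for the illustration; real embeddings
    # (e.g. trained with word2vec or GloVe) would have hundreds of dimensions.
    embeddings = {
        "she":      [0.9, 0.1, 0.0],
        "he":       [0.1, 0.9, 0.0],
        "nurse":    [0.8, 0.2, 0.1],
        "engineer": [0.2, 0.8, 0.1],
    }
    for word in ("nurse", "engineer"):
        score = gender_association(embeddings[word], embeddings["she"], embeddings["he"])
        print(f"{word}: {score:+.2f}")
    # A skewed training corpus produces skewed scores like these, which is how
    # stereotypes in language can carry over into downstream predictions.
```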
Leading thinkers who are women have suggested that the people potentially affected by a bias are more likely to see it, engage with it and solve it. Addressing gender bias is therefore vital to keep algorithms from perpetuating gender concepts that disadvantage women, and both the builders and the end-users of AI-enabled products and services are needed to do so in future. By changing the roles and the perception of women within society, the digital bugs that perpetuate current bias can be corrected and the AI lifecycle made trustworthy. Again, technology can do many impressive things, but it will not resolve every human problem; if one is not careful, it can end up making matters worse by institutionalizing bias and exacerbating inequality. To keep that from occurring, businesses need to understand gender bias while developing and implementing artificial intelligence. Hence, leaders must understand who is responsible for designing and developing AI in the business, whether they come from diverse disciplines and backgrounds, and whether they can meet the diverse requirements of stakeholders. Furthermore, it must be understood how businesses can attract women to jobs in the field of artificial intelligence and how women can be re-skilled to benefit from AI applications. Ultimately, leaders need to evaluate whether they are creating suitable frameworks and policies for mandating gender equality in the public and private spheres across the full spectrum of industries.
References:
Caliskan, A., Bryson, J.J. and Narayanan, A., 2017. Semantics derived automatically from language
corpora contain human-like biases. Science, 356(6334), pp.183-186.
Levendowski, A., 2017. How Copyright Law Creates Biased Artificial Intelligence. Washington
Law Review, 579.
Zhao, J., Wang, T., Yatskar, M., Ordonez, V. and Chang, K.W., 2017. Men also like shopping:
Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457.
Yapo, A. and Weiss, J., 2018. Ethical implications of bias in machine learning.
Savulescu, J. and Maslen, H., 2015. Moral Enhancement and Artificial Intelligence: Moral AI?. In
Beyond Artificial Intelligence (pp. 79-95). Springer, Cham.
Hacker, P., 2018. Teaching fairness to artificial intelligence: Existing and novel strategies against
algorithmic discrimination under EU law. Common Market Law Review, 55(4), pp.1143-1185.
Castro, D. and New, J., 2016. The promise of artificial intelligence. Center for Data Innovation,
October.
Levendowski, A., 2018. How copyright law can fix artificial intelligence's implicit bias problem.
Washington Law Review, 93, p.579.
Flekova, L., Carpenter, J., Giorgi, S., Ungar, L. and Preoţiuc-Pietro, D., 2016, August. Analyzing
biases in human perception of user age and gender from text. In Proceedings of the 54th Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 843-854).
Osoba, O.A. and Welser IV, W., 2017. An intelligence in our image: The risks of bias and errors in
artificial intelligence. Rand Corporation.
Crawford, K., 2016. Artificial intelligence’s white guy problem. The New York Times, 25.
Raso, F.A., Hilligoss, H., Krishnamurthy, V., Bavitz, C. and Kim, L., 2018. Artificial Intelligence &
Human Rights: Opportunities & Risks. Berkman Klein Center Research Publication, (2018-6).
Wirtz, B.W., Weyerer, J.C. and Geyer, C., 2019. Artificial Intelligence and the Public Sector—
Applications and Challenges. International Journal of Public Administration, 42(7), pp.596-615.
Dillon, S. and Collett, C., 2019. AI and Gender: Four Proposals for Future Research.
Buolamwini, J. and Gebru, T., 2018, January. Gender shades: Intersectional accuracy disparities in
commercial gender classification. In Conference on fairness, accountability and transparency (pp.
77-91).
Leavy, S., 2018, May. Gender bias in artificial intelligence: The need for diversity and gender theory
in machine learning. In Proceedings of the 1st International Workshop on Gender Equality in
Software Engineering (pp. 14-16). ACM.