AI Based Algorithm Audit Report on Crime Prediction

AI Based Algorithm Audit
Affiliate Institution
Student
Date
AI-Based Algorithm Audit
In this report, AI is described as a technology that is steadily gaining prominence in areas such as finance, and one that most people are currently embracing because it is used to make decisions. AI-based systems rely on trained algorithms to make those decisions; algorithms are the procedures that a system reads and interprets in order to perform certain tasks (Diakopoulos, 2014). AI-based applications and systems can be used by the police in carrying out criminal justice work, and they are also used by hiring departments and in healthcare services. In this report, we describe an algorithm used by the security department to predict crime-prone areas.
Police departments have adopted a new technology in which an AI-based application is used to regularly assist detectives in investigating criminal incidents among the people covered by those systems. The AI application follows a black-box algorithm to determine its functionality during the investigation process. The application is designed to follow mathematical and scientific principles that help transform the information offered by the police into output that can be verified. This helps identify crime across various geographical areas: the application forecasts criminal activities, likely victims, and the areas with an increased probability of becoming a crime scene.
Quite a number of software applications are known to be used for screening crime all over the world (Stark & Diakopoulos, 2017). These include CrimeScan and PredPol, two applications developed by scientists to help keep security agencies on their toes and reduce criminal acts. The software is used by various police divisions in the US and helps identify the regions around a local area where violations are most likely to have occurred.
However, there is significant doubt about the policing AI systems, which may have been built using algorithms that pose a threat to some individuals. It has been realized that, at some point, the algorithm may not help people but may instead harm them, in the sense that it can produce inconsistent results under some circumstances. Bias in these systems arises from the complexity of the system data, such as the narrative accounts of various crime incidents, which introduces bias in such a sophisticated manner that it becomes almost impossible to identify. The data records are automated and can show the exact time and location at which a crime happened. Changes to the data may also be made: after a victim has been cleared, the record can be altered or deleted in the system by the system itself, depending on the individual's case.
AI algorithms normally work entirely on the trained data sets available in the system database. Crime scenes that are not represented, or are under-reported, are identified by the system as low risk; similarly, if reported crime increases, the location of the crime is granted a high score. The algorithm learns and interprets these scores, and at a later date the location is cited within the system as a likely crime scene and marked with a red alert. The area is then reported in the system for investigation, and the algorithm has to be reassessed after some time.
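To make this scoring concrete, the following is a minimal, hypothetical sketch of the kind of mechanism described above. It is not the actual PredPol or CrimeScan code; the area identifiers, the normalised count-based score, and the alert threshold are all illustrative assumptions.

```python
# Hypothetical sketch only: historical incident counts per area are normalised
# into a risk score, and the highest-scoring areas are flagged with a red alert.
from collections import Counter

def risk_scores(incidents, alert_threshold=0.8):
    """incidents: iterable of (area_id, timestamp) records from the crime database.
    Returns (scores, alerts): a score in [0, 1] per area, and the red-alert areas."""
    counts = Counter(area for area, _ in incidents)
    max_count = max(counts.values(), default=1)
    scores = {area: n / max_count for area, n in counts.items()}
    alerts = {area for area, score in scores.items() if score >= alert_threshold}
    return scores, alerts

history = [("area_A", "2020-01-03"), ("area_A", "2020-02-11"),
           ("area_A", "2020-03-09"), ("area_B", "2020-01-20")]
scores, alerts = risk_scores(history)
print(scores)  # {'area_A': 1.0, 'area_B': 0.333...}
print(alerts)  # {'area_A'} -> marked with a red alert for investigation
```

Note that areas with no recorded incidents never appear in the output at all, which mirrors the point above: under-reported locations are effectively treated as low risk regardless of the true crime rate.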
However, the same information may be recorded several times, which leads to over-investigation; the affected individual accumulates several crime records in the system and thereby earns a high score, which in turn leads to additional disciplinary punishment by the police department. This idea also automatically leads to a violation of fairness in judgment, as it will probably cause bias in terms of race: it flags black individuals as likely criminals and marks their residential areas as high-risk-score zones.
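The paragraph above describes a feedback loop: flagged areas attract more policing, more policing produces more records, and more records raise the score further. The short simulation below is a hypothetical illustration of that dynamic under assumed numbers (identical underlying crime rates, patrols allocated in proportion to score); it is not drawn from any real deployment.

```python
# Hypothetical feedback-loop simulation: two areas with the SAME underlying crime
# rate, but one starts with more recorded incidents. Patrols follow the scores,
# and only patrolled crime gets recorded, so the initial gap keeps reinforcing itself.
import random

random.seed(0)
true_rate = {"area_A": 0.10, "area_B": 0.10}   # identical true crime rates (assumption)
recorded  = {"area_A": 5,    "area_B": 3}      # historically uneven reporting (assumption)

for week in range(20):
    total = sum(recorded.values())
    scores = {a: n / total for a, n in recorded.items()}
    patrols = {a: round(10 * s) for a, s in scores.items()}  # patrols follow the scores
    for area, n_patrols in patrols.items():
        # A crime is only recorded where a patrol is present to observe it.
        recorded[area] += sum(random.random() < true_rate[area] for _ in range(n_patrols))

print(recorded)  # area_A keeps attracting patrols and accumulating records,
                 # even though the two areas are equally crime-prone.
```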
The people in these areas will be tagged with so many offences that they may be unable to access some of the services they would otherwise enjoy, because of the many crimes recorded against them. The algorithms therefore lack transparency, since it is difficult to understand how the data are interpreted or analyzed within the system. In addition, the system may sometimes fail to provide correct output because of this interpretation, leaving the affected person in a quandary, not knowing whom to blame (Sandvig, Hamilton, Karahalios, & Langbort, 2014).
A test case can be used to determine whether or not the crime areas predicted using the AI-based technology are affected by bias. Two different test cases are used to examine the situation. The test targets the question of whether a particular crime location is rated higher on the risk-analysis scale depending on the race of the people living there. In the first test case, the first subset of data covers the number and intensity of crimes recorded in the regions occupied by black residents, and the second subset covers the crime data for which people were held guilty in the areas occupied by white residents.
In the second test case there are again two subsets of data: the first records the crimes committed by black offenders in locations occupied by black residents, and the second records the criminal activities that occurred in areas where white people live. All of these data are provided as inputs to the algorithm. During execution, the algorithm reads through the data and determines its scores, based on the data provided, to identify which area is more prone to a high-risk score on criminal issues.
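A simple way to run such a test case in practice is to treat the prediction model as a black box, score both subsets, and compare the averages. The sketch below is a hypothetical audit harness, not the procedure actually used in this report; the dummy model, the field names, and the example counts are assumptions made for illustration.

```python
# Hypothetical black-box audit: feed the two subsets of area records to the model
# and compare the mean risk score assigned to each group.
from statistics import mean

def audit_risk_disparity(model, areas_group_1, areas_group_2):
    """model: any callable returning a risk score in [0, 1] for one area record."""
    mean_1 = mean(model(a) for a in areas_group_1)
    mean_2 = mean(model(a) for a in areas_group_2)
    ratio = mean_1 / mean_2 if mean_2 else float("inf")
    return {"mean_group_1": mean_1, "mean_group_2": mean_2, "ratio": ratio}

# Dummy model that scores purely on historical incident counts, mimicking the
# behaviour the test cases are designed to expose.
dummy_model = lambda area: min(1.0, area["recorded_incidents"] / 50)

black_area_subset = [{"recorded_incidents": 40}, {"recorded_incidents": 35}]
white_area_subset = [{"recorded_incidents": 20}, {"recorded_incidents": 15}]
print(audit_risk_disparity(dummy_model, black_area_subset, white_area_subset))
# A ratio well above 1 would indicate the disparity the report describes.
```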
When the test cases were completed, it was identified that in both cases the areas occupied by black people were described as more prone to risk with respect to criminal occurrences, and those locations were marked with a higher risk score than the areas occupied by white people. This is how the algorithm is used to identify all areas with a high-risk score for security threats.
Conclusion:
It was noted that the AI-based system uses the constructed algorithms to study all the data sets provided to the system. The data sets are used to train the algorithm so that it can execute quickly and produce the results needed by the system analyst. AI-based systems can be biased in cases where the execution turns out to be incorrect, and they consequently lack transparency; the user does not know whom to blame when such mistakes occur.
References
Diakopoulos, N. (2014). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3, 398–415.
Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. http://www-personal.umich.edu/~csandvig/research
Stark, J. A., & Diakopoulos, N. (2017). Using baselines for algorithm audits. In European Data and Computational Journalism Conference (Vol. 3). https://www.foxling.co/static/pdf/EDCJC_Extended_Abstract_ImageBaslines-submitted-REVISED-june26.pdf