Assessment & Evaluation: Contract Cheating Detection in Higher Ed

Assessment & Evaluation in Higher Education
ISSN: 0260-2938 (Print) 1469-297X (Online) Journal homepage: http://www.tandf

Can markers detect contract cheating? Results from a pilot study
Phillip Dawson & Wendy Sutherland-Smith

To cite this article: Phillip Dawson & Wendy Sutherland-Smith (2017): Can markers detect contract cheating? Results from a pilot study, Assessment & Evaluation in Higher Education, doi: 10.1080/02602938.2017.1336746
To link to this article: http://dx.doi.org/10.1080/02602938.2017.1336746
Published online: 05 Jun 2017.
Assessment & Evaluation in Higher Education, 2017
https://doi.org/10.1080/02602938.2017.1336746

Can markers detect contract cheating? Results from a pilot study

Phillip Dawson (a) and Wendy Sutherland-Smith (b)
(a) Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, Australia; (b) School of Psychology, Faculty of Health, Deakin University, Burwood, Australia
ABSTRACT
Contract cheating is the purchasing of custom-made university assignments
with the intention of submitting them. Websites providing contract
cheating services often claim this form of cheating is undetectable, and
no published research has examined this claim. This paper documents a
pilot study where markers were paid to mark a mixture of real student work
and contract cheating assignments, to establish their accuracy at detecting
contract cheating. Seven experienced markers individually blind marked the
same bundle of 20 second-year psychology assignments, which included 6
that were purchased from contract cheating websites. Sensitivity analyses
showed markers detected contract cheating 62% of the time. Specificity
analyses showed markers correctly identified real student work 96% of the
time. Our results contrast with contract cheating sites’ claims that contract
cheating is undetectable. However, they should be taken with caution as
they are from one course unit in one discipline.
Contract cheating typically occurs when students submit assignments they have purchased over the Internet (Lancaster and Clarke 2007). These assignments are bespoke creations, written from scratch for the student according to the specifications of the task. Many contract cheating websites claim it is not possible to detect this new form of cheating (Lines 2016), and there is thus far no empirical evidence in the peer-reviewed literature to refute that claim. Although contract cheating has been a topic of research and scholarship for more than a decade, no studies have been conducted to determine the detection rates of contract cheating.
A lack of evidence about detection rates of contract cheating means the sites' claims may seem persuasive to would-be cheaters. If left unchecked, the global £200 m contract cheating industry (Adams 2015) may lead to many students incorrectly being certified as having achieved learning outcomes. This may have disastrous consequences for public safety and community confidence in higher education, as well as students cheating themselves out of opportunities for learning (Bertram-Gallant 2016; White 2016).
Initial approaches to addressing contract cheating focused on monitoring online auction sites, where students would negotiate the purchase of these assignments in plain sight, and using this information for prosecuting instances of contract cheating (Clarke and Lancaster 2007). While useful in some instances, and particularly in the infancy of contract cheating, most contract cheating sites now do business through confidential, anonymous transactions. Unless students are quite foolish or careless, there is no public evidence trail of a contract cheating purchase.
KEYWORDS
Contract cheating; academic integrity; assessment; marking

© 2017 Informa UK Limited, trading as Taylor & Francis Group
CONTACT Phillip Dawson p.dawson@deakin.edu.au
Although contract cheating detection research has been limited, researchers have suggested approaches to deter contract cheating. Authentic, real-world tasks are often advocated as deterrents of academic dishonesty in general; the rationale being that students might see the value in genuinely engaging with such tasks. For contract cheating there may also be a side benefit that authentic real-world tasks may be challenging for essay mills to produce (Howard 2007; Carroll 2009). Broadly speaking, we share the intuition that these approaches will probably have some protective effect; however, we are not aware of any empirical evidence in support of any specific approach to prevent or detect contract cheating.
Reductions in student time available to complete work have also been suggested as possible deterrents to contract cheating. By giving students less time, they have less time to arrange for someone else to do their work for them (Mahmood 2009; O'Malley and Roberts 2012). However, turnaround times for contract cheating sites are already very short, as little as 24 h (Wallace and Newton 2014), so realistic reductions in time available to students are unlikely to stop contract cheating on take-home tasks.
The broader field of academic integrity has a long tradition of research to prevent academic dishonesty. The field is characterised by: a preference for institutional, systemic approaches; a focus on the positive (academic integrity) rather than the negative (cheating); and a preference for educative rather than punitive approaches (Bertram-Gallant 2015; Davies and Howard 2016). The positive mission of academic integrity includes approaches like having students pledge to an honour code of ethical behaviour. A number of universities in the USA adopt this practice, and its success is based upon the notion that students publicly pledging to act ethically will deter them from forms of cheating. Some prior research in the USA has suggested that honour codes may deter cheating, although much of this work was conducted prior to the rise of contract cheating (McCabe 2001; McCabe, Treviño, and Butterfield 2001; Shu, Gino, and Bazerman 2011). Research conducted in the UK context (Yakovchuk, Badge, and Scott 2011) found that, whilst general promotion of academic integrity was welcomed, both student and teacher participants did not support the 'moral element of the honour code system' (47) and considered there were extensive logistical considerations to successfully implement honour codes in the British context (47).
Academic integrity researchers also study the dispositions of students, with some suggesting administering instruments to students when they start university to measure their 'orientation towards learning versus their orientation towards grades or self-rationalisation' (Faucher and Caves 2009, 39). This involves using a battery of personality tests to determine 'their moral stage of development' (39). Such an approach could be followed by educative strategies targeted at cohorts particularly vulnerable to academic integrity problems.
We support approaches like authentic assessment, the promotion of honesty and developing an understanding of our students. However, we are concerned that research on contract cheating has thus far been largely conducted layers of abstraction away from the object of study: the purchase and detection of contract cheating assignments. This means we cannot say which approaches work to prevent or detect contract cheating, or if it is even detectable at all. At present, we are aware of only one published study involving markers marking contract cheating assignments (Lines 2016). In Lines' study, 26 assignments were purchased, and markers were asked to mark them, without being alerted to the possibility of any potential contract cheating. No markers raised concerns about contract cheating, and 23 of the assignments appeared acceptable on Turnitin. Most of the assignments received a passing grade, and some scored high grades. Lines' study has troubling implications; however, it represents one discipline, history, and markers who were not specifically instructed to detect contract cheating. While it is useful to know that markers might not detect contract cheating when they are not asked to, we believe it is also useful to know if markers can detect contract cheating when they are specifically asked to do so.
This paper addresses the lack of evidence around contract cheating detection rates through a pilot study involving real student work and real contract cheating assignments. In particular, it addresses the question: 'How well can markers detect contract cheating?'
Method
In this study, 7 markers were each provided with 20 assignments, which consisted of 10 'Assessment Task 1' and 10 'Assessment Task 2' assignments from a second-year psychology unit. For both assessment tasks, three assignments were purchased from contract cheating websites, and seven were provided voluntarily by students. One limitation of this approach is that we cannot be sure the student-provided work was not itself contract cheated. The student assignments provided were all from the preceding assessment period, so were not 'live'; students were assured their grades would not change and they would not be accused of cheating as a result of the study. All markers were provided with a copy of the same 20 assignments, which had been carefully anonymised to ensure no identifying student information was present. Assignments were presented in random order.
The unit content focuses on several aspects of child development, with developmental phases from infancy to adolescence. The unit is core to all psychology undergraduate degrees at our institution and is undertaken by cohorts ranging from 800 to 1000 students in each trimester (offered three times per year). Students from other disciplines such as education and nursing also enrol in this unit as an elective.
Assessment Task 1 required students to write a 'skeleton' policy brief on a real-world issue with major risks for child development, plus a personal reflection, worth 20% in total. Assessment Task 2 required students to write a 'major' policy brief on a different topic, worth 40%. The remainder of the assessment weighting came from a two-hour final examination, which was not considered in this study.
Contract cheating assignments were purchased by a research assistant using PayPal. In the purchasing process, the sites were provided with the same assignment specification details the students were given by their teachers. This included instructions and marking criteria.
The teaching staff involved in this unit were not members of the research team, and they assisted with the recruitment of marker and student participants. All participating markers had previously marked assessment in the unit for at least one semester. Participating markers were paid at the usual rate for their time. In addition to marking the assignments and allocating a grade, markers were also required to make a judgement as to whether each assignment was an instance of contract cheating. If they thought a particular assignment was contract cheating, they were asked to justify their decision in writing. We recognise that by informing markers that this was a study of contract cheating we may have primed them to be more attuned to contract cheating. However, we regarded this as necessary to improve detection accuracy, as Lines' (2016) study already demonstrated a zero detection rate for markers who were not specifically looking for contract cheating. We also regard this as a reasonably authentic feature of our research study, as any attempt to improve contract cheating detection would be likely to ask markers to attempt to detect contract cheating.
This study was approved by the relevant ethics committee (approval number HEAG-H 136:2) as low risk research. We recognise there are a range of ethical issues inherent in the purchase of contract cheating assignments. We consulted relevant institutional, national and society guidelines (American Educational Research Association 2011; Hammersley and Traianou 2012; National Health and Medical Research Council, Australian Research Council, and Australian Vice-Chancellors' Committee 2015), and we discussed these issues with our peers in academic integrity and parallel fields like forensic toxicology. Most prominently, we discussed the provision of financial support to an industry that provides cheating services to students. While we would have preferred not to fund the contract cheating industry at all, our purchases totalled only $AU1,024 (equivalent to £630), equivalent to 0.0003% of the contract cheating industry turnover of £200 m (Adams 2015). We, and our ethics committee, agreed that the potential benefits of this sort of work justified this risk.
Results
In total, 140 instances of marking occurred, including 98 marking instances of real student work (7 markers marking 14 assignments) and 42 marking instances of purchased assignments (7 markers marking 6 assignments). Two of these assignments (one for each task) were 'premium' contract cheating assignments, which were more expensive and supposedly of higher quality and written by more qualified writers. Table 1 provides a breakdown of each assignment and the judgements made by markers about contract cheating.
Using the data in Table 1, it is possible to calculate the sensitivity and specificity of marking as a means of detecting contract cheating. Both measures are important here, as we wanted to determine marker accuracy at detecting contract cheating (sensitivity) as well as their accuracy at detecting real student work (specificity). Our markers' average sensitivity was 62% (95% CI: 0.46–0.76; confidence intervals in this paper use the efficient-score method, corrected for continuity, per Newcombe 1998). This means that 62% of the time, when a marker was looking at an assignment that was purchased, they correctly identified it as contract cheating. Our markers' average specificity was 96% (95% CI: 0.89–0.99). This means that 96% of the time markers accurately identified real student work and did not flag it as contract cheating. There was reasonable variation between markers in terms of their sensitivity and specificity. Table 2 provides a summary of each marker's accuracy.
Table 1. Marker detection results per assignment. CC = contract cheating.

Number     | Real or CC   | Markers who flagged CC | Percentage of markers correct
Assessment Task 1: 'skeleton policy brief'
Student 1  | Real         | 0 | 100
Student 2  | CC           | 4 | 57
Student 3  | Real         | 0 | 100
Student 4  | CC           | 2 | 29
Student 5  | Real         | 0 | 100
Student 6  | Real         | 0 | 100
Student 7  | CC (premium) | 6 | 86
Student 8  | Real         | 0 | 100
Student 9  | Real         | 0 | 100
Student 10 | Real         | 1 | 86
Assessment Task 2: 'major policy brief'
Student 11 | Real         | 0 | 100
Student 12 | CC (premium) | 3 | 43
Student 13 | Real         | 0 | 100
Student 14 | CC           | 4 | 57
Student 15 | Real         | 0 | 100
Student 16 | Real         | 0 | 100
Student 17 | Real         | 1 | 86
Student 18 | Real         | 2 | 71
Student 19 | Real         | 0 | 100
Student 20 | CC           | 5 | 71
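The right-hand column of Table 1 can be recomputed from the flag counts: for a purchased assignment, a flag is the correct call, while for real work the absence of a flag is correct. A minimal sketch of that arithmetic (the helper name is ours, not the paper's):

```python
def percent_correct(flags: int, is_cc: bool, n_markers: int = 7) -> int:
    """Percentage of the 7 markers whose judgement of one assignment was correct.

    For a contract-cheated (CC) assignment a flag is correct; for real
    student work, not flagging it is correct.
    """
    correct = flags if is_cc else n_markers - flags
    return round(100 * correct / n_markers)

# Spot-checks against Table 1:
print(percent_correct(4, is_cc=True))   # Student 2 (CC, 4 flags)   -> 57
print(percent_correct(2, is_cc=True))   # Student 4 (CC, 2 flags)   -> 29
print(percent_correct(1, is_cc=False))  # Student 10 (real, 1 flag) -> 86
```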
Table 2. Sensitivity (true positive rate) and specificity (true negative rate) per marker.

Marker   | Sensitivity | Specificity
Marker 1 | 3/6 = 50%   | 14/14 = 100%
Marker 2 | 4/6 = 67%   | 12/14 = 86%
Marker 3 | 4/6 = 67%   | 14/14 = 100%
Marker 4 | 3/6 = 50%   | 13/14 = 93%
Marker 5 | 6/6 = 100%  | 14/14 = 100%
Marker 6 | 3/6 = 50%   | 13/14 = 93%
Marker 7 | 3/6 = 50%   | 14/14 = 100%
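Pooling the per-marker counts in Table 2 reproduces the headline figures: 26 of 42 purchased assignments were flagged (sensitivity) and 94 of 98 real assignments were correctly cleared (specificity). A short sketch of that arithmetic, with the counts transcribed from Table 2:

```python
# True positives out of 6 CC assignments, and true negatives out of
# 14 real assignments, per marker (transcribed from Table 2).
true_pos = [3, 4, 4, 3, 6, 3, 3]
true_neg = [14, 12, 14, 13, 14, 13, 14]

sensitivity = sum(true_pos) / (6 * len(true_pos))    # 26/42
specificity = sum(true_neg) / (14 * len(true_neg))   # 94/98

print(f"sensitivity = {sensitivity:.0%}")  # sensitivity = 62%
print(f"specificity = {specificity:.0%}")  # specificity = 96%
```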
Table 3. Most common reasons markers gave to justify a correct decision for contract cheating.

Reason CC was suspected                                 | Times used when CC was correctly identified
Did not address key questions                           | 9
Poor structure (usually essay)                          | 8
Missing sections (tables, figures, reflection)          | 8
Lack of psychological theory, or poor conceptualisation | 7
Sensitivity ranged from 50% to 100%, and specificity ranged from 86% to 100%. Our best marker made no mistakes, whereas the worst (assuming a false positive is worse than a false negative) had a sensitivity of 67% and specificity of 86%.
Markers were also required to provide a brief justification for their decisions. An example justification from Marker 5 (the marker who made no mistakes) is:

The assignment is not set out in sections (some are not included at all), it presents as a generic essay on obesity, no table included, info included is irrelevant to assignment at times. Main focus is on lack of physical exercise rather than conceptualising the problem in terms of social development, identity formation theory etc. Relies heavily on one reference, reflection section left out.
Two researchers conducted an inductive thematic analysis on the reasons given by markers when they correctly identified contract cheating. These reasons were identified from the data and were not provided to the markers before they marked. Table 3 lists the most common reasons, and the number of times each was used. The most common justification was that an assignment simply did not address the questions asked by the task; this led one marker to comment 'it is like the student has copied an essay linking pain and children' rather than specifically writing responses addressing the task.

When markers commented on the structure of the task, they usually stated that it appeared the student had not followed the task instructions. To the markers, several of the contract cheating assignments appeared more like essays rather than the specific policy brief tasks students were required to produce. One marker noted an 'abundance of definitions and statistics, formal writing style, flow does not seem to be related to the assignment topic, has not followed the assignment guide (absence of figures in suggested sections)'.
Missing sections also alerted markers to potential contract cheating. The most common missing section was a reflective component, which was completely omitted by some contract cheating writers. To our experienced markers this was quite unusual, as reflections have been part of the curriculum in psychology, as in education and nursing, at our university for some years. It would be most unlikely that a student was unfamiliar with what is required in reflective work or had not completed at least one reflective task for assessment in their first year of psychology.
Although the contract cheating services were asked to produce an assignment in response to a psychology assessment task, our markers often commented that the product lacked psychological theory or adequate conceptualisation. Contract cheating assignments often appeared to take an atheoretical approach to the task, rather than a psychological approach. For example, one marker commented:

No reference to [important theory covered in the unit] or social development as per assignment instructions. Most students at least attempt this and certainly students who can write at this level would include at least cursory mention of key theoretical concepts.
Discussion and conclusion
This is the first study we are aware of to quantify how accurate markers are when asked to detect contract cheating. Prior to conducting this study, we did not know if the markers would replicate Lines' (2016) zero detection rate, or if they would just be guessing. These results should provide some optimism to educators and policy-makers that in at least some circumstances it is possible to detect contract cheating at the time of marking.

The results presented in this paper are from two tasks, in one course, in one language, in one country, in one discipline, with seven markers and a small set of assignments. While the detection rates are substantially better than random chance, this does not mean they are generalisable beyond this context. Further work needs to be done with a range of task types in a range of disciplines and levels of study to establish reasonable baseline detection rates. It is entirely possible that our markers were particularly skilled; our contract cheating sites were particularly inept; the tasks were particularly well designed; or that some other feature of our context produced these results.
Rather than just report sensitivity and specificity rates as single numbers, we have also reported confidence intervals. For our study, confidence intervals show the potential range of the 'true' sensitivity and specificity rates, in recognition that our figures are likely somewhat influenced by random chance. This is commonly done in medical research when sensitivity and specificity are outcome measures from diagnostic tests. The confidence intervals reported in this paper have practical relevance. The CI for sensitivity is promising, as even a detection rate as low as 46% (the lower bound of the confidence interval for sensitivity) has the potential for a significant deterrent effect. It would be surprising if many students attempted this type of cheating if they thought there was a 46% chance of being caught, particularly as the penalty for contract cheating is expulsion at many institutions (Tennant and Duggan 2010; Sutherland-Smith 2014; Quality Assurance Agency 2016). However, the CI for specificity is somewhat troubling. At the lower bound, the true specificity rate of marker detection may be below 90%. This presents significant practical difficulties for the use of marker detection of contract cheating, as anywhere up to 11% of real student assignments may be incorrectly flagged as being purchased. Further work with larger sample sizes is required to better understand where the true detection rates lie.
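The intervals quoted in this paper can be reproduced from the pooled counts using the efficient-score (Wilson) method with continuity correction that the paper cites (Newcombe 1998). A sketch, assuming the pooled figures of 26/42 detected purchases and 94/98 correctly cleared real assignments:

```python
import math

def wilson_cc(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score interval with continuity correction for a proportion
    (Newcombe 1998, method 4); z = 1.96 gives a ~95% interval."""
    p = successes / n
    lo = (2 * n * p + z * z - 1
          - z * math.sqrt(z * z - 2 - 1 / n + 4 * p * (n * (1 - p) + 1))) / (2 * (n + z * z))
    hi = (2 * n * p + z * z + 1
          + z * math.sqrt(z * z + 2 - 1 / n + 4 * p * (n * (1 - p) - 1))) / (2 * (n + z * z))
    # Clamp to [0, 1]; the corrected formula can overshoot at the extremes.
    return max(0.0, lo), min(1.0, hi)

lo, hi = wilson_cc(26, 42)          # sensitivity
print(round(lo, 2), round(hi, 2))   # 0.46 0.76
lo, hi = wilson_cc(94, 98)          # specificity
print(round(lo, 2), round(hi, 2))   # 0.89 0.99
```

Both intervals match the 0.46–0.76 and 0.89–0.99 ranges reported above.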
We are aware that the sample size used in this study may appear small, and while we would have liked to have conducted a statistically powered study, we faced two barriers. Firstly, there is no prior data on detection rates of contract cheating, which is necessary to conduct sample size calculations. Secondly, we faced budgetary and logistical constraints that meant we were limited in terms of the sample size we were able to obtain. In the field of medicine, where sensitivity and specificity are important measures of new tests, it is common to conduct and publish a pilot study like ours to gather and share baseline data for sample size calculation. This then enables the writing of funding applications to conduct an appropriately powered study. This paper contains the necessary data for future researchers to conduct a statistically powered study, with the caveat that there may be differences with different assignments, student populations, disciplines and so on.
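As one illustration of the sample size calculations this baseline data enables, the standard normal-approximation formula n = z²p(1−p)/d² estimates how many marking instances of purchased work would be needed to pin down a detection rate to within a margin of error d. The margins below are our assumptions for illustration, using the pilot's 62% sensitivity as the planning value:

```python
import math

def sample_size(p: float, margin: float, z: float = 1.96) -> int:
    """Marking instances needed to estimate a proportion p to within
    +/- margin at ~95% confidence (simple normal approximation)."""
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

print(sample_size(0.62, 0.10))  # 91 instances for a +/-10 point margin
print(sample_size(0.62, 0.05))  # 363 instances for a +/-5 point margin
```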
Beyond statistically powered and more diverse samples, future work should also focus on approaches to improve detection rates, including training of markers in contract cheating detection strategies, and approaches to prevent contract cheating entirely. This paper demonstrates that it is possible to evaluate the sensitivity and specificity of detection of contract cheating; further work should statistically compare different assessment approaches with respect to these tests.
There are some limitations or difficulties to overcome in conducting research into contract cheating. Gaining student consent to use their assignments to compare against contract-cheated work may make obtaining large samples difficult or costly. Students may not wish to provide their work for this type of research, possibly for fear of being labelled a cheater. We believe it is important to obtain genuine student work in order to measure both sensitivity and specificity of marker accuracy in detecting contract-cheated work as distinct from real student work. However, a potential weakness of this research design is that we may never be completely sure that the student-provided work is legitimate; studies of this type may themselves be vulnerable to contract cheating.
A further challenge in some institutions may be a reluctance of ethics committees to allow the purchase of contract-cheated assignments, particularly in countries like New Zealand where the supply of contract cheating services is illegal (Newton and Lang 2016). However, we believe that to understand the detection and prevention of contract cheating it is necessary to conduct research with real contract cheating assignments. National regulatory bodies in some contexts, such as the Australian Tertiary Education Quality and Standards Agency (TEQSA), require institutions to take reasonable measures to promote academic honesty and prevent academic dishonesty, including contract cheating (TEQSA 2015). We think universities need to hold themselves to the highest levels of evidence when addressing matters as destructive as contract cheating.
Marker justifications for their decisions about contract cheating indicate that discipline knowledge and experience marking the same task may support detection. Indeed, when one researcher on this project (PD), who is from outside the discipline of psychology, attempted to do the same task as the marker participants, his accuracy was little better than random chance. Markers were concerned with sophisticated application of theory that they understood, and disciplinary boundaries around the field of psychology. We suspect that if these markers were to attempt to detect contract cheating in another discipline their accuracy would be significantly lessened. While we do not wish to generalise beyond our study context, we would encourage institutions wishing to address contract cheating to consider the benefits of expertise and experience when employing markers, who are often employed on a short-term or casual basis and not provided with training.
An important caveat for this paper is that detection of contract cheating is not the same as successful prosecution of contract cheating. We do not suggest that marker hunches should be used to judge students. Despite being mostly accurate when they thought contract cheating had occurred, markers were not in a position to provide the level of evidence for their judgements that would be required in many formal academic hearing processes. Unless students or contract cheating sites make a serious mistake, or are the subject of detailed, intensive investigations, proving contract cheating to the standard required in formal university processes is very difficult. However, we do think that in circumstances where there is concern about contract cheating at the point of marking, alternative assessment should be considered. This could take the form of an invigilated assessment of the same outcomes, perhaps a viva. As with any assessment design, factors like class size, workload and logistics would need to be balanced against core concerns of academic integrity, learning and validity for these alternative assessments (Bearman et al. 2016).
The most significant outcome of this study is that it contrasts with the claim that contract cheating is completely undetectable. We have shown that, in some circumstances, markers are able to detect contract cheating most of the time. We hope this result concerns contract cheating services; however, we do not expect it will change any of their guarantees about being impossible to detect. These sites make many promises, such as guaranteed grades and money-back guarantees (Lines 2016). However, when students have attempted to use these guarantees there have been reports of blackmail in the form of threats to reveal the identity of the cheating student (Lancaster 2016). We do hope this study may dissuade potential cheating students, because if their markers are looking for contract cheating they may very well find it and the consequences can be severe.
Acknowledgements
The authors wish to thank Helen Walker and Kevin Dullaghan for their logistical and administrative support for this project.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by project funding from the Office of the Deputy Vice Chancellor (Education) at Deakin University.
Notes on contributors
Phillip Dawson is an associate professor and the associate director of the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University. He researches at the intersection of assessment and learning, with a particular focus on cheating and new technologies.

Wendy Sutherland-Smith is an associate professor and the Director of Teaching and Learning in the School of Psychology at Deakin University. Her research interests include academic integrity, plagiarism, contract cheating and technology.
References
Adams, R. 2015. "Cheating Found to be Rife in British Schools and Universities." Accessed June 14, 2015. https://www.theguardian.com/education/2015/jun/15/cheating-rife-in-uk-education-system-dispatches-investigation-show
American Educational Research Association. 2011. "AERA Code of Ethics, Approved by the AERA Council February 2011." Educational Researcher 40 (3): 145–156. doi:10.3102/0013189X11410403.
Bearman, M., P. Dawson, S. Bennett, M. Hall, E. Molloy, D. Boud, and G. Joughin. 2016. "How University Teachers Design Assessments: A Cross-Disciplinary Study." Higher Education: 1–16. doi:10.1007/s10734-016-0027-7.
Bertram-Gallant, T. 2015. "Leveraging Institutional Integrity for the Betterment of Education." In Handbook of Academic Integrity, edited by Tracey Bretag, 979–994. Singapore: Springer.
Bertram-Gallant, T. 2016. "Response to White's 'Shadow Scholars and the Rise of the Dissertation Service Industry'." Journal of Research Practice 12 (1): Article V2.
Carroll, J. 2009. "Plagiarism as a Threat to Learning: An Educational Response." In Assessment, Learning and Judgement in Higher Education, edited by Gordon Joughin, 1–17. Dordrecht: Springer.
Clarke, R., and T. Lancaster. 2007. "Establishing a Systematic Six-stage Process for Detecting Contract Cheating." In Proceedings of the 2nd International Conference on Pervasive Computing and Applications, Birmingham, July 26–27.
Davies, L., and R. M. Howard. 2016. "Plagiarism for the Internet: Fears, Facts and Pedagogies." In Handbook of Academic Integrity, edited by Tracey Bretag, 591–606. Singapore: Springer.
Faucher, D., and S. Caves. 2009. "Academic Dishonesty: Innovative Cheating Techniques and the Detection and Prevention of Them." Teaching and Learning in Nursing 4 (2): 37–41. doi:10.1016/j.teln.2008.09.003.
Hammersley, M., and A. Traianou. 2012. "Ethics and Educational Research, British Educational Research Association On-line Resource." British Educational Research Association. Accessed September 10. http://www.bera.ac.uk/resources/ethics-and-educational-research
Howard, R. M. 2007. "Understanding 'Internet Plagiarism'." Computers and Composition 24 (1): 3–15. doi:10.1016/j.compcom.2006.12.005.
Lancaster, T. 2016. "'It's Not a Victimless Crime' – The Murky Business of Buying Academic Essays." Accessed 2016. https://www.theguardian.com/higher-education-network/2016/oct/19/its-not-a-victimless-the-murky-business-of-buying-academic-essays
Lancaster, T., and R. Clarke. 2007. "The Phenomena of Contract Cheating." In Student Plagiarism in an Online World: Problems and Solutions, edited by T. Roberts, 144–158. Hershey, PA: Idea Group.
Lines, L. 2016. "Ghostwriters Guaranteeing Grades? The Quality of Online Ghostwriting Services Available to Tertiary Students in Australia." Teaching in Higher Education 21: 889–914. doi:10.1080/13562517.2016.1198759.
Mahmood, Z. 2009. "Contract Cheating: A New Phenomenon in Cyber-plagiarism." International Business Information Management Association (IBIMA) 10: 93–97.
McCabe, D. 2001. "Cheating: Why Students do it and How We can Help them Stop." American Educator, Winter.
McCabe, D., L. Treviño, and K. Butterfield. 2001. "Cheating in Academic Institutions: A Decade of Research." Ethics & Behavior 11: 219–232.
National Health and Medical Research Council, Australian Research Council, and Australian Vice-Chancellors' Committee. 2015. National Statement on Ethical Conduct in Human Research. Canberra: Commonwealth of Australia.
Newcombe, R. G. 1998. "Two-sided Confidence Intervals for the Single Proportion: Comparison of Seven Methods." Statistics in Medicine 17 (8): 857–872. doi:10.1002/(SICI)1097-0258(19980430)17:8<857::AID-SIM777>3.0.CO;2-E.
Newton, P. M., and C. Lang. 2016. "Custom Essay Writers, Freelancers and Other Paid Third Parties." In Handbook of Academic Integrity, edited by Tracey Bretag, 249–272. Singapore: Springer.
O'Malley, M., and T. S. Roberts. 2012. "Plagiarism on the Rise? Combating Contract Cheating in Science Courses." International Journal of Innovation in Science and Mathematics Education 20 (4): 16–24.
Quality Assurance Agency. 2016. "Plagiarism in Higher Education: Custom Essay Writing Services: An Exploration and Next Steps for the UK Higher Education Sector." Accessed September 10. http://www.qaa.ac.uk/en/Publications/Documents/Plagiarism-in-Higher-Education-2016.pdf
Shu, L., F. Gino, and M. Bazerman. 2011. "Dishonest Deed, Clear Conscience: When Cheating Leads to Moral Disengagement and Motivated Forgetting." Personality and Social Psychology Bulletin 37 (3): 330–349. doi:10.1177/014616
Sutherland-Smith, W. 2014. "Legality, Quality Assurance and Learning: Competing Discourses of Plagiarism Management in Higher Education." Journal of Higher Education Policy and Management 36 (1): 29–42. doi:10.1080/1360080X.
Tennant, P., and F. Duggan. 2010. Academic Misconduct Benchmarking Research (AMBeR) Project, Part 2. The Recorded Incidence of Student Plagiarism and the Penalties Applied. London: The Higher Education Academy and JISC.
TEQSA (Tertiary Education Quality and Standards Agency). 2015. "TEQSA Guidance Note – Academic Integrity." http://www.teqsa.gov.au/sites/default/files/GuidanceNote_AcademicIntegrity1.0.pdf.
Wallace, M. J., and P. M. Newton. 2014. "Turnaround Time and Market Capacity in Contract Cheating." Educational Studies 40 (2): 233–236. doi:10.1080/03055698.2014.889597.
White, J. 2016. "Shadow Scholars and the Rise of the Dissertation Service Industry: Can We Maintain Academic Integrity?" Journal of Research Practice 12 (1): Article V1.
Yakovchuk, N., J. Badge, and J. Scott. 2011. "Staff and Student Perspectives on the Potential of Honour Codes in the UK." International Journal for Educational Integrity 7 (2): 37–52.