
Critical Appraisal in Nurse Education Systematic Review

   

Running head: CRITICAL APPRAISAL
Nurse education systematic review- chapter 4
Name of the Student
Name of the University
Author Note

Chapter 4
Introduction
The term critical appraisal encompasses the use of transparent and explicit procedures
for evaluating the data that has been published in scholarly articles, by implementing several
rules of evidence. Conducting a critical appraisal in a systematic review is an essential
process that is imperative for extraction and selection of absolute research articles that are to
be included and discussed in the review (Elwood 2017). The stages of critical appraisal
involve a comprehensive assessment of articles for determining their validity, reliability,
methodological strength, rigour, and generalisability (Claydon 2015). Critical evaluation of
research articles also facilitates the process of reducing bias, besides increasing the transparency of the research process (Brown 2014). Critical evaluation of articles provides a detailed assessment of the research, including its strengths and weaknesses, to guarantee that the studies included in the data extraction and data synthesis processes are of the highest quality (Petticrew and Roberts 2006). Publishing and reporting bias can lead to authors
exaggerating their results to increase the chances of publication (Polit and Beck 2012).
According to Heale and Twycross (2015), determination of the strengths and limitations of the extracted articles will ascertain that the highest-quality available evidence has been incorporated in the systematic review. Inclusion of any articles that do not have a transparent methodology can result in under- or over-estimation of the true impacts of the phenomenon that is being investigated, thereby lowering the validity of the conclusions drawn (LoBiondo-Wood and Haber 2014). Such problems may stem from the study design or from the results themselves.
Furthermore, publication bias also occurs when the study outcomes influence the decision to
publish or disseminate the findings.

The strength and precision of articles are also determined on the basis of the hierarchy of evidence, which facilitates the process of locating the best evidence for developing a well-conducted systematic review. The higher a study design sits in the hierarchy, the greater the rigour of its methodology, thus increasing the likelihood of minimising bias in the results. Systematic reviews carry the highest level of evidence and are categorised as level 1, followed by experimental studies in level 2 such as randomised controlled or non-randomised trials, and quasi-experimental studies (Stegenga 2014). The succeeding level 3 comprises observational/correlational studies, followed by case-controlled studies in level 4. Qualitative research occupies levels 5 and 6, succeeded by expert opinion in level 7, which carries the lowest level of evidence. Almost all articles used for this systematic review belong to level 2
and are experimental or quasi-experimental in design (Greenhalgh 2014). With the aim of
maintaining rigour, validity, and transparency, the identified articles were independently
evaluated by two reviewers having similar educational attainment, and there was no conflict
between them.
Critical appraisal tool
A plethora of critical appraisal tools, such as SURE, CASP, and JBI, provide assistance in analysing scholarly articles and documenting those that contain bias, thus facilitating the inclusion of pertinent, reliable, and valuable scholarly evidence in a systematic review (Young and Eldermire 2017). While SURE comprises a checklist that facilitates the evaluation of randomised controlled trials and experimental studies, CASP helps in appraising systematic reviews, randomised controlled trials, qualitative, case-control, cohort and diagnostic studies, and economic evaluations. In comparison, only the critical appraisal checklist provided by JBI places due focus on quasi-experimental studies, which led to the selection of this tool for the evaluation of articles (Porritt, Gomersall and Lockwood 2014). The image below provides an illustration of the questions that are present in the JBI checklist:
Figure 1- JBI checklist for quasi-experimental studies
(Source- The Joanna Briggs Institute 2017)
Of all the questions that are present in the checklist, questions 1, 2, 3, 5, 7, 8, and 9 are essential for quasi-experimental studies. This can be attributed to the fact that these questions help in determining the recruitment of participants, discrepancies in intervention between the two groups, follow-up and its duration, measurement of outcomes, and reliability of the measurement tools (The Joanna Briggs Institute 2017). The remaining questions are deemed non-essential since they focus on logical aspects of the research such as cause and effect, the control group, and multiple measurements. Although these questions strengthen the quality of research articles, they are not strictly necessary for determining the reliability and validity of the evidence.
Types of bias assessed
As mentioned in the previous sections, quasi-experimental studies were included in the systematic review owing to the fact that they help in estimating the causal effect of a particular intervention on a specific target population, without random allocation of participants. According to White and Sabarwal (2014), these studies are conducted under circumstances where randomization is unethical or impractical; they are relatively easy to accomplish and help in generalising the findings to the target population. Additionally, the absence of any logistical or time constraints helps in identifying result trends. Determination of bias in the included studies will help in evaluating whether the presence of any systematic error during the research distorted the process of outcome measurement, thus decreasing the validity of the research.
1. Selection bias: (Were the participants included in any comparisons similar?)
Selection bias is generally introduced by the selection of groups, data, or individuals for result analysis in a manner that fails to achieve accurate randomization, with the result that the sample does not represent the population being analyzed (Breen, Choi and Holm 2015). Because dissimilarities between participant groups can create an impact on the research outcome over and above any differences in intervention, it is essential to eliminate the chances of such bias in selection. Furthermore, according to Bareinboim and Tian (2015), non-equivalent control group designs are a major threat to the validity of quasi-experimental studies. Therefore, the presence of a control group helps in the comparison of intervention effects, thereby reducing bias. The participant
inclusion/sampling criteria were not clearly stated by Habib et al. (1999), Thabet et al.
(2017), Yoo and Park (2015), Choi, Lindquist and Song (2014), and Arrue et al. (2017). On
the other hand, the protocols followed by the researchers were clearly stated by Gholami et al. (2016), Kang et al. (2015), and Sangestani and Khatiban (2013).
Gholami et al. (2016) selected participants who were third-year undergraduates registered for the critical care course. Unwillingness to participate acted as an exclusion criterion, thus reducing selection bias. Following determination of sample size by
G*Power 3.1, Kang et al. (2015) included students who had a senior grade status, could
complete child health and fundamental nursing with similar credits, and did not participate
previously in any PBL class. The precise inclusion criteria followed by Sangestani and
Khatiban (2013) that prevented selection bias were being a midwifery student, having a similar educational level, being in the same semester and credit course, having the same instructor, and providing consent for participation.
2. Performance bias: Were the participants included in any comparisons receiving
similar treatment/care, other than the exposure or intervention of interest?
According to Mansournia et al. (2017), performance bias is considered a serious threat to the internal validity of scientific studies and generally occurs when a group of participants in a particular experiment receives more attention from the researchers (case) in comparison to another group (control). Performance bias also occurs when the participants alter their responses and/or behavior upon identifying the groups to which they have been allocated. This bias is commonly addressed after enrolment of study subjects by blinding (or masking) the participants and/or research personnel. This plays an important role in lowering the risk of knowledge regarding which intervention was given (Schnell-Inderst et al. 2017). In contrast, detection bias is typically caused by differences in the procedure of determining outcomes between the groups. Although effective blinding plays an important role in ensuring that participants enrolled in the study receive equivalent treatment and interventions, blinding is not always feasible (Hróbjartsson et al.
2014). Habib et al. (1999) did not provide similar care to the participants included in both the
groups, apart from the intervention, since 52 of them were subjected to traditional teaching methods during the fall semester, and the remaining 54 were subjected to a community-oriented problem-based learning (COPBL) approach during the spring semester. The difference in the time frames of the two interventions is likely to produce performance bias. Apart from differences in
intervention (problem-based learning) between the two groups, all students were subjected to similar tools, namely the Nursing Students’ Decision Making Skills Scale and the Nursing Students’ Decision Making Style, thus lowering the chances of performance bias (Thabet et al. 2017). Arrue et al. (2017) also reduced performance bias by subjecting all recruited students to similar learning outcomes, learning program competencies, and content. Similarly, Yoo and Park (2015), Choi, Lindquist and Song (2014), Gholami et al. (2016), Kang et al. (2015), and Sangestani and Khatiban (2013) eliminated performance bias by subjecting all participants to similar circumstances and course content, apart from the interventions.
3. Attrition bias: Was the follow-up complete and if not, were differences between
groups in terms of their follow up adequately described and analysed?
Attrition bias occurs under circumstances that are marked by a difference between the
participant groups in terms of their follow up or drop-out rates (Portney and Watkins 2014).
With the aim of lowering attrition bias, it is important for all researchers to clarify why the study subjects have been lost to follow-up or dropped out, and whether the statistical testing included all of them. One major reason that contributes to attrition bias is the fact that study participants might remain unsatisfied with the efficacy of the interventions, or the treatment side effects might be intolerable (Cheng and Trivedi 2015). Thus, the presence of such bias is considered a crucial methodological problem and deteriorates the validity of the findings if the research participants retained at the end of the study vary from those who withdraw. Therefore, a greater loss of participants to follow-up signifies an increased threat to research validity. There was no attrition bias in the research conducted by Sangestani and Khatiban
(2013), owing to the absence of any information on follow-up. Kang et al. (2015) suggested
that conducting a follow-up study would prove beneficial in identifying the existing variances
between pre- and post-test impacts of confidence of skill performance test (CSP) and
satisfaction. There was no scope for attrition bias in the research studies conducted by
Gholami et al. (2016), Thabet et al. (2017), Arrue et al. (2017), Yoo and Park (2015), and
Choi, Lindquist and Song (2014). This can be attributed to the fact that the aforementioned researchers did not conduct any follow-up for the research. This is one major drawback, since conducting a follow-up helps in augmenting the overall efficiency of the investigation. Furthermore, no participants withdrew from the studies at any point.
4. Information bias: Were the outcomes of participants included in any comparisons
measured in the same way?
The effects or outcomes of a research study need to be assessed in a similar manner for both the case and control groups. The use of different methods for measuring the outcomes of study participants enrolled in different groups threatens the validity of the research results (Sedgwick 2014). This in turn results in misperception about the study outcomes and whether they are caused by the treatment effect or by the intervention of interest. In other words, information bias arises
