Critical Appraisal: Nursing Education Systematic Review - Chapter 4

Running head: CRITICAL APPRAISAL
Nurse education systematic review- chapter 4
Name of the Student
Name of the University
Author Note
Chapter 4
Introduction
The term critical appraisal encompasses the use of transparent and explicit procedures
for evaluating the data that have been published in scholarly articles, by implementing several
rules of evidence. Conducting a critical appraisal in a systematic review is an essential
process, imperative for the extraction and selection of the relevant research articles that are to
be included and discussed in the review (Elwood 2017). The stages of critical appraisal
involve a comprehensive assessment of articles to determine their validity, reliability,
methodological strength, rigour, and generalisability (Claydon 2015). Critical evaluation of
research articles also facilitates the process of reducing bias, besides increasing the
transparency of the research process (Brown 2014). Critical evaluation of articles provides a
detailed assessment of the research, including its strengths and weaknesses, to guarantee that the
investigations included in the data extraction and data synthesis processes are of the highest
quality (Petticrew and Roberts 2006). Publishing and reporting bias can lead to authors
exaggerating their results to increase the chances of publication (Polit and Beck 2012).
According to Heale and Twycross (2015), determination of the strengths and limitations of
the extracted articles will ascertain that high-quality, available evidence has been
incorporated in the systematic review. Inclusion of any articles that do not have a transparent
methodology can result in under- or over-estimation of the true impacts of the phenomenon
being investigated, thereby lowering the validity of the conclusions drawn (LoBiondo-
Wood and Haber 2014). Such articles might have problems with the study design or with the
reported results. Furthermore, publication bias also occurs when the study outcomes influence
the decision to publish or disseminate the findings.
The strength and precision of articles are also determined on the basis of the hierarchy of
evidence, which facilitates the process of locating the best evidence for developing a well-
conducted systematic review. The higher a study design sits in the hierarchy, the greater the
rigour of its methodology, and thus the greater the likelihood of minimising bias in the results.
Systematic reviews carry the highest level of evidence and are categorised as level 1, followed by
experimental studies at level 2, such as randomised controlled or non-randomised trials and
quasi-experimental studies (Stegenga 2014). The succeeding level 3 comprises
observational/correlational studies, followed by case-controlled studies at level 4. Qualitative
research occupies levels 5 and 6, succeeded by expert opinions at level 7, which carry the
lowest level of evidence. Almost all articles used for this systematic review belong to level 2
and are experimental or quasi-experimental in design (Greenhalgh 2014). With the aim of
maintaining rigour, validity, and transparency, the identified articles were independently
evaluated by two reviewers having similar educational attainment, and there was no conflict
between them.
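
As an illustrative aid only (not part of the original review), the seven-level hierarchy described above can be encoded as a simple lookup table so that candidate studies can be ranked programmatically during screening; the level labels are paraphrased from this paragraph and the ranking helper is hypothetical.

# Evidence hierarchy from the paragraph above, encoded as a lookup
# table; rank_studies is a hypothetical screening helper.
EVIDENCE_HIERARCHY = {
    1: "Systematic reviews",
    2: "Experimental studies (randomised/non-randomised trials, quasi-experimental)",
    3: "Observational/correlational studies",
    4: "Case-controlled studies",
    5: "Qualitative research",
    6: "Qualitative research",
    7: "Expert opinion",
}

def rank_studies(studies):
    """Sort (title, level) pairs so the strongest evidence comes first."""
    return sorted(studies, key=lambda study: study[1])

if __name__ == "__main__":
    candidates = [("Kang et al. 2015", 2), ("Expert commentary", 7)]
    for title, level in rank_studies(candidates):
        print(f"Level {level} ({EVIDENCE_HIERARCHY[level]}): {title}")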
Critical appraisal tool
There is a plethora of critical appraisal tools, such as SURE, CASP, and JBI, that provide
assistance in analysing scholarly articles and documenting those that contain bias, thus
facilitating the inclusion of pertinent, reliable and valuable scholarly evidence in a
systematic review (Young and Eldermire 2017). While SURE comprises a checklist that
facilitates the evaluation of randomised controlled trials and experimental studies, CASP also
helps in appraising systematic reviews, randomised controlled trials, qualitative, case-control,
cohort and diagnostic studies, and economic evaluations. In comparison, only the critical
appraisal checklist provided by JBI places due focus on quasi-experimental studies, which
led to the selection of this tool for the evaluation of articles (Porritt, Gomersall and Lockwood
2014). The figure below provides an illustration of the questions that are present in the JBI
checklist:
Figure 1- JBI checklist for quasi-experimental studies
(Source- The Joanna Briggs Institute 2017)
Of all the questions present in the checklist, questions 1, 2, 3, 5, 7, 8, and 9
are essential for quasi-experimental studies. This can be accredited to the fact that these
questions help in determining the recruitment of participants, discrepancies in intervention
between the two groups, follow-up and its duration, measurement of outcomes, and the reliability
of the measurement tools (The Joanna Briggs Institute 2017). The rest of the questions are
deemed non-essential, since they focus on logical aspects of the research such as cause and
effect, control groups, and multiple measurements. Although these questions strengthen the
quality of research articles, they are not strictly necessary for determining the reliability and
validity of the evidence.
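
To make the appraisal logic above concrete, a minimal sketch follows (a hypothetical helper, not part of the JBI tool itself) that tallies "yes" answers across the nine checklist items and flags any study failing one of the questions this chapter treats as essential.

# Hypothetical JBI-style scoring helper; the essential set {1,2,3,5,7,8,9}
# follows the discussion in this chapter.
ESSENTIAL_QUESTIONS = {1, 2, 3, 5, 7, 8, 9}

def appraise(answers):
    """answers maps question number (1-9) to True, False, or None (unclear)."""
    failed = [q for q in ESSENTIAL_QUESTIONS if answers.get(q) is not True]
    return {
        "yes_count": sum(1 for v in answers.values() if v is True),
        "failed_essential": failed,
        "include": not failed,
    }

if __name__ == "__main__":
    # Hypothetical study answering "yes" to every item except Q4 and Q6.
    example = {q: q not in (4, 6) for q in range(1, 10)}
    print(appraise(example))  # include=True: no essential question failed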
Types of bias assessed
As mentioned in the previous sections, quasi-experimental studies were included in the
systematic review owing to the fact that they help in estimating the causal effect of a particular
intervention on a specific target population, without random allocation of participants.
According to White and Sabarwal (2014), these studies are conducted under circumstances in
which randomisation is unethical or impractical; they are relatively easier to accomplish
and help in generalising the findings to the target population. Additionally, the absence of
any logistical or time constraints helps in identifying trends in the results. Determining the bias of
the included studies will help in evaluating whether the presence of any systematic error
during the research distorted the process of outcome measurement, thus decreasing the validity of
the research.
1. Selection bias: Were the participants included in any comparisons similar?
Selection bias is generally introduced by selecting groups, data or individuals for
analysis in a manner that fails to achieve accurate randomisation, so that the sample does not
represent the population being analysed (Breen, Choi and Holm 2015). Since dissimilarities
between participant groups can affect the research outcome, thereby producing apparent
differences in intervention effects, it is essential to eliminate the chances of such selection
bias. Furthermore,
according to Bareinboim and Tian (2015), non-equivalent control group designs are a major
threat to the validity of quasi-experimental studies. Therefore, the presence of a control group
helps in comparing the intervention effects, thereby reducing bias. The participant
inclusion/sampling criteria were not clearly stated by Habib et al. (1999), Thabet et al.
(2017), Yoo and Park (2015), Choi, Lindquist and Song (2014), and Arrue et al. (2017). On
the other hand, the protocols followed by the researchers were clearly stated by Gholami et
al. (2016), Kang et al. (2015), and Sangestani and Khatiban (2013). Gholami et al. (2016)
selected participants who were third-year undergraduates registered for a critical care course.
Unwillingness to participate acted as an exclusion criterion, thus reducing selection bias.
Following determination of the sample size by G*Power 3.1, Kang et al. (2015) included
students who had senior grade status, could complete child health and fundamental nursing
with similar credits, and had not previously participated in any PBL class. The precise
inclusion criteria followed by Sangestani and Khatiban (2013) that prevented selection bias
were being a midwifery student, a similar educational level, a similar semester and credit
course, the same instructor, and consent for participation.
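
The a-priori sample-size determination attributed above to Kang et al. (2015) was performed in G*Power 3.1; the same kind of calculation can be reproduced in Python with statsmodels. The sketch below assumes a hypothetical medium effect size (Cohen's d = 0.5), alpha of 0.05, power of 0.80 and equal group sizes; these are illustrative inputs, not the values reported in the study.

# A-priori sample size for an independent two-group comparison,
# analogous to a G*Power 3.1 calculation. All inputs are assumed.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed Cohen's d (medium effect)
    alpha=0.05,       # two-sided significance level
    power=0.80,       # desired statistical power
    ratio=1.0,        # equal allocation to both groups
)
print(f"Required sample size per group: {n_per_group:.0f}")  # about 64

With these conventional inputs the calculation returns roughly 64 participants per group, the kind of figure such power-analysis tools report.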
2. Performance bias: Were the participants included in any comparisons receiving
similar treatment/care, other than the exposure or intervention of interest?
According to Mansournia et al. (2017), performance bias is considered a serious threat
to the internal validity of scientific studies and generally occurs when one group of participants
in an experiment receives more attention from the researchers than another group.
Performance bias also occurs when participants alter their responses and/or behaviour upon
identifying the groups to which they have been allocated. This bias is commonly addressed
after enrolment of the study subjects by blinding (or masking) the participants and/or research
personnel, which plays an important role
in lowering the risk that participants and personnel know which intervention was given
(Schnell-Inderst et al. 2017). In contrast, detection bias is typically caused by differences
between the groups in the procedures used to determine outcomes. Although effective blinding
plays an important role in ensuring that participants enrolled in the study receive equivalent
treatment and interventions, ensuring blinding is not always feasible (Hróbjartsson et al.
2014). Habib et al. (1999) did not provide similar care to the participants in both groups,
apart from the intervention, since 52 of them were subjected to traditional teaching methods
during the fall semester and the remaining 54 to a community-oriented problem-based
learning (COPBL) approach during the spring semester. This difference in the time frames of
the two interventions is likely to produce performance bias. Apart from the difference in
intervention (problem-based learning) between the two groups, all students were subjected to
the same tools, namely the Nursing Students' Decision Making Skills Scale and Nursing
Students' Decision Making Style, thus lowering the chances of performance bias (Thabet et
al. 2017). Arrue et al. (2017) also reduced performance bias by subjecting all recruited
students to the same learning outcomes, learning program competencies, and content. Yoo
and Park (2015), Choi, Lindquist and Song (2014), Gholami et al. (2016), Kang et al. (2015),
and Sangestani and Khatiban (2013) likewise eliminated performance bias by subjecting all
students to similar circumstances and course content, apart from the interventions.
3. Attrition bias: Was the follow-up complete and if not, were differences between
groups in terms of their follow up adequately described and analysed?
Attrition bias occurs under circumstances marked by a difference between the
participant groups in terms of their follow-up or drop-out rates (Portney and Watkins 2014).
With the aim of lowering attrition bias, it is important for researchers to clarify why study
subjects have been lost to follow-up or have dropped out, and whether the statistical testing
included all of them. One major reason that contributes to attrition bias is the fact that study
participants might remain unsatisfied with the efficacy of the interventions, or the treatment
side effects might be intolerable (Cheng and Trivedi 2015). Thus, the presence of such bias is
considered a crucial methodological problem that deteriorates the validity of the findings if
the research participants retained at the end of the study differ from those who withdraw.
A greater loss of participants to follow-up therefore signifies an increased threat to the
research validity. No attrition bias was evident in the research conducted by Sangestani and
Khatiban (2013), owing to the absence of any information on follow-up. Kang et al. (2015)
suggested that conducting a follow-up study would prove beneficial in identifying the existing
variances between the pre- and post-test impacts on the confidence of skill performance (CSP)
test and satisfaction. There was no scope for attrition bias in the research studies conducted by
Gholami et al. (2016), Thabet et al. (2017), Arrue et al. (2017), Yoo and Park (2015), and
Choi, Lindquist and Song (2014), since these researchers did not conduct any follow-up. This
is a major drawback, since conducting a follow-up helps in augmenting the overall efficiency
of the investigation. Furthermore, no participants withdrew from the studies at any point of
time.
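
Where follow-up is actually reported, the between-group difference in drop-out that defines attrition bias can be checked directly. The sketch below uses invented completion counts purely for illustration; none of the included studies reported these figures.

# Chi-square test on a 2x2 completion table to check for differential
# attrition between groups; the counts are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: intervention group, control group; columns: completed, dropped out.
completion_table = [[48, 6],
                    [50, 2]]

chi2, p_value, dof, expected = chi2_contingency(completion_table)
for label, row in zip(("Intervention", "Control"), completion_table):
    print(f"{label} drop-out rate: {row[1] / sum(row):.1%}")
print(f"Chi-square p-value: {p_value:.3f}")  # large p: no evidence of differential attrition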
4. Information bias: Were the outcomes of participants included in any comparisons
measured in the same way?
The effects or outcomes of a research study need to be assessed in the same manner for both
the case and control groups. The presence of different methods for measuring the outcomes of
study participants enrolled in different groups threatens the validity of the research results
(Sedgwick 2014). This in turn results in misperception about the study outcomes and about
whether they are caused by the intervention of interest or by some other factor. In other
words, information bias arises
from errors in measurement and is also categorised as a major form of observational bias,
which is responsible for an inappropriate approximation of the association between a particular
exposure and its expected outcome. Some key questions that help in determining the presence
of information bias are: (i) whether similar scales or instruments were used; (ii) whether
similar processes were undertaken to determine the research outcomes; and (iii) whether the
procedures, instructions, and measurement times were alike for all study participants
(Althubaiti 2016). This kind of bias can make researchers subconsciously influence the
participants of a research study. Hence, such a discrepancy from the actual truth during the
research process lessens the reliability and validity of the study findings. Information bias was
largely absent from the included studies. Habib et al. (1999) evaluated three outcome measures
for all participants upon completion of the course. The non-parametric tests used by Arrue et
al. (2017), the Nursing Students' Decision Making Skills Scale and Nursing Students'
Decision Making Style tools used by Thabet et al. (2017), and the measurement of
problem-solving ability, communication skills, and learning motivation among all
participants by Yoo and Park (2015) signify the absence of any information bias in the
included studies. Standardised self-administered questionnaires were used for all research
subjects by Choi, Lindquist and Song (2014) and Gholami et al. (2016) as well. Similar
descriptive statistical tests were also employed by Kang et al. (2015), thus indicating no
information bias. Nonetheless, Sangestani and Khatiban (2013) used a questionnaire
comparing the outcomes of LBL and PBL only for the experimental group, thereby reducing
the research reliability.
5. Reliability: Were outcomes measured in a reliable way?
The reliability of the outcome measures is a crucial aspect that helps in determining research
validity. Failure to measure research outcomes correctly would also threaten the validity and
reliability of a study (Noble and Smith 2015). Habib et al. (1999) used three reliable tools for
evaluating the students' acquisition of the subject matter. Several outcomes, namely
communication skills, problem-solving capability, and learning motivation, were correctly
estimated by Yoo and Park (2015) with the use of the Communication Assessment Tool (CAT),
the Problem-Solving Inventory (PSI), and the Instructional Materials Motivation Scale (IMMS),
which had already been formulated and implemented by several researchers, thereby
establishing their dependability. Use of the Shapiro-Wilk test and the Kolmogorov-Smirnov
non-parametric test helped in comparing the two groups on the basis of distributional
adequacy (Arrue et al. 2017). Reliability in outcome measurement was also found in the
studies conducted by Thabet et al. (2017), Gholami et al. (2016), Kang et al. (2015), and
Choi, Lindquist and Song (2014), who used the Nursing Students' Decision Making Skills
Scale and Nursing Students' Decision Making Style, the California Critical Thinking Skills
Test form B (CCTST-B) and Meta-Cognitive Awareness Inventory (MAI), a student
satisfaction scale, and the Critical Thinking Ability Scale for College Students,
Problem-solving Scale for College Students and Self-directed Learning Scale for College
Students, respectively. Although Sangestani and Khatiban (2013) used two questionnaires
that had already been developed and tested by other researchers, they designed two other
data collection tools themselves, thus leading to uncertainty regarding their reliability.
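
The distributional checks attributed to Arrue et al. (2017) can be illustrated in a few lines of Python; the scores below are randomly generated stand-ins, not the study's data.

# Shapiro-Wilk per group, then a two-sample Kolmogorov-Smirnov test,
# mirroring the distributional-adequacy checks named above.
# The scores are simulated placeholders, not real study data.
import numpy as np
from scipy.stats import shapiro, ks_2samp

rng = np.random.default_rng(42)
pbl_scores = rng.normal(loc=75, scale=8, size=40)      # hypothetical PBL group
lecture_scores = rng.normal(loc=70, scale=8, size=40)  # hypothetical lecture group

for name, scores in (("PBL", pbl_scores), ("Lecture", lecture_scores)):
    _, p = shapiro(scores)
    print(f"Shapiro-Wilk ({name}): p = {p:.3f}")  # p > 0.05: normality not rejected

_, ks_p = ks_2samp(pbl_scores, lecture_scores)
print(f"Two-sample KS test: p = {ks_p:.3f}")  # compares the two score distributions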
6. Was appropriate statistical analysis used?
Statistical analysis is an important aspect of quantitative research and is intended to quantify
the collected data, which typically comprise results from surveys, descriptive data or
observational data. Conducting a
statistical analysis is vital for evaluating the usefulness and credibility of the information and
results collected from an investigation (Afifi and Azen 2014). Most of the articles focused on
descriptive statistics, thus ensuring that accurate statistical analyses had been performed by the
researchers. Descriptive statistics, one-way ANOVA, paired t-tests, and chi-square tests,
computed in SPSS, were used by Sangestani and Khatiban (2013), Gholami et al. (2016),
Kang et al. (2015), Choi, Lindquist and Song (2014), Habib et al. (1999), Arrue et al. (2017),
and Yoo and Park (2015). Apart from the SPSS software, the Monte Carlo exact test, Fisher's
exact test, McNemar test, and marginal homogeneity test were also used by Thabet et al. (2017),
thus demonstrating appropriate statistical comparison of the numerical data.
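
For readers without SPSS, the core comparisons named in this section map directly onto open-source equivalents. The sketch below runs a paired t-test, a one-way ANOVA, a chi-square test, a Fisher's exact test and a McNemar test on simulated placeholder data; it reproduces the kinds of tests used, not any study's actual analysis.

# Open-source equivalents of the tests named above, on simulated data.
import numpy as np
from scipy.stats import ttest_rel, f_oneway, chi2_contingency, fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Paired t-test: pre- vs post-intervention scores for the same students.
pre = rng.normal(60, 10, 30)
post = pre + rng.normal(5, 5, 30)  # simulated improvement after teaching
print("Paired t-test p:", ttest_rel(pre, post).pvalue)

# One-way ANOVA across three hypothetical teaching modalities.
groups = [rng.normal(mean, 8, 25) for mean in (68, 72, 75)]
print("One-way ANOVA p:", f_oneway(*groups).pvalue)

# Chi-square and Fisher's exact tests on a 2x2 pass/fail table.
pass_fail = [[20, 5], [14, 11]]
print("Chi-square p:", chi2_contingency(pass_fail)[1])
print("Fisher's exact p:", fisher_exact(pass_fail)[1])

# McNemar test on paired binary outcomes (e.g. pass/fail before vs after).
paired_outcomes = [[12, 3], [8, 7]]
print("McNemar p:", mcnemar(paired_outcomes, exact=True).pvalue)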
Included studies
A. Thabet et al. (2017)
Determined the impact that PBL exerts on the decision making style of nursing
students.
Strength- Study was unique and novel in Egypt; Could illustrate the effects of PBL
teaching approach; was in accordance with previous findings regarding cognitive
aspect of decision-making style
Limitation- Altering student style is a multifaceted process; follow-up required.
B. Habib et al. (1999)
Explored impacts of COPBL on university nursing programs
Strength- Could generate higher enthusiasm among students; could increase the
achievement scores with COPBL implementation
Limitation- Interventions not implemented concomitantly; effects might have been
influenced by information exchange and class sequencing
C. Arrue et al. (2017)
Analyzed argumentative and declarative knowledge of students about depression
nursing care
Strength- Recognized the importance of declarative knowledge in professional action;
highlighted positive perception of students regarding PBL
Limitation- Greater planning and time management required; students were
overwhelmed; lack of scientific arguments from students
D. Choi, Lindquist and Song (2014)
Determined the effects of PBL on different outcome abilities of students, related to
learning
Strength- Increase in post-test scores after PBL implementation depicted the
association between problem-solving capabilities and nursing care quality; findings
were in accordance with other studies conducted in Korea
Limitation- Small sample size prevented result generalizability; absence of
comparability between nursing students
E. Kang et al. (2015)
Compared changes in different nursing knowledge and skill performance aspects,
based on implementation of three learning modalities
Strength- Helped in determining impacts of all modalities; showed that PBL
simulation improved knowledge
Limitation- Need for assessing other educational modalities as well
F. Gholami et al. (2016)
Compared impacts of traditional lectures and PBL methods
Strength- Significant score improvements highlighted the importance of PBL in
enhancing critical thinking; identified PBL as a strong predictor of critical thinking
skill development