Critical Appraisal: Nursing Education Systematic Review - Chapter 4

Chapter 4
Introduction
The term critical appraisal encompasses the use of transparent and explicit procedures for evaluating data published in scholarly articles, by applying established rules of evidence. Conducting a critical appraisal is an essential step in a systematic review, as it governs the extraction and selection of appropriate research articles for inclusion and discussion (Elwood 2017). The stages of critical appraisal involve a comprehensive assessment of articles to determine their validity, reliability, methodological strength, rigour, and generalisability (Claydon 2015). Critical evaluation of research articles also facilitates the reduction of bias and increases the transparency of the research process (Brown 2014). It provides a detailed assessment of each study, including its strengths and weaknesses, to guarantee that the investigations included in the data extraction and synthesis processes are of the highest quality (Petticrew and Roberts 2006). Publishing and reporting bias can lead authors to exaggerate their results to increase the chances of publication (Polit and Beck 2012). According to Heale and Twycross (2015), determining the strengths and limitations of the extracted articles ascertains that high-quality, available evidence has been incorporated in the systematic review. Including articles that lack a transparent methodology can result in under- or over-estimation of the true effects of the phenomenon under investigation, thereby lowering the validity of the conclusions drawn (LoBiondo-Wood and Haber 2014); such problems may lie in either the study design or the results. Furthermore, publication bias also occurs when the study outcomes influence the decision to publish or disseminate the findings.
The strength and precision of articles are also determined on the basis of the hierarchy of evidence, which facilitates the process of locating the best evidence for developing a well-conducted systematic review. The higher a study design sits in the hierarchy, the greater the rigour of its methodology and the greater the likelihood of minimising bias in the results. Systematic reviews carry the highest level of evidence and are categorised as level 1, followed by experimental studies at level 2, such as randomised controlled or non-randomised trials and quasi-experimental studies (Stegenga 2014). The succeeding level 3 comprises observational/correlational studies, followed by case-controlled studies at level 4. Qualitative research occupies levels 5 and 6, succeeded by expert opinion at level 7, which carries the lowest level of evidence. Almost all articles used for this systematic review belong to level 2 and are experimental or quasi-experimental in design (Greenhalgh 2014). With the aim of maintaining rigour, validity, and transparency, the identified articles were independently evaluated by two reviewers with similar educational attainment, and there was no conflict between them.
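To make the levels concrete, the seven-level scheme above can be written down as a simple lookup table. The following Python sketch is purely illustrative; the `level_of` helper is a hypothetical convenience, not part of any appraisal tool.

```python
# Illustrative encoding of the seven-level evidence hierarchy described above.
EVIDENCE_HIERARCHY = {
    1: "Systematic reviews",
    2: "Experimental studies (randomised controlled trials, "
       "non-randomised trials, quasi-experimental studies)",
    3: "Observational/correlational studies",
    4: "Case-controlled studies",
    5: "Qualitative research",
    6: "Qualitative research",
    7: "Expert opinion",
}

def level_of(design: str) -> int:
    """Return the hierarchy level for a study design (lower = stronger evidence)."""
    for level, description in EVIDENCE_HIERARCHY.items():
        if design.lower() in description.lower():
            return level
    raise ValueError(f"Unknown study design: {design}")

# Almost all articles appraised in this review are quasi-experimental, i.e. level 2.
print(level_of("quasi-experimental"))  # -> 2
```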
Critical appraisal tool
There is a plethora of critical appraisal tools, such as SURE, CASP, and JBI, that provide assistance in analysing scholarly articles and documenting those that contain bias, thus facilitating the inclusion of pertinent, reliable, and valuable scholarly evidence in a systematic review (Young and Eldermire 2017). While SURE comprises a checklist that facilitates the evaluation of randomised controlled trials and experimental studies, CASP also helps in appraising systematic reviews, randomised controlled trials, qualitative, case-control, cohort, and diagnostic studies, and economic evaluations. In comparison, only the critical appraisal checklist provided by JBI places due focus on quasi-experimental studies, which motivated the selection of this tool for the evaluation of the articles (Porritt, Gomersall and Lockwood 2014). The figure below illustrates the questions present in the JBI checklist:

Figure 1- JBI checklist for quasi-experimental studies
(Source- The Joanna Briggs Institute 2017)
Of all the questions present in the checklist, questions 1, 2, 3, 5, 7, 8, and 9 are essential for quasi-experimental studies. This can be accredited to the fact that these questions help in determining the recruitment of participants, discrepancies in intervention between the two groups, follow-up and its duration, measurement of outcomes, and the reliability of the measurement tools (The Joanna Briggs Institute 2017). The remaining questions are deemed non-essential, since they focus on logical aspects of the research such as cause and effect, the control group, and multiple measurements. Although these questions strengthen the quality of research articles, they are not strictly necessary for determining the reliability and validity of the evidence.
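Because the later inclusion/exclusion decisions hinge on this essential/non-essential split, the rule can be expressed compactly. The Python sketch below is a hypothetical illustration under the simplifying assumption that a "No" on any essential question fails a study; the review's actual judgements also weighed "Not clear" responses.

```python
# Hypothetical screening helper reflecting the essential/non-essential split
# described above (questions 1, 2, 3, 5, 7, 8 and 9 essential; 4 and 6
# desirable only). An illustration of the logic, not JBI software.
ESSENTIAL_QUESTIONS = {1, 2, 3, 5, 7, 8, 9}

def fails_essential(answers: dict[int, str]) -> bool:
    """Flag a study whose response to any essential question is 'No'."""
    return any(answers[q] == "No" for q in ESSENTIAL_QUESTIONS)

# Responses recorded for Kim et al. (2016) in Table 3:
kim_2016 = {1: "No", 2: "Yes", 3: "No", 4: "Yes", 5: "Yes",
            6: "Not clear (no follow-up conducted)", 7: "Yes", 8: "Yes", 9: "Yes"}
print(fails_essential(kim_2016))  # -> True, consistent with its exclusion
```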
Types of bias assessed
As mentioned in the previous sections, quasi-experimental studies were included in the systematic review because they help in estimating the causal effect of a particular intervention on a specific target population without random allocation of participants. According to White and Sabarwal (2014), these studies are conducted under circumstances in which randomisation is unethical or impractical; they are relatively easier to accomplish and help in generalising the findings to the target population. Additionally, the absence of logistical or time constraints helps in identifying trends in the results. Determining the bias of the included studies will help in evaluating whether any systematic error during the research distorted the process of outcome measurement, thus decreasing the validity of the research.
1. Selection bias: Were the participants included in any comparisons similar?

Selection bias is generally introduced by the selection of groups, data, or individuals for analysis in a manner that fails to achieve accurate randomisation, meaning that the sample does not represent the population being analysed (Breen, Choi and Holm 2015). Because dissimilarities between participant groups can affect the research outcome, thereby producing apparent differences in intervention effects, it is essential to eliminate the chances of such bias in selection. Furthermore, according to Bareinboim and Tian (2015), non-equivalent control group designs are a major threat to the validity of quasi-experimental studies; the presence of a control group therefore helps in comparing intervention effects, thereby reducing bias. The participant inclusion/sampling criteria were not clearly stated by Habib et al. (1999), Thabet et al. (2017), Yoo and Park (2015), Choi, Lindquist and Song (2014), and Arrue et al. (2017). On the other hand, the protocols followed by the researchers were clearly stated by Gholami et al. (2016), Kang et al. (2015), and Sangestani and Khatiban (2013). Gholami et al. (2016) selected participants who were third-year undergraduates registered for a critical care course; unwillingness to participate acted as an exclusion criterion, thus reducing selection bias. Following determination of the sample size with G*Power 3.1, Kang et al. (2015) included students who had senior grade status, could complete child health and fundamental nursing with similar credits, and had not previously participated in any PBL class. The precise inclusion criteria followed by Sangestani and Khatiban (2013) that prevented selection bias were being a midwifery student, similar educational level, similar semester and course credits, the same instructor, and consent to participate.
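For context, an a priori sample-size calculation of the kind Kang et al. (2015) report performing in G*Power 3.1 can be sketched in Python with statsmodels; the effect size, alpha, and power below are illustrative assumptions, not values taken from the study.

```python
# A minimal sketch of an a priori sample-size calculation for a two-group
# comparison, analogous to the G*Power 3.1 procedure mentioned above.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed medium effect (Cohen's d) - illustrative
    alpha=0.05,               # two-sided significance level
    power=0.80,               # desired statistical power
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")  # ~64 per group
```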
2. Performance bias: Were the participants included in any comparisons receiving
similar treatment/care, other than the exposure or intervention of interest?
According to Mansournia et al. (2017), performance bias is considered a serious threat to the internal validity of scientific studies and generally occurs when one group of participants in an experiment (the case group) receives more attention from the researchers than another group (the control group). Performance bias also occurs when participants alter their responses and/or behaviour upon identifying the groups to which they have been allocated. This bias is commonly addressed after enrolment of study subjects by blinding (or masking) the participants and/or research personnel, which plays an important role in lowering the risks arising from knowledge of the intervention given (Schnell-Inderst et al. 2017). In contrast, detection bias is typically caused by differences in the procedures used to determine outcomes between the groups. Although effective blinding plays an important role in ensuring that participants enrolled in a study receive equivalent treatment and interventions, blinding is not always feasible (Hróbjartsson et al. 2014). Habib et al. (1999) did not provide similar care to the participants in both groups, apart from the intervention, since 52 of them were subjected to traditional teaching methods during the fall semester and the remaining 54 to a community-oriented problem-based learning (COPBL) approach during the spring semester; the difference in time frame between the two interventions is likely to produce performance bias. Apart from the difference in intervention (problem-based learning) between the two groups, all students were assessed with the same tools, namely the Nursing Students' Decision Making Skills Scale and Nursing Students' Decision Making Style, thus lowering the chances of performance bias (Thabet et al. 2017). Arrue et al. (2017) also reduced performance bias by subjecting all recruited students to the same learning outcomes, programme competencies, and content. Yoo and Park (2015), Choi, Lindquist and Song (2014), Gholami et al. (2016), Kang et al. (2015), and Sangestani and Khatiban (2013) likewise eliminated performance bias by subjecting all participants to similar circumstances and course content, apart from the interventions.
3. Attrition bias: Was the follow-up complete and if not, were differences between groups in terms of their follow-up adequately described and analysed?

Attrition bias occurs under circumstances marked by a difference between the participant groups in terms of their follow-up or drop-out rates (Portney and Watkins 2014). To lower attrition bias, it is important for researchers to clarify why study subjects were lost to follow-up or dropped out, and whether the statistical testing included all of them. One major reason contributing to attrition bias is that study participants might remain unsatisfied with the efficacy of the interventions, or the treatment side effects might be intolerable (Cheng and Trivedi 2015). Thus, the presence of such bias is a crucial methodological problem that deteriorates the validity of the findings if the research participants retained at the end of the study differ from those who withdrew. A greater loss of participants to follow-up therefore signifies an increased threat to research validity. There was no attrition bias in the research conducted by Sangestani and Khatiban (2013), owing to the absence of any follow-up. Kang et al. (2015) suggested that conducting a follow-up study would prove beneficial in identifying the existing variances between pre- and post-test impacts on confidence in skill performance (CSP) and satisfaction. There was likewise no scope for attrition bias in the studies conducted by Gholami et al. (2016), Thabet et al. (2017), Arrue et al. (2017), Yoo and Park (2015), and Choi, Lindquist and Song (2014), because these researchers did not conduct any follow-up. This is one major drawback, since conducting a follow-up helps in augmenting the overall value of the investigation. Furthermore, no participants withdrew from the studies at any point in time.
4. Information bias: Were the outcomes of participants included in any comparisons
measured in the same way?
The effects or outcomes of a research study need to be assessed in the same manner for both the case and control groups. Using different methods to measure the outcomes of study participants enrolled in different groups threatens the validity of the research results (Sedgwick 2014), and in turn creates misperception about whether the study outcomes are attributable to the intervention of interest. In other words, information bias arises from errors in measurement and is also categorised as a major form of observational bias, which is responsible for an inappropriate estimation of the association between a particular exposure and its expected outcome. Some key questions that help in determining the presence of information bias are: (i) whether similar scales or instruments were used; (ii) whether similar processes were undertaken to determine the research outcomes; and (iii) whether the procedures, instructions, and measurement times were alike for all study participants (Althubaiti 2016). This kind of bias can make researchers subconsciously influence the participants of a study; such discrepancy from the actual truth during the research process lessens the reliability and validity of the study findings. There was no information bias evident in the included studies. Habib et al. (1999) evaluated three outcome measures for all participants upon completion of the course. The non-parametric tests used by Arrue et al. (2017), the Nursing Students' Decision Making Skills Scale and Nursing Students' Decision Making Style tools used by Thabet et al. (2017), and the measurement of problem-solving ability, communication skills, and learning motivation among all participants by Yoo and Park (2015) signify the absence of information bias in those studies. Standardised self-administered questionnaires were used for all research subjects by Choi, Lindquist and Song (2014) and Gholami et al. (2016) as well. Similar descriptive statistical tests were also employed by Kang et al. (2015), again indicating no information bias. Nonetheless, Sangestani and Khatiban (2013) used a questionnaire comparing the outcomes of LBL and PBL only for the experimental group, thereby reducing research reliability.
5. Reliability: Were outcomes measured in a reliable way?

The reliability of outcome measures is a crucial aspect in determining research validity; failure to measure research outcomes correctly threatens both the validity and reliability of a study (Noble and Smith 2015). Habib et al. (1999) used three reliable tools for evaluating students' acquisition of the subject matter. Several outcomes, namely communication skills, problem-solving capability, and learning motivation, were appropriately estimated by Yoo and Park (2015) with the Communication Assessment Tool (CAT), Problem-Solving Inventory (PSI), and Instructional Materials Motivation Scale (IMMS), which had already been formulated and implemented by several researchers, thereby establishing their dependability. Use of the Shapiro-Wilk and Kolmogorov–Smirnov non-parametric tests helped in comparing the two groups on the basis of distributional adequacy (Arrue et al. 2017). Reliable outcome measurement was also found in the studies conducted by Thabet et al. (2017), Gholami et al. (2016), Kang et al. (2015), and Choi, Lindquist and Song (2014), who used, respectively, the Nursing Students' Decision Making Skills Scale and Nursing Students' Decision Making Style; the California Critical Thinking Skills Test form B (CCTST-B) and Metacognitive Awareness Inventory (MAI); a student satisfaction scale; and the Critical Thinking Ability Scale for College Students, the Problem-Solving Scale for College Students, and the Self-Directed Learning Scale for College Students. Although Sangestani and Khatiban (2013) used two questionnaires that had already been developed and tested by other researchers, they designed two other data collection tools themselves, leading to uncertainty regarding their reliability.
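As an aside, the distributional checks credited to Arrue et al. (2017) above can be sketched with scipy; the scores below are simulated purely for illustration and are not study data.

```python
# A minimal sketch of the Shapiro-Wilk and Kolmogorov-Smirnov checks
# mentioned above, on simulated group scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pbl_scores = rng.normal(loc=116.3, scale=10.4, size=40)   # hypothetical group scores
control_scores = rng.normal(loc=71.0, scale=8.5, size=40)

for name, scores in [("PBL", pbl_scores), ("Control", control_scores)]:
    w_stat, w_p = stats.shapiro(scores)                   # Shapiro-Wilk normality test
    ks_stat, ks_p = stats.kstest(
        scores, "norm", args=(scores.mean(), scores.std(ddof=1))
    )                                                     # KS test against a fitted normal
    print(f"{name}: Shapiro-Wilk p={w_p:.3f}, KS p={ks_p:.3f}")
# p > 0.05 on both tests is commonly read as "no evidence against normality",
# supporting parametric comparisons between the groups.
```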
6. Was appropriate statistical analysis used?
Statistical analysis is an important aspect of quantitative research and aims to quantify the collected data, which typically comprise results from surveys, descriptive data, or observational data. Conducting a statistical analysis is vital for evaluating the usefulness and credibility of the information and results collected during an investigation (Afifi and Azen 2014). Most of the articles focused on descriptive statistics, ensuring that accurate statistical analysis had been performed by the researchers. Descriptive statistics, one-way ANOVA, paired t-tests, chi-square tests, and SPSS software were used by Sangestani and Khatiban (2013), Gholami et al. (2016), Kang et al. (2015), Choi, Lindquist and Song (2014), Habib et al. (1999), Arrue et al. (2017), and Yoo and Park (2015). Apart from SPSS software, the Monte Carlo exact test, Fisher's exact test, McNemar test, and marginal homogeneity test were also used by Thabet et al. (2017), demonstrating appropriate statistical comparison of the numerical data.
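To make the test families named above concrete, the following sketch runs each of them on simulated placeholder data with scipy.stats; none of the numbers come from the included studies.

```python
# Minimal sketches of the paired t-test, one-way ANOVA, chi-square and
# Fisher's exact test on simulated placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(71, 8.5, size=30)
post = pre + rng.normal(5, 4, size=30)                 # paired pre/post scores

t_stat, t_p = stats.ttest_rel(pre, post)               # paired t-test (pre vs post)

g1, g2, g3 = (rng.normal(m, 5, size=20) for m in (70, 75, 80))
f_stat, f_p = stats.f_oneway(g1, g2, g3)               # one-way ANOVA across 3 groups

table = np.array([[18, 12], [9, 21]])                  # 2x2 contingency table
chi2, chi_p, dof, _ = stats.chi2_contingency(table)    # chi-square test
odds, fisher_p = stats.fisher_exact(table)             # Fisher's exact test

print(f"paired t: p={t_p:.3f}; ANOVA: p={f_p:.3f}; "
      f"chi-square: p={chi_p:.3f}; Fisher: p={fisher_p:.3f}")
```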
Included studies
A. Thabet et al. (2017)
Determined the impact that PBL exerts on the decision-making style of nursing students.
Strength- The study was unique and novel in Egypt; it could illustrate the effects of the PBL teaching approach and was in accordance with previous findings regarding the cognitive aspect of decision-making style.
Limitation- Altering student style is a multifaceted process; follow-up required.
B. Habib et al. (1999)
Explored the impacts of COPBL on university nursing programmes.
Strength- Could generate greater enthusiasm among students; could increase achievement scores with COPBL implementation.
Limitation- Interventions not implemented concomitantly; effects might have been influenced by information exchange and class sequencing.
C. Arrue et al. (2017)
Analysed the argumentative and declarative knowledge of students about depression nursing care.
Strength- Recognised the importance of declarative knowledge in professional action; highlighted students' positive perception of PBL.
Limitation- Greater planning and time management required; students were overwhelmed; lack of scientific arguments from students.
D. Choi, Lindquist and Song (2014)
Determined the effects of PBL on students' learning-related outcome abilities.
Strength- The increase in post-test scores after PBL implementation depicted the association between problem-solving capabilities and nursing care quality; findings were in accordance with other studies conducted in Korea.
Limitation- Small sample size prevented generalisability of results; absence of comparability between nursing students.
E. Kang et al. (2015)
Compared changes in different nursing knowledge and skill performance aspects, based on the implementation of three learning modalities.
Strength- Helped determine the impacts of all modalities; showed that PBL simulation improved knowledge.
Limitation- Need to assess other educational modalities as well.
F. Gholami et al. (2016)
Compared the impacts of traditional lectures and PBL methods.
Strength- Significant score improvements highlighted the importance of PBL in enhancing critical thinking; identified PBL as a strong predictor of critical thinking skill development.
Limitation- Small sample size; case and control groups were similar.
G. Yoo and Park (2015)
Compared the impacts of CBL on different learning aspects of nursing students.
Strength- Associated CBL with the development of communication skills and problem-solving ability.
Limitation- Convenience sampling; use of quantitative data collection instruments; lack of control over the professor's passion and endeavour for the new teaching approach.
H. Sangestani and Khatiban (2013)
Compared the impacts of LBL and PBL on learning progress and student satisfaction.
Strength- The only study that associated PBL with rapid learning progress.
Limitation- Complete student participation unavailable.
| JBI checklist question | Rationale | Thabet et al. 2017 | Habib et al. 1999 | Arrue et al. 2017 | Choi, Lindquist and Song 2014 | Sangestani and Khatiban 2013 | Kang et al. 2015 | Gholami et al. 2016 | Yoo and Park 2015 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1. Is it clear in the study what the 'cause' is and what is the 'effect' (i.e. there is no confusion about which variable comes first)? | Assessing the causal association between the independent variable (cause) and dependent variable (effect) in non-RCTs requires proper recognition of which variable should be handled first to investigate the impact. Hence, this is an important criterion. | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| 2. Were the participants included in any comparisons similar? | Similar baseline features between groups are a crucial component for assessing internal validity and the causal association between outcome and intervention; this also excludes the probability of selection bias and confounding effects. Hence, this criterion is important. | Yes | Not clear | Not clear | Not clear | Yes | Yes | Yes | Not clear |
| 3. Were the participants included in any comparisons receiving similar treatment/care, other than the exposure or intervention of interest? | Effects in non-RCTs can be attributed to the cause, and the assumption of no selection bias is better supported when there is no systematic difference in care provision, apart from the treatment, between the groups. | Yes | No | Yes | Yes | Yes | Yes | Yes (same participants in both groups) | Yes |
| 4. Was there a control group? | The presence of a control group allows exploration of the outcomes of groups subjected to varied treatment, which strengthens the causal association (Portney and Watkins 2014). Nonetheless, enrolling and examining one cohort before and after treatment is adequate for averting selection bias. | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| 5. Were there multiple measurements of the outcome both pre and post the intervention/exposure? | This criterion is important because its absence interferes with the causal association between outcomes and treatment. If outcomes are measured only post-intervention, it becomes difficult to know whether participants changed because of the intervention, relative to their state prior to treatment (Portney and Watkins 2014). | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| 6. Was follow-up complete and if not, were differences between groups in terms of their follow-up adequately described and analysed? | Variation in follow-up between groups threatens internal validity; several measurements are taken to address attrition-rate incompatibility between groups. This criterion was not essential, although desirable. | Yes (no follow-up conducted) | Yes (no follow-up conducted) | Yes (no follow-up conducted) | Yes (no follow-up conducted) | Yes (no follow-up conducted) | Yes (no follow-up conducted) | Yes (no follow-up conducted) | Yes (no follow-up conducted) |
| 7. Were the outcomes of participants included in any comparisons measured in the same way? | This criterion explores the causal association between intervention and outcome, which is an important standard. | Yes | Yes | Yes | Yes | Yes | Not clear | Yes | Yes |
| 8. Were outcomes measured in a reliable way? | Reliability of outcome measures is crucial for minimising threats to the internal validity of the research (Portney and Watkins 2014). Most outcomes in this review were based on questionnaires and other dependable tools; therefore, this criterion is crucial. | Yes | Yes | Yes | Yes | Yes | Not clear | Yes | Yes |
| 9. Was appropriate statistical analysis used? | Correct statistical procedures must be followed to avoid errors when drawing statistical inferences about the magnitude and presence of an effect between outcomes and treatment (Portney and Watkins 2014). | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |

Table 1- JBI checklist for included studies
(Source- The Joanna Briggs Institute 2017)
| Article | Aim | Sample | Design | Methodology | Findings | Conclusion |
| --- | --- | --- | --- | --- | --- | --- |
| Habib et al. 1999 | To compare the efficacy of community-oriented problem-based learning (COPBL) with conventional lecture-based teaching | 106 students | Quasi-experimental | Equal allocation into two groups, one subjected to COPBL and the other to traditional teaching, followed by measurement of faculty performance, mean knowledge acquisition, and course evaluation | Increased effectiveness of COPBL in faculty performance and knowledge gathering, in addition to greater satisfaction | COPBL is more effective |
| Thabet et al. 2017 | To explore the impacts of problem-based learning on the decision-making style of nursing students | 84 students | Quasi-experimental | Collection of data using two tools, the Nursing Students' Decision Making Skills Scale and the Nursing Students' Decision Making Style | Statistically significant increase in mean scores in the PBL group (before: 71 ± 8.5; after: 116.3 ± 10.4) | PBL enhances decision-making capabilities |
| Arrue et al. 2017 | To compare the knowledge gained by nursing students under PBL and traditional learning approaches | 114 students | Quasi-experimental | Pre-test and post-test design used for measuring variations in the improvement of argumentative and declarative knowledge | No statistically significant differences between the two practices for declarative knowledge; argumentative knowledge improved in PBL | PBL helps in acquiring argumentative capacities |
| Yoo and Park 2015 | To analyse the impacts of case-based learning (CBL) on learning motivation, communication skills, and problem-solving ability | 143 students | Quasi-experimental | Case-based learning lectures focused on 5 cases of patient-nurse communication (intervention group), and traditional teaching methods (control) | CBL group demonstrated increased communication skills, learning motivation, and problem-solving capabilities | CBL is effective for learning and teaching |
| Choi, Lindquist and Song 2014 | To compare the outcome capabilities of nursing students subjected to traditional lectures and PBL | 90 students | Quasi-experimental | Two groups subjected to traditional lectures and PBL methods, followed by assessment of problem-solving, critical thinking, and self-directed learning abilities | No statistical difference in outcomes between the groups | More research required to demonstrate the effects of PBL |
| Gholami et al. 2016 | To compare the impacts of traditional lecture and PBL on metacognition and critical thinking | 40 students | Quasi-experimental | Lecture method used for the first 8 weeks in the control group, PBL used for the next 8 weeks in the case group | No noteworthy differences in the outcomes overall; PBL led to increases in critical thinking score and overall metacognitive awareness | PBL can be used as a learning method |
| Kang et al. 2015 | To compare variations in knowledge, satisfaction, and skill performance across 3 teaching types | 205 students | Quasi-experimental | Students divided into three groups, namely PBL, simulation with PBL, and control | No difference in baseline knowledge; mean knowledge scores and student satisfaction greater in PBL | PBL can be used as an effective learning approach |
| Sangestani and Khatiban 2013 | To compare the impacts of PBL and lecture-based learning (LBL) | 56 students | Quasi-experimental | Random allocation to a PBL+LBL class and an LBL-only class | Greater learning progress in the combined class, and improvement in theory application | PBL should be used more in undergraduate courses |

Table 2- Summary table of included studies
Excluded studies
Four studies were excluded from the critical appraisal since they failed to fulfil the
essential criteria selected from the JBI checklist.
| JBI checklist question | Kim et al. 2016 | Santra and Mani 2017 | Mosalanejad, Ghodsi and Ghobadifar 2013 | Lira and Lopes 2011 |
| --- | --- | --- | --- | --- |
| 1. Is it clear in the study what the 'cause' is and what is the 'effect' (i.e. there is no confusion about which variable comes first)? | No | Yes | Yes | Yes |
| 2. Were the participants included in any comparisons similar? | Yes | Yes | Yes | Yes |
| 3. Were the participants included in any comparisons receiving similar treatment/care, other than the exposure or intervention of interest? | No | Yes | No | Not clear |
| 4. Was there a control group? | Yes | No | Yes | Yes |
| 5. Were there multiple measurements of the outcome both pre and post the intervention/exposure? | Yes | Yes | Yes | Yes |
| 6. Was follow-up complete and if not, were differences between groups in terms of their follow-up adequately described and analysed? | Not clear (no follow-up conducted) | Not clear (no follow-up conducted) | Not clear (no follow-up conducted) | Not clear (no follow-up conducted) |
| 7. Were the outcomes of participants included in any comparisons measured in the same way? | Yes | Yes | Yes | Yes |
| 8. Were outcomes measured in a reliable way? | Yes | Yes | Yes | Not clear |
| 9. Was appropriate statistical analysis used? | Yes | Yes | Yes | Yes |

Table 3- JBI checklist for excluded studies
(Source- The Joanna Briggs Institute 2017)
| Article | Aim | Sample | Design | Methodology | Findings | Conclusion |
| --- | --- | --- | --- | --- | --- | --- |
| Kim et al. 2016 | To determine the impacts of team-based learning (TBL) on learning outcomes and problem-solving capabilities | 63 students | RCT | 2-hour weekly classes for 3 weeks for both the TBL and control groups | Significant improvement in problem-solving abilities in the TBL group | TBL is effective for teaching students |
| Santra and Mani 2017 | To explore the effects of PBL and traditional teaching | 83 students | Quasi-experimental | 42 students subjected to traditional teaching and 41 to PBL | PBL increased post-test knowledge scores | PBL is more effective |
| Mosalanejad, Ghodsi and Ghobadifar 2013 | To explore two scenario methods and their impact on student performance and knowledge | 45 students | Comparative study | Training provided by peer feedback and scenario-based learning in two stages | Notable differences, with greater student performance and knowledge from peer assessment | Active methods of teaching should be used |
| Lira and Lopes 2011 | To explore the efficacy of the PBL teaching strategy | 30 students | Experimental study | 5 modules taught with the PBL approach | Data-pooling capability of students in the PBL group was greater | PBL exerts a positive influence on learning |

Table 4- Summary table of excluded studies
References
Afifi, A.A. and Azen, S.P., 2014. Statistical analysis: a computer oriented approach. Academic Press.
Althubaiti, A., 2016. Information bias in health research: definition, pitfalls, and adjustment
methods. Journal of multidisciplinary healthcare, 9, p.211.
Arrue, M., de Alegría, B.R., Zarandona, J. and Cillero, I.H., 2017. Effect of a PBL teaching
method on learning about nursing care for patients with depression. Nurse education today,
52, pp.109-115.
Bareinboim, E. and Tian, J., 2015, January. Recovering Causal Effects from Selection Bias.
In AAAI (pp. 3475-3481).
Breen, R., Choi, S. and Holm, A., 2015. Heterogeneous causal effects and sample selection
bias. Sociological Science, 2, pp.351-369.
Cheng, T.C. and Trivedi, P.K., 2015. Attrition bias in panel data: A sheep in wolf's clothing?
A case study based on the mabel survey. Health economics, 24(9), pp.1101-1117.
Choi, E., Lindquist, R. and Song, Y., 2014. Effects of problem-based learning vs. traditional
lecture on Korean nursing students' critical thinking, problem-solving, and self-directed
learning. Nurse education today, 34(1), pp.52-56.
Claydon, L.S., 2015. Rigour in quantitative research. Nursing Standard (2014+), 29(47),
p.43.
Elwood, M., 2017. Critical appraisal of epidemiological studies and clinical trials. Oxford
University Press.
Gholami, M., Moghadam, P.K., Mohammadipoor, F., Tarahi, M.J., Sak, M., Toulabi, T. and
Pour, A.H.H., 2016. Comparing the effects of problem-based learning and the traditional
lecture method on critical thinking skills and metacognitive awareness in nursing students in
a critical care nursing course. Nurse education today, 45, pp.16-21.
Greenhalgh, T., 2014. How to read a paper: The basics of evidence-based medicine. John
Wiley & Sons.
Habib, F., Eshra, D.K., Weaver, J. and Newcomer, W., 1999. Problem based learning: a new
approach for nursing education in Egypt. Journal of Multicultural Nursing & Health, 5(3),
p.6.
Heale, R. and Twycross, A., 2015. Validity and reliability in quantitative studies. Evidence-Based Nursing, 18(3), pp.66-67.
Hróbjartsson, A., Emanuelsson, F., Skou Thomsen, A.S., Hilden, J. and Brorson, S., 2014.
Bias due to lack of patient blinding in clinical trials. A systematic review of trials
randomizing patients to blind and nonblind sub-studies. International journal of
epidemiology, 43(4), pp.1272-1283.
Kang, K.A., Kim, S., Kim, S.J., Oh, J. and Lee, M., 2015. Comparison of knowledge,
confidence in skill performance (CSP) and satisfaction in problem-based learning (PBL) and
simulation with PBL educational modalities in caring for children with bronchiolitis. Nurse
education today, 35(2), pp.315-321.
Kim, H.R., Song, Y., Lindquist, R. and Kang, H.Y., 2016. Effects of team-based learning on
problem-solving, knowledge and clinical performance of Korean nursing students. Nurse
education today, 38, pp.115-118.
Lira, A.L.B.D.C. and Lopes, M.V.D.O., 2011. Nursing diagnosis: educational strategy based
on problem-based learning. Revista Latino-Americana de Enfermagem, 19(4), pp.936-943.
LoBiondo-Wood, G. and Haber, J., 2014. Nursing Research-E-Book: Methods and Critical
Appraisal for Evidence-Based Practice. Elsevier Health Sciences.
Mansournia, M.A., Higgins, J.P., Sterne, J.A. and Hernán, M.A., 2017. Biases in randomized
trials: a conversation between trialists and epidemiologists. Epidemiology (Cambridge,
Mass.), 28(1), p.54.
Mosalanejad, L., Ghodsi, Z. and Ghobadifar, M.A., 2013. The efficacy of two Active
Methods of Teaching on Students' Competency. International Journal of Nursing
Education, 5(1), p.242.
Noble, H. and Smith, J., 2015. Issues of validity and reliability in qualitative research. Evidence-Based Nursing, 18(2), pp.34-35.
Porritt, K., Gomersall, J. and Lockwood, C., 2014. JBI's systematic reviews: study selection
and critical appraisal. AJN The American Journal of Nursing, 114(6), pp.47-52.
Portney, L.G. and Watkins, M.P., 2014. Foundations of clinical research: applications to practice. FA Davis.
Sangestani, G. and Khatiban, M., 2013. Comparison of problem-based learning and lecture-
based learning in midwifery. Nurse education today, 33(8), pp.791-795.
Santra, P. and Mani, S., 2017. Comparative Assessment of Problem-Based Learning and
Traditional Teaching to Acquire Knowledge on Ventilator Associated
Pneumonia. International Journal of Nursing Education, 9(4).
Schnell-Inderst, P., Iglesias, C.P., Arvandi, M., Ciani, O., Matteucci Gothe, R., Peters, J., Blom, A.W., Taylor, R.S. and Siebert, U., 2017. A bias-adjusted evidence synthesis of RCT and observational data: the case of total hip replacement. Health Economics, 26, pp.46-69.
Sedgwick, P., 2014. Bias in observational study designs: prospective cohort studies. BMJ, 349, p.g7731.
Stegenga, J., 2014. Down with the hierarchies. Topoi, 33(2), pp.313-322.
Thabet, M., Eman, E.L., Abood, S.A. and Morsy, S.R., 2017. The effect of problem-based
learning on nursing students' decision making skills and styles. Journal of Nursing Education
and Practice, 7(6), p.108.
The Joanna Briggs Institute., 2017. Checklist for Quasi-Experimental Studies (non-
randomized experimental studies). [online] Available at:
http://joannabriggs.org/assets/docs/critical-appraisal-tools/JBI_Quasi-
Experimental_Appraisal_Tool2017.pdf [Accessed 15 Feb. 2019].
White, H. and Sabarwal, S., 2014. Quasi-experimental design and methods. Methodological
Briefs: Impact Evaluation, 8, pp.1-16.
Yoo, M.S. and Park, H.R., 2015. Effects of case-based learning on communication skills, problem-solving ability, and learning motivation in nursing students. Nursing & Health Sciences, 17(2), pp.166-172.
Young, S. and Eldermire, E., 2017. The big picture. Assembling the Pieces of a Systematic
Review: A Guide for Librarians, p.13.