Running head: PUBLIC HEALTH
Public Health
Name of student:
Name of university:
Author note:
Part 1
Program evaluation holds much importance for understanding the utility and outcomes of health care programs against their set objectives. To establish whether a healthcare program has met its aims, it is pivotal to evaluate the program rigorously against a set of guidelines outlined by a public health body. Through evaluation, the concerned stakeholders and policymakers are able to extract information about the program on which judgements of its performance and operation can be based. Different formats exist, supporting a wide array of tools and aids for program evaluation. The ‘Realist Evaluation’ approach was outlined by the researchers Pawson and Tilley, who proposed evaluating health care programs by understanding what works, for whom, in what circumstances, in what respects and how1. The present discussion compares the Realist Evaluation approach with the model of evaluation put forward by Michele Issel. It is a critical analysis that highlights the similarities and points of difference between the two.
Contemporary public health concerns reflect the need to develop health care programs that are tailored to individuals, populations and service delivery systems across the globe. The practical application of healthcare programs succeeds when evaluation and the subsequent dissemination of findings are robust enough that further modifications can be based on the resulting data. Two of the most debated evaluation approaches at present are the Realist Evaluation model and the model outlined by Michele Issel. Though both models are theory-driven evaluation tools, the Realist Evaluation model is set apart by its explicit and overt philosophical underpinnings, unlike the other model discussed here2.
1 Grembowski, David. The practice of health program evaluation. Sage Publications, 2015.
2 McKenzie, James F., Brad L. Neiger, and Rosemary Thackeray. Planning, implementing & evaluating health
promotion programs: A primer. Pearson, 2016.
The common element between the Realist Evaluation model and the model outlined by Michele Issel is that both use well-established program theory as the basis of their evaluation. Program theory is a conceptual model that seeks to explain how the selected intervention, in the form of a project, program, strategy or policy, contributes to the results produced through its actual or intended impacts. It includes both negative and positive impacts and highlights other factors contributing to impacts3. Both the Realist Evaluation model and Michele Issel’s model have the objective of assessing whether the program is designed in a manner that allows the intended outcomes to be achieved. The theory becomes the backbone of the program, laying out a clear, logical description of the reasons why the activities should lead to the intended benefits. Both evaluation approaches clarify the shared understanding of how the program being evaluated works and identify the gaps in the evidence.
The realist evaluation approach was first developed by Pawson and Tilley, who argued that in order to understand the actual outcomes of a health program, decision makers must assess the effectiveness of the program while keeping in mind the population it addresses and the circumstances under which it has been implemented. The main focus of this type of evaluation is to examine which elements of the chosen program have been successful in bringing about the outcomes under the corresponding circumstances, and who has been affected by the program4. In contrast, Issel’s evaluation approach focuses solely on understanding whether the program has worked or not. While Pawson and Tilley have attempted to account for the fact that a program might be successful under certain circumstances and unsuccessful under others, Issel has not considered this aspect of program evaluation in her model. The realist evaluation model takes into consideration the contexts in which the program might have produced different results, treating program outcomes as variable.
3 Issel, L. Michele, and Rebecca Wells. Health program planning and evaluation. Jones & Bartlett Learning,
2017.
4 Pawson, R., and N. Tilley. "Realist evaluation. 2004." (2015).
The aim of Issel, on the other hand, has been only to answer the question of whether the program worked or not, notwithstanding the fact that a program might give different results in different settings and with different populations5.
Scientific realism is the underlying notion of the realist philosophy; it argues that an intervention works, or does not work, because the actors do or do not take up certain decisions in response to the program. The realist model further argues that the ‘reasoning’ of the actors in response to the opportunities and resources provided by the selected program leads to the outcomes. Issel’s approach does not consider this concept and underplays the role played by the actors in program outcomes. While realist evaluators identify the generative mechanisms that explain how the outcomes have been achieved, followers of Issel’s model do not take this step6. As per the realist model, the intervention, that is the program, and the actors are embedded in a social reality, and this reality in the community context exerts an influence on the implementation of the program and the degree to which the actors respond to it. The context-mechanism-outcome (CMO) configuration is considered the primary framework for realist analysis. Since realist evaluation holds that generative causality applies to program effectiveness, the claims made by realists are modest. Their statements rest on the principle that a program evaluation can never give rise to findings that are universally applicable. Such a notion is not mentioned in the evaluation approach of Issel, whose stance towards the generalizability of findings differs.
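To make the shape of a CMO configuration concrete, the following is a minimal illustrative sketch in Python. It is not drawn from Pawson and Tilley’s own materials or from any evaluation cited here; the class name and the example context, mechanism and outcome are entirely hypothetical.

from dataclasses import dataclass

# Illustrative sketch only: a hypothetical CMO configuration expressed
# as a simple data structure. The example values are invented.
@dataclass
class CMOConfiguration:
    context: str    # for whom and in what circumstances the program runs
    mechanism: str  # the reasoning or response the program triggers
    outcome: str    # the result observed when that mechanism fires

example = CMOConfiguration(
    context="Newly arrived refugees attending a community health workshop",
    mechanism="Participants gain the confidence to ask health questions",
    outcome="More timely contact with primary health care services",
)
print(example)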
From the above discussion, it can be stated that program evaluation is the systematic method of collecting, analysing and interpreting information from programs in order to answer questions about their efficiency and effectiveness.
5 Dalkin, Sonia Michelle, et al. "13 What works, for whom and in which circumstances when implementing the
namaste advanced dementia care programme in the home setting?." (2017): A351-A352.
6 Porter, Sam. "Realist evaluation: an immanent critique." Nursing Philosophy 16.4 (2015): 239-251.
The evaluation process itself is still evolving, as newer tools with more distinct features continue to emerge. The Realist Evaluation approach of Pawson and Tilley and the model outlined by Issel are both based on program theory; however, they are distinct from each other. While the former is embedded in philosophical concepts, the latter is not. The prime difference between the realist approach and that of Issel is that the former specifies the mechanisms generating the outcomes. Understanding the implications of both models justifies the application of each under different circumstances against the chosen program objectives.
Part 2
Chosen health care program
‘Healthy Start’ is a well-regarded preventive health education project based in Australia that works with refugees arriving in the country and aims to increase health literacy within the community. The program was started in 2012 by a group of volunteer medical students at ‘Hope for Health’. The program receives its funding from the Brisbane South Primary Health Network (BSPHN) and runs in collaboration with Multicultural Development Association (MDA) Ltd in Brisbane. MDA Ltd is a chief settlement agency working with new humanitarian refugees arriving in Brisbane7.
The mission of ‘Healthy Start’ is to engage refugees settled in Australia in health activities to achieve better health outcomes and to equip them with the skills necessary to maintain a healthy lifestyle. Since the process of seeking refuge in Australia is a tiring and traumatic one, refugees suffer poor health outcomes after their struggle for survival. The program provides this population with the information required to take care of their health in the Australian context.
7 "Healthy Start." Healthystart.org.au. N.p., 2017. Web. 30 Sept. 2017.
The vision is to offer refugees the chance to secure a healthy future. The aims of the program are manifold. The primary aim is to educate refugees on health topics such as medications, nutrition, general health, pregnancy, emergency health conditions and adolescent health. The second aim is to encourage efficient and prompt interaction with the Australian healthcare system. A further aim is to address negative stereotypes in relation to service delivery. Lastly, the program aims to foster an environment of interaction between the refugees and the local community. The health knowledge imparted to the population helps build confidence and morale so that there are no barriers to accessing health care services or to maintaining their own health and hygiene. Integration of refugees into the Australian context is of prime importance; hence the program lays a foundation for the successful settlement of this population in the country8.
Evaluation report of the healthcare program
The evaluation of the Healthy Start Program was undertaken in 2016, and its details and analysis are presented here. The Brisbane South Primary Health Network (BSPHN) considered it important to conduct a thorough evaluation of the impact of the program on behaviour change and long-term knowledge. The Mater/UQ Centre for Primary Health Care Innovation (MUQCPHCI) at Mater Health Services (MHS) was engaged to undertake this evaluation. Two consultant professionals from G8 and a Coordinator took part in the evaluation. The Healthy Start Team and MDA Ltd were contacted to gain insight into the program. Information was gathered through the delivery of two Healthy Start Programs, one each for the Eritrean and the Somali community. As the program proceeded, the Eritrean group had to be cancelled because of a date clash with an Orthodox religious day, so the evaluation was done only for the Somali group.
8 Duckett, Stephen, and Sharon Willcox. The Australian health care system. No. Ed. 5. Oxford University Press,
2015.
The purpose of the evaluation was multi-faceted. The first aim was to assess whether the program objectives and goal were achieved. The evaluation also aimed to assess the degree to which the program had contributed to increasing health literacy and bringing about behaviour change in the refugee community. Further, the evaluation attempted to document critical success factors and barriers to implementing the program. Lastly, the evaluation was intended to provide the funding body with an overview of the value of the program.
The evaluation process used a Post-Workshop Questionnaire to capture immediate learning after the workshops and a six-week post-workshop phone interview with participants to capture the influence on behaviour change. Participants in the program were guided on the day of the workshop to fill in the requisite forms. The evaluators also gathered observations of, and recommendations regarding, the participants when the workshop ended. A bicultural worker from MUQCPHCI was responsible for conducting a telephone interview with nine participants six weeks after the workshop was completed. The measures in the evaluation tool pertained to the health topics included in the workshop. The evaluation concluded that the program had been successful in fostering a positive change among the targeted population. The report indicated that the Healthy Start Program had the potential to be continued in future and showed benefits in terms of enhanced health literacy levels in refugee communities9.
Assessment of the evaluation’s relative strengths and weaknesses
A strong and rational program evaluation is of prime importance for understanding the effectiveness of a health care program.
9 The ‘Healthy Start Program’ Evaluation Report. 2016. Web. 30 Sept. 2017.
It is the evaluation phase that determines whether the program has been appropriate and that helps to outline future recommendations for its improvement. Reviewing a program evaluation gives the opportunity to identify gaps in the evaluation process, on the basis of which a more robust evaluation process can be designed. While conducting an evaluation, it is pivotal to collect data accurately and to analyse them carefully10.
The aims of a program determine how its evaluation is best carried out. The aims of the evaluation should be set so that the focus is on assessing the degree to which the objectives have been met. Secondary aims may concern the uptake and acceptability of the program among its stakeholders11. The Healthy Start Program had the aim of increasing the health knowledge of the refugees and changing the health behaviours they displayed. The evaluation accordingly focused on assessing the degree to which the program contributed to increased health literacy and changes in refugee health behaviour. In addition, the evaluation had some secondary aims, as it assessed the facilitators of and barriers to implementation of the program.
Program evaluation can take different forms, and the appropriateness of each depends on the aims of the program being evaluated. The common forms are process evaluation and impact evaluation. Impact evaluation is carried out after the completion of the program and aims to assess the extent to which the program has met its ultimate goals. Its usefulness lies in the fact that it provides empirical evidence for use in funding decisions and policy-making12. The Healthy Start Program evaluation was an impact evaluation, with the goal of examining the wider influence of the program on the target population and establishing the true value of the program for the funding body.
10 Issel, L. Michele, and Rebecca Wells. Health program planning and evaluation. Jones & Bartlett Learning,
2017.
11 Funderburk, Jennifer S., and Robyn L. Shepardson. "Real-world program evaluation of integrated behavioral
health care: Improving scientific rigor." Families, Systems, & Health 35.2 (2017): 114.
12 Fink, Arlene. Evaluation Fundamentals: Insights into the Outcomes, Effectiveness, and Quality of Health
Programs: Insights Into the Outcomes, Effectiveness, and Quality of Health Programs. Sage, 2005.
Impact evaluation can be conducted through different study designs, one of which is the before-and-after design. The Healthy Start Program evaluation used this before-and-after design, assessing levels of health literacy and the nature of health behaviour before and after the program. The before-and-after design offers more substantial evidence of intervention effectiveness than other non-experimental designs. It is most suitable for capturing the immediate effects of programs carried out over a short time frame. A study without a control group is simple to carry out, as the only requirements are a sampling frame and a team of researchers to collect data13.
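As a purely illustrative aside, the short Python sketch below shows the kind of paired pre/post comparison that a before-and-after design supports. The scores, sample size and scoring scale are hypothetical and are not taken from the Healthy Start evaluation report.

from statistics import mean

# Illustrative sketch only: hypothetical health-literacy scores for the
# same participants measured before and after a workshop.
pre_scores = [4, 5, 3, 6, 4, 5, 2, 6, 5]   # before the program
post_scores = [7, 8, 6, 8, 7, 8, 5, 9, 7]  # after the program

# A before-and-after design measures the same people twice, so the
# quantity of interest is the paired change for each participant.
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

print(f"Mean score before: {mean(pre_scores):.2f}")
print(f"Mean score after:  {mean(post_scores):.2f}")
print(f"Mean change:       {mean(changes):.2f}")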
The Healthy Start Program evaluation process emphasises understanding the changes in the health behaviour of the participants and the increase in their level of health knowledge. The evaluation looked at immediate learning following the workshop using a Post-Workshop Questionnaire, and at the longer-term impact on behaviour change through a six-week post-workshop phone interview with volunteer participants who had attended the workshop. Health literacy is known to incorporate health-related knowledge, motivation, attitudes, behavioural intentions, confidence and personal skills related to lifestyle, along with knowledge of how to access health care services14. Measured against this concept, the approach taken by the evaluation can be considered justified.
Another strength of the evaluation was the centre involved in the process. The centre was well placed for the purpose, as it hosted the Greater Brisbane Refugee Health Advisory Group (G8), a group that helps the refugee community improve its health literacy and better understand its health needs.
13 Kruk, Margaret E., et al. "Evaluation of a maternal health program in Uganda and Zambia finds mixed results
on quality of care and satisfaction." Health Affairs 35.3 (2016): 510-519.
14 Berkman, N., S. Sheridan, and K. Donahue. "Health literacy interventions and outcomes: an updated
systematic review. 2011." Rockville, MD: Agency for Healthcare Research and Quality (2016).
Though the Healthy Start Program evaluation has some key strengths, certain weaknesses are also embedded in the process. Firstly, the evaluation relied on a questionnaire of closed-ended questions, and such questions have a number of limitations: misinterpretation of a question usually goes unnoticed, discrepancies between respondents’ answers can be blurred, and marking an unintended response is possible. Further, the questions had to be answered on a scale of ‘strongly agree’, ‘agree’, ‘neutral’, ‘disagree’ and ‘strongly disagree’. It might have been difficult for participants to distinguish between ‘strongly agree’ and ‘agree’, so the responses received might not all reflect their true views; settling on an answer in this format can be confusing for participants15.
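Purely as an illustration, and not drawn from the evaluation report, the Python sketch below shows how responses on such a scale are commonly coded as numbers and then collapsed into broader categories, one common way of working around the blurred line between ‘strongly agree’ and ‘agree’. The response labels and codes here are generic examples.

# Illustrative sketch only: hypothetical Likert responses and a generic
# numeric coding, not data from the Healthy Start evaluation.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

# Numeric coding keeps the full five-point detail.
numeric = [LIKERT_CODES[r] for r in responses]

# Collapsing to "agreed" versus "did not agree" sidesteps the ambiguity
# between "agree" and "strongly agree" described above.
agreed = sum(1 for code in numeric if code >= 4)
print(f"Numeric codes: {numeric}")
print(f"Proportion agreeing: {agreed / len(responses):.2f}")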
The second and more significant weakness was that only the Somali refugee group was considered in the evaluation, as the Eritrean group could not be included. To understand the effectiveness of the program in its true sense, it was important to include both groups in the evaluation process. For the generalizability of a study’s outcomes, it is necessary to select respondents from diverse backgrounds and levels16. While the Somali respondents indicated a positive impact of the program, the Eritrean respondents might have highlighted some flaws of the Healthy Start Program, and any such additional information would have been crucial for gaining insight into the impact of the program.
From the above discussion, it is concluded that although the Healthy Start Program evaluation had certain remarkable strengths, the weaknesses of the evaluation process are not to be neglected. To conduct a sound and thorough program evaluation, a number of factors must be carefully weighed before the evaluation commences.
15 Holloway, Immy, and Kathleen Galvin. Qualitative research in nursing and healthcare. John Wiley & Sons,
2016.
16 Posavac, Emil. Program evaluation: Methods and case studies. Routledge, 2015.
References
Berkman, N., S. Sheridan, and K. Donahue. "Health literacy interventions and outcomes: an updated systematic review. 2011." Rockville, MD: Agency for Healthcare Research and Quality (2016).
Dalkin, Sonia Michelle, et al. "13 What works, for whom and in which circumstances when
implementing the namaste advanced dementia care programme in the home setting?." (2017):
A351-A352.
Duckett, Stephen, and Sharon Willcox. The Australian health care system. No. Ed. 5. Oxford
University Press, 2015.
Fink, Arlene. Evaluation Fundamentals: Insights into the Outcomes, Effectiveness, and
Quality of Health Programs: Insights Into the Outcomes, Effectiveness, and Quality of Health
Programs. Sage, 2005.
Funderburk, Jennifer S., and Robyn L. Shepardson. "Real-world program evaluation of
integrated behavioral health care: Improving scientific rigor." Families, Systems, &
Health 35.2 (2017): 114.
Grembowski, David. The practice of health program evaluation. Sage Publications, 2015.
"Healthy Start." Healthystart.org.au. N.p., 2017. Web. 30 Sept. 2017.
Holloway, Immy, and Kathleen Galvin. Qualitative research in nursing and healthcare. John
Wiley & Sons, 2016.
Issel, L. Michele, and Rebecca Wells. Health program planning and evaluation. Jones &
Bartlett Learning, 2017.
Kruk, Margaret E., et al. "Evaluation of a maternal health program in Uganda and Zambia
finds mixed results on quality of care and satisfaction." Health Affairs 35.3 (2016): 510-519.
McKenzie, James F., Brad L. Neiger, and Rosemary Thackeray. Planning, implementing &
evaluating health promotion programs: A primer. Pearson, 2016.
Pawson, R., and N. Tilley. "Realist evaluation. 2004." (2015).
Porter, Sam. "Realist evaluation: an immanent critique." Nursing Philosophy 16.4 (2015):
239-251.
Posavac, Emil. Program evaluation: Methods and case studies. Routledge, 2015.
The ‘Healthy Start Program’ Evaluation Report. 2016. Web. 30 Sept. 2017.