Statistics
Student Name:
Instructor Name:
Course Number:
20 September 2018
Part 1
SPSS Bivariate Regression Assignment
1. As with all data, it is important to take a look at the data to examine potential
univariate and bivariate outliers. Use procedures described in earlier assignments
to examine outliers and to get to know your data (if you see potential outliers you
can transform the data, delete the outliers, or leave them in the data; justify your
reasoning). For this assignment focus on only two variables: timedrs, stress.
Supply a box plot and/or histogram of the two variables.
Answer
From Table 1 below, it can be seen that the average number of visits to health
professionals was 7.90 with a standard deviation of 10.95. The skewness value is
3.25, implying the variable is highly skewed.
Table 1: Statistics
                            Visits to health    Stressful
                            professionals       life events
N            Valid          465                 465
             Missing        1                   1
Mean                        7.90                204.22
Median                      4.00                178.00
Mode                        2.00                0.00
Std. Deviation              10.95               135.79
Variance                    119.87              18439.66
Skewness                    3.25                1.04
Std. Error of Skewness      0.11                0.11
Kurtosis                    13.10               1.80
Std. Error of Kurtosis      0.23                0.23
Range                       81.00               920.00
Minimum                     0.00                0.00
Maximum                     81.00               920.00
Percentiles  25             2.00                98.00
             50             4.00                178.00
             75             10.00               278.00
Histogram and Boxplot for the number of visits to the health professionals
Figure 1: Histogram for visits to health professional
Figure 2: Boxplot for visits to the health professional
As can be seen from the histogram, it is evident that the variable (timedrs) is skewed. The
data is skewed to the right (longer tail to the right). The boxplot clearly shows that there
are several outliers in the dataset.
Histogram and Boxplot for the stressful life events
Figure 4: Boxplot for stressful life events
Again, just like for the case of the number of visits to the health professional, it is evident
that the variable (stress) is skewed. The data is skewed to the right (longer tail to the
right). The boxplot also clearly shows that there are several outliers in the dataset.
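For reproducibility, the descriptives and plots above can also be generated with syntax rather than the menus. The following is a minimal sketch, assuming the variables carry the assignment's names timedrs and stress:
* Descriptive statistics reported in Table 1.
FREQUENCIES VARIABLES=timedrs stress
  /FORMAT=NOTABLE
  /STATISTICS=MEAN MEDIAN MODE STDDEV VARIANCE SKEWNESS SESKEW KURTOSIS SEKURT RANGE MINIMUM MAXIMUM
  /PERCENTILES=25 50 75
  /HISTOGRAM.
* Boxplots used for the outlier screening above.
EXAMINE VARIABLES=timedrs stress
  /PLOT=BOXPLOT
  /STATISTICS=NONE.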
2. Run a correlation between stress and timedrs (ANALYZE-CORRELATE-
BIVARIATE). What is the correlation between these two variables?
The correlation can run between -1 and +1. The closer to the extremes (-1 and
+1) the stronger the correlation. A correlation of 0 means that there is no
relationship. How strong is this relationship?
Answer
Table 2: Correlations
                                                 Visits to health    Stressful
                                                 professionals       life events
Visits to health        Pearson Correlation      1                   .287**
professionals           Sig. (2-tailed)                              .000
                        N                        465                 465
Stressful life events   Pearson Correlation      .287**              1
                        Sig. (2-tailed)          .000
                        N                        465                 465
**. Correlation is significant at the 0.01 level (2-tailed).
Results show that there is a weak positive relationship between stress and
timedrs (r = 0.287, p < 0.001).
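The same result can be obtained with syntax; a sketch, again assuming the variable names timedrs and stress:
CORRELATIONS
  /VARIABLES=timedrs stress
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.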
3. Run a bivariate regression (ANALYZE-REGRESSION-LINEAR) with stress as the
INDEPENDENT variable and timedrs as the DEPENDENT variable. That is, you
are trying to predict the number of visits a person takes to the doctor with how
much stress they have in their life.
a. The second table in the output gives you a few important values, R and R-Square.
What are these values and what do they mean?
Answer
Table 3: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .287a   .082       .080                10.501
a. Predictors: (Constant), Stressful life events
The value of R is 0.287; this means that a weak positive correlation exists
between the variables.
The value of R-Square is 0.082; this implies that 8.2% of the variation in the
dependent variable (timedrs) is explained by the one independent variable (stress)
in the model.
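For reference, a syntax sketch equivalent to the menu steps for this bivariate regression (it mirrors the REGRESSION syntax shown in Part 4):
REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS R ANOVA
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT timedrs
  /METHOD=ENTER stress.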
b. The third table is also important. It is labeled ANOVA (hey, wait, I thought we
were doing regression). This table tells you whether your regression equation is
significant (thus does the regression line significantly predict the DV). Is your
regression line significant?
Answer
Table 4: ANOVA
Model          Sum of Squares   df    Mean Square   F        Sig.
1 Regression   4568.402         1     4568.402      41.432   .000b
  Residual     51051.047        463   110.261
  Total        55619.449        464
a. Dependent Variable: Visits to health professionals
b. Predictors: (Constant), Stressful life events
Looking at the ANOVA table, we can see that the p-value for the F statistic is
less than 0.001, which is below the 5% level of significance. We therefore reject the null
hypothesis and conclude that the regression line is significant at the 5% level of
significance.
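As a quick check, the F statistic is simply the ratio of the two mean squares in Table 4: F = 4568.402 / 110.261 ≈ 41.43, which matches the reported value of 41.432 up to rounding.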
c. The Coefficients table gives you the values you need for your regression equation.
The ‘Constant’ variable is your intercept or a. The Stressful Life Events is your
slope or b coefficient. Each one of those values has a significance value telling
you whether each value is significant (note: the above ANOVA table tells you
whether the whole equation is significant, these values are for individual parts of
the equation). Please give me the regression equation for this data (Y = a + bX)
and explain what that equation means.
Answer
Table 5: Regression coefficients
Model                       Unstandardized Coefficients   Standardized Coefficients   t       Sig.
                            B        Std. Error           Beta
1   (Constant)              3.182    .880                                             3.616   .000
    Stressful life events   .023     .004                 .287                        6.437   .000
a. Dependent Variable: Visits to health professionals
The regression equation is given as follows:
Timedrs = 3.182 + 0.023 × Stress
The constant variable (intercept) is given as 3.182; this means that, holding stress
constant (a zero value for stress), we would expect the average number of visits to
the health professional to be 3.182.
The slope coefficient is 0.023; this implies that a unit increase in stress would
result in a 0.023 increase in the expected number of visits to the health professional
(timedrs). Similarly, a unit decrease in stress would result in a 0.023 decrease in the
expected number of visits.
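To illustrate, a hypothetical respondent with a stress score of 200 (close to the sample mean of 204.22) would have a predicted number of visits of 3.182 + 0.023 × 200 ≈ 7.8, close to the observed mean of 7.90 visits.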
d. Write up a simple results section for this analysis. Feel free to use a bivariate
scatterplot as a figure. Make sure you describe the regression equation and R-
square in your results.
Answer
This part sought to find out how well stress predicts timedrs.
In regard to the descriptive statistics, it was found that the average number of
visits to health professionals was M = 7.90 with SD = 10.95, N = 465. Also, the
average number of stressful events was M = 204.22, SD = 135.79, N = 465.
Predictive results showed that even though stress significantly predicts timedrs,
its influence is very small, as can be seen from the R-squared value (only 8.2%).
The overall model was significant, F(1, 463) = 41.432, p < 0.05.
The regression equation is given as follows:
Timedrs = 3.182 + 0.023 × Stress
Figure 5: Scatterplot of number of visits to health professional versus number of stressful life events
The scatterplot further shows that there is a positive, though weak, relationship, as
can be seen from the spread of the points.
Part 2
SPSS Multivariate Regression Assignment
1. Examine potential univariate and bivariate outliers. Use procedures described in
earlier assignments to examine outliers and to get to know your data. You already
investigated timedrs and stress in last week’s assignment. Thus, focus on how the
other variables correlate with each other and with timedrs and stress. If you find
outliers, report how you would handle them (i.e., keep, delete or transform) but do
NOT do this. I would like everyone to work with the same data set.
Answer
Figure 6: Boxplot of visits to health professional
Figure 7: Boxplot of stressful life events
The above boxplots indicate that there are several outliers in the dataset. The outliers are
on the upper side. To handle these outliers, it would be advisable to remove them.
2. Perform a standard multiple regression analysis (ANALYZE-REGRESSION-
LINEAR) with timedrs as the DV and phyheal, menheal, and stress as IVs. Make
sure you click on STATISTICS and mark the box that provides PART AND
PARTIAL CORRELATIONS, as well as R-SQUARE CHANGE. (Note: the output
will be similar to what you see in section 5.7. However, the file used in the book is
slightly different, so the values will not be exactly the same).
a. The second table in the output gives you a few important values, R and R-Square. R
is a bit different than what we saw last week because there are multiple predictors.
What are these values and what do they mean?
Answer
Table 6: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .468a   .219       .214                9.708
a. Predictors: (Constant), Stressful life events, Physical health symptoms, Mental health symptoms
The value of R is 0.468 while the R-Squared is 0.219.
These two values are a bit different from what we saw last week because there are
multiple predictors: R is now the multiple correlation between the dependent variable and
the set of independent variables, and it indicates a moderate positive relationship. The
R-squared value shows that about 22% of the variation in the dependent variable is
explained by the three independent variables in the model.
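A syntax sketch for this standard multiple regression, assuming the variables are named timedrs, phyheal, menheal and stress (ZPP requests the part and partial correlations, CHANGE the R-square change):
REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS R ANOVA CHANGE ZPP
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT timedrs
  /METHOD=ENTER phyheal menheal stress.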
b. The third table is also important. It is labeled ANOVA. This table tells you whether
your regression equation is significant (thus does the regression line significantly
predict the DV). Is your regression line significant?
Answer
Table 7: ANOVA Table
Model          Sum of Squares   df    Mean Square   F        Sig.
1 Regression   12168.315        3     4056.105      43.034   .000b
  Residual     43451.134        461   94.254
  Total        55619.449        464
a. Dependent Variable: Visits to health professionals
b. Predictors: (Constant), Stressful life events, Physical health symptoms, Mental health symptoms
Looking at the ANOVA table, we can see that the p-value for the F statistic is p < .001,
which is below the 5% level of significance. We therefore reject the null hypothesis and
conclude that the regression line is significant at the 5% level of significance, F(3, 461) =
43.03, p < 0.05.
c. The Coefficients table gives you the values you need for your regression equation.
The ‘Constant’ variable is your intercept or a. The other variables are b coefficients.
Each one of those values has a significance value telling you whether each value is
significant (note: the above ANOVA table tells you whether the whole equation is
significant, these values are for individual parts of the equation). Please give me the
regression equation for this data (Y = a + b1X1 + b2X2 + b3X3) and explain what that
equation means.
Answer
Table 8: Regression coefficients
Model                          Unstandardized Coefficients   Standardized Coefficients   t        Sig.
                               B        Std. Error           Beta
1   (Constant)                 -3.705   1.124                                            -3.296   .001
    Physical health symptoms   1.787    .221                 .390                        8.083    .000
    Mental health symptoms     -.010    .129                 -.004                       -.075    .940
    Stressful life events      .014     .004                 .169                        3.769    .000
a. Dependent Variable: Visits to health professionals
As can be seen, two independent variables are significant in the model: physical
health symptoms and stressful life events (p < 0.05). Mental health symptoms were
found to be insignificant in the model.
The estimated regression equation (including all three variables) is as follows:
Timedrs = -3.705 + 1.787 × Phyheal - 0.010 × Mental + 0.014 × Stress
d. Write up a simple results section for this analysis. Feel free to use a bivariate
scatterplot as a figure. Make sure you describe the regression equation and R-square in
your results.
Answer
This part sought to find out how well the three independent variables predict timedrs.
In regard to the descriptive statistics, it was found that the average number of visits to
health professionals was M = 7.90 with SD = 10.95, N = 465. Also, the average
number of stressful events was M = 204.22, SD = 135.79, N = 465.
Results showed that only two of the three independent variables were significant in
the model and that the model accounted for 22% of the variation in the dependent
variable.
3. Perform a sequential (hierarchical) multiple regression analysis (ANALYZE-
REGRESSION-LINEAR) with timedrs as the DV and phyheal, menheal, and stress
as IVs. The difference in the hierarchical analysis is that you will put each IV in the
INDEPENDENT box separately and then click on the NEXT box above it. This step
processes each IV in the regression into separate blocks. Thus, the first variable you
put into the equation will get all of the shared variance. The next one gets what is left
over and so on until all variables are entered (You can put the variables in the
equation in any order; usually you will have a theory or reason for doing it in a
particular order). Make sure you click on STATISTICS and mark the box that
provides PART AND PARTIAL CORRELATIONS, as well as R-SQUARE
CHANGE (R-SQUARE CHANGE will tell you how much more each variable
explains and whether it is a significant amount of additional explanation).
a. Describe the second table and how it differs from the standard regression (i.e.,
multiple R-Square values and R-Square change). What are these values and what do
they mean?
Answer
Table 9: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change
1       .440a   .193       .191                9.845                        .193              110.862    1     463   .000
2       .441b   .195       .191                9.846                        .002              .871       1     462   .351
3       .468c   .219       .214                9.708                        .024              14.206     1     461   .000
a. Predictors: (Constant), Physical health symptoms
b. Predictors: (Constant), Physical health symptoms, Mental health symptoms
c. Predictors: (Constant), Physical health symptoms, Mental health symptoms, Stressful life events
The table provides the R and R-Square values for each model based on the order of
entry of the independent variables. The R-Square increases as variables are added to
the model, and the final model (model 3) has the same R and R-Square as the
standard regression. The R-Square for model 3 is 0.219, implying that about 22% of
the variation in the dependent variable is explained by the three independent
variables in the model. However, the adjusted R-Square does not change between
models 1 and 2, implying that the variable added in model 2 (mental health
symptoms) contributes nothing significant to the model.
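In syntax, the sequential analysis differs from the standard one only in that each IV gets its own /METHOD=ENTER block; a sketch, assuming the same variable names as above:
REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS R ANOVA CHANGE ZPP
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT timedrs
  /METHOD=ENTER phyheal
  /METHOD=ENTER menheal
  /METHOD=ENTER stress.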
b. Does the third table differ from standard regression (and how)?
Answer
Table 10: ANOVA Table
Model          Sum of Squares   df    Mean Square   F         Sig.
1 Regression   10744.897        1     10744.897     110.862   .000b
  Residual     44874.552        463   96.921
  Total        55619.449        464
2 Regression   10829.337        2     5414.669      55.851    .000c
  Residual     44790.112        462   96.948
  Total        55619.449        464
3 Regression   12168.315        3     4056.105      43.034    .000d
  Residual     43451.134        461   94.254
  Total        55619.449        464
a. Dependent Variable: Visits to health professionals
b. Predictors: (Constant), Physical health symptoms
c. Predictors: (Constant), Physical health symptoms, Mental health symptoms
d. Predictors: (Constant), Physical health symptoms, Mental health symptoms, Stressful life events
The layout of the table does not differ from the standard regression table, but it now
reports a separate ANOVA for each of the three models, and each model is significant
overall. However, the change statistics above show that the block adding mental health
symptoms does not contribute a significant amount of additional explained variance.
c. The Coefficients table gives you the values you need for your regression equation.
How is this table different from the standard regression? Why is it different?
Answer
Table 11: Regression coefficients table
                               Unstandardized           Standardized                       Correlations
Model                          B        Std. Error      Beta     t        Sig.    Zero-order   Partial   Part
1 (Constant)                   -2.117   1.055                    -2.006   .045
  Physical health symptoms     2.015    .191            .440     10.529   .000    .440         .440      .440
2 (Constant)                   -2.319   1.077                    -2.152   .032
  Physical health symptoms     1.910    .222            .417     8.616    .000    .440         .372      .360
  Mental health symptoms       .118     .126            .045     .933     .351    .256         .043      .039
3 (Constant)                   -3.705   1.124                    -3.296   .001
  Physical health symptoms     1.787    .221            .390     8.083    .000    .440         .352      .333
  Mental health symptoms       -.010    .129            -.004    -.075    .940    .256         -.003     -.003
  Stressful life events        .014     .004            .169     3.769    .000    .287         .173      .155
a. Dependent Variable: Visits to health professionals
The coefficients obtained differ from those in the standard regression because the
independent variables are entered in separate blocks, so the coefficients at each step are
estimated using only the variables entered up to that point. The table also provides a
separate intercept for each step of the hierarchical analysis. This is necessary because, as
each new IV is added to the equation, the regression line is altered: the alteration affects not
only the slope of the line, but also where the line intersects the Y axis.
d. Write up a simple results section for this analysis. Make sure you describe the
regression equation and R-square in your results.
Answer
This part sought to find out how well the three independent variables predict timedrs.
Results showed that only two (physical health symptoms and stressful life events) of
the three independent variables were significant in the model and that the final model
accounted for 21.9% of the variation in the dependent variable.
Part 3
SPSS Polynomial Regression Assignment
1) As with all data, it is important to take a look at the data to examine potential
univariate and bivariate outliers. Use procedures described in earlier assignments to
examine outliers and to get to know your data. You only need to use Interest and
Credits when looking at outliers. Supply a scatterplot of the two variables.
Answer
As can be seen in the table below, the mean interest in subject area was found to be M =
18.12, SD = 4.73 and N = 101 while the mean credits was M = 8.17, SD = 3.08 and N =
101.
Table 12: Descriptive Statistics
                            Interest in subject area   credits
N            Valid          101                        101
             Missing        0                          0
Mean                        18.116715                  8.1683
Median                      19.325300                  8.0000
Mode                        3.5636a                    9.00a
Std. Deviation              4.7257381                  3.07919
Variance                    22.333                     9.481
Skewness                    -1.001                     .151
Std. Error of Skewness      .240                       .240
Kurtosis                    .961                       -.169
Std. Error of Kurtosis      .476                       .476
Range                       22.9237                    16.00
Minimum                     3.5636                     1.00
Maximum                     26.4873                    17.00
Percentiles  25             15.806900                  6.0000
             50             19.325300                  8.0000
             75             21.271000                  10.0000
a. Multiple modes exist. The smallest value is shown
Figure 8: A scatterplot of credits versus interest in subject area
The scatter plot shows that a positive linear relationship exists between credits and
interest in the subject area.
Figure 9: Boxplot of credits
Figure 10: Boxplot of interest in subject area
As can be seen, a few outliers can be observed in interest in subject area and only one
outlier is present in credits.
2) Similar to other forms of model fitting we are going to run a multiple-step process of
analyzing the data (well, the output will reflect a multiple-step process). The data has
been centered for you (if you don’t know why this was done please ask as part of this
week’s discussion so that we can all learn together)
a. Run a regression between interest and credits (centered)
i. ANALYZE > REGRESSION > LINEAR
ii. Enter Interest as your DV
iii. Enter creditsc as your independent variable (hit next)
iv. Enter creditsc2 as your independent variable (hit next)
v. Enter creditsc3 as your independent variable
vi. Click on statistics
2. Some of the things that you need are already checked off but you will need to also
click on ‘R square change’, ‘Descriptives’, and ‘Confidence intervals’ (Level %:
95)
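If the centered and polynomial terms were not already in the file, they could be created and the three blocks run with syntax along these lines (a sketch; the names creditsc, creditsc2, creditsc3 and the DV name interest are assumptions following the assignment, and 8.1683 is the sample mean of credits from Table 12):
* Centering and polynomial terms (only needed if not supplied).
COMPUTE creditsc = credits - 8.1683.
COMPUTE creditsc2 = creditsc**2.
COMPUTE creditsc3 = creditsc**3.
EXECUTE.
REGRESSION
  /MISSING LISTWISE
  /DESCRIPTIVES MEAN STDDEV CORR SIG N
  /STATISTICS COEFF OUTS CI(95) R ANOVA CHANGE
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT interest
  /METHOD=ENTER creditsc
  /METHOD=ENTER creditsc2
  /METHOD=ENTER creditsc3.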
Output
Table 13: Correlations
                                                   Interest in    Credits centered   Credits   Credits
                                                   subject area   at mean            Squared   Cubed
Pearson Correlation   Interest in subject area     1.000          .749               -.250     .586
                      Credits centered at mean     .749           1.000              .110      .787
                      Credits Squared              -.250          .110               1.000     .279
                      Credits Cubed                .586           .787               .279      1.000
Sig. (1-tailed)       Interest in subject area     .              .000               .006      .000
                      Credits centered at mean     .000           .                  .137      .000
                      Credits Squared              .006           .137               .         .002
                      Credits Cubed                .000           .000               .002      .
N                     (each variable)              100            100                100       100
Table 14: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change
1       .749a   .561       .556                3.1625661                    .561              125.144    1     98    .000
2       .820b   .673       .666                2.7447571                    .112              33.106     1     97    .000
3       .827c   .684       .674                2.7120486                    .011              3.354      1     96    .070
a. Predictors: (Constant), Credits centered at mean
b. Predictors: (Constant), Credits centered at mean, Credits Squared
c. Predictors: (Constant), Credits centered at mean, Credits Squared, Credits Cubed
Table 15: ANOVAa
Model          Sum of Squares   df   Mean Square   F         Sig.
1 Regression   1251.667         1    1251.667      125.144   .000b
  Residual     980.179          98   10.002
  Total        2231.846         99
2 Regression   1501.078         2    750.539       99.624    .000c
  Residual     730.768          97   7.534
  Total        2231.846         99
3 Regression   1525.746         3    508.582       69.146    .000d
  Residual     706.100          96   7.355
  Total        2231.846         99
a. Dependent Variable: Interest in subject area
b. Predictors: (Constant), Credits centered at mean
c. Predictors: (Constant), Credits centered at mean, Credits Squared
d. Predictors: (Constant), Credits centered at mean, Credits Squared, Credits Cubed
Table 16: Coefficientsa
                                                                            95.0% Confidence Interval for B
Model                        B        Std. Error   Beta    t        Sig.    Lower Bound   Upper Bound
1 (Constant)                 18.105   .316                 57.247   .000    17.477        18.732
  Credits centered at mean   1.149    .103         .749    11.187   .000    .945          1.353
2 (Constant)                 19.305   .345                 56.000   .000    18.621        19.989
  Credits centered at mean   1.206    .090         .786    13.446   .000    1.028         1.384
  Credits Squared            -.127    .022         -.336   -5.754   .000    -.170         -.083
3 (Constant)                 19.394   .344                 56.368   .000    18.711        20.077
  Credits centered at mean   .996     .145         .649    6.862    .000    .708          1.284
  Credits Squared            -.140    .023         -.371   -6.104   .000    -.185         -.094
  Credits Cubed              .008     .005         .179    1.831    .070    -.001         .017
a. Dependent Variable: Interest in subject area
3. You have now conducted a linear, quadratic, and cubic regression analysis. Which
one is the best fitting model? Why?
Answer
The best fitting model is model 2, the quadratic (squared) regression model. This is
based on the fact that all the variables in that model are significant (whereas the cubic
term added in model 3 is not, p = .070) and that it has a much higher R-Squared than
model 1. The adjusted R-square value is 0.666; this means that 66.6% of the variation
in the dependent variable (Interest) is explained by the two predictors.
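Using the model 2 coefficients from Table 16, the fitted quadratic equation is:
Interest = 19.305 + 1.206 × creditsc - 0.127 × creditsc²
where creditsc is credits centered at its mean. The negative squared term means interest rises with credits at a decreasing rate; setting the derivative to zero places the peak at roughly 1.206 / (2 × 0.127) ≈ 4.7 credits above the mean, or about 13 credits.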
4. Write up a simple results section for this analysis. You should only include the result
of the model that you considered to be the best fitting model.
Answer
This section sought to find out the best fitting model.
The mean interest in subject area was found to be M = 18.12, SD = 4.73 and N = 101,
while the mean credits was M = 8.17, SD = 3.08 and N = 101.
Predictive results showed that the quadratic (squared) regression model was the best
fitting model. Multiple regression analysis was used to test if Credits centered at mean
and Credits Squared significantly predicted participants' Interest in subject area. The
results of the regression indicated the two predictors explained 67.3% of the variance
(R2 = .673, adjusted R2 = .666, F(2, 97) = 99.62, p < .01). It was found that Credits
centered at mean significantly and positively predicted participants' Interest in subject
area (β1 = 1.206, p < .001) while Credits Squared significantly and negatively
predicted participants' Interest in subject area (β2 = -.127, p < .01).
Part 4
SPSS Mediation Assignment
1. Outlier analysis - Examine potential univariate and bivariate outliers. Use
procedures described in earlier assignments to examine outliers and to get to know
your data. Be sure to report descriptive statistics (e.g., N, M and SD) and also to
report how you examined your outliers (e.g., skewness and kurtosis, box plots, and/or
scatterplots). Do not remove any outliers, simply tell me whether you have any
outliers that should be dealt with and if so, how you would handle them (but do not
delete or transform them).
Answer
Table 17: Statistics
                            pain       depress    function
N            Valid          149        149        149
Mean                        4.933557   1.546309   2.617346
Median                      4.800000   1.400000   2.761905
Mode                        4.8000     1.0000     3.0000
Std. Deviation              2.0265005  .6712759   .4328936
Variance                    4.107      .451       .187
Skewness                    -.068      1.595      -1.517
Kurtosis                    -.794      2.651      2.150
Std. Error of Kurtosis      .395       .395       .395
Range                       8.0000     3.4000     2.0625
Minimum                     1.0000     1.0000     .9375
Maximum                     9.0000     4.4000     3.0000
Percentiles  25             3.400000   1.000000   2.400000
             50             4.800000   1.400000   2.761905
             75             6.550000   1.800000   3.000000
The mean pain is M = 4.93, SD = 2.03 and N = 149, while for depression we have M =
1.55, SD = 0.67 and N = 149. Lastly, for the variable function, we have M = 2.62, SD =
0.43 and N = 149.
Figure 11: Boxplot for pain
Figure 12: Boxplot for depression
Figure 13: Boxplot for function
Looking at the above table as well as the box plots, it is clear that there are outliers in the
variables depress and function. The skewness values show that the two are skewed. The
skewness value for depression is 1.595 (a value greater than 0.5), which clearly shows
that depression values are heavily positively skewed. Similarly, the skewness value for
function was found to be -1.517 (a value less than -0.5), which clearly shows that
function values are heavily negatively skewed. However, the variable pain has no outlier,
nor does it look skewed; this is based on the fact that the skewness value for pain is
-0.068 (a value close to zero), indicating little if any skewness.
2. Baron and Kenny Method – Describe the steps that Baron and Kenny recommend
in performing a mediation analysis using regression. Specifically, what are the three
regressions that they recommend.
Answer
The following are the recommended regressions (a syntax sketch follows the list):
Independent variable (Pain) predicting the dependent variable (Depression)
Independent variable (Pain) predicting the mediator (Function)
Independent variable (Pain) and mediator (Function) predicting the dependent
variable (Depression)
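A syntax sketch of the three regressions, assuming the variable names pain, depress and function used in the output below:
* Step 1: IV predicting the DV.
REGRESSION
  /DEPENDENT depress
  /METHOD=ENTER pain.
* Step 2: IV predicting the mediator.
REGRESSION
  /DEPENDENT function
  /METHOD=ENTER pain.
* Step 3: IV and mediator predicting the DV.
REGRESSION
  /DEPENDENT depress
  /METHOD=ENTER pain function.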
3. Regression Analysis – Run the three regression analyses that were recommended
by Baron and Kenny and describe the results.
Answer
Independent variable (Pain) predicting the dependent variable (Depression)
Table 18: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .337a   .114       .108                .6341074
a. Predictors: (Constant), pain
Table 19: ANOVAa
Model          Sum of Squares   df    Mean Square   F        Sig.
1 Regression   7.583            1     7.583         18.859   .000b
  Residual     59.108           147   .402
  Total        66.690           148
a. Dependent Variable: depress
b. Predictors: (Constant), pain
Table 20: Coefficientsa
Model          B       Std. Error   Beta    t       Sig.
1 (Constant)   .995    .137                 7.258   .000
  pain         .112    .026         .337    4.343   .000
a. Dependent Variable: depress
Independent variable (Pain) predicting the mediator (Function)
Table 21: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .455a   .207       .202                .3867113
a. Predictors: (Constant), pain
Table 22: ANOVAa
Model          Sum of Squares   df    Mean Square   F        Sig.
1 Regression   5.752            1     5.752         38.460   .000b
  Residual     21.983           147   .150
  Total        27.735           148
a. Dependent Variable: function
b. Predictors: (Constant), pain
Table 23: Coefficientsa
Model          B        Std. Error   Beta    t        Sig.
1 (Constant)   3.097    .084                 37.039   .000
  pain         -.097    .016         -.455   -6.202   .000
a. Dependent Variable: function
Independent variable (Pain) and mediator (Function) predicting the dependent
variable (Depression)
Table 24: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .451a   .204       .193                .6030787
a. Predictors: (Constant), function, pain
Table 25: ANOVAa
Model          Sum of Squares   df    Mean Square   F        Sig.
1 Regression   13.590           2     6.795         18.682   .000b
  Residual     53.101           146   .364
  Total        66.690           148
a. Dependent Variable: depress
b. Predictors: (Constant), function, pain
Table 26: Coefficientsa
Model          B        Std. Error   Beta    t        Sig.
1 (Constant)   2.614    .419                 6.236    .000
  pain         .061     .027         .184    2.215    .028
  function     -.523    .129         -.337   -4.064   .000
a. Dependent Variable: depress
As can be seen, the model is significant at the 5% level of significance (p < 0.05). Multiple
regression analysis was used to test if pain and function significantly predicted
participants' depression. The results of the regression indicated the two predictors
explained 20% of the variance (R2 = .20, F(2, 146) = 18.68, p < .01). It was found that pain
significantly and positively predicted depression (β1 = .061, p < .05) while function
significantly and negatively predicted depression (β2 = -.523, p < .01).
4. Sequential or Hierarchical Analysis – Another way to run a mediation analysis
is to 1) determine that there is a significant correlation between the IV and DV, 2)
determine that there is a significant relationship between the DV and the
mediator, and 3) run a sequential regression where the Mediator is entered first (to
control for the effect of the mediator on the DV) and the IV is entered on the
second block (to see whether there is any variance left over for the IV to explain
once the mediated path is removed). The third step will not have a significant
second step if the model is mediated. Run this analysis and describe the results.
Answer
Table 27: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .421a   .177       .171                .6110345
2       .451b   .204       .193                .6030787
a. Predictors: (Constant), function
b. Predictors: (Constant), function, pain
Table 28: ANOVAa
Model          Sum of Squares   df    Mean Square   F        Sig.
1 Regression   11.806           1     11.806        31.621   .000b
  Residual     54.884           147   .373
  Total        66.690           148
2 Regression   13.590           2     6.795         18.682   .000c
  Residual     53.101           146   .364
  Total        66.690           148
a. Dependent Variable: depress
b. Predictors: (Constant), function
c. Predictors: (Constant), function, pain
Table 29: Coefficientsa
Model          B        Std. Error   Beta    t        Sig.
1 (Constant)   3.254    .308                 10.572   .000
  function     -.652    .116         -.421   -5.623   .000
2 (Constant)   2.614    .419                 6.236    .000
  function     -.523    .129         -.337   -4.064   .000
  pain         .061     .027         .184    2.215    .028
a. Dependent Variable: depress
The above results show that both models are significant at the 5% level of significance (p <
0.05).
For model 1, simple regression analysis was used to test if function significantly
predicted participants' depression. The results of the regression indicated the one
predictor variable, function, significantly explained 18% of the variance (R2 = .18,
F(1, 147) = 31.62, p < .01). It was found that the predictor variable, function, significantly
and negatively predicted depression (β1 = -.652, p < .001). In the second model, a
multiple regression analysis was used to test if pain and function significantly predicted
participants' depression. The results of the regression indicated the two predictors
explained 20% of the variance (R2 = .20, F(2, 146) = 18.68, p < .01). It was found that pain
significantly and positively predicted depression (β1 = .061, t = 2.22, p < .05) while
function significantly and negatively predicted depression (β2 = -.523, t = -4.06, p < .01).
Because pain still explains a significant amount of additional variance after the mediator
is controlled for (R-Square change = .027, p = .028), the results point to partial rather
than full mediation.
In terms of the correlations, the bivariate results above imply a weak-to-moderate positive
and significant relationship between pain and depression (r = 0.337, p < 0.01) and a
moderate negative and significant relationship between function and depression (r =
-0.421, p < 0.01).
5. A bonus method (optional)– The mediation.sps syntax file provides an
additional method employed by Sobel to investigate mediation. If your version of
SPSS opens syntax, you can play around with the syntax file (you have to change
the file locations) to see the outcome of the Sobel method.
Answer
The syntax is given as follows:
REGRESSION
/MISSING LISTWISE
/STATISTICS COEFF OUTS R ANOVA
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT depress
/METHOD=ENTER function
/METHOD=ENTER pain.
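For reference, the Sobel test statistic is z = ab / sqrt(b² × SEa² + a² × SEb²), where a is the pain → function path and b is the function → depress path controlling for pain. As a worked illustration using the coefficients reported above (a = -.097, SEa = .016 from Table 23; b = -.523, SEb = .129 from Table 26), z ≈ .0507 / .0151 ≈ 3.37, p < .001, consistent with the mediation conclusion from the regressions.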