SPSS Report: Correlation, Factor Analysis, Model Fit and Testing
Summary
This report provides a comprehensive analysis of correlation and factor analysis using SPSS. It begins with an explanation of correlation, including the Pearson test, and its application in identifying relationships between variables. The report then delves into factor analysis, explaining its use in reducing data dimensionality and exploring relationships between factors in questionnaires. It covers exploratory factor analysis (EFA) and its application, including model identification, and the tests used to determine the quality of the sampling. Furthermore, the report discusses model fit, including various tests such as Normed Chi-Square, RMSEA, CFI, TLI, and RMR, providing a detailed understanding of how to assess the goodness of fit for statistical models. The report explores the process of model identification within the Structural Equation Modelling (SEM) framework, including the use of co-variance matrices and the degree of freedom. Overall, the report offers a practical guide to applying these statistical techniques in research and data analysis, along with how to interpret the results and make informed decisions.


Table of Contents
5.10 Correlation
6.6 Factor Analysis
6.7 Exploratory Factor Analysis (EFA)
Model Identification
6.10 Model Fit
6.10.1 Normed Chi-Square Test (χ2/df)
6.10.2 Root Mean Square Error of Approximation (RMSEA)
6.10.3 Comparative Fit Index (CFI)
6.10.4 Tucker-Lewis Index (TLI)
6.10.5 Root Mean Square Residual (RMR) and Standardized RMR (SRMR)

5.10 Correlation
The Pearson correlation test is one of the most widely used methods for identifying the relationship between two variables (Pallant 2016). The correlation coefficient always lies in the range of -1 to +1; in no case will the value fall outside this range. A positive coefficient indicates a positive relationship between the variables, whereas a negative coefficient indicates that the variables are inversely related: as the value of one variable increases, the value of the other declines. Correlation is therefore one of the main approaches data analysts use to support decisions.
An important point to note is that the correlation coefficient always consists of a number with a plus or minus sign. The number indicates the magnitude of the relationship between the variables, and the sign reflects the direction in which the two variables are associated. Two variables are perfectly correlated when the coefficient equals one. A value between 0.50 and 1 indicates that the variables are highly related; a value between 0.30 and 0.50 indicates a moderate correlation; and a value below 0.30 indicates a weak correlation. A coefficient of zero means there is no correlation between the variables, while a negative value close to -1 indicates a strong negative relationship (Pallant 2016). Hence, correlation is one of the most important approaches for measuring the relationship between variables.
In the current research study, relationships are identified between various variables in an attempt to answer the questions concerning the relationship between management practices and commitment, and between commitment and organizational food safety performance. George and Mallery (2003) suggest three categories for the strength of association between variables based on the correlation coefficient value: small, medium, and large.
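As an illustration of how the same coefficient can be reproduced outside SPSS, the short Python sketch below computes a Pearson correlation and its p-value; the column names and data are hypothetical stand-ins for the study's survey scores.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical data frame; "commitment" and "food_safety_performance"
# stand in for the study's actual survey scores.
df = pd.DataFrame({
    "commitment": [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 4.2, 3.7],
    "food_safety_performance": [3.0, 4.3, 2.5, 4.0, 4.6, 3.1, 4.0, 3.5],
})

r, p_value = pearsonr(df["commitment"], df["food_safety_performance"])
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")

# Rough interpretation following the thresholds above:
# |r| < 0.30 weak, 0.30-0.50 moderate, 0.50-1.00 strong.
```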
6.6 Factor Analysis
Factor analysis is a technique used to determine whether there is actually a link or relationship between the different factors in a questionnaire. It is generally used to reduce the data by minimizing the number of variables that provide similar information. It also helps to handle the analysis more efficiently and to address multicollinearity issues when applying a multiple regression test, where a large amount of information is contained in variables that are associated with one another.
Two tests are conducted for this purpose. The Kaiser-Meyer-Olkin (KMO) test measures the adequacy of the sampling and the applicability of factor analysis by indicating the proportion of shared variance (Field 2013). KMO values close to 1.0 indicate that the data are suitable for factor analysis; values above 0.50, and in particular between 0.50 and 0.70, are considered acceptable, while lower values are not. The second test is Bartlett's test of sphericity, which checks the hypothesis that the variables are unrelated. If the p-value is greater than 0.05, the null hypothesis is not rejected, which means the data are not suitable for factor analysis, and vice versa. When the population correlation matrix resembles an identity matrix, every variable correlates poorly with the other variables in the data (Kaiser 1974).
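A minimal sketch of these two checks in Python, assuming the third-party factor_analyzer package and a data frame of questionnaire items (the file name and variable layout are placeholders):

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# "items" is assumed to be a DataFrame of numeric questionnaire responses,
# one column per item.
items = pd.read_csv("survey_items.csv")  # hypothetical file name

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)

print(f"Bartlett's test: chi-square = {chi_square:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.3f}")

# Data are usually treated as factorable when p < 0.05 and KMO > 0.50.
```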
6.7 Exploratory Factor Analysis (EFA)
In factor analysis, the term "factor" is used as another name for an independent variable and synonymously with "latent variable", referring to constructs that underlie the observed variables. The main purpose of factor analysis is to reduce the observable variables to a smaller number of latent variables that share common variance. These factors are generally hypothetical constructs that represent unobservable dimensions which cannot be measured directly (Field 2009, p. 786).
To generate better EFA results, researchers in different fields use SPSS because it is widely available in universities and works with Excel files, which makes it easy to use. Exploratory factor analysis is used in the present research to determine the components of the latent variables, because the study does not rely entirely on existing scales. EFA is a multivariate technique used to determine the correlations among groups of observed variables that are driven by a latent construct. To define a smaller set of dimensions, it is necessary to identify the strength of the association between variables (Bartholomew, Knott, & Moustaki 2011). In this way, EFA reduces a large number of correlated
variables to a smaller set; to refine the factors, the principal components method is used to identify the factors that explain the largest share of variance.
EFA allows researchers to identify the factor structure of a measure and to test its internal reliability. Researchers also use EFA when there is no hypothesis concerning the nature of the underlying factor structure. Accordingly, three basic decision points arise when applying EFA: deciding on the number of factors, selecting the extraction method, and choosing the rotation method.
Generating a scree (line) plot is one of the most common methods a researcher can use to decide on the number of factors (Wilkins 2013). The plot shows the factors on the X axis and the eigenvalues on the Y axis, where each eigenvalue summarizes the variance accounted for by the corresponding factor. In the present data, the 34 items yield 34 potential underlying factors, and the first factor has an eigenvalue of 12.543, accounting for 38.008% of the variance. Once the number of factors is decided, the principal components technique is used for extraction. When the principal concern is parsimony, the smallest number of factors that reproduces the largest share of variance in the collected data is retained so that the researcher can proceed with multivariate analysis. After extraction, factor rotation is applied to aid correct interpretation, as the unrotated solution can be ambiguous. The aim of rotation is to achieve a simple structure, in which each variable loads strongly on only a few factors. For example, variables such as knowledge and learning may load heavily on a training factor.
Thus, after obtaining the initial solution, rotation maximizes the high loadings and minimizes the low ones in order to reach an interpretable solution. Two families of rotation are available: orthogonal rotation, which assumes the factors are uncorrelated, and oblique rotation, which assumes the factors are correlated to some extent (Conway & Huffcutt 2003). In the present research, Promax rotation, an oblique method, is used to achieve this aim. Confirmatory factor analysis is also conducted to evaluate model fit, because EFA does not deliver a unique solution and by itself does not test how well the model fits the data.
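The eigenvalue check and the Promax-rotated extraction described above can be sketched in Python with the third-party factor_analyzer package; the data file, data frame, and the choice of three factors are assumptions for illustration only.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("survey_items.csv")  # hypothetical questionnaire data

# Step 1: inspect eigenvalues (the values plotted in a scree plot)
# to decide how many factors to retain.
fa_check = FactorAnalyzer(rotation=None)
fa_check.fit(items)
eigenvalues, _ = fa_check.get_eigenvalues()
print("Leading eigenvalues:", [round(e, 3) for e in eigenvalues[:5]])

# Step 2: extract the chosen number of factors with an oblique (Promax) rotation.
n_factors = 3  # assumed here; in practice taken from the scree plot / eigenvalues > 1
fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))  # look for a simple structure in the rotated loadings
```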
Model Identification
In Structural Equation Modelling (SEM), the researcher seeks a solution to a set of structural equations, which requires enough information in the data to reach a unique and correct solution. The analysis draws on the variances and covariances that are not apparent in the raw data. The covariance matrix provides the degrees of freedom used to estimate the parameters: the number of distinct data points is given by p(p + 1)/2,
where p is the number of measured items. Identification is also examined by comparing the number of data points to the number of parameters (Byrne 2010, p. 34). In the present research model, 25 parameters are estimated from 10 observable variables. A just-identified model has zero degrees of freedom, meaning all the available data points are used to estimate the parameters. Such a model is not necessarily rejected, since with zero degrees of freedom its fit is determined by the circumstances rather than tested statistically.
While using the software, researchers can make mistakes such as failing to link an item to a construct or to an indicator. Another mistake may occur in specifying the error term for each indicator that is used (Hair et al. 2010, p. 699). Therefore, the researcher should not rely solely on the SEM software, because it may not highlight identification problems, and the result can be very large standard errors. It is thus necessary to check for such errors, including negative error variances and implausibly large parameter estimates.
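As a quick illustration of the identification check described above, the sketch below applies the p(p + 1)/2 rule to the figures mentioned in this section (10 observed variables and 25 estimated parameters); the arithmetic, not the particular numbers, is the point.

```python
# Identification check: distinct data points in the covariance matrix vs. free parameters.
p = 10                           # number of observed (measured) variables, as in the text
data_points = p * (p + 1) // 2   # 10 * 11 / 2 = 55 distinct variances and covariances
estimated_parameters = 25        # as reported for the present model

degrees_of_freedom = data_points - estimated_parameters
print(f"Data points: {data_points}, df: {degrees_of_freedom}")
# df > 0  -> over-identified (testable model)
# df == 0 -> just-identified (fit cannot be tested)
# df < 0  -> under-identified (no unique solution)
```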
6.10 Model Fit
In any statistical study it is very important for the analyst to identify whether the prepared model fits the data well. If the model is not a good fit, it cannot generate accurate results. In techniques such as regression, measures like the sum of squares and the standard error of the mean are examined closely to find out whether the model fits and can predict results accurately.
If the standardized errors in a model are unusually large, this reflects that the model is not a good fit and cannot make accurate predictions.
An important point to note is that in SEM, when the chi-square (χ2) value is not significant, the model can be considered a good fit.
6.10.1 Normed Chi-Square Test (χ2/df)
The normed chi-square is used to obtain more reliable comparisons based on the χ2 test, especially when the sample size is smaller than about 200 units. Computing the statistic requires the chi-square value and the degrees of freedom: the normed chi-square is simply the chi-square value divided by the degrees of freedom. It is one of the measures researchers regularly use when making decisions about model fit, and it therefore carries considerable significance in model assessment.
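A minimal sketch of the calculation, using the chi-square and CMIN/DF values reported later in Table 6.11; the degrees of freedom are inferred from those two figures, so treat them as illustrative.

```python
# Normed chi-square = chi-square / degrees of freedom.
chi_square = 1140.674        # CMIN from Table 6.11
normed_chi_square = 2.381    # CMIN/DF from Table 6.11

degrees_of_freedom = round(chi_square / normed_chi_square)  # ~479, inferred, not reported
print(f"df ~ {degrees_of_freedom}")
print(f"Normed chi-square = {chi_square / degrees_of_freedom:.3f}")
# Values roughly between 1 and 5 are commonly treated as tolerable (see 6.10.5).
```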
6.10.2 Root Mean Square Error of Approximation (RMSEA)
The root mean square error of approximation is an approach in which the discrepancy between the model-implied and the observed covariances is calculated per degree of freedom. It is important to note that the discrepancy refers to the population rather than the sample, and population and sample data can differ considerably (Hair et al. 2010, p. 667). The main reason for working at the population level is that the aim of this approach is to identify whether the model fits the entire population. Because the discrepancy is measured with respect to the population rather than the sample, the RMSEA signifies how well the model fits the population, not only the sample used. The RMSEA also explicitly attempts to correct for model complexity and sample size by considering both in its computation.
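A common textbook form of the statistic can be sketched as follows; the exact denominator (N versus N - 1) differs between software packages, so this is an approximation rather than the exact SPSS/AMOS implementation, and the inputs other than the Table 6.11 chi-square are placeholders.

```python
import math

def rmsea(chi_square: float, df: int, n: int) -> float:
    """Approximate RMSEA: discrepancy per degree of freedom, corrected for sample size."""
    numerator = max(chi_square - df, 0.0)   # excess misfit beyond what df alone would explain
    return math.sqrt(numerator / (df * (n - 1)))

# df is inferred from Table 6.11 (see 6.10.1); n is a hypothetical sample size.
print(round(rmsea(chi_square=1140.674, df=479, n=400), 3))
```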
6.10.3 Comparative Fit Index (CFI)
An important point to note is that this index is a revised form of the Normed Fit Index (NFI) in which sample size is taken into account. The method requires a sufficient sample size to be applied properly (Byrne 1998), although it also performs comparatively well with small samples; the sample must simply not be too small, otherwise the index may produce misleading results.
The statistic assumes that all latent variables are uncorrelated (the null or independence model) and compares the sample covariance matrix with this null model. The CFI is one of the indices most commonly reported by SEM programs because it is among those least affected by sample size (Fan et al. 1999). Its value ranges between 0.0 and 1.0, with values close to 1.0 indicating good fit; a common cut-off criterion is CFI ≥ 0.90.
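A sketch of the usual CFI calculation from the chi-square values of the tested model and the null (independence) model; the input values below are placeholders, not results from this study.

```python
def cfi(chi_m: float, df_m: int, chi_0: float, df_0: int) -> float:
    """Comparative Fit Index from the model's and the null model's chi-square and df."""
    d_model = max(chi_m - df_m, 0.0)   # non-centrality of the tested model
    d_null = max(chi_0 - df_0, 0.0)    # non-centrality of the independence model
    return 1.0 - d_model / max(d_null, d_model, 1e-12)

# Placeholder values for illustration only.
print(round(cfi(chi_m=1140.674, df_m=479, chi_0=9500.0, df_0=528), 3))
```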
6.10.4 Tucker-Lewis Index (TLI)
The Tucker-Lewis Index (TLI) is an example of an incremental fit index and is regarded as a refinement of earlier indices with respect to the influence of sample size. A high TLI value means the model is well suited to the research, and the value should be above 0.9; although the index generally ranges between 0 and 1, values markedly below 0.9 indicate that the model does not fit the data adequately (Schermelleh-Engel & Moosbrugger 2003).
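The TLI can be sketched in the same way, again with placeholder inputs rather than study results.

```python
def tli(chi_m: float, df_m: int, chi_0: float, df_0: int) -> float:
    """Tucker-Lewis Index (non-normed fit index) from model and null-model chi-square and df."""
    ratio_null = chi_0 / df_0    # misfit per df of the independence model
    ratio_model = chi_m / df_m   # misfit per df of the tested model
    return (ratio_null - ratio_model) / (ratio_null - 1.0)

# Placeholder values for illustration only.
print(round(tli(chi_m=1140.674, df_m=479, chi_0=9500.0, df_0=528), 3))
```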
6.10.5 Root Mean Square Residual (RMR) and Standardized RMR (SRMR)
It is to be noted that the RMR and SRMR are expressed as the square root of the average squared difference between the residuals of the sample covariance matrix and those of the estimated (model-implied) covariance matrix. An important point is that the RMR is computed on the original scales of the indicators, so its range depends on those scales. This makes the RMR difficult to interpret when the questionnaire items use different scales, for example Likert scales from 1 to 5 and from 1 to 7 (Kline 2005). To solve this problem the SRMR is used, which standardizes the residuals and allows a more meaningful interpretation of the RMR values.
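A rough numerical sketch of the standardized version: when the residuals are taken from correlation (standardized) matrices, the SRMR is the square root of the mean squared residual over the unique matrix elements. The matrices below are tiny placeholders, not study data.

```python
import numpy as np

def srmr(sample_corr: np.ndarray, implied_corr: np.ndarray) -> float:
    """SRMR: root mean square of the standardized residuals (lower triangle incl. diagonal)."""
    residuals = sample_corr - implied_corr
    idx = np.tril_indices_from(residuals)      # unique elements of a symmetric matrix
    return float(np.sqrt(np.mean(residuals[idx] ** 2)))

# Placeholder 3x3 correlation matrices for illustration only.
s = np.array([[1.00, 0.42, 0.35],
              [0.42, 1.00, 0.51],
              [0.35, 0.51, 1.00]])
m = np.array([[1.00, 0.40, 0.38],
              [0.40, 1.00, 0.48],
              [0.38, 0.48, 1.00]])
print(round(srmr(s, m), 4))
```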
An important point to note is that the SRMR value remains in the range of 0 to 1; if the obtained value is below 0.05, the model can be considered well fitted (Diamantopoulos & Siguaw 2000), while higher values up to about 0.08 are considered tolerable (Hu & Bentler 1999). A perfect fit corresponds to an SRMR of 0, and in general the SRMR will be lower for models with many parameters and large sample sizes.
Hair et al. (2010, p. 678) further advise that applying a single set of cut-off values to all measurement or structural models is not practical, and that no set of index values exists that can reliably separate good models from poor ones.
It is essential to report the chi-square statistic together with its degrees of freedom and p-value, as this helps the researcher identify problems that could affect the entire project. Hu and Bentler (1999) also recommend a two-index presentation strategy, such as NNFI (TLI) and SRMR, RMSEA and SRMR, or CFI and SRMR; combined with the chi-square test, these presentation formats support the defined aims and objectives more effectively. The authors further suggest that, with the chi-square test, the researcher can easily examine the associated degrees of freedom and p-value, and such combinations are chosen because they are considered comparatively insensitive to sample size and to the number of parameters estimated.
The tolerable ranges and cut-off values are: for chi-square (CMIN), a p-value > 0.05 (Bagozzi & Yi 2012; Hair et al. 2010); CMIN/DF within a range from 1 to 5 (Schumacker & Lomax 2004; Ullman 2001); TLI within a range from 0 to 1 (Hu & Bentler 1999); CFI ≥ 0.9 (Bentler 1995); RMSEA ≤ 0.1 (MacCallum, Browne, & Sugawara 1996); and RMR within a range from 0 to 1, with well-fitting models obtaining values below 0.05 (Diamantopoulos & Siguaw 2000). The measurement model fit results for supply network strategies, knowledge sharing, innovation
and supply chain competence are summarised in Table 6.11; the values are within the accepted standards and show that, overall, the measurement model fit is acceptable.
Table 6.11: Summary of goodness-of-fit tests and values indicating good measurement model fit

Test        Default model
CMIN        1140.674
p-value     0.000
CMIN/DF     2.381
RMR         0.142
IFI         0.929
TLI         0.921
CFI         0.928
RMSEA       0.060
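As a small convenience, the Table 6.11 values can be checked mechanically against the tolerable ranges listed above; this is only a restatement of those criteria in code, and the significant chi-square p-value illustrates why the other indices are consulted as well.

```python
# Reported values from Table 6.11.
fit = {"p_value": 0.000, "cmin_df": 2.381, "rmr": 0.142,
       "tli": 0.921, "cfi": 0.928, "rmsea": 0.060}

# Tolerable ranges / cut-offs as listed in the text above.
checks = {
    "p_value": fit["p_value"] > 0.05,          # chi-square non-significant
    "cmin_df": 1 <= fit["cmin_df"] <= 5,       # normed chi-square between 1 and 5
    "rmr":     0 <= fit["rmr"] <= 1,           # RMR within 0-1 (< 0.05 for well-fitting models)
    "tli":     0 <= fit["tli"] <= 1,           # TLI within 0-1
    "cfi":     fit["cfi"] >= 0.9,              # CFI at least 0.9
    "rmsea":   fit["rmsea"] <= 0.1,            # RMSEA no greater than 0.1
}

for index, passed in checks.items():
    print(f"{index:8s} {'within range' if passed else 'outside range'}")
```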