An Overview of Statistical Methods in Quantitative Research
Added on 2023/06/05

Summary
This report provides an overview of statistical concepts used in quantitative research. It begins by defining statistics and biostatistics, differentiating between descriptive and inferential statistics, and explaining variable measurement across nominal, ordinal, interval, and ratio scales. The report covers exploratory data analysis techniques, including various graphs and measures of central tendency such as the mean, median, and mode. It outlines the steps in planning research, emphasizing problem definition, literature review, proposal preparation, data collection, and data analysis, while also addressing accuracy, reliability, and validity. Ethical considerations in experimental and non-experimental studies are discussed, focusing on informed consent and minimizing harm to subjects. The report then covers normal distributions, transformation techniques, the central limit theorem, standard normal distributions, Z-values, and the standard error of the mean. Hypothesis testing, including null and alternative hypotheses, type I and type II errors, power, and confidence intervals, is explained. Finally, the report covers correlation analysis, Pearson correlation, regression, simple and multiple linear regression, and the significance of the R-squared value.

QUANTITATIVE RESEARCH
Week 5
Statistics can be defined as the science of collecting, analyzing, and interpreting data (Barbara & Susan, 2014). Biostatistics, in turn, is the discipline that applies statistical concepts to biological events and data. There are two forms of statistics: descriptive statistics, which summarizes the observations in the data, and inferential statistics, which draws conclusions or inferences from the data.
A variable is a given property of the selected population, while the measurements of these variables collected in statistics are called data. Data can be either discrete (where the measurements take countable integer values) or continuous (where the measurements can take any value in an interval on the real line). Data is usually measured on four different scales: nominal, ordinal, interval, and ratio (Norman, 2010).
Data is displayed using either tables or graphs in what is referred to as exploratory data analysis (Martinez, Martinez, & Solka, 2010). The graphs include bar graphs, histograms, pie charts, boxplots, scatterplots, and line graphs. There are three measures of central tendency in statistics: the mean (the average value), the median (the middle value of the ordered data), and the mode (the most frequent value).
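As an illustration, the three measures of central tendency can be computed with Python's standard `statistics` module (the data values here are made up for the example):

```python
import statistics

data = [2, 3, 3, 5, 7, 8, 3, 5, 9]

mean = statistics.mean(data)      # average value: sum / count
median = statistics.median(data)  # middle value of the sorted data
mode = statistics.mode(data)      # most frequent value

print(mean, median, mode)  # 5, 5, 3
```

For this dataset the three measures differ: the mode (3) sits below the mean and median (both 5), which is typical when a few large values pull the average upward.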

Week 6
The general steps in planning research include: problem definition, review of the literature, preparation of the proposal, collection of data, and coding and preparation of the data for analysis.
Problem definition is a description of what the research entails, what it is about, and what goals it aims to achieve (Babbie, 2010). The review of literature familiarizes the researcher with similar research on the problem of interest. This review also provides the researcher with information on the challenges faced in previous work on similar topics.
The proposal preparation stage involves writing down the plan for conducting the research. The proposal usually contains the full features of a report and is expected to clearly outline the research process. The data collection stage deals with the actual gathering of information on the variables through methods such as laboratory tests, observation, and interviews. The final stage is the coding and preparation of the collected data for analysis.
For every study, aspects such as accuracy, reliability, and validity should be checked to gauge how fit the research design is for the analysis (Abramson & Abramson, 2008). The parameters of accuracy that can be measured for a study are sensitivity, specificity, and predictive value.
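As a sketch of how these accuracy parameters are computed, the counts below are purely hypothetical screening-test results; the formulas follow the standard definitions:

```python
# Hypothetical screening-test counts (illustrative only)
tp, fn = 80, 20   # diseased subjects: correctly / incorrectly classified
tn, fp = 90, 10   # healthy subjects: correctly / incorrectly classified

sensitivity = tp / (tp + fn)   # proportion of diseased subjects detected
specificity = tn / (tn + fp)   # proportion of healthy subjects cleared
ppv = tp / (tp + fp)           # positive predictive value

print(sensitivity, specificity, ppv)  # 0.8, 0.9, ~0.889
```

Note that the predictive value, unlike sensitivity and specificity, depends on how common the condition is in the study population.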

Week 7
Any researcher conducting a study is expected to ensure that it is carried out in a manner that is responsible and ethical (O'Neil, 2011). This goes a long way toward presenting the research as reliable and accurate, so that it can be referred to in future work.
An experimental study, which involves an intervention by the researcher on the subject, is expected to observe two main ethical considerations: exposure of the subject to harm, and informed consent from the subject (especially in the case of human subjects). The research has to be conducted in a way that does not bring any harm to the research subjects. It is also expected that human subjects are given adequate information about the research and that their consent is sought for their participation in the study (Pierre, 2011). The risks are lower in non-experimental studies; however, the participants' informed consent remains a vital ethical consideration.
The integrity of a study is also fundamental to its acceptability. The researcher should provide full disclosure of all the steps involved in the study, acknowledge all involved parties, and outline the strengths and weaknesses of the study.
Week 8
Normal distributions (also known as Gaussian distributions) are probability distributions for continuous data. The distribution describes the probabilities of the data points of a variable. When the data is not normal, transformation techniques can be applied to make the data approximately normal. These transformation techniques include the logarithmic and square-root methods (Han Kamber & Jaiwei, 2011).
The central limit theorem states that for a sufficiently large sample of random variables, the distribution of their mean tends toward a normal distribution, provided that the random variables are independent with finite means and variances.
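A small simulation can illustrate the theorem: repeatedly averaging samples drawn from a non-normal (uniform) distribution yields sample means that cluster around the population mean. This sketch uses only Python's standard library, and the sample sizes are arbitrary choices:

```python
import random
import statistics

random.seed(42)

# Draw 1000 samples of size 30 from a uniform(0, 1) distribution,
# which is flat rather than bell-shaped, and record each sample's mean.
sample_means = [
    statistics.mean(random.uniform(0, 1) for _ in range(30))
    for _ in range(1000)
]

# The sample means concentrate around the population mean (0.5), and
# their spread is close to sigma / sqrt(n), as the theorem predicts.
print(statistics.mean(sample_means))
print(statistics.stdev(sample_means))
```

A histogram of `sample_means` would look approximately bell-shaped even though each individual draw comes from a flat distribution.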
A standard normal distribution is a normal distribution that has a mean of 0 and a variance equal to 1 (Oscar, 2009). Mathematically, for a variable X following the standard normal distribution, we write:

X ~ N(0, 1)

Using the concept of the standard normal distribution, we can obtain the Z-value. To standardize a data point x from a distribution with mean μ and standard deviation σ, we use the formula below:

z = (x − μ) / σ
The standard error of the mean is a statistic that represents the standard deviation of the distribution of the means of samples drawn from a population. This value gives an indication of how close the sample mean is to the population mean.
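Both the z-score and the standard error of the mean can be sketched with Python's standard library; the sample values below are invented for illustration:

```python
import math
import statistics

sample = [12.0, 15.0, 14.0, 10.0, 13.0, 16.0, 11.0, 13.0]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)       # sample standard deviation
sem = sd / math.sqrt(len(sample))   # standard error of the mean

# z-score of a single observation relative to the sample:
# how many standard deviations it lies from the mean
x = 16.0
z = (x - mean) / sd

print(mean, sd, sem, z)  # 13.0, 2.0, ~0.707, 1.5
```

A larger sample size shrinks the standard error (through the sqrt(n) divisor), reflecting the fact that bigger samples estimate the population mean more precisely.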

Week 9
A hypothesis can be defined as a statement about one or more variables in a study that is either supported or rejected by the analysis (Usama & Padhraic, 2008). The main or baseline hypothesis is referred to as the null hypothesis (H0); the opposing hypothesis is the alternative hypothesis (H1 or HA).
A type I error is the error of rejecting the null hypothesis when it is true, while a type II error is the error of failing to reject the null hypothesis when it is false. The probability of a type I error is denoted α, and the probability of a type II error is denoted β. The power of a hypothesis test is the probability of rejecting the null hypothesis when it is false and is given by 1 − β.
A confidence interval is a statistic that gives the range within which the value of a population parameter is likely to lie. The interval is expressed using the highest and lowest probable values of the parameter. Any value falling outside these limits is therefore not considered a reasonable value for the parameter in question.
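As an illustration, a confidence interval for a population mean can be computed from the sample mean and its standard error. This sketch uses the normal approximation (critical value 1.96 for 95% confidence); for small samples a t critical value would be more appropriate, and the sample values are invented:

```python
import math
import statistics

sample = [12.0, 15.0, 14.0, 10.0, 13.0, 16.0, 11.0, 13.0]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))

# 95% confidence interval: mean plus or minus 1.96 standard errors
z_crit = 1.96
lower, upper = mean - z_crit * sem, mean + z_crit * sem

print(lower, upper)  # ~11.61, ~14.39
```

Any hypothesized population mean outside roughly (11.61, 14.39) would not be considered a reasonable value for this sample at the 95% confidence level.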

Week 10
Correlation is the association that exists between variables in a dataset. Correlation analysis is a statistical method for determining the strength and nature of the association that may exist between two variables in a data set (Jorge, Angela, & Edson, 2013).
The Pearson correlation method is the most widely used correlation analysis method. It produces the Pearson correlation coefficient, whose value ranges between -1 and 1. A positive value indicates a positive association between the variables, while a negative value indicates a negative association. Values closer to 0 indicate a weak association, and values closer to either -1 or 1 represent a strong association.
Regression, on the other hand, is a statistical method that expresses the relationship between variables in a dataset in the form of an equation (Faraway, 2016). It examines the effect of one or more variables, referred to as independent variables, on another variable, the dependent variable.
When there is only one independent and one dependent variable, the regression is described as simple linear regression. When there is one dependent variable and several independent variables, the regression is described as a multiple linear regression model. In simple linear regression, the R-squared value obtained from the analysis equals the square of the correlation coefficient and measures the proportion of variance in the dependent variable explained by the model.
References

Abramson, J. H., & Abramson, Z. H. (2008). Research Methods in Community Medicine: Surveys, Epidemiological Research, Programme Evaluation, Clinical Trials (6th ed.). Chichester, England: Wiley.
Babbie, E. R. (2010). The Practice of Social Research (12th ed.). Belmont, CA: Wadsworth Cengage.
Barbara, I., & Susan, D. (2014). Introductory Statistics (1st ed.). New York: OpenStax CNX.
Usama, F., & Padhraic, S. (2008). From Data Mining to Knowledge Discovery in Databases (4th ed.). New York: CRC Press.
Faraway, J. J. (2016). Extending The Linear Model with R (2nd ed.). New York: Chapman &
Hall/CRC.
Han Kamber, & Jaiwei, P. (2011). Data Mining: Concepts and Techniques (3rd ed.). London: Morgan Kaufmann.
Jorge, A. A., Angela, A., & Edson, Z. M. (2013). Robust Linear Regression Models: Use of
Stable Distribution for the Response Data. Open Journal of Statistics, 3, 3-5.
Martinez, W. L., Martinez, A. R., & Solka, J. (2010). Exploratory Data Analysis with MATLAB (2nd ed.). London: CRC/Chapman & Hall.
Norman, G. (2010). Likert Scales, Levels of Measurement and the Laws of Statistics. Advances in Health Sciences Education, 15(5), 625-632.
O'Neil, P. (2011). The Evolution of Research Ethics in Canada: Current Developments. Canadian Psychology, 52(3), 2-9.

Oscar, M. (2009). A data mining and knowledge discovery process model (1st ed.). Vienna: Julio
Ponce.
Pierre, D. (2011). A New Perspective on Research Ethics. Health Law Review, 19(3), 1-5.





