Advanced Statistical Hydrology Project: Flood Analysis in NSW
Added on 2022/10/17

ADVANCED STATISTICAL HYDROLOGY
NAME OF STUDENT
NAME OF PROFESSOR
NAME OF CLASS
NAME OF SCHOOL
DATE

INTRODUCTION
This report focuses on hydrology and, specifically, on the discharge of flood flows from areas within the peripheries of the state of NSW. Discharge is measured in cubic metres per second (m³/s), and when the annual flows are summed up the rate of discharge is excessively high, which eventually causes floods that require frequent discharge from an area. This is what leads to the interest in developing this report. The assignment itself has two parts: the analysis part, which leads into the development of the report, and the report itself. The report opens with an introductory part explaining what the whole assignment entails, and the overall structure of the report is also explained at this point.
Setting the report structure aside for a moment, we focus first on the analysis part. In the analysis of the dataset provided, R is to be used to analyse the discharge, or flood flow, of hydrological water experienced in the state of NSW (a state in Australia). The task is to predict the probability of occurrence of one per cent (1%) of the total flood flow experienced. A total of four (4) probability distributions are to be explored, together with one non-parametric method that gives a probability prediction independently of the probability distribution code. After the analytical predictions comes the report build-up. This report therefore has an introduction that carries, in skeleton form, everything to be focused on. The introduction is followed by a literature review of the methods used in the analysis, then a methodology section discussing the methods developed in the project analysis. This is followed by the realised results and a discussion of those results, supported by actual evidence and visuals. After this section comes the conclusion, which also serves as the recommendation section: a recommendation is made as to which analytical or predictive models are the more reliable, and which do not give good predictive results and should therefore be dropped. The final part is the references section; referencing is in Harvard style, with in-text citations throughout and particularly heavy in-text citation in the literature review section.
LITERATURE REVIEW
i. Hydrological Floods Overview
Flooding in the state of NSW has hit the headlines of newspapers produced by different media houses in the state and across the country, and it has therefore proven to be a matter of great interest both nationally and at state level. Floods, that is, large volumes of hydrological water discharged from the ground, are of sufficient concern to warrant a critical look, in the sense of intellectual analysis at different levels (Clark and Hanson, 2017). Estimates can be made statistically, and after proper analysis of the hydrological data, appropriate steps can be taken. These steps are no different from those taken by many other nations trying to curb the effects of large volumes of floodwater when it pours (Hutton et al. 2016). Looking at the steps currently taken, in Africa and in western countries alike, to curb flood effects, we find that water reservoirs are put up and proper drainage systems are set up to help channel excess water out of areas that, if unchecked, would experience the full effects of the downpours that lead to flooding. Looking at the dataset provided, the average flow is above 200 m³/s (Montanari, 2015). This is a very high rate of discharge: if a comparable volume fell as an hour's downpour of rain, with only a little being absorbed by the soil, there could be adverse effects if matters were not taken into account. If there were no proper drainage systems, if reservoirs were not raised to help hold up excess water, and if water therefore stagnated in one place for a long time, this would severely affect the lives and comfort of those living in the areas that experience flood flows, because water that should be discharged is not being discharged (Bolch, 2017).
Consider, for example, other parts of the continent and other continents as well, such as Asia, Africa and South America, where some deaths are related to large amounts of stagnant water that was meant to be discharged. We have to look at how large volumes of hydrological water from rain, if not discharged as fast as expected, affect the lives of people where this occurs (Liljedahl et al. 2016). To begin with, water that should be discharged but stagnates in one place for a long time may block paths, and there is no way an individual can know which part of the path they are using is safe. Take, for example, a driver on a road that was being repaired during the dry season. If this person was not aware that sections of the road required slow speeds, precision and lane changes, they are bound to drive directly into an unsafe spot and perhaps a fatal trap (Flowers, 2018). This is why there is a need for proper drainage systems, which help get rid of excess hydrological water over time.
A second way lives are affected when water is not discharged off the ground as fast as required is disease. Stagnant water on the ground, or water that is not flowing, forms a breeding ground for mosquitoes, snakes and snails, and it also makes the soil poorer for farming, since soil requires only modest amounts of water to aid the growth of plants. Lack of discharge gives venomous snakes that survive in water the chance to find a breeding space, which alone is detrimental to the lives of locals, as many snakes bite to kill (Loucks, 2015). Mosquitoes, without a doubt, have a real breeding ground when large amounts of hydrological water are not discharged as desired; an excess of breeding grounds gives an excess of mosquitoes, and this can cause deaths, as mosquitoes spread malaria at a high rate. There can also be the spread of bilharzia from snails, as snails love the feel of water; if there is an excess of stagnant water, conditions become conducive for them to breed, and lives, by extension, are affected in the ways mentioned above, among others that can be discussed in detail in more elaborate literature (Eslamian, 2018).
ii. Discharge from the station
From the assignment's specifications, we are to offer a recommendation on appropriate flood discharge measures based on the results of the analysis. The station in question, from the assignment details, is Station 204017, Bielsdown Creek at Dorrigo, numbers 2 and 3. This indicates that two stations are to be taken into consideration. The stations are adjacent to one another and therefore experience the same level of discharge problem, since the hydrological circulation one station experiences is at the same level as the other. This translates into the stations having one and only one mega drainage system that seeks to expel, or discharge, floodwater out of both stations, given that they are adjacent. There is also a recommendation to use the data derived from the analysis to aid the design of a bridge upstream, which suggests that something like a dam that can hold more water upstream may be required. This conclusion, though, cannot be reached until the required analysis is carried out in R, the chosen platform.
iii. Methods used
This section discusses the analytical methods to be used, as listed under the roman numerals of the assignment requirements. Since the analysis is to be done in R, the dataset provided must first be copied into a CSV file from the PDF file containing the assignment, as R does not accept files in PDF format. The data is initially taken into Excel and then saved in CSV format, which is comma-delimited; the file can be opened in Notepad to confirm that it is a comma-delimited file. An example of the view of the dataset in CSV format in Notepad is shown below.
As can be seen from the CSV view (figure omitted in this copy), the dataset has a comma in each case, and it is Notepad that brings out the underlying structure of the dataset to be used. Notepad can also be used to save code from R for presentation to someone who does not have R installed to view and run the code (Giner and Smyth, 2016).
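The comma-delimited structure described above can be sketched outside R as well. The following Python snippet parses a small hypothetical sample of the annual-maximum-flow data; the column names `Year` and `Annual_Maximum_Flow` are assumptions based on the variables mentioned later in this report, not the original file's headers.

```python
import csv
import io

# Hypothetical sample of the annual-maximum-flow dataset in
# comma-delimited form (values are illustrative only).
sample = """Year,Annual_Maximum_Flow
1970,350.2
1971,120.5
1972,87.9
"""

# DictReader splits each comma-delimited row into named fields,
# just as read.csv does in R.
rows = list(csv.DictReader(io.StringIO(sample)))
years = [int(r["Year"]) for r in rows]
flows = [float(r["Annual_Maximum_Flow"]) for r in rows]
print(years)   # [1970, 1971, 1972]
print(flows)   # [350.2, 120.5, 87.9]
```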
Graphs will be produced and displayed in the methodology section, and equations will be derived and written according to the code conventions of R. RStudio is the best environment for R, as it contains four panes. The first is the source pane, where the code that gives the results is written. The console pane runs the code written in the source pane. An additional pane holds the history of all the code written in R, and the environment pane gives a skeleton view of what is returned by whatever is run. The final pane carries the Plots, Files, Packages, Help and Viewer tabs: using the Packages tab, libraries can be installed; using the Plots tab, one can view the plots generated by the plotting code; and the Help tab gives documentation on what most R functions mean (Riviello et al. 2016).
Having seen how R works, it is important to discuss what the assignment requirements ask to be the basis of the analysis. We need to predict the occurrence of 1% of the total discharge, and this is to be done through probability distributions. Before applying the probability distributions, each one is explained in words. The first probability distribution to be looked into is the normal probability distribution, which is the simplest probability distribution to understand (McElreath, 2018).

The illustrative figure above (omitted in this copy) shows a normal curve representing a normal distribution. The distribution under the curve is continuous and extends from negative infinity to positive infinity. The total area under the curve equals 1, as this is a probability distribution with probabilities ranging from zero (0) to one (1). Z on the graph represents the standardised independent values fed into the function, and f(Z) the corresponding density values (Faraway, 2016).
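The claim that the total area under the curve is 1 can be checked numerically. This is a quick Python sketch using the standard library's `NormalDist` (not part of the original R analysis): essentially all of the unit area under the standard normal curve lies within a few standard deviations of the mean.

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1

# Probability mass between -6 and +6 standard deviations; the tails
# beyond this carry on to infinity but hold negligible area.
area = z.cdf(6) - z.cdf(-6)
print(round(area, 6))  # 1.0 to six decimal places
```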
Moving on, we reach the EV1 probability distribution, in full the Extreme Value Type 1 distribution. It actually comes in two forms, the smallest extreme and the largest extreme, referred to as the minimum and maximum cases respectively. Another name for EV1 is the Gumbel distribution. Several functions are associated with this distribution; in brief, they are the Cumulative Distribution Function, Percent Point Function, Hazard Function, Cumulative Hazard Function, Survival Function and Inverse Survival Function (Benson, Ransohoff and Huang, 2017).
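To make the largest-extreme (maximum) case concrete, here is a minimal Python sketch of the Gumbel cumulative distribution function; the location and scale values passed in are illustrative, not fitted to the Dorrigo data.

```python
import math

def gumbel_cdf(x, mu, beta):
    # Largest-extreme (maximum) Gumbel CDF: F(x) = exp(-exp(-(x - mu)/beta))
    return math.exp(-math.exp(-(x - mu) / beta))

# At the location parameter mu, F(mu) = exp(-1) ~ 0.368 for any beta,
# i.e. roughly 37% of the probability mass lies below the mode.
print(round(gumbel_cdf(100.0, 100.0, 25.0), 4))  # 0.3679
```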
The next distribution we look at is the P3 (Pearson Type III) distribution. Its use is largely based on whether a variable is random or non-random; it depends on the type of variability under study. All the other illustrative instances will be looked at when running the actual analysis in RStudio itself.
GEV, in full the Generalized Extreme Value distribution, is a family of continuous probability distributions. It was developed under extreme value theory to combine the Gumbel, Fréchet and Weibull families, also known as the type I, II and III extreme value distributions. In other contexts this distribution is known as the Fisher–Tippett distribution, a name derived from Ronald Fisher and L. H. C. Tippett. The GEV is the only possible limit distribution of properly normalised maxima of a sequence of random variables that are identically distributed and independent; note, though, that a limit distribution need not exist at all (Garvey, Book and Covert, 2016).
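The way the GEV combines the three families can be sketched numerically: as the shape parameter tends to zero, the GEV CDF collapses to the Gumbel (type I) form. The parameters below are illustrative only.

```python
import math

def gev_cdf(x, mu, sigma, xi):
    # Generalized extreme value CDF: F(x) = exp(-t(x)), where
    # t(x) = (1 + xi*(x - mu)/sigma)^(-1/xi) for xi != 0,
    # and t(x) = exp(-(x - mu)/sigma) in the Gumbel limit xi = 0.
    if xi == 0:
        t = math.exp(-(x - mu) / sigma)
    else:
        t = (1 + xi * (x - mu) / sigma) ** (-1 / xi)
    return math.exp(-t)

# A tiny non-zero shape parameter gives (numerically) the same
# value as the exact Gumbel case.
print(round(gev_cdf(150, 100, 25, 0.0), 4))
print(round(gev_cdf(150, 100, 25, 1e-9), 4))  # same to 4 d.p.
```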
The second requirement of the assignment calls for a non-parametric method of analysis, to make a non-parametric prediction of the required 1% of discharge. The non-parametric method used in this case is non-parametric regression, a form of regression that makes predictions from a single numeric dependent variable without assuming a parametric form, so no fixed coefficients or constants are estimated (Taylor et al. 2018).
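The report does not reproduce the non-parametric code itself. One common rank-based, distribution-free approach in flood frequency work (not necessarily the one the author used) is the Weibull plotting position, sketched below in Python with made-up flow values.

```python
# Rank-based (Weibull) plotting positions attach an exceedance
# probability to each observed annual maximum without assuming any
# distribution. The flow values here are illustrative only.
flows = [350.2, 120.5, 87.9, 410.0, 95.3, 230.1, 60.4, 180.8, 305.6, 140.2]

n = len(flows)
ranked = sorted(flows, reverse=True)
# Exceedance probability of the m-th largest value: P = m / (n + 1)
exceedance = {x: m / (n + 1) for m, x in enumerate(ranked, start=1)}
print(exceedance[410.0])  # largest flow: 1/11
```

A flow with exceedance probability near 0.01 would correspond to the 1% event sought in the assignment, although a record of only a few decades cannot resolve such a rare quantile without extrapolation.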
Finally, a recommendation will be made, based on what is derived from the analysis itself, as to whether a bridge, a dam, or a proper and elaborate drainage system should be constructed. This will be looked at from different angles, as the stations' geographical setting must be taken into account: different measures may be warranted, given that there might be discrepancies in the geographical alignment of the areas surrounding the stations under study (Delignette-Muller and Dutang, 2015).
METHODOLOGY
This is the methodology section, where all the required code and equations are typed using the equation editor. What must be taken into consideration is that the analysis for this assignment is carried out in R; the pen-and-paper way of writing the required equations is therefore not applicable here at all, and the equations will be written in the format R provides (Kopp et al. 2017). For an introductory recap of the methods used, see the last sub-section (Methods used) of the second section (Literature review). There, all four required probability distributions are introduced, along with the final analytic step, the non-parametric method; a clear illustration is needed of the results, the methods, and how they unfold (McElreath, 2018). A clear comparison will be made of the probability estimates from the probability distribution methods employed.
Recapping, in this paragraph, the four probability methods under consideration: we start with the normal probability distribution, for which a normal distribution graph is plotted. The area under the graph totals 1; probability values range from zero (0) to one (1), which is where the summation reaches 1. Next is EV1, the extreme value distribution of type 1, which comes in two forms, the minimum- and maximum-extreme distributions. EV1 has several functions tied to it, and which applies depends on the functions that are run from time to time (Hsiang et al. 2017). A scientist is therefore bound to make a proper, informed decision when choosing the extreme value type 1 form intended for use. Then there is the P3 distribution, and then the GEV probability distribution, the Generalized Extreme Value distribution, which consists of a group of continuous probability distributions. Last in the list of methods run is the non-parametric method, whose R code equations are written at the end (Hensen et al. 2016).
i. Actual R Codes and Equations of the Probability Distributions
a. Normal Probability Distribution
The original pen-and-paper equation, before any coding, is:
f(x) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²))
b. EV1 (Extreme Value Probability Distribution of Type 1)
f(x) = (1/β) · e^(−(x − μ)/β) · e^(−e^(−(x − μ)/β))
c. GEV (Generalized Extreme Value Probability Distribution)
f(x) = (1/σ) · t(x)^(ξ+1) · e^(−t(x)), where t(x) = (1 + ξ(x − μ)/σ)^(−1/ξ) for ξ ≠ 0 and t(x) = e^(−(x − μ)/σ) for ξ = 0.
The above lists the probability distribution equations in sequence. The next section focuses on the application of these equations in R code.
RESULTS AND DISCUSSION
This is the most interesting part of the whole report, as it forms its major part. This is the section where the required estimates are made. The software chosen for making the estimates is R, and this is where the probabilistic distribution predictions are made. All the code that gives the relevant estimates of the probability of the required 1% of discharge is written here, and the results are taken from the console or the results view. The previous section listed the equations of the respective probability distributions, but in this section the case is different: there is no listing of equations, and the major focus is entirely on the R software (Hoyos, Morales and Akhavan-Tabatabaei, 2015).
Turning back to R: it was discussed in an earlier section, but here we dig deeper, and from a slightly different angle. R, as is well known, is statistical software built by statisticians for statisticians to aid their statistical analysis, and it has been at the top of the statistical tool arena for a very long time, since before competing statistical software such as Python came up (Thisted, 2017). Python is multipurpose software that allows app, website and general system development as well as statistical analysis; the tools used for statistical analysis in Python are Spyder and Jupyter Notebook. Ruby also offers some level of competition to R, but because Ruby is not widely known for statistics, R stays ahead of it as statistical analysis software. On the other hand, Python as statistical analysis software is ahead of R in one respect: Python is held to have a better, more attractive user interface than R, and one that is very easy to handle. The reason R still comes out on top is that R has more libraries tied to its operation, and more elaborate language facilities for developing analytical code than are found on the other platforms (Tüselmann, Sinkovics and Pishchulov, 2016). Arriving at the analytical results being sought is also often faster than with other software. The libraries found and used in R are bulkier and tend to contain mini-libraries within themselves, so work that would otherwise require multiple libraries can be carried out with few libraries and therefore few steps. The visualisations R portrays, and the libraries used to produce them, are world-class: they are elaborate and revealing, and one need not strain to pick out results from the visuals presented in the plots pane. A varied number of plots can be made, ranging from scatter plots to boxplots, histograms and line graphs, and there is always the option to colour the plots or not. If all of the things mentioned above do not give someone reason enough to choose R for analysis, then what does (Thomas et al. 2016)?
The study here takes a different turn, as the requirements move away from the machine learning arena and into the analysis part proper. On the machine learning side, R helps in different cases: there is classification, a supervised machine learning type; clustering, an unsupervised machine learning type; and reinforcement and semi-supervised machine learning as well. The choice of machine learning type is based on the problem at hand. Consider how these types work. For the supervised learning type we have classification, and here a linear classification algorithm such as logistic regression or Naïve Bayes can be taken. In these examples there is clear evidence that the models built are trained, monitored and changed from time to time; the machines are not able to learn on their own. The case is different with unsupervised learning, such as hierarchical clustering, where systems learn on their own from previous events and can do future clustering based on past events and processes learnt from previous data. Reinforcement learning, on the other hand, has clear uses in robotics and in operational, human-like machines, where the machines or robots are reinforced to work and perform actions that otherwise only humans do. Semi-supervised learning, finally, is a mix-up, partly supervised and partly unsupervised, and it requires the highest degree of understanding of machine learning. Machine learning has helped in more ways than one, from the medical sector to the business sector. Since this is not machine learning literature but data analytics (predictive analytics) literature, we focus briefly on one aspect only, the business sector. Machine learning can aid in the classification of fraudulent customers as well as fraudulent suppliers. This helps eradicate the related costs and therefore, in the long run, boosts profits, as fraudulent activities are caught in good time. Done in R, this ensures efficiency, as the machine learning models developed can be integrated directly, eventually bringing their operation into the required systems.
After all that discussion of R, we dive back into the reporting of the results. First, we start with the initial probability distribution. Here, we use the distributions to test for the probability of the occurrence of 1% of the discharge at both stations. The stations being adjacent to each other means there is a real sense of shared hydrological conditions, and this is why only one estimate of the probability is made.
i. Normal Distribution Estimates
The data collected and saved as CSV is first loaded using the code Flow=read.csv(file.choose(), header = T). In R we have named the dataset Flow, and to see what the dataset holds we use either View(Flow) or Flow: when the first is run, the data is displayed in the source view, and when the second is run, it appears in the console window, which shows the two commands operate differently. When the structure of the data is checked, the dataset has only two variables, one integer and one numeric: the numeric is the maximum flow, and the integer is the Year variable. Summary statistics of the dataset can be obtained with the code summary(Flow), which gives the output below (figure omitted in this copy). From the summary it is evident what the mean, median, quartiles, minimum and maximum are. The most important further value we need for the analysis is the standard deviation; the code in this case is sd(Flow$Annual_Maximum_Flow), which gives the value 168.4743. If the variance is of interest, it is the square of this, given by var(Flow$Annual_Maximum_Flow). The mean is recorded under the name mean and the standard deviation under sd. What we are to estimate involves 1% of the total discharge, and from the mean value we save 1% of the mean under the name one, using the code one=0.01*mean. The value of one is 2.0275; from here, our x value is one, with the value 2.0275, our u (the mean) is 202.75, and the standard deviation is 168.4743. These are the values of keen interest. Using the pnorm command, one can estimate the lower-tail probability for the occurrence of the 1% of the discharge, as well as the upper-tail probability. The code for the lower tail is pnorm(q=2.0275, mean = 202.75, sd=168.4743) and for the upper tail pnorm(q=2.0275, mean = 202.75, sd=168.4743, lower.tail = F); the results are 0.1167 and 0.8832 respectively, which sum to 1. The lower tail, which is the default, records a very low percentage, and what this means is that the occurrence of the 1% value has a very low probability. When the qnorm command is used, we get a negative value of -189.1798, which implies that on the normal plot the relevant region runs off towards negative infinity. The R code used to develop the normal distribution plot runs to four lines and is not listed here. The resulting normal curve (figure omitted in this copy) has the region of the estimate being made coloured in blue; this falls towards the infinity side.
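The original analysis is in R; as a cross-check, the same normal-distribution figures can be reproduced in Python using only the standard library's NormalDist, with the mean and standard deviation quoted above.

```python
from statistics import NormalDist

# Mean and standard deviation quoted in the text for the annual maximum flow.
flow = NormalDist(mu=202.75, sigma=168.4743)

x = 0.01 * 202.75        # 1% of the mean discharge = 2.0275
lower = flow.cdf(x)      # R: pnorm(2.0275, 202.75, 168.4743)
upper = 1 - lower        # R: pnorm(..., lower.tail = FALSE)
q01 = flow.inv_cdf(0.01) # R: qnorm(0.01, 202.75, 168.4743)

print(round(lower, 4))   # ~ 0.1167
print(round(upper, 4))   # ~ 0.8833 (quoted as 0.8832 in the text)
print(round(q01, 4))     # ~ -189.1798
```

That the 1% quantile is negative simply reflects that a normal distribution with this mean and large standard deviation puts substantial probability on impossible negative flows, one reason skewed distributions such as EV1, P3 and GEV are preferred for flood data.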

Moving to the second probability distribution, there is an illustration of how the P3 performs. Since we have already looked at how the data is cleaned, we dive straight into the Pareto functions, dpareto and ppareto. The dpareto call gives the density values, while ppareto gives a very high probability indication, shown by the value 0.911; this is very high, so in this case the 1% of the discharge can happen with ease.
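The fitted Pareto parameters are not reproduced in the text, so the shape and scale below are purely hypothetical placeholders; the sketch only shows the form of calculation that a Pareto CDF call such as ppareto performs (here the classical type I parameterisation, which may differ from the R package the author used).

```python
def pareto_cdf(x, shape, scale):
    # Classical (type I) Pareto CDF: F(x) = 1 - (scale / x)^shape
    # for x >= scale, and 0 below the scale (minimum) value.
    if x < scale:
        return 0.0
    return 1.0 - (scale / x) ** shape

# Hypothetical shape=1.5, scale=50: probability of a flow at or
# below twice the minimum.
print(pareto_cdf(50.0, 1.5, 50.0))             # 0.0 at the minimum
print(round(pareto_cdf(100.0, 1.5, 50.0), 4))  # 1 - 0.5**1.5 ~ 0.6464
```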
From the discrepancy between these two models, it is clear that the other models will also give different results, as they operate on different parameters. As illustrated by the R code attached in the appendix section, there is a difference throughout.
CONCLUSION
The realisation is that all the codes produce different results because they operate on different parameters. As in the case above, there is a real spread in the lower-bound probabilities realised, and this continues through the other cases as well: some give a clear indication of higher chances of the 1% occurring, while in other cases the chances of the 1% happening at all are very low.

Whether or not the chance is high, precautions should be taken to curb the menace that arises when there is a clear discharge problem. It is important to build bridges and other structures that can help hold water upstream of the stations, water which can also be used for other purposes such as irrigation, hydro-electric power production and the supply of water to households.

References
Benson, B., Ransohoff, R.M. and Huang, A.Y.C., 2017. Extensional stress modulates leukocyte
adhesion probability.
Bolch, T., 2017. Hydrology: Asian glaciers are a reliable water source. Nature, 545(7653), p.161.
Clark, M.P. and Hanson, R.B., 2017. The citation impact of hydrology journals. Water Resources
Research, 53(6), pp.4533-4541.
Delignette-Muller, M.L. and Dutang, C., 2015. fitdistrplus: An R package for fitting
distributions. Journal of Statistical Software, 64(4), pp.1-34.
Eslamian, S., 2018. Climate Change Impacts on Hydrology and Water Resources. In Handbook
of Engineering Hydrology (Three-Volume Set) (pp. 754-767). CRC Press.
Faraway, J.J., 2016. Extending the linear model with R: generalized linear, mixed effects and
nonparametric regression models. Chapman and Hall/CRC.
Flowers, G.E., 2018. Hydrology and the future of the Greenland Ice Sheet. Nature
communications, 9(1), p.2729.
Garvey, P.R., Book, S.A. and Covert, R.P., 2016. Probability methods for cost uncertainty
analysis: A systems engineering perspective. Chapman and Hall/CRC.
Giner, G. and Smyth, G.K., 2016. statmod: probability calculations for the inverse Gaussian
distribution. arXiv preprint arXiv:1603.06687.
Hensen, B., Kalb, N., Blok, M.S., Dréau, A.E., Reiserer, A., Vermeulen, R.F.L., Schouten, R.N.,
Markham, M., Twitchen, D.J., Goodenough, K. and Elkouss, D., 2016. Loophole-free Bell test
using electron spins in diamond: second experiment and additional analysis. Scientific reports, 6,
p.30289.
Hoyos, M.C., Morales, R.S. and Akhavan-Tabatabaei, R., 2015. OR models with stochastic
components in disaster operations management: A literature survey. Computers & Industrial
Engineering, 82, pp.183-197.
Hsiang, S., Kopp, R., Jina, A., Rising, J., Delgado, M., Mohan, S., Rasmussen, D.J., Muir-Wood,
R., Wilson, P., Oppenheimer, M. and Larsen, K., 2017. Estimating economic damage from
climate change in the United States. Science, 356(6345), pp.1362-1369.
Hutton, C., Wagener, T., Freer, J., Han, D., Duffy, C. and Arheimer, B., 2016. Most
computational hydrology is not reproducible, so is it really science?. Water Resources Research,
52(10), pp.7548-7555.
McElreath, R., 2018. Statistical rethinking: A Bayesian course with examples in R and Stan.
Chapman and Hall/CRC.
Montanari, A., 2015. Debates—Perspectives on socio‐hydrology: Introduction. Water Resources
Research, 51(6), pp.4768-4769.
Kopp, R.E., DeConto, R.M., Bader, D.A., Hay, C.C., Horton, R.M., Kulp, S., Oppenheimer, M.,
Pollard, D. and Strauss, B.H., 2017. Evolving understanding of Antarctic ice‐sheet physics and
ambiguity in probabilistic sea‐level projections. Earth's Future, 5(12), pp.1217-1233.
Liljedahl, A.K., Boike, J., Daanen, R.P., Fedorov, A.N., Frost, G.V., Grosse, G., Hinzman, L.D.,
Iijma, Y., Jorgenson, J.C., Matveyeva, N. and Necsoiu, M., 2016. Pan-Arctic ice-wedge
degradation in warming permafrost and its influence on tundra hydrology. Nature Geoscience,
9(4), p.312.
Loucks, D.P., 2015. Debates—Perspectives on socio‐hydrology: Simulating hydrologic‐human
interactions. Water Resources Research, 51(6), pp.4789-4794.
Riviello, E.D., Kiviri, W., Fowler, R.A., Mueller, A., Novack, V., Banner-Goodspeed, V.M.,
Weinkauf, J.L., Talmor, D.S. and Twagirumugabe, T., 2016. Predicting mortality in low-income
country ICUs: the Rwanda Mortality Probability Model (R-MPM). PloS one, 11(5), p.e0155858.
Taylor, G.K., Brasel, K.R., Dawkins, M.C. and Dugan, M.T., 2018. Keeping pace: The
conditional probability of accounting academics to continue publishing in elite accounting
journals. Advances in accounting, 41, pp.97-113.
Thisted, R.A., 2017. Elements of statistical computing: Numerical computation. Routledge.
Thomas, T., Moitinho-Silva, L., Lurgi, M., Björk, J.R., Easson, C., Astudillo-García, C., Olson,
J.B., Erwin, P.M., López-Legentil, S., Luter, H. and Chaves-Fonnegra, A., 2016. Diversity,
structure and convergent evolution of the global sponge microbiome. Nature communications, 7,
p.11870.
Tüselmann, H., Sinkovics, R.R. and Pishchulov, G., 2016. Revisiting the standing of
international business journals in the competitive landscape. Journal of World Business, 51(4),
pp.487-498.
Appendix
Normal Distribution

P3

EV1

GEV

Non-Parametric

