The advent of AWS to Data Management in the 24th Century
Added on 2023/01/11
Abstract
Data management refers to the administrative process comprising the acquisition, validation, storage, protection and processing of data, which is needed to ensure the accessibility, reliability and timeliness of that data for its users. Enterprises make use of big data to inform business decisions, attaining deeper insight into trends, opportunities and customer behaviour, through which extraordinary customer experiences can be created. Organisations face different challenges while delivering their services within the market, so it becomes necessary to ensure that data is managed in an adequate manner and that relevant decisions can be formulated from it. With the evolution of technology, data management has also evolved considerably, both in the issues faced and in the design of relevant solutions for dealing with them.

Amazon Web Services renders on-demand services to firms, through which primitive distributed computing and technical infrastructure can be provided as building blocks and tools. AWS not only provides firms with an option for data management but also furnishes services with low cost, high elasticity, agility, flexibility, security, backup, application hosting, databases and many other features. Along with this, the pay-per-use model aids firms in easily expanding their operations and functionality within the global market. The challenges faced by organisations while delivering their services include integration, storage, cost of infrastructure, security, migration, lack of training and human workforce, and analysis, among others. In addition to this, the amount of data is continuously increasing, which makes it important for firms to take adequate measures and adopt techniques through which this data can be analysed and decisions formulated by making use of the resulting knowledge.
Table of Contents
Abstract............................................................................................................................................2
Title: “The advent of AWS to Data Management in the 24th Century”...........................................1
Chapter 1..........................................................................................................................................1
Introduction......................................................................................................................................1
Background of the research....................................................................................................3
Problem statement..................................................................................................................4
Research aim..........................................................................................................................4
Research Objectives...............................................................................................................5
Research Questions................................................................................................................5
Statement of Hypothesis.........................................................................................................5
Rationale of the study.............................................................................................................5
Significance of the study........................................................................................................6
Structure of the dissertation....................................................................................................6
Summary..........................................................................................................................................8
Chapter 2..........................................................................................................................................9
Literature Review.............................................................................................................................9
Introduction......................................................................................................................................9
Review...........................................................................................................................................10
History of data management...............................................................................................10
Identify challenges faced by Amazon in data management...............................................16
Benefits collected by Amazon by using AWS (cloud computing) in data management...19
Ways through which AWS help in business expansion by effective utilisation of data
management.........................................................................................................................23
Summary........................................................................................................................................27
Chapter 3........................................................................................................................................28
Research Methodology..................................................................................................................28
Introduction....................................................................................................................................28
Methodologies................................................................................................................................28
Summary........................................................................................................................................36
Chapter 4........................................................................................................................................37
Implementation..............................................................................................................................37
Introduction....................................................................................................................................37
Themes...........................................................................................................................................37
Theme 1: Illustrate the history of data management?..........................................................37
Theme 2: What are the challenges faced by Amazon in data management?.......................40
Theme 3: Depict benefits collected by Amazon through usage of AWS (cloud computing) in
data management?................................................................................................................41
Theme 4: How AWS help in business expansion by effective utilisation of data management?
..............................................................................................................................................43
Summary........................................................................................................................................45
Chapter 5........................................................................................................................................46
Conclusion and Recommendation.................................................................................................46
Introduction....................................................................................................................................46
Overview of dissertation.......................................................................................................46
Recommendation...........................................................................................................................47
Summary........................................................................................................................................47
References......................................................................................................................................49
Title: “The advent of AWS to Data Management in the 24th Century”
Chapter 1
Introduction
The administrative process that comprises the acquisition, validation, storage, protection and processing of required data, ensuring that the data a firm creates and collects remains usable, is referred to as data management. It acts as a critical piece in the deployment of the IT systems that execute business applications, and it yields the analytical information that drives operational decision-making and strategic planning by business managers, corporate executives and other end users (Demirbas, 2020). The data management process comprises an aggregation of distinct functions aimed at ensuring that data within corporate systems is precise, accessible and available. With the evolution of technology, various enhancements have been made to the ways in which data is processed and utilised by firms in delivering their functionality. AWS (Amazon Web Services) renders on-demand cloud platforms along with APIs to companies, governments and individuals. This cloud service furnishes an abstract technical infrastructure along with the building blocks and tools of distributed computing. This dissertation will furnish relevant details on the challenges faced by the firm before cloud computing existed and the potential impact created by this development on its functionality.
Data management can be defined as an administrative process in which data or information is stored, managed, protected and processed in order to improve its reliability, accessibility and timeliness for users. It is a broad concept that denotes the development of policies, procedures, architectures and practices for managing the data lifecycle (Kaoudi, Manolescu and Zampetakis, 2020). With reference to data management, five possibilities are specified beneath:

Cloud data management: This implies the process associated with integrating data from an ecosystem of cloud applications. All intake, data storage and processing take place within a cloud-based storage medium.

ETL and data integration: This comprises loading data from sources into the data warehouse, transforming it, and summarising and aggregating it in an adequate format for carrying out in-depth analysis.
Master data management: This denotes a method for managing the critical data of an organisation, such as accounts, parties and customers within business transactions, in a standardised manner that further prevents duplication across the organisation (Mbelli, 2019).

Reference data management: This denotes the permissible values utilised by other data fields, such as lists of countries, cities, postal codes, product serial numbers and regions. These may be provided externally or maintained within the premises of the organisation.

Data analytics & visualisation: The process utilised for processing data from data warehouses or big data sources to carry out enhanced data analytics, enabling analysts to slice and dice the data and illustrate it in visualisations and dashboards.
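The ETL and data integration possibility described above can be illustrated with a minimal extract-transform-load sketch. The sample data, column names and table name here are hypothetical, chosen only to show the three stages; a real pipeline would read from source systems and load into a proper data warehouse rather than an in-memory database.

```python
import csv
import io
import sqlite3
from collections import defaultdict

# Hypothetical order data standing in for an export from a source system.
RAW_CSV = """\
region,amount
north,100
south,250
north,50
"""

def extract(text):
    """Extract: parse rows out of the source CSV export."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: summarise amounts per region (the aggregation step)."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["region"]] += int(row["amount"])
    return sorted(totals.items())

def load(pairs, conn):
    """Load: write the aggregated rows into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales_by_region (region TEXT, total INTEGER)")
    conn.executemany("INSERT INTO sales_by_region VALUES (?, ?)", pairs)
    conn.commit()

conn = sqlite3.connect(":memory:")  # stand-in for the warehouse
load(transform(extract(RAW_CSV)), conn)
result = conn.execute(
    "SELECT region, total FROM sales_by_region ORDER BY region"
).fetchall()
# result is [('north', 150), ('south', 250)]
```

The same extract/transform/load split is what managed services such as AWS Glue automate at scale.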
Given such massive quantities of data, relevant tools are needed for the aspects illustrated above. Firms that deal with large amounts of data have to store it, sift through it and conduct analysis routinely in order to store and manage it across the cloud (Nagpure and et. al, 2019). The tools a firm can utilise include Amazon Web Services, Microsoft Azure, Google Cloud and Panoply. In this dissertation, emphasis will be laid on Amazon Web Services (AWS), which offers an ever-expanding set of tools that can be put together into an effectual cloud data management stack. The key services are:

Amazon S3: The Simple Storage Service delivered by AWS, which furnishes object storage via a web service interface. It can be utilised to store any kind of object, supporting backup, internet applications, disaster recovery, data lakes for analytics, data archives and general cloud storage, and it can also serve as staging or intermediate storage within a pipeline.

Amazon Glacier: A secure, durable and low-cost storage class for data archival and long-term backup. It is designed for long-term storage of information that is accessed infrequently, for which a retrieval latency of around 3 to 5 hours is acceptable.

AWS Glue: A fully managed ETL (extraction, transformation and loading) service which makes it simple and cost-effective to categorise data, clean it, enrich it and move it reliably among different data streams and stores (Roe, 2020).
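As a sketch of how the S3 and Glacier services above fit together, the following builds an S3 lifecycle rule, in the dictionary shape boto3 expects, that would transition objects under a given key prefix to the Glacier storage class after a number of days. The bucket name and prefix are hypothetical, and applying the rule (shown in the comment) requires boto3 and valid AWS credentials, so only the rule construction is executed here.

```python
def glacier_archive_rule(prefix: str, days: int) -> dict:
    """Build one S3 lifecycle rule (boto3-style dict) that moves objects
    under `prefix` to the GLACIER storage class after `days` days."""
    return {
        "ID": f"archive-{prefix.strip('/') or 'all'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
    }

# Archive everything under the (hypothetical) logs/ prefix after 90 days.
rule = glacier_archive_rule("logs/", 90)

# Applying the rule would use boto3 (not executed here; needs credentials):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",  # hypothetical bucket name
#     LifecycleConfiguration={"Rules": [rule]},
# )
```

Pushing archival into a lifecycle rule like this means infrequently accessed data moves to low-cost storage automatically, without any application code handling the transition.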
In the present context, the researcher has taken into consideration Amazon, an American multinational technology firm that emphasises delivering e-commerce services, digital streaming, artificial intelligence and cloud computing. It was founded on 5th July 1994 by Jeff Bezos and is headquartered in Seattle, US. The company delivers its services worldwide and employs approximately 840,400 people. It is regarded as one of the most influential cultural and economic forces in the world and as a highly valuable brand, also known for its disruptive technological innovation. Amazon provides an AI assistant, live streaming, a cloud computing platform and an online marketplace, and it is the world's largest provider of cloud computing services in terms of infrastructure as a service (IaaS) and platform as a service (PaaS).
Background of the research
Data management plays an important role within an organisation in terms of the manner in which its operations are conducted. The reason for using data management in an organisation is that it helps in making sense of the vast quantities of information that have already been collected by the company, where the gathering, storage and management of that data rely on an effective platform and data management solutions. With the help of this, an organisation can conduct essential functions in a shorter time frame. The practice that assists in collecting, keeping and using data in a secure, efficient and cost-effective manner is referred to as data management (Roh, Heo and Whang, 2019). It aims at assisting organisations, individuals and connected things in making optimised use of data within the bounds of regulations and policies, so that decisions can be made effectively and appropriate actions taken, maximising the organisation's benefits. In the present scenario, firms need a relevant data management solution that renders them efficient ways to manage data across distinct tiers. Data management systems can be defined as solutions which help manage data in much more effective and efficient ways across a diverse but unified data tier. These systems are built on data management platforms which consist of a range of databases, warehouses, big data management systems, data analytics tools and so on. All these entities work in collaboration, like a data utility, to deliver relevant management capabilities. But when manual interventions are carried out, the chance of errors also increases. New systems such as AWS, big data and cloud computing aim at making sure that the information stored
3
multinational technological firm that emphasise on delivering e-commerce services, digital
streaming, artificial intelligence and cloud computing. This was founded on 5th July, 1994 by Jeff
Bezos and is headquartered in Seattle, US. They deliver their services worldwide and for this
they have near about 8,40,400 employees. This is regarded as one of the most influential cultural
as well as economic force in the world and is valuable brand. They are also known for their
disruption within the technological innovation. Amazon provides AI assistant, live streaming,
cloud computing platform and online marketplace. They are the world' s largest provider with
respect to cloud computing services in context of infrastructure as a service (IaaS) and platform
as a service (PaaS).
Background of the research
Data management has an important role within an organisation in terms of the manner in which
its operations are conducted. The reason for using data management in an organisation is that it
helps in making sense of the vast quantities of information that have already been collected by a
company, and in storing, gathering and managing that data through an effective platform and
data management solution. With the help of this, an organisation can conduct its essential
functions in a shorter time frame. The practice that assists in the collection, keeping and usage
of data in a secure, efficient and cost-effective manner is referred to as data management (Roh,
Heo and Whang, 2019). It aims at assisting organisations, individuals and connected things in
making optimised use of data within the bounds of regulations and policies, so that decisions can
be made effectively and actions taken appropriately, giving the organisation maximised benefits.
In the present scenario, firms need a relevant data management solution that will render them
efficient ways of managing data across distinct tiers. Data management systems can be defined
as solutions which help in managing data in a more effective and efficient way across a diverse
but unified data tier. These systems are built on data management platforms which consist of a
range of databases, warehouses, big data management systems, data analytics tools and so on.
All these entities work in collaboration, like a data utility, to deliver the relevant management
capabilities. But when manual interventions are carried out, the chances of errors also increase.
New systems such as AWS, big data and cloud computing aim at making sure that the information stored
within them is precise, is available as required and can be accessed easily, through which the
overall functionality of firms can be amplified. The basic reason for utilising new systems like
big data, AWS and cloud computing is that they help in storing data in a precise manner, which
improves the overall functionality of an organisation when conducting its different operations
and thereby helps in profit maximisation (Sudmanns et al., 2019).
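The reduction of manual errors and redundancy described above can be pictured with a minimal sketch in Python. This is an illustration only: the record fields, the validation rule and the `ingest` function are invented for this discussion and are not taken from any Amazon system.

```python
# Minimal sketch of an automated data-management step: validating and
# de-duplicating incoming records before storage, so that manual
# cross-verification (and the errors it introduces) is reduced.

def ingest(records, store):
    """Validate each record and store it, skipping duplicates.

    `records` is an iterable of dicts; `store` is a dict keyed by record id.
    Returns a list of rejected records for later review.
    """
    rejected = []
    for record in records:
        # A record must carry an id and a non-empty customer name.
        if "id" not in record or not record.get("customer"):
            rejected.append(record)
            continue
        # Duplicate ids are skipped instead of being stored twice.
        if record["id"] not in store:
            store[record["id"]] = record
    return rejected


store = {}
bad = ingest(
    [
        {"id": 1, "customer": "Acme"},
        {"id": 1, "customer": "Acme"},   # duplicate, skipped
        {"customer": "NoId Ltd"},        # invalid, rejected
    ],
    store,
)
print(len(store), len(bad))  # prints: 1 1
```

The same checks done by hand would require cross-verifying every entry; automating them at the point of ingestion is what the systems discussed here provide at scale.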
Problem statement
Information acts as a critical asset for every organisation, enabling it to deliver its services
in a precise manner (Sanchez, Beak and Saxena, 2019). As an instance, if an organisation does not
have information about its clients' preferences, there is a high probability that the services it
delivers will not be regarded as adequate, as client requirements might not have been addressed.
Thus, it is important to ensure that information is maintained; otherwise the human resources,
raw materials, cost and efforts expended by the organisation are wasted, because the work is done
in the wrong direction. Therefore, to obtain precise results or outcomes it is necessary to have
adequate information. Before the evolution of this technology, firms also delivered their
services, and they faced certain challenges while doing so (Sanchez, Beak and Saxena, 2019). As
another example, Amazon and other organisations previously made use of manual methods to deliver
their services, but the effort required was high, as were the probabilities of errors,
redundancies and cost, and cross-verification of transactions also required more time. So, with
the advent of time, the evolution of the services provided by Amazon Web Services will furnish
them with precise tools for dealing with information.
Research aim
This is regarded as one of the important sections of the dissertation, as it assists the
researcher in conducting the complete research and its associated activities in a systematic and
planned manner through which the prospective outcome can be attained. A research aim refers to a
predetermined statement on which all the activities conducted during the research depend
(SGIT, 2019). It acts as a guide for the research project, assisting in the analysis of the
research through the use of distinct methods and measures in an effectual manner for the
accomplishment of the research purpose. The goal of the researcher is to achieve the aim
through the execution of objectives, and to ensure that the targets of the research are attained
in an adequate way. It is a crucial element of research, as the research issue is depicted in the
form of a statement, which must be measurable so that the exact issues to be resolved for the
attainment of a peculiar outcome can be identified. This aspect of the research gives the
researcher an exact direction for the accumulation of final outcomes. The probable aim of the
research is depicted beneath:
“To identify the challenges faced by a company before cloud computing in data
management”: a case study on Amazon.
Research Objectives
1. To study the history of data management.
2. To identify challenges faced by Amazon in data management.
3. To determine benefits collected by Amazon by using AWS (cloud computing) in data
management.
4. To classify ways through which AWS help in business expansion by effective utilisation
of data management.
Research Questions
1. What is the history of data management?
2. What are the challenges faced by Amazon in data management?
3. What benefits has Amazon gained through the usage of AWS (cloud computing) in data
management?
4. How does AWS help in business expansion through effective utilisation of data management?
Statement of Hypothesis
H0: The new data management methodologies or tools lead organisations to error-free results
with minimised redundancy and high accuracy.
H1: The new data management tools are not adequate compared with the old tools in terms of
errors and accuracy, and also lead to duplication of information.
Rationale of the study
The present research will provide an insight into the challenges that were faced by organisations
while delivering their services before the advent of this technology. The reason technology
became needed is that there were various problems being
faced due to manual delivery of services, and the study is being carried out to identify this
aspect. This dissertation aims at providing insight into the different tools associated with
AWS, so that in-depth knowledge can be attained and readers will be able to identify the tool
that is apt for them (Smith et al., 2019). The study will illustrate the challenges associated
with data management and the ways in which AWS resolves the issues associated with this aspect.
This will further enable firms to expand their operations. For instance, if an e-commerce firm
wants to increase its product portfolio, it needs additional storage space accordingly; AWS,
a service provided by Amazon, can be used to obtain relevant storage, and the firm can acquire
more space as per its requirements. The benefits attained by its usage will be analysed to
ensure that they are apt for dealing with the problems associated with data management.
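The "more space as per requirement" idea above can be sketched with a toy object store in Python. This is a simplified stand-in for the semantics of a service such as Amazon S3; the class and method names are invented for illustration and are not the AWS API (in practice a firm would use a client library such as boto3).

```python
# Toy object store illustrating on-demand capacity: buckets and objects
# are created as needed, with no fixed upfront storage allocation.

class ObjectStore:
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, name):
        # Creating a bucket reserves no space; it is just a namespace.
        self.buckets.setdefault(name, {})

    def put_object(self, bucket, key, data):
        # Storage "grows" with each object; nothing is pre-provisioned.
        self.buckets[bucket][key] = data

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]


store = ObjectStore()
store.create_bucket("product-catalogue")
# An e-commerce firm extending its product portfolio simply adds objects.
for i in range(3):
    store.put_object("product-catalogue", f"product-{i}.json", b"{}")
print(len(store.buckets["product-catalogue"]))  # prints: 3
```

The design point is that capacity follows usage: the firm never decides in advance how much space its catalogue needs, which is the contrast with the fixed, manually managed storage discussed earlier.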
Significance of the study
The present dissertation emphasises the challenges that were faced by organisations when
adequate data management tools were not available. This study will provide an insight into the
problems that were faced, their elimination by the usage of AWS, and the expansion of the
operations carried out by a firm through a relevant data management method (Sudmanns et al.,
2019). The research will render adequate knowledge on the advent of this technology in the 21st
century. This study will also illustrate the different tools provided by Amazon for data
management and the ways in which the problems associated with it are minimised through their
usage. The research will assist in establishing the exact need for these tools, and as a result
organisations will be able to deliver their services in an adequate manner. In addition, it will
enable them to maximise their overall productivity, as they will have a platform through which
they can keep accurate details of the operations they conduct. This section is significant
because these tools will also minimise the redundancy of information and the cross-verification
of data, transactions that are made will be stored automatically, and human effort will be
further minimised (Sun and Scanlon, 2019). This will furnish knowledge of the technologies used
by firms to gain a competitive edge through a peculiar tool.
Structure of the dissertation
The research work depends on an adequate framework, which has to be followed by the researcher
with reference to the completion of this research in a significant way. A befitting
structure is crucial for ensuring the effectual implementation of the research activities, and
the presentation of the work must be adequately designed (Taori and Dasararaju, 2019). In the
context of the present case, the investigator will follow the structure illustrated below:
Chapter 1: Introduction
The research work is initialised with the introduction, which furnishes an in-depth insight into
the chosen area of study. The research variables available within it are depicted as dependent
and independent variables that have to be acknowledged in a trenchant way. In addition, this
chapter is liable for the identification of the aims and objectives, which depend on the issue
to be explored, and through which the investigator can obtain relevant guidance. It implies the
initial step by which the researcher can identify the probable rationale behind the research.
Chapter 2: Literature review
This section follows the completion of the introduction chapter. It implies the second chapter
of the dissertation, which furnishes core benefits for the researcher through the accumulation
of relevant data and information associated with the challenges experienced by organisations
before the advent of AWS for the management of data (Wingerath, Ritter and Gessert, 2019).
Within this section, information is collected via distinct sources such as journals, books,
published articles and any other source in which data is published. This provides a strong base
or platform for conducting the dissertation by ensuring that the proposed aims and objectives
can be attained within a peculiar time span.
Chapter 3: Research Methodology
This denotes the next chapter of the dissertation, which is carried out after the literature
review section is completed. This part has an efficacious role in ensuring that the right
direction is taken with reference to the usage of research methodologies. It involves various
tools and techniques that will assist in the collection and analysis of data through which valid
outcomes can be produced. It implies the primary concern for the researcher, where research
methodologies are utilised for the creation of relevant results in the context of the topic.
Chapter 4: Data Analysis on Case Study
This is a critical part of the research work, as within this section of the investigation the
data that has been gathered will be analysed with respect to the research questions and their
objectives. This section is liable for furnishing data related to the findings by
analysing the perspectives of the distinct work conducted by different authors and researchers.
This chapter will aid the researcher in presenting the results in a relevant format, so that
readers can be provided with a trenchant understanding of the nature of the peculiar study area.
Chapter 5: Conclusion and Recommendations
This implies the last chapter of the research dissertation, in which conclusions and
implications will be drawn with reference to the research objectives and aim, depending upon the
peculiar research issue to be studied. Within this section a suitable summary will be created to
illustrate a significant understanding of data management and the challenges involved with it.
A summarised discussion of the probable consequences of the research will be rendered. In
addition, relevant recommendations will be given through which improvements can be brought into
the organisation by the effectual usage of AWS tools for managing data and delivering enhanced
services.
Summary
This section illustrates a brief overview of the dissertation topic, that is, the challenges
faced by firms before the advent of this technology. This is a critical aspect that has to be
handled by an organisation to ensure that it is able to deliver its services in a precise
manner. In addition, it is important for firms to keep records of the activities and strategies
they use, as this will assist them in making decisions. The aim and objectives on the basis of
which the entire dissertation will be done are also provided, along with an overview of what the
other chapters will contain.
The next chapter is the literature review, which implies that the work conducted by various
authors will be studied for the identification of gaps, so that the researcher can conduct a
review in an adequate manner with reference to the research topic. This will enable the
investigator to have relevant information about what has already been done and what critical
aspects are left that need to be studied.
Chapter 2
Literature Review
Introduction
This chapter of the research will cover the points of view of different researchers and
scholars, providing insight into the aim and objectives of the research. Along with this,
distinct secondary sources of research will be utilised, comprising articles, organisation
websites, past reports, journals, newspapers and online sources, for the identification,
measurement and evaluation of the research questions and objectives (Xu et al., 2019). This
section has a critical role in every research, as it aids the researcher in developing deeper
knowledge with respect to the selected area or topic. It has been analysed that this section
will contain information from the perspectives of diverse authors and researchers. As an
outcome, it will become easier for the researcher to draw valid results for the specific
research problem. In the context of this research, the researcher will efficaciously utilise
distinct secondary sources such as journals, books, online articles and other published
information through which a valid conclusion can be attained.
Within this section, an elaborated literature will be conferred on the challenges faced by firms
before the advent of cloud computing in the context of data management. This section is further
bifurcated into several parts that will be explicated one after the other in an effectual way
(Demirbas, 2020). The initial part will illustrate the history of data management, depicting the
ways in which information was managed before the technological advent. The next section will
provide detailed data related to the challenges faced by Amazon while dealing with huge amounts
of information. The following section will illustrate the benefits the firm will attain by
opting for Amazon Web Services with respect to data management. The last section specifies the
way in which AWS will aid in expanding operations through the effectual usage of data
management. By carrying out an analysis of the information presented within this section, it
will become easy for the researcher to identify the challenges faced by organisations before
cloud computing was brought into the market (Kaoudi, Manolescu and Zampetakis, 2020). Through
this the researcher will be able to address the exact results of the research in an appropriate
way.
Review
History of data management
According to Da Silva and Nascimento (2020), data management can be defined as the process
associated with the ingestion, storage, organisation and maintenance of the data that is created
and collected by a firm. It implies the practice of collecting, keeping and using data
efficiently, securely and in a cost-efficient manner. The goal behind this is to assist people
and firms in making optimised usage of data within the boundaries of their policies and
regulations, which will aid them in formulating decisions and taking the precise actions that
will enhance the benefits of the firm (da Silva and Nascimento, 2020). A robust data management
strategy is becoming more essential than ever as firms increasingly rely on intangible assets
for the creation of value. It involves a wide range of tasks, practices, policies and
procedures, including the creation, assessment and updating of data across distinct data tiers.
It is liable for the storage of data across various clouds as well as on premises, and renders
high availability for disaster recovery. This aids in assuring data privacy and attaining the
security of the data stored within the system. It is also liable for destroying data in
accordance with retention schedules and compliance demands. The reason that came to the fore is
that the privacy policies followed by organisations have not been that effective, which may
result in damage to the data when it comes to meeting the demands of a particular aim or
objective, and may thereby produce negative outcomes.
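The retention-schedule destruction mentioned above can be sketched in a few lines of Python. This is a hedged illustration only: the 90-day retention period, the field names and the `purge_expired` function are assumptions made for this example, not a documented policy of any firm.

```python
# Sketch of retention-based destruction of data: records older than the
# retention period are purged, keeping storage aligned with policy.
from datetime import date, timedelta

RETENTION = timedelta(days=90)  # assumed policy, for illustration only

def purge_expired(records, today):
    """Return only the records still inside the retention window."""
    return [r for r in records if today - r["created"] <= RETENTION]


records = [
    {"id": 1, "created": date(2022, 12, 1)},  # older than 90 days, purged
    {"id": 2, "created": date(2023, 3, 1)},   # recent, kept
]
kept = purge_expired(records, today=date(2023, 3, 15))
print([r["id"] for r in kept])  # prints: [2]
```

In a real data management platform the same rule would run as a scheduled job against the data store rather than over an in-memory list, but the compliance logic is of this shape.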
Data management is defined as an organisation's management of data and information in order to
have structured and secured access and storage. It aims at organising and controlling data
resources in such a manner that they can be easily accessed, relied upon, and made available as
per the needs of an individual (Mbelli, 2019). The tasks associated with this comprise the
formulation of data governance policies, their analysis, architecture, DMS (database management
system) integration, data source identification and security, storage and segregation. The
initial mechanical enhancement in data management can be traced back to the 1890s, with the
invention of mechanical punch cards, which were liable for recording information on thick cards.
The concept of data management was not emphasised until the 1960s, when ADPSO (the Association
of Data Processing Service Organisations) started rendering advice to professionals with
reference to this aspect. In the 1950s, the management of data became an issue, as computers
were clumsy and slow and needed huge amounts of manual labour to operate (Nagpure et al.,
10
History of data management
According to Da Silva and Nascimento (2020), data management can be defined as the process of ingesting, storing, organising and maintaining the data that is created and collected by a firm. It is the practice of collecting, keeping and using data efficiently, securely and cost-effectively. The goal is to assist people and firms in making optimised use of data within the boundaries of their policies and regulations, which aids them in formulating decisions and taking precise actions that enhance the benefits of the firm (da Silva and Nascimento, 2020). A robust data management strategy is increasingly critical as firms come to rely on intangible assets for the creation of value. Data management involves a wide range of tasks, practices, policies and procedures: creating, assessing and updating data across distinct data tiers; storing data across various clouds as well as on premises; and rendering high availability for disaster recovery. It helps assure data privacy and the security of data stored within the system, and it also covers destroying data in line with retention schedules and compliance demands. One issue that has come to the fore is that the privacy policies followed by organisations have not been very effective, which may result in damage to the data when it comes to meeting the demands of a particular aim or objective, producing negative outcomes.
Data management is also defined as an organisation's management of data and information to provide structured and secure access and storage. It aims at organising and controlling data resources in such a manner that they can be easily accessed, relied on and made available as per the needs of an individual (Mbelli, 2019). The tasks associated with this comprise the formulation of data governance policies, carrying out analysis, architecture, DBMS (database management system) integration, data source identification and security, storage and segregation. The first mechanical advance in data management is traced back to the 1890s with the invention of the mechanical punch card, which recorded information on a thick card. The concept of data management was not emphasised until the 1960s, when ADPSO (the Association of Data Processing Service Organisations) started rendering advice to professionals on this aspect. In the 1950s, the management of data became an issue because computers were clumsy and slow and required huge amounts of manual labour to operate on the data (Nagpure and et. al, 2019). Various computer-oriented firms used whole floors for warehousing, managing their data by storing it on punch cards, and maintained tabulators, sorters and banks of punch cards on a separate floor. Programs at that time were set up in decimal or binary form and were read in from front-panel toggle switches, punch cards and magnetic tape. This was referred to as the first-generation programming language (absolute machine language).
Assembly languages, or second-generation programming languages, were used as an early method of organising and managing data. These languages became popular because they used letters for programming instead of cryptic strings of ones and zeros (Roe, 2020). Assembly mnemonics made codes easier to remember, made programs more readable, and freed programmers from error-prone and tedious calculations (Roe, 2020). High-level languages followed; they were easier for human beings to interpret and allowed programmers to compose generic programs that did not depend on a peculiar system. Their primary purpose was the organisation and management of data. The different languages used for doing this were:

FORTRAN: Created by IBM in the 1950s for science and engineering applications, it is still used in weather prediction, computational fluid dynamics, finite element analysis, crystallography, and computational physics and chemistry.

LISP: Introduced in 1958, it became popular for AI research and was the first language to pioneer a wide range of computer-science ideas such as automatic storage management, tree data structures and dynamic typing (Roh, Heo and Whang, 2019).
In addition to these languages, many others were used, among them common business oriented language (COBOL), beginner's all-purpose symbolic instruction code (BASIC), C and C++. After this came online data management, covering tasks such as making travel reservations and stock-market trading, which must be coordinated and the underlying data managed quickly and efficiently. Various industries conduct online transactions through online data management systems (Sanchez, Beak and Saxena, 2019). To give a sense of the throughput involved, around 7.5 million sessions are conducted per day for healthcare information alone. These systems are responsible for allowing programs to read records or files, carry out updates as well
as send updated information to online users. For this, different languages are utilised, as mentioned beneath:

Structured query language (SQL): Developed in the 1970s, SQL emphasised the relational database and rendered consistent data processing by minimising duplication of data, which helped process huge amounts of data quickly and efficiently. Relational models illustrate both subjects and relationships in a uniform manner, and they assist in navigating, manipulating and defining data instead of requiring a different language for each task (Sanchez, Beak and Saxena, 2019). Relational algebra treats record sets as groups, with operators applied to entire record sets. This aids client-server computing, parallel processing and graphical user interfaces, and it enables various users to access an identical database simultaneously.
NoSQL: The primary rationale of NoSQL is the processing of, and research on, Big Data; it is not part of the relational database family. It is used for its high storage capacity and its ability to filter ample structured and unstructured data. It supports horizontal scalability, which allows operators of huge data warehouses such as the CIA, Amazon and Google to process increased amounts of information. The concept gained popularity after 2005.
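The contrast between the two models described above can be sketched in a few lines of Python. This is a minimal illustration, not production code: the relational half uses the standard-library sqlite3 module (SQLite's `INSERT OR IGNORE` stands in for duplicate suppression via a primary key), while the NoSQL half is approximated with a plain key-value dictionary; the table name, fields and keys are invented for the example.

```python
import sqlite3

# Relational (SQL) side: a PRIMARY KEY constraint enforces the
# "minimised duplication" property described in the text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
conn.execute("INSERT OR IGNORE INTO customers VALUES (1, 'Ada')")  # duplicate rejected
rows = conn.execute("SELECT COUNT(*) FROM customers").fetchone()
print(rows[0])  # -> 1: the duplicate insert was ignored
conn.close()

# NoSQL side: a schemaless key-value store simply maps keys to
# arbitrary structured or unstructured documents.
kv_store = {}
kv_store["customer:1"] = {"name": "Ada", "tags": ["vip"], "notes": "free text"}
print(kv_store["customer:1"]["name"])  # -> Ada
```

The relational table rejects a second record with the same identity, whereas the key-value store accepts any shape of document, which is the trade-off the two paragraphs above describe.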
Enterprise or data warehousing: In the 1990s this concept was widely adopted by organisations around the world and was viewed as a means of dealing with the chaos created during integration and subject orientation. The emphasis was laid on the creation of a single version of the truth (SGIT, 2019). It is an important aspect associated with formalising the architecture of data and its management practices. It also assists in dealing with inconsistencies between different data sources by furnishing a comprehensible source of data for users to access, reducing the complexity of tangled and fragile point-to-point application interfaces.
Data lakes: In the 1990s a data quake shook the foundations of data management practices, and the field became volatile and dynamic. The data lake is a core architectural component that eliminates the restrictions of dominant database technologies such as the RDBMS (Smith and et. al, 2019). Data lakes make affirmative contributions to the
management of business data, furnishing principles and architectures for managing data so that analysis can be performed.
Salesforce delivered applications over the web, and this model was imitated by firms such as Amazon, which began delivering cloud data management services in 2002. This provides enterprises with dedicated data management resources as per their requirements. The benefits of managing data in the cloud were access to cutting-edge technology, eradication of the in-house maintenance cost of the system, processing for big data, and enhanced flexibility in addressing the changing requirements of businesses (Agarwal and Alam, 2020). To ensure that the relevant services are provided, service level agreements are put in place: contracts used to establish guarantees between customers and service providers. Access to storage and security are critical concerns for cloud managers that must be thoroughly researched.
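Service level agreements of the kind mentioned above are often expressed as an availability percentage over a billing period. The sketch below checks a measured availability against a target; the minute counts and the "three nines" target are hypothetical numbers chosen purely for illustration.

```python
def availability(uptime_minutes, total_minutes):
    """Fraction of the period during which the service was reachable."""
    return uptime_minutes / total_minutes

# Hypothetical 30-day month: 43,200 minutes total, 40 minutes of outage.
total = 43_200
measured = availability(total - 40, total)
sla_target = 0.999  # an assumed "three nines" monthly guarantee

# The provider meets the SLA only if measured availability is at least the target.
print(f"availability={measured:.5f}, met={measured >= sla_target}")
```

A real SLA would also define how outages are measured and what credits apply on breach; this sketch only shows the arithmetic of the guarantee itself.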
Data management and artificial intelligence: In the upcoming years it is predicted that AI will sort out most aspects of large-scale data storage and will formulate routine decisions based on set procedures. This will be especially valuable in the processing, management and storage of unstructured data. Furthermore, irrelevant data will be discarded, maximising data integration and making the value of data easier to determine. Artificial intelligence aids in the development and management of the highly functional aspects of data management.
Catalogs, fabric and hubs: Data catalogs reduce the difficulties associated with identifying and understanding data, and further ensure metadata management. Data hubs mitigate the difficulties associated with data warehouses and data lakes acting as disconnected silos (Baldwin and et. al, 2018). Data fabrics add critical smart management tools and satisfy the requirements associated with DataOps. Data catalogs are self-service data analysis tools that furnish data, along with business-analyst tools, for reporting, analysis and visualisation of data; they also provide capabilities for improving, enriching, formatting and blending data. In addition, a catalog removes blinders by making datasets easily searchable and well described, enabling analysts to carry out evaluation.
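The "easily searchable" property of a data catalog can be illustrated with a toy keyword index over dataset metadata. This is a minimal sketch only: the dataset names, owners and descriptions below are invented, and a real catalog would add lineage, schemas and access control.

```python
# A minimal data-catalog sketch: each entry carries metadata, and a
# search over descriptions makes datasets discoverable by keyword.
catalog = {
    "sales_2023": {"owner": "finance", "description": "daily sales totals by region"},
    "web_logs": {"owner": "platform", "description": "raw web server access logs"},
}

def search(catalog, term):
    """Return, sorted, the names of datasets whose description mentions `term`."""
    term = term.lower()
    return sorted(name for name, meta in catalog.items()
                  if term in meta["description"].lower())

print(search(catalog, "sales"))  # -> ['sales_2023']
print(search(catalog, "logs"))   # -> ['web_logs']
```

The point of the example is that the catalog stores descriptions of datasets, not the data itself, which is what lets analysts find and evaluate sources before touching any records.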
Artificial intelligence and machine learning are central to the hub concept, which enables analytical processing of data in place rather than moving huge volumes of data across the network for processing (Mbelli, 2019). A data fabric is an integration of architecture and technologies designed to ease the management of distinct types of data through DBMSs deployed across a wide range of platforms (Beck, Hao and Campan, 2017). Data orchestration is the fundamental job of the data fabric, requiring interoperability across ingestion, data storage, transport, pipeline and preparation.
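The ingestion-transport-preparation chain that data orchestration coordinates can be sketched as a sequence of composable stages. This is an illustrative toy only: the stage names, the comma-separated input and the normalisation rule are all assumptions made for the example, not part of any real fabric product.

```python
# A toy orchestration pipeline: each stage is a function, and the
# orchestrator threads the data through the stages in order.
def ingest(raw_lines):
    """Parse raw comma-separated lines into records (ingestion)."""
    return [line.strip().split(",") for line in raw_lines if line.strip()]

def prepare(records):
    """Strip whitespace and normalise casing (a stand-in for preparation)."""
    return [[field.strip().lower() for field in rec] for rec in records]

def run_pipeline(data, stages):
    """Apply each stage to the output of the previous one."""
    for stage in stages:
        data = stage(data)
    return data

result = run_pipeline(["Alice, LONDON", "Bob, Paris"], [ingest, prepare])
print(result)  # -> [['alice', 'london'], ['bob', 'paris']]
```

Real orchestration adds scheduling, retries and monitoring on top of this shape, but the core idea is the same: independent stages with a coordinator that guarantees their order.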
Illustration 1: Data Catalog, Hub and Fabric
This provides data management with new capabilities that enable practitioners to resolve the distinct problems associated with the data lake. The concept of data management has continued to evolve, and there are now self-driving data capabilities that act as a logical extension of the data fabric.
Self-driving capability is already reshaping the world of personal transportation (The Continuing Evolution of Data Management, 2019). An autonomous vehicle knows its location and the destination towards which it navigates; it tends to avoid collisions and sends alerts when any kind of assistance is required. In an identical way, there may be self-driving data which knows its location and its destination, navigates the pipeline, and delivers data to where it is needed. This will be attained through appropriate integration and transformation and by applying automatic tagging.

Illustration 1: The Evolution of Data Management

On the other hand, a graphical DBMS, also called a graph-oriented DBMS, specifically delivers information about the edges and nodes that are linked or have
developed a relationship with other nodes. On the other side, an Object-Oriented Database Management System (OODBMS) is a type of DBMS in which data is represented in the form of objects, as used in object-oriented programming. An OODB implements object-oriented concepts such as classes of objects, object identity, polymorphism, encapsulation, and inheritance. An object-oriented database stores more complex data than a relational database. Some examples of OODBMSs are Versant Object Database, Objectivity/DB, ObjectStore, Caché and ZODB.
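The "data as objects" idea can be sketched with Python's standard-library pickle module standing in for the object store. This is only a rough analogy, not how the products named above work internally: real OODBMSs such as ZODB add transactions, identity maps and indexing, and the `Product`/`Book` classes here are hypothetical.

```python
import pickle

# Objects are stored as objects: the class hierarchy (inheritance) and
# attributes survive a round trip through the store, where a purely
# relational row representation would flatten them away.
class Product:
    def __init__(self, name, price):
        self.name, self.price = name, price

class Book(Product):  # inheritance: a Book is stored and retrieved as a Book
    def __init__(self, name, price, author):
        super().__init__(name, price)
        self.author = author

store = {}  # a dict standing in for the persistent object store
store["book:1"] = pickle.dumps(Book("Dune", 9.99, "Herbert"))

loaded = pickle.loads(store["book:1"])
print(type(loaded).__name__, loaded.author)  # -> Book Herbert
```

The retrieved value is still an instance of `Book` (and therefore of `Product`), which is the encapsulation-and-inheritance property the paragraph above attributes to object databases.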
Identify challenges faced by Amazon in data management
As per Hurwitz and Kirsch (2020), data-driven insight is mostly fragmented and driven by various stakeholders, which leaves a firm with high degrees of disparate and inaccurate data, leading to a wide range of challenges that have to be managed. Amazon is an e-commerce site that delivers a wide range of services every day. This means that huge volumes of data are generated, which leads the firm to face challenges in aggregating and managing data and deriving value from it. The sheer amount of data generated makes data management essential for the firm, yet it remains an elusive goal (Hurwitz and Kirsch, 2020). The other challenge is that a firm may not identify that there is a problem with its data, which pushes it towards a reactive approach to data management, seeking solutions only to fix peculiar issues as they arise.
When it comes to competitiveness within the market, it becomes important for organisations to formulate strategic decisions through which they can maintain the relevant standards of service. Thus, data management is a critical aspect that has to be considered by firms irrespective of their size and the sector in which they deliver their services. When this aspect is managed appropriately, it enables an organisation like Amazon to have visibility into the cost of its operations, the influence on its decisions, the status of its supply chain, and the ability to monitor compliance with forecasts and agreements. But this involves a wide range of challenges which the firm needs to identify; without identification, the organisation cannot conduct its operations smoothly, as it will not know which factors are hampering its overall performance (Caesarendra and et. al, 2018). Irrespective of the wide range of tools that assist these processes, things are not simple, and there
pop up various challenges at each step that create a pessimistic impact on the overall functioning of the organisation. They are specified below:
Automation: Data comes from various sources, internally from the organisation and externally from distinct data vendors. This leads to the creation of multi-faceted datasets comprising streams of supplied data, each with its own set of procedures or standards involving distinct formats, granularities (publication periods) and acceptable ranges. The major challenge in this fiercely competitive environment is handling multi-faceted data that follows distinct standards, and it becomes critical when data is not published as required, creating a tragic time lag. Data automation is therefore critical, as it aids in capturing data in a timely manner with a minimised amount of resources. Automation is a more reliable way of managing data than manual processes, which lead to an increase in errors. Standardising such information is a complicated endeavour, but if data automation is done appropriately, firms can revolutionise the way they deliver their services.

Lack of relevant technology & human workforce: Various firms lack the technologies that
are required to deliver their data management services adequately (Cianfrocco and et. al, 2018). In addition, there are not enough people who can manage the data appropriately. Firms that internalise the technologies they use in their working environment restrict their own ability to adapt to evolving data management capabilities or to ensure the sustainability of the system as they grow. Organisations depend on a scenario in which various people across the organisation sustain its continued functionality; their responsibilities comprise a wide range of data input, coding and problem-solving abilities, together with an adequate understanding of legacy systems and the ways in which they operate. This acts as a huge obstacle to the optimisation efforts made by a firm. As an example of the workforce gap: with the evolution of technology there are various data management techniques available to Amazon to ease the way its operations are carried out, which implies that when the organisation opts for a new technology it must either hire a new workforce or enhance the existing skills of its workforce.
Format changes & revisions: A major challenge is vendor inconsistency. As data vendors evolve and transform certain aspects of their business, they tend to alter their data formats, or they may need to stop periodic publishing because of maintenance or other reasons (Divate, Sah and Singh, 2018). In addition, when format alterations are announced at short notice, it becomes difficult for the firm to obtain the peculiar format or software, disabling it from having adequate information. For instance, an application designed for Apple will not be supported by an Android system. Managing all these variables while consistently keeping the data accurate requires a viable solution.

Analysis: False knowledge may lead to inconsistent or inadequate analysis
that may be devastating for managers, traders, compliance officers and analysts at Amazon. Through over-analysis of errors, the decision-making engine becomes useless and may be destructive, leading the organisation in the wrong direction. Data is often late, nulls may be treated as valid, and the volume of data keeps changing; all of these tax the sub-par business intelligence of the system (Doukoure and Mnkandla, 2018). It is difficult, and rare, for experts to catch such errors by detecting abnormal patterns, and the manpower required to deal with these aspects would be very costly and possibly unfeasible. But if an organisation like Amazon is able to draw meaningful analysis from critical data, it will save ample time, stress, money and uncertainty, allowing it to reap the benefits of clean data.

Integration: The ultimate challenge faced by the organisation is that the data that is
being gathered from distinct sources must conform to internal system requirements.
When data is transferred from one application to another, this might cause various errors
and become a time-consuming process (Kareem, 2018). Suppose an application is built
on the Java runtime environment; it can then be executed on any other system that has a
JRE. Another simple instance: the PDF format is supported by Adobe Acrobat or any
other Adobe version, but not by Microsoft Office. This illustrates that the information
being gathered is platform dependent; on a small scale, different online tools can be
found to gain access to it. The same cannot be done
for a massive amount of data, as this becomes a time-consuming process. During the
integration process, some fields may transfer data seamlessly while others may not.
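The normalisation step described in this integration challenge can be sketched in Python. The vendor field names below ("item", "cost") and the internal schema are illustrative assumptions, not real Amazon feed formats; the point is only that two differently formatted feeds can be mapped into one internal representation so downstream code never sees the source format:

```python
import csv
import io
import json

def normalise_csv(text):
    """Parse a hypothetical vendor CSV feed into the internal schema."""
    return [{"sku": row["item"], "price": float(row["cost"])}
            for row in csv.DictReader(io.StringIO(text))]

def normalise_json(text):
    """Parse a hypothetical vendor JSON feed into the same internal schema."""
    return [{"sku": rec["sku"], "price": float(rec["price"])}
            for rec in json.loads(text)]

if __name__ == "__main__":
    csv_feed = "item,cost\nA1,9.99\nB2,4.50"
    json_feed = '[{"sku": "A1", "price": 9.99}]'
    # Both feeds land in one schema: a list of {"sku": str, "price": float}.
    print(normalise_csv(csv_feed))
    print(normalise_json(json_feed))
```

When a vendor changes format, only the matching `normalise_*` adapter has to change; the rest of the pipeline is untouched.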
Volume: More than 2 trillion gigabytes of data is being generated, which means more
monitoring and validation is needed to ensure data protection, security and availability
of information in a precise manner. Without a sophisticated, intelligent data management
system, this becomes a wasteland of fragmented and disordered information (Jaiswal,
2018). For example, if knowledge is not extracted from data then it is of no use, as that
information is never utilised in decision making. It also becomes difficult to find the
information that creates the biggest impact on the working of the organisation and to
decide what has to be considered; this often leads to leaving out aspects that might be of
great importance to the organisation. When an organisation like Amazon has the
relevant tools, it becomes easy to derive intelligence; this can be understood by the fact
that more volume can mean more intelligence. An adequate data management system is
liable for harnessing ample data by distilling it into adequate, actionable information.
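The idea of distilling raw volume into actionable information can be illustrated with a minimal Python sketch. The event stream and field names here are invented for illustration; in practice the input would be billions of rows:

```python
from collections import Counter

# Hypothetical raw event stream (in reality: billions of rows).
events = [
    {"user": "u1", "action": "view"},
    {"user": "u1", "action": "buy"},
    {"user": "u2", "action": "view"},
    {"user": "u2", "action": "view"},
]

def distil(events):
    """Reduce raw events to a small, actionable summary."""
    actions = Counter(e["action"] for e in events)
    conversion = actions["buy"] / actions["view"] if actions["view"] else 0.0
    return {"views": actions["view"], "buys": actions["buy"],
            "conversion_rate": conversion}

if __name__ == "__main__":
    # A handful of numbers a decision-maker can act on, instead of raw rows.
    print(distil(events))
```

The summary (a few counts and a rate) is what feeds a decision, not the raw volume itself.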
These are some of the challenges associated with data management that firms need to take into
consideration while dealing with massive amounts of data, because they create an adverse
impact on the way information is handled, utilised and turned into decisions (Khan, Shakil and
Alam, 2018). There is a high probability that if analysis is not carried out adequately, the
decisions made with respect to operations will not yield adequate outcomes, as all aspects might
not have been taken into consideration. The other critical challenges are security, privacy and
availability of information as per the requirements of the organisation.
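The analysis and volume challenges above both hinge on catching bad records (nulls treated as valid, late data) before they reach decision making. A minimal sketch of such a validation pass follows; the field names and the one-day lateness threshold are assumptions for illustration:

```python
from datetime import datetime, timedelta

def validate(record, now, max_age=timedelta(days=1)):
    """Return a list of problems found in one record (empty list = clean)."""
    problems = []
    if record.get("value") is None:
        problems.append("null value")          # nulls must not pass as valid
    ts = record.get("timestamp")
    if ts is None:
        problems.append("missing timestamp")
    elif now - ts > max_age:
        problems.append("late arrival")        # data older than the cutoff
    return problems

if __name__ == "__main__":
    now = datetime(2020, 1, 2)
    good = {"value": 10, "timestamp": datetime(2020, 1, 1, 12)}
    bad = {"value": None, "timestamp": datetime(2019, 12, 1)}
    print(validate(good, now))  # []
    print(validate(bad, now))   # ['null value', 'late arrival']
```

Records with a non-empty problem list would be quarantined rather than analysed, which is far cheaper than having experts hunt for abnormal patterns afterwards.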
Benefits collected by Amazon by using AWS (cloud computing) in data management
According to Karim and et. al (2020), Amazon Web Services denotes a cloud storage
solution liable for presenting hosted services so that firms can attain enhanced benefits. AWS
offers a broad range of global compute, storage, database, application, deployment and analytics
services that assist organisations in moving faster, scaling their applications and lowering
overall IT cost. AWS renders services that are affordable for firms irrespective of their size along
with generation of heavy traffic. The major benefit of moving towards AWS is that as the firm
grows, they can accordingly take up services which render flexibility, e-commerce capability
and support for the business (Karim and et. al, 2020). Enhanced and robust security features are
also provided: 24*7 access is given to data experts in case any incident occurs, IAM services
track user access, and multi-factor authentication along with encrypted data storage capabilities
is available, among many more.
Amazon Web Services is a cloud computing platform that delivers an easy, versatile and
profoundly dependable framework. It eradicates up-front capital infrastructure cost, which in
turn aids in scaling up the business. Amazon dominates the public cloud market as per the
statistics of 2019 (What is Amazon Web Services and Why Should You Consider it?, 2020).
AWS provides ease of use and maintenance of IT infrastructure along with optimisation of
operational cost according to a pay-per-use payment model. AWS also ensures reliability and
security of business data through high-end security configuration. The benefits attained by the
organisation through usage of AWS are illustrated below:
Illustration 1: Amazon dominates public cloud market
Storage: Amazon Web Services renders high storage capability that can be utilised in
combination or independently. High storage instances of EC2 assist users running
applications such as data warehousing, Hadoop, etc. EC2 (Elastic Compute Cloud)
furnishes scalable capacity through which users can launch virtual servers as per
requirements, configure them in the context of security and networking, and manage
storage (Kodali and John, 2020). This also minimises the requirement for forecasting
traffic. Amazon renders distinct storage options to users that enable them to deliver
their services as per requirements. They are specified beneath:
▪ Amazon EBS (Elastic Block Store): A block storage system utilised for
furnishing storage for persistent data. It delivers highly available block-level
storage volumes apt for EC2 instances and aids in keeping data persistent.
▪ Amazon Glacier: An extremely low-cost storage service liable for delivering
secure and durable storage for data archival and backup (Linthicum, 2017). It is
optimised for data which is infrequently accessed and for which a retrieval time
of several hours is acceptable.
▪ Amazon S3 (Simple Storage Service): Object storage which delivers
industry-leading performance, scalability, security and data availability.
Customers of all industries and sizes can utilise it for storing and protecting any
amount of data, such as data from IoT devices, enterprise applications, big data
analytics, etc. It provides storage via a web-based interface.
▪ Storage Transport Devices: Amazon renders storage devices like Snowball and
Snowmobile which can be physically transported. Snowmobile enables transfer
of ample data and comprises hard drives in which petabytes of data can be
stored. Snowball aids in transferring data in and out at roughly one-fifth the
cost of high-speed network transfer.
▪ Amazon Elastic File System: This provides a simple, fully managed and
scalable NFS file system that can be utilised with AWS cloud services along
with on-premises resources. It is liable for dealing with data up to petabytes
without any disruption of applications, automatically adding or removing
files (Merla and Liang, 2017). Basically, it provides the capability for dealing
with applications and workloads which are present within the public cloud. Scalable and cost effective: Data volumes are growing exponentially, which means that the
cost associated with storage and analysis will also grow at the same rate. AWS
furnishes comprehensible tools that assist in controlling storage cost and analysing
data at scale. This involves features such as intelligent tiering for data storage within
S3, and it further minimises cost associated with compute usage through integration
and auto-scaling with EC2 Spot instances. Users can scale up their infrastructure as
per their needs, and the cost of utilisation stays low if they scale down the instances
being utilised. The pay-per-use model offers high scalability and acts as an
appropriate option for Amazon or any other large firm, as they do not need any
additional resources in case they run out of storage space. Comprehensive and open: AWS offers the broadest and deepest portfolio of
purpose-built analytical tools through which a firm like Amazon, or any other, can
derive insight from the data being utilised by them (Mukherjee, 2019). The analytics
services support open file formats such as Apache Parquet, eliminating the need to
move and transform data in order to analyse it. Secure: AWS furnishes a standard, secured infrastructure in which users pay
only for the services which they make use of. AWS renders an enhanced range of
services in the context of security. IAM (Identity and Access Management) is a
service through which managers control user access. AWS also provides Amazon
and other firms with tools that automatically assess security risks, along with tools
that render encryption for both software and hardware, security against DDoS
attacks, security certificates for the transport layer and filtering of applications
against harmful traffic. Amazon Inspector, provided by AWS, automatically detects
security deficiencies and threats. Scale securely with superior control and visibility: With Amazon Web Services
(AWS), users can control where data is stored, who has access to it and what
resources the firm is consuming at any instance of time (Mukherjee and Kar, 2017).
Fine-grained identity and access control is merged with continuous
monitoring of real-time security information, ensuring that the right resources are
granted access as required, each time they are needed, irrespective of where the data
resides. Risks are reduced when Amazon or other firms opt for security automation
along with activity-monitoring services for detection of any suspicious security
event, such as alterations within the configuration. Along with this, services can
easily be integrated with existing solutions to support workflows, streamline
operations and simplify compliance reporting.
Multi-region backup: Amazon renders several regions in which users can place their
data as well as instances (Shakil and et. al, 2018). These comprise availability zones
which are insulated from damage occurring in any other zone. The rationale of this
is to launch different EC2 instances in different locations for protection of users'
applications. In the present context an example can be taken of Amazon, where an
EC2 instance is a virtual server in Amazon's Elastic Compute Cloud (EC2) for
running applications on the Amazon Web Services (AWS) infrastructure. AWS is a
comprehensive, evolving cloud computing platform; EC2 is a service that allows
business subscribers to run application programs in the computing environment. In
the same way, other organisations have also taken EC2 into consideration, which
resulted in positive outcomes towards their goals and objectives and helped them
gain competitive advantage as well. EC2 can serve as a practically boundless pool of
virtual machines. Amazon provides a variety of instance types with different
configurations of CPU, memory, storage and networking resources to suit customer
needs. Each type is also available in several distinct sizes to address workload
requirements. These types are grouped into families based on target application
profiles: general purpose, compute optimised, GPU instances, memory optimised,
storage optimised and micro instances. If zones are within the same region, the
associated cost and network latency are low. Users can opt for the region as per their
convenience. Cloud Ranger, a third-party service, is liable for creating automatic
backups across distinct regions (Pérez and et. al, 2019).
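The multi-region backup idea can be illustrated with a small Python sketch. The region names mirror real AWS regions, but the store here is a plain in-memory dict, not an AWS API; the point is only that a copy in every region means losing any single region still leaves the data recoverable:

```python
# Hypothetical set of regions holding replicas of each backup.
REGIONS = ["us-east-1", "eu-west-1", "ap-south-1"]

def backup(store, key, data, regions=REGIONS):
    """Write the same object into every region's store."""
    for region in regions:
        store.setdefault(region, {})[key] = data

def restore(store, key, failed_region):
    """Recover the object from any surviving region."""
    for region, objects in store.items():
        if region != failed_region and key in objects:
            return objects[key]
    raise KeyError(key)

if __name__ == "__main__":
    store = {}
    backup(store, "orders.db", b"snapshot-bytes")
    # Even with us-east-1 lost, the snapshot survives in another region.
    print(restore(store, "orders.db", failed_region="us-east-1"))
```

Services like Cloud Ranger automate exactly this kind of cross-region replication, with the added real-world trade-off that same-region zones give lower latency and transfer cost.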
These are the benefits that can be attained by the organisation by making use of AWS for
data management. In addition, security aspects can be handled in a precise manner and,
depending upon the tools the firm has opted for, they can attain benefits such as automatic
detection of malicious activities and many more.
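The access-control benefit discussed above (IAM tracking who may do what) can be sketched as a toy model in Python. The policy shape loosely mirrors AWS IAM statements, and the rule that an explicit deny overrides any allow follows IAM's documented evaluation order, but this is a simplification for illustration, not the real evaluation engine:

```python
def is_allowed(policies, user, action):
    """Evaluate toy IAM-style policies: explicit Deny wins, then an
    explicit Allow is required; anything unmatched is implicitly denied."""
    decisions = [p["effect"] for p in policies
                 if p["user"] == user and p["action"] == action]
    if "Deny" in decisions:
        return False             # explicit deny always wins
    return "Allow" in decisions  # no matching statement -> implicit deny

if __name__ == "__main__":
    policies = [
        {"user": "analyst", "action": "s3:GetObject", "effect": "Allow"},
        {"user": "analyst", "action": "s3:DeleteObject", "effect": "Deny"},
    ]
    print(is_allowed(policies, "analyst", "s3:GetObject"))     # True
    print(is_allowed(policies, "analyst", "s3:DeleteObject"))  # False
    print(is_allowed(policies, "intern", "s3:GetObject"))      # False
```

The "implicit deny by default" behaviour is what makes such systems safe to scale: access must be granted explicitly, never assumed.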
Ways through which AWS help in business expansion by effective utilisation of data
management
As per Li and et. al (2020), AWS assists firms in rendering their services as per required
needs such as programming language, database, operating system and many more assets as per
their requirement. If demand for storage space increases as their customer base grows, they can
opt for more storage space, processing capability and other aspects, which aids Amazon in
having on-time services as per requirements. This leads them to expand their operations easily
without worrying about the maintenance cost of servers, in-house equipment or any other
devices for carrying out their functionalities (Li and et. al, 2020). AWS aids Amazon's admins
in conducting tasks like tracking of resources, health of applications and configuration of
resources. Furthermore, it is also liable for automation of infrastructure configuration and
retains the activities conducted by users. AWS enables the organisation to build data lakes and
carry out analytics. Firms can easily set up and manage data lakes, which otherwise comprise a
wide range of manual and time-consuming tasks involving transforming, auditing access,
securing and loading data. AWS Lake Formation is liable for automation of different manual
steps, which minimises the time needed for successfully building a data lake (Pouyanfar and et.
al, 2018). The expansion can be understood by the fact that Amazon offers various databases
for different applications as per the requirements of the user. They are specified beneath:
Each database type is listed below with its typical applications, the corresponding AWS
service, its meaning and a description for different participants.

Document. Applications: content management, user profiling. AWS service: Amazon
DocumentDB. Meaning: this type of document database is fast, scalable and fully managed,
and supports document workloads. Description for participants: users of this database need to
know that it makes storage easy, as it stores, queries and indexes data effectively.
Key-Value. Applications: web applications with high traffic, gaming applications. AWS
service: Amazon DynamoDB. Meaning: this database management system is a fully managed
NoSQL database programmed to support key-value and document data structures. Description
for participants: participants will be interested to know that this database offers a data model
with a different underlying implementation from Dynamo.

Graph. Applications: fraud detection, recommendation engines. AWS service: Amazon
Neptune. Meaning: Amazon Neptune is a managed service used to create and manage
graph-based databases. Description for participants: it is important to know that this service is
used to write low-latency queries over connected data points.

In-Memory. Applications: caching, geospatial applications, game leaderboards. AWS service:
Amazon ElastiCache for Memcached and Redis. Meaning: a fully managed cache service
which helps improve the performance of web applications. Description for participants: about
this service, it is important to know that it reduces reliance on slower disk-based databases.

Ledger. Applications: supply chains, banking transactions. AWS service: Amazon QLDB
(Quantum Ledger Database). Meaning: this service helps in building a ledger database which
provides a cryptographically verifiable history. Description for participants: it is important to
know that this database tracks changes made to application data.

Time Series. Applications: IoT, DevOps, industrial telemetry. AWS service: Amazon
Timestream. Description for participants: it is important for them to finish the work on time
along with quality. Meaning: this is a fast, scalable and fully managed time series database
service for
25
for high traffic
Gaming
applications
Amazon DynamoDBThis database
management system
is a fully managed
NoSQL database
which is programmed
to support key value
and document data
structure.
The participants will
interest to know that
this database is a data
model having
different underlying
implementation than
dynamo.
Graph Fraud detection
Recommendation
engine
Amazon Neptune Amazon Neptune is a
managed service
which is used to
create and manage
the graph based
database.
It is important to
know that this
processor is used to
write queries which
are for connected
data points and low
latency.
In-Memory Caching
Geospatial
applications
Game leader
boards
Amazon ElastiCache
– For Memcached
and Redis
It is a fully managed
cache service which
helps in improving
the performance of
wen applications.
About this serice, it is
important for
participants to know
that this service is
slower relying disk
base database.
Ledger Supply chains
Banking
transactions
Amazon QLDB
(Quantum Ledger
Database)
This database helps
in building ledger
database which
provides a
cryptographically
verifiable history.
It is important to
know for participants
that this database
helps in making
changes in the
application data.
Time
Series
IoT
DevOps
Industrial
telemetry
Amazon TimestreamThis is a fast,
scalable and fully
managed rime series
database service for
This is important for
them to finish the
work on time along
with quality.
25
IOT and operational
applications.
Relational ERP
E-Commerce
Traditional
Amazon Aurura
Amazon Redshift
Amazon RDS
(Relational database
service)
This is a MySQL and
PostgreSQL
compatible relational
database build for the
cloud that combines
the performance and
availability of
traditional enterprise
databases.
This is helpful for the
participants in
completion of work
along within
maintaining safety.
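The ledger entry above notes that QLDB maintains a cryptographically verifiable history of changes. The core idea can be illustrated with a short, self-contained sketch; this is a conceptual simulation only, not the QLDB API, and the `MiniLedger` class and its record fields are hypothetical names introduced for illustration (real applications would use the AWS SDK):

```python
import hashlib
import json

class MiniLedger:
    """Append-only ledger: each entry stores the hash of the previous
    entry, so any later tampering breaks the chain (the idea behind a
    cryptographically verifiable journal)."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Recompute the whole chain; any edited record changes its hash
        # and breaks every link after it.
        prev_hash = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if e["prev"] != prev_hash or e["hash"] != expected:
                return False
            prev_hash = e["hash"]
        return True

ledger = MiniLedger()
ledger.append({"account": "A-100", "amount": 250})
ledger.append({"account": "A-100", "amount": -75})
print(ledger.verify())                         # True: chain is intact
ledger.entries[0]["record"]["amount"] = 999    # tamper with history
print(ledger.verify())                         # False: tampering is detectable
```

The design choice mirrors the description in the table: history is not merely stored but chained, so changes to application data remain detectable after the fact.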
While opting for expansion, a firm can have access to different databases with respect to its requirements, and this is provided by AWS. This enables it to deliver services as per those requirements. It also implies that Amazon offers an environment that can be scaled according to requirements or usage (Pramod and et. al, 2020). If a user scales down their usage or need for resources, the associated cost will be correspondingly low. This scalability enables Amazon to spread its functionalities without needing to accommodate new hardware to deliver services. AWS provides organisations with a platform that aids them in delivering services in an adequate manner. In addition to this, a major concern for every firm is security, but by making use of AWS a firm has an option through which any malicious data that is present is detected automatically. This further leads to a secured platform through which effectiveness can be ensured and quality services provided to customers. The ways in which AWS transforms the manner in which functionalities are carried out are depicted below:
 Migration and freeing up resources: AWS will assist Amazon with migration to a robust set of services and tools for simplification as well as automation of processes (Rafique, 2019). Through this, firms have access to relevant methodologies and support, and operations can be easily migrated with the assistance of professionals. When a firm executes its applications on AWS, valuable resources are freed up, allowing it to place further emphasis on innovation in its business operations.
 Adopt ultramodern application development practices: AWS enables firms to rapidly build, test and deploy new services. With a complete platform for serverless computing and containers, operations can be easily built and microservices operated at large scale (Sarkar and Shah, 2018). Through the unified set of CI/CD (continuous integration and continuous delivery) tools provided by AWS, it becomes easier to automate the development workflow, so that emphasis can be laid on reaching the market quickly.
 Gain deep insight into the business: With the deepest and broadest set of analytics, artificial intelligence and machine learning services, AWS helps the firm gain valuable insight from data that exists across the business. It also enables it to formulate quick decisions, improving customer experience and minimising the probable risks which might create a significant impact on the way operations are conducted by Amazon.
 Ensure resilience, security and compliance: Security is a high priority for AWS, as it strengthens compliance along with security posture. It offers 5X more encryption services than are provided by any other cloud service provider, and downtime hours are reduced by 7X (Shakil and et. al, 2018). This implies that AWS is among the most secure, flexible and reliable computing environments available at present.
 Minimised business risks: AWS will protect data, maintaining customer and stakeholder trust in the business. It renders a secure and flexible cloud platform backed by a set of security, governance and compliance services that aid in minimising risks without compromising the agility and scalability of services in the market.
 Scalable performance through Elastic Load Balancing: Amazon provides a scalable and powerful load-balancing solution for organisations. ELB makes sure that client requests are sent to adequate servers, which avoids server hotspots: over-utilisation of one server alongside under-utilisation of others is eliminated. Classic load balancing analyses basic network and application data and ensures fault tolerance in case an EC2 web application instance fails. Application load balancing looks at distinct
content requests and routes traffic to the relevant microservice or container depending upon application content information.
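The hotspot-avoidance idea behind ELB can be illustrated with a minimal round-robin dispatcher. This is a conceptual sketch only, with hypothetical names (`RoundRobinBalancer`, `ec2-a`, etc.); the real service additionally weighs health checks, connection counts and, for application load balancers, content-based routing rules:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy load balancer: spreads requests evenly across healthy servers,
    so no single server becomes a hotspot while others sit idle."""

    def __init__(self, servers):
        self.healthy = list(servers)
        self._rotation = cycle(self.healthy)

    def route(self, request):
        # Pick the next server in rotation and hand it the request.
        server = next(self._rotation)
        return server, request

    def mark_unhealthy(self, server):
        # Mimic ELB fault tolerance: stop routing to a failed instance.
        self.healthy.remove(server)
        self._rotation = cycle(self.healthy)

lb = RoundRobinBalancer(["ec2-a", "ec2-b", "ec2-c"])
assignments = [lb.route(f"req-{i}")[0] for i in range(6)]
print(assignments)  # each server receives exactly two of the six requests

lb.mark_unhealthy("ec2-b")  # simulate an instance failure
assignments = [lb.route(f"req-{i}")[0] for i in range(4)]
print(assignments)  # traffic now alternates between ec2-a and ec2-c only
```

The sketch shows the two properties the text attributes to ELB: even distribution (no hotspots) and continued service when an instance fails.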
These are the aspects that will assist the organisation in expanding its operations and services across different countries as well as its product portfolio (Siddiqa, Karim and Gani, 2017). AWS provides Amazon with enhanced benefits that will aid in ensuring that it can address the requirements of the market in an efficacious manner.
Summary
This chapter has provided an in-depth insight into the work done by different authors with reference to data management. It has been identified that the concept has evolved considerably, which has a relevant impact on the ways in which operations are conducted. With the advent of cloud computing, organisations need not make any investment in infrastructure; rather, they can consume services by paying for them. This eliminates the unnecessary cost involved in infrastructure, its maintenance, the security of services and human resources. The overall influence created by the advent of cloud computing is affirmative, as services are provided as per the needs of the organisation, with an option to shrink or expand them. AWS provides a wide range of services which firms can opt for as per their requirements.
The next chapter will provide a detailed analysis of the methodologies utilised for the completion of this dissertation. The different research methodologies will be depicted, and those used within this research will be illustrated.
Chapter 3
Research Methodology
Introduction
In every investigation, the research methodology is a crucial section that plays an important role. The purpose of the methodology section of a dissertation is to set out the philosophy, approach, choices, data collection tools, research types and so on. With its help, the different approaches and philosophies that are useful for conducting an investigation on the chosen topic can be identified, analysed and selected (Tomar and Kaur, 2017). It also helps the researcher to evaluate the chosen topic critically, in greater detail and in a short period of time. In the present context, where the researcher is investigating the topic 'To identify the challenges faced by Amazon before cloud computing in data management', the research onion has been utilised, which will help the researcher in identifying, selecting, processing and analysing the data or information gathered on the chosen topic. It is the responsibility of the researcher to select the most suitable approach, choice, research type, data collection tool and so on; failure to do so may lead the investigator into difficulties, such as less favourable outcomes. In particular, the framework chosen for this section of the dissertation is Saunders' research onion. The research onion represents the stages associated with the development of a research work and was created by Saunders. Its layers give a progressively detailed portrayal of the phases of a research procedure, providing an effective progression through which a research approach can be structured. Its value lies in its flexibility for practically any sort of research strategy, and it can be utilised in a variety of settings. Saunders et al (2012) noted that, while utilising the research onion, one proceeds from the outer layer to the inner layer; each layer of the onion portrays a progressively detailed phase of the research procedure.
Methodologies
Research design: The whole movement of the thesis relies upon the research design, as it will help the investigator in identifying the challenges faced by Amazon before cloud computing in data management. Research design is a significant part of the research methodology and is fundamentally divided into three sorts: exploratory, experimental and descriptive (Vijayakumar, 2018). Of these fundamental kinds of research design, a descriptive design is going to be utilised by the investigator, as it is a valuable and appropriate structure for the research topic and allows the conclusion to be drawn in an effective way. This sort of design is more effective for the investigator in analysing non-measured issues. Exploratory and experimental designs, by contrast, are not suitable for the current topic, namely the challenges faced by a company like Amazon before cloud computing in managing data.
Types of study: While conducting an investigation, the researcher is required to take a particular type of study into consideration; these can commonly be characterised into two classifications, qualitative and quantitative. Quantitative research resolves, identifies and depicts issues utilising numbers and formulas, and stays linked with numerical terms only. Qualitative research, on the other hand, depends on feelings, words, sounds, emotions and other unquantifiable, non-numerical components. Both sorts of study are significant; however, for conducting the examination of the challenges faced by Amazon before cloud computing while managing data, the researcher has taken a quantitative kind of study into consideration, as it helps the investigator in the gathering of numerical data about the topic (Wadia and Gupta, 2017). There are different advantages of utilising a quantitative kind of study, for example a significant level of reliability, clear dependent and independent variables, an explicit research issue, minimal personal judgement and so on. These are the fundamental reasons for utilising a quantitative sort of examination. A qualitative study, on the other hand, would be less valuable for the current examination, as it creates a tremendous amount of qualitative data that can be more difficult to analyse. There are various further reasons for choosing a quantitative research strategy, including that it is replicable, explicit,
generalizable and so on. These are the fundamental reasons for choosing a quantitative research strategy; however, in the present context the researcher will also be required to perform some qualitative investigation, because the topic considered for this investigation is directly linked to emotions, perceptions and so on.
Sources of data collection: Basically, there are two different types of data collection methods, primary and secondary, each of which consists of different tools through which the researcher can collect data. Both types of data collection sources are presented in detail underneath:
 Primary data collection tools: It has been analysed that primary data stays much more authentic and reliable, and helps the researcher in collecting valuable and valid data which has never been accumulated before by any investigator (Wijaya and et. al, 2018). The different tools that can be utilised under the primary data collection technique include the questionnaire, the focus group, the interview and so on. While conducting the investigation on the challenges faced by Amazon before cloud computing while managing data, the researcher might have utilised a questionnaire as a crucial data collection technique, but as the investigation is being performed on secondary data, this type of collection will not be effective here.
 Secondary data collection tools: For collecting this type of data, the researcher can take into consideration different studies from various sources, such as published research conducted by researchers in the past, books, magazines, articles, online sources and many more (Wilson, 2018). In the present investigation, the researcher has utilised secondary data collection tools: online books, articles, magazines, online sources and so on. With the help of these, the investigator will become able to draw favourable outcomes in relation to how the challenges faced by Amazon before using cloud computing as a crucial source to manage data can be reduced to a minimum in the near future.
Therefore, the most suitable data collection tool to be utilised by the researcher is the secondary data collection technique, which will not only have a positive impact in drawing favourable outcomes but will also increase the knowledge of both researcher and readers. With the help of this, proper investigation can also be done in the near future in relation to how cloud
computing has helped various organisations like Amazon that perform their operations at an international level.
Research approach: This is another part of the research methodology, and it prominently concerns data collection and data analysis. Data collection includes two prominent approaches, qualitative and quantitative; in order to collect information regarding the effective management of data and other resources, which positively impacts the performance and profitability of employees, the quantitative research approach will be applied to these methods (Zambrano, 2018). The specific reason behind the use of this approach is that it is helpful for the researcher in collecting numerical information, and it does not require as many resources from the investigator as the qualitative approach. Besides this, in order to analyse data there are two other prominent approaches, deductive and inductive. For analysing information regarding the management of data, the deductive approach will be used by the respective company and the researcher. Moreover, both the quantitative and the deductive approach are helpful for the researcher in collecting and identifying suitable information considering the existing topic. Furthermore, the deductive approach prominently relies on the quantitative approach to data collection, as it is useful for the researcher in collecting quantitative information with the help of the secondary data collection method.
Research philosophy: This is an important and vital part of the research methodology, as it is how the researcher reaches a significant conclusion. It is broadly classified into two research philosophies, interpretivism and positivism. The positivism philosophy uses quantitative methods, while the interpretivism philosophy uses a qualitative approach. From the perspective of the investigator, the positivism philosophy is useful in supporting the collection of significant information from respondents. Moreover, it also includes questionnaires and other methods to collect specific information in order to obtain relevant outcomes (Agarwal and Alam, 2020). The biggest demerit of the interpretivism research philosophy is that it is prominently qualitative in nature; there are various further reasons for not considering it, such as the research position not being very representative, low reliability and many more. Hence, the positivism type of philosophy is considered by the investigator, as it is useful and valuable and is prominently helpful in collecting quantitative data that does not need a large amount of resources.
Research strategy: This is a significant part of research methodology, as it gives a prominent
direction to the overall study, considering the procedure by which the investigation is
organised. There are various types of research strategy, such as experiments, archival research,
case studies, ethnography and surveys. These are significant strategies, as they are useful for
collecting prominent information about the topic. In order to collect significant information
concerning the management of data in an organisation and its influence on overall performance
and productivity, the market survey strategy will be applied by the researcher (Baldwin and et.
al, 2018). This strategy is beneficial for collecting suitable information from a large number of
respondents, and there are various merits of a survey, such as reducing risk and understanding
the perspective of employees. Hence, the market survey is quite a useful and effective strategy
for identifying the opinions and perspectives of employees towards the management of data.
Research instrument, techniques and procedure: This component is prominently concerned
with data collection and undertakes certain techniques for it, considering books, journals,
articles, interviews and many more. These are the main components of data collection, but in
order to collect data regarding the management of data, secondary data collection is used. It is a
valuable and useful technique for collecting data and obtaining specific outcomes, as through
online sources the researcher can get effective information for the investigation (Beck, Hao and
Campan, 2017). There are a number of reasons for using the secondary method: it is not lengthy
or time-consuming, it is an easy method, it offers wide coverage, it is flexible and many more.
It is beneficial for collecting significant information regarding the management of employees
and their data, which in turn influences their performance and productivity.
Time horizon: There are two main types of time horizons used by investigators: cross-sectional
and longitudinal. Cross-sectional studies are constrained to a specific time span, while
longitudinal studies are repeated over a significant stretch of time (Mukherjee, 2019). Within
this investigation, the cross-sectional time horizon will be followed, on the grounds that this
research needs to be completed in a limited time frame. Alongside this, a Gantt chart will also
be used by the researcher as a time-planning tool. In this
chart, a number of activities are included which support the researcher in identifying the
starting and ending times of each stage of the dissertation.
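The Gantt chart described above can be sketched programmatically. The activities, start weeks
and durations below are hypothetical placeholders, not the dissertation's actual schedule; this is
only a minimal illustration of how such a chart encodes start and end times.

```python
# A minimal text-based Gantt chart: each activity is drawn as a bar of '#'
# characters positioned by its start week and duration. The activities and
# durations here are hypothetical placeholders.

def gantt_row(name, start, duration, total_weeks):
    """Render one activity as a fixed-width row of the chart."""
    bar = " " * start + "#" * duration
    return f"{name:<22}|{bar:<{total_weeks}}|"

activities = [
    ("Aim and objectives", 0, 2),
    ("Literature review", 2, 4),
    ("Data collection", 6, 3),
    ("Data analysis", 9, 3),
    ("Conclusion", 12, 2),
]

total_weeks = 14
chart = [gantt_row(n, s, d, total_weeks) for n, s, d in activities]
for row in chart:
    print(row)
```

Each row's bar position makes the start and end of an activity visible at a glance, which is
exactly the identification the chart supports.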
Sampling techniques: Probability and non-probability sampling are the two crucial techniques
that can be adopted by a researcher in order to conduct the investigation in a successful and
efficient manner. In the present context, the researcher is performing an investigation on the
chosen topic, which is to identify the challenges faced by Amazon in data management before
cloud computing (Caesarendra and et. al, 2018). Probability sampling considers the subjects of
the population and focuses on giving each an equal chance of being recruited into the sample.
Non-probability sampling, on the other hand, is a technique that does not rely on random
selection. Convenience sampling, which comes under non-probability sampling, can be adopted
by the researcher; the reason for opting for convenience sampling is that it helps in collecting
favourable information in an effective, quick and simple way. Since the investigation into the
challenges faced by Amazon in data management before cloud computing is performed with
the help of secondary data, this methodology brings benefits such as saving precious time and
money, availability of data and usefulness for pilot studies. There are various kinds of random
sampling methods that can be adopted by a researcher, such as simple random, stratified
random, cluster and multistage sampling, but in the present context a secondary investigation
will be conducted, with information gathered from sources such as online websites, books and
articles, so these methods are not useful here; for the current study the non-
probability type of sampling will be applied (Cianfrocco and et. al, 2018).
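To make the distinction above concrete, the contrast between simple random sampling (a
probability technique) and convenience sampling (a non-probability technique) can be sketched
in a few lines. The respondent list is invented purely for illustration.

```python
import random

# Hypothetical sampling frame of respondent identifiers.
respondents = [f"R{i:02d}" for i in range(1, 21)]  # R01 .. R20

# Simple random sampling (probability): every respondent has an equal
# chance of selection.
random.seed(42)  # fixed seed so the draw is repeatable
random_sample = random.sample(respondents, 5)

# Convenience sampling (non-probability): take whoever is easiest to
# reach -- here, simply the first five on the list.
convenience_sample = respondents[:5]

print(random_sample)
print(convenience_sample)
```

The convenience sample is cheap and fast but systematically favours the start of the list, which
is precisely why it cannot claim the representativeness of the random draw.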
Method of data analysis: A researcher may adopt both the quantitative and the qualitative
analysis approach, and may also take a mixed methodology into consideration. Data analysis is
based on quantitative and qualitative analysis of
information. Under qualitative data analysis, a number of techniques help in the analysis of
information about the specific area of study, including content analysis, framework analysis,
discourse analysis and grounded theory. The current research, however, is based on the
quantitative method, and for analysing quantitative data different techniques can be suggested,
such as cross-tabulation, trend analysis, MaxDiff analysis, gap analysis and content analysis;
these are considered the main techniques of quantitative data analysis. For the current
investigation, content analysis will be applied because it focuses on counting and measuring
the information present in secondary data (Divate, Sah and Singh, 2018). Under this method of
data analysis, a number of themes will be created for each question, which supports the
researcher in analysing the information about data management in detail.
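Since content analysis centres on counting and measuring occurrences in secondary material, a
minimal frequency count can be sketched as follows. The text excerpt and the keyword
categories are invented for illustration only.

```python
from collections import Counter
import re

# Hypothetical excerpt of secondary material to be coded.
text = """Cloud data management reduces cost. Data storage in the cloud
scales with demand, and cloud services improve data availability."""

# Hypothetical coding categories (keywords) for the content analysis.
keywords = ["cloud", "data", "cost", "storage"]

# Tokenise into lowercase words and count only the coded keywords.
words = re.findall(r"[a-z]+", text.lower())
counts = Counter(w for w in words if w in keywords)

print(dict(counts))
```

In a real study, the keyword list would come from the research questions, and the counts would
feed the themes built for each question.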
Research Ethics: This refers to the application of professional codes of conduct and moral
principles to the gathering, analysis, reporting and publication of information regarding a
specific area of study. There are a number of research ethics, for example objectivity,
carefulness, confidentiality, responsible publication, respect for intellectual property and
honesty. These are the main research ethics which must be followed by the researcher, as they
help in the collection of essential information and the completion of each research activity in
an ethical way. Alongside this, research ethics will be followed widely to guarantee that the
study outputs are viewed as relevant and sound. The main ethical guidelines that will be used in
the current research are confidentiality, anonymity and informed consent. Under informed
consent, the researcher must give participants an accurate understanding of the research aim
and objectives, enabling them to opt in or out of involvement in the primary research. For
anonymity, the identities of participants are not recorded and not linked to any kind of research
results. Finally, for confidentiality, the outputs of the research are only shared or communicated
with those people who hold a stake in the outcomes and are a fundamental part of the research
in a specific capacity.
Research Limitation: The fundamental constraint of the research is the time span available for
performing the investigation on the chosen topic, which is to analyse the challenges faced by
Amazon in relation to data management before utilising cloud computing (Doukoure and
Mnkandla, 2018). Alongside this, framing the aim and objectives of the topic, executing the
data collection strategy, the absence of past information in the investigated area and the scope
of discussion are the main restrictions that will be faced by
the researcher while completing the current thesis, because failure to manage them may lead
the researcher into different difficulties.
Research reliability and validity: This is a significant part of methodology, as it helps the
researcher assess the reliability and validity of the research. Research reliability is based on the
execution of a test instrument or technique drawing on past studies, investigations and
information collected from internet sources (Jaiswal, 2018). However, since the investigation is
performed using the secondary data collection tool, reliability and validity are not as strong:
situations may arise in which the researcher obtains wrong information and has to make
changes, which not only takes time but also makes the whole dissertation less reliable and
authentic in nature.
Summary
This chapter gives details of the different methods used to complete the research. In order to
identify the challenges involved before cloud computing, quantitative methods are utilised to
obtain precise information on different aspects. In addition, secondary research is used, which
involves online sources, books and magazines for gathering the information. The next chapter
will cover the execution of the secondary research, i.e. the results attained by doing this
research will be illustrated.
Chapter 4
Implementation
Introduction
The previous chapter illustrated the research methodologies that will be implemented within
this section. This chapter provides a primary and systematic process for carrying out the
analysis of the gathered data, so that it can be interpreted in a substantive way and the findings
illustrated clearly and effectively. This research depends on secondary data gathered through
the literature review, making use of distinct secondary resources such as journals, articles,
books and online sources (Kareem, 2018). For carrying out the execution of the secondary
information and obtaining adequate results for the literature review, themes are made for each
research question. This aids the researcher in analysing the information in an effective and easy
manner.
Since this dissertation is based on big data, a VMware virtual machine running Jupyter
Notebook with PySpark is used in order to analyse customer behaviour within shopping
transactions and payments, for example how much faster a particular product is purchased than
other products under a given shopping mode.
Big data
Big data refers to the enormous, varied sets of data that grow at ever-increasing rates. It
encompasses the volume of data, the velocity at which it is created and collected, and the
variety or scope of the data points being captured. In other words, big data frequently originates
from numerous sources and arrives in different formats. Big data can be categorised as
unstructured or structured. Structured data consists of information already managed by the
organisation in databases and spreadsheets; it is frequently numeric in nature. Unstructured data
is information that is unorganised and does not fall into a predetermined model or format. It
includes data gathered from social media sources, which helps institutions gather information
on customer needs. This is why the whole investigation relies on big data, which will help in
analysing consumer behaviour while shopping and performing transactions such as payments,
where the analysis examines how much faster a particular product is purchased than other
products across different shopping modes.
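The structured/unstructured distinction above can be illustrated in a few lines of Python. The
order records and the social-media post are invented examples: structured rows can be summed
directly, while information must be extracted from free text.

```python
import csv
import io
import re

# Structured data: rows with a fixed schema, as found in databases and
# spreadsheets -- frequently numeric in nature.
structured = io.StringIO("order_id,amount\n1001,25.50\n1002,14.00\n")
rows = list(csv.DictReader(structured))
total = sum(float(r["amount"]) for r in rows)

# Unstructured data: free text (e.g. a social-media post) with no
# predetermined model; information must be extracted, not just read.
post = "Loved the fast delivery! Order #1001 arrived in 2 days."
order_ids = re.findall(r"#(\d+)", post)

print(total)
print(order_ids)
```

The structured rows answer "what is the total?" immediately; the unstructured post needs a
pattern match before it yields anything an institution can aggregate.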
VMWare
This is also the software on which the present dissertation relies. Basically, VMware is a
virtualization and cloud computing software provider based in Palo Alto, Calif. Founded in
1998, VMware is a subsidiary of Dell Technologies. EMC Corporation originally acquired
VMware in 2004; EMC was later acquired by Dell Technologies in 2016. VMware bases its
virtualization technologies on its bare-metal hypervisor ESX/ESXi for the x86 architecture.
With VMware server virtualization, a hypervisor is installed on the physical server to allow
numerous virtual machines (VMs) to run on the same physical server. Each VM can run its
own operating system (OS), which means different OSes can run on one physical server. All
the VMs on the same physical server share resources, for example networking and RAM. In
2019, VMware added support to its hypervisor to run containerized workloads in a Kubernetes
cluster. These kinds of workloads can be managed by the infrastructure team just like virtual
machines, and DevOps teams can deploy containers as they are used to. In addition, the
researcher has also used Jupyter Notebook in order to run the code so that appropriate outcomes
can easily be produced in a specified time frame. In the present context, Scala is not necessarily
an effective first language to learn when venturing into the world of data; on the other hand,
Spark provides an effective Python API named PySpark. PySpark allows Python programmers
to interface with the Spark framework, letting them manipulate data at scale and work with
objects over a distributed file system.
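As a stand-in for the PySpark job described above (which would use a SparkSession and
DataFrame grouping over a distributed file system), the same purchase-speed aggregation can
be sketched in plain Python. The transaction records, field meanings and shopping modes are
hypothetical.

```python
from collections import defaultdict

# Hypothetical transactions: (product, shopping_mode, minutes from first
# view to purchase). In the dissertation this data would live in a Spark
# DataFrame rather than a Python list.
transactions = [
    ("phone",  "mobile app", 12),
    ("phone",  "website",    30),
    ("laptop", "mobile app", 45),
    ("laptop", "website",    60),
    ("phone",  "mobile app", 18),
]

# Group by (product, mode) and average the time-to-purchase, mirroring a
# PySpark groupBy(...).avg(...) over the same columns.
groups = defaultdict(list)
for product, mode, minutes in transactions:
    groups[(product, mode)].append(minutes)

avg_minutes = {key: sum(v) / len(v) for key, v in groups.items()}
fastest = min(avg_minutes, key=avg_minutes.get)

print(avg_minutes)
print(fastest)
```

The per-group average is what lets the analysis say that one product sells faster than another
under a given shopping mode; at real data volumes the identical grouping runs in Spark instead.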
Themes
Theme 1: The history of data management
The information has been gathered from secondary sources in the literature review, and it has
been identified that the concept of data management is evolving quickly with the needs of the
market. This creates an affirmative influence on organisations rendering their services within a
competitive market.
Illustration 1: The Evolution of Data Management Concepts

Interpretation: From the illustration above, it is interpreted that data management has evolved
incrementally, with each stage building on its predecessors through advancements in
networking, storage, software, hardware and computing models such as virtualisation and
cloud computing. The present scenario is the era of big data (Khan, Shakil and Alam, 2018).
Stage 1:
In the 1960s, firms stored all of their information in flat files, but searching those files was hard and time-consuming. In the 1970s, the RDBMS (relational database management system) emerged to store data in a defined structure. A level of abstraction was added in the form of SQL (structured query language), which made it easy to query and search for data according to the needs of the business (THE EVALUATION OF DATA MANAGEMENT CONCEPTS, 2018). This stage is denoted as the manageable data structure stage.
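The abstraction SQL added can be sketched with Python's built-in sqlite3 module standing in for an RDBMS (the table and values below are hypothetical): the query declares what is wanted rather than how to scan a flat file for it.

```python
import sqlite3

# An in-memory database standing in for a 1970s-style RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [("Alice", 120.0), ("Bob", 75.5), ("Alice", 42.0)])

# The SQL abstraction: declare *what* is wanted, not *how* to scan for it.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('Alice', 162.0), ('Bob', 75.5)]
```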
Stage 2:
With RDBMS, issues such as cost, speed of data processing and access, and data redundancy also came up. This led to new requirements, which brought in ER
models, through which data usability was increased. This also enabled developers to create techniques through which entities could be joined together. This is the ER stage.
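The ER idea of joining related entities can be sketched the same way, with two hypothetical tables linked by a foreign key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, item TEXT,
                            FOREIGN KEY (customer_id) REFERENCES customers(id));
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders    VALUES (10, 1, 'book'), (11, 2, 'lamp'), (12, 1, 'pen');
""")

# A JOIN ties the two entities together through the foreign-key relationship.
rows = conn.execute("""
    SELECT c.name, o.item
    FROM customers AS c JOIN orders AS o ON o.customer_id = c.id
    ORDER BY o.id
""").fetchall()
print(rows)  # [('Alice', 'book'), ('Bob', 'lamp'), ('Alice', 'pen')]
```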
Stage 3:
The next problem was the sheer amount of data, which kept growing in volume. To deal with this, the data warehouse emerged, enabling firms like Amazon to store huge amounts of data regardless of volume and then select a smaller subset of that information focused on a particular area of the business. Because the volume of data is so large, it affects the agility and speed of the business; refinements were needed, and for this data marts emerged (Kodali and John, 2020). They focus on particular problems and are streamlined to support fast queries, in contrast to the broader data warehouse. They have evolved rapidly alongside emerging technologies, taking advantage of improvements in hardware scalability, virtualisation technologies and the ability to create integrated software systems. This is the data warehouse and data marts stage.
Stage 4:
With the growth of the internet, the quantity of unstructured data increased, arriving in the form of video, audio, web content and images. The initial solution was Binary Large Objects (BLOBs), in which an unstructured data entity is stored within a relational database as an uninterrupted chunk of data. More unified models were then used as a solution, incorporating information recognition, process management, text management, version control and metadata. Even so, firms realised that they needed new solutions to deal with the quantity and variety of information. This is referred to as the web and unstructured content stage.
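Storing an unstructured object as a BLOB inside a relational table, as described above, can be sketched with sqlite3 (the payload is arbitrary bytes standing in for an image):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (name TEXT, payload BLOB)")

# Arbitrary binary content standing in for an image or audio clip.
image_bytes = bytes(range(256))
conn.execute("INSERT INTO assets VALUES (?, ?)", ("logo.png", image_bytes))

# The database treats the payload as one opaque, uninterrupted chunk.
stored = conn.execute("SELECT payload FROM assets WHERE name = ?",
                      ("logo.png",)).fetchone()[0]
print(len(stored))  # 256 -- the chunk comes back byte-for-byte
```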
Stage 5:
Big data is the latest increment in data management: for the first time, storage and computing cycles reached a tipping point. Data can now be virtualised, which allows it to be stored cost-effectively in the cloud. Cheap computer memory and improved network speeds give Amazon an edge here (Linthicum, 2017). This is denoted as the big data stage.
To reach the required end state, technologies are converging, which further helps Amazon, or any other organisation, to gain actionable insight and generate higher business value. This gives them adequate storage space, the capability to manage data, and adequate time to carry out analysis.
Theme 2: What are the challenges faced by Amazon in data management?
The secondary information illustrated above was gathered from distinct sources in the literature review, and it has been identified that Amazon faced certain challenges with reference to data management.
Illustration 2: Challenges with Data Management

Interpretation: It can be interpreted from the chart above that the major challenge faced by an organisation like Amazon is the speed of data analysis, since instant results are needed to deal with different situations. Producing real-time analysis is especially difficult, and it requires careful consideration of the data architecture and its flow (Merla and Liang, 2017). The next aspect is data integration, which involves bringing together diverse technology initiatives across the firm. For this, Amazon can gather
data within a single repository, and this is crucial for an artificial intelligence initiative, as it allows models to operate over the full available dataset. An example can illustrate the concept: 64-bit WinRAR does not work on 32-bit Windows because of compatibility issues. The same goes for data: a PDF document cannot be opened in, or integrated with, a Word document unless it is converted into an appropriate format. Thus, integration is one of the critical challenges and has a negative impact on the way the firm conducts its operations (Mukherjee, 2019). It slows down processing, and Amazon may even be unable to access some documents, which hampers its functionality.
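The format-conversion point can be sketched in plain Python: two hypothetical sources emit the same kind of record in different formats, and integration means normalising both into one structure before any analysis runs.

```python
import csv
import io
import json

# Two hypothetical sources describing the same inventory in different formats.
csv_source = "sku,qty\nA1,3\nB2,5\n"
json_source = '[{"sku": "C3", "qty": 7}]'

# Integration step: convert both sources into one common structure.
records = [{"sku": r["sku"], "qty": int(r["qty"])}
           for r in csv.DictReader(io.StringIO(csv_source))]
records += [{"sku": r["sku"], "qty": int(r["qty"])} for r in json.loads(json_source)]

# Only now can a single analysis run over all three records together.
print(records)
```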
Furthermore, filling the skills gap is a critical requirement, so that the firm can deal with information adequately by adopting the latest data management technologies. Amazon can use automation to deal with complexity and routine tasks. Nearly 90% of organisations have automated some aspects of their data management processes, covering front-end steps associated with filtering and back-end steps related to analysis (TRENDS IN DATA MANAGEMENT, 2020). Architecture is not regarded as a major challenge, as organisations have various options for addressing it, such as AWS, Azure or any other cloud vendor. The network, however, is in continuous flux, because new data sources keep being tied in and traffic has to be prioritised. Within data management, speed of access is a major challenge, as it determines how quickly Amazon can formulate decisions or carry out its normal functions at a particular instant or for a particular requirement.
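The front-end filtering and back-end analysis steps mentioned above can be sketched as a small automated pipeline (field names and values are hypothetical):

```python
# Hedged sketch of an automated pipeline: a front-end filter step followed
# by a back-end analysis step. All records here are made-up examples.
from statistics import mean

raw_events = [
    {"region": "eu", "latency_ms": 120},
    {"region": "us", "latency_ms": 95},
    {"region": "eu", "latency_ms": None},   # malformed record to filter out
    {"region": "eu", "latency_ms": 80},
]

# Front-end step: filter out records that cannot be analysed.
clean = [e for e in raw_events if e["latency_ms"] is not None]

# Back-end step: per-region analysis over the filtered data.
by_region = {}
for e in clean:
    by_region.setdefault(e["region"], []).append(e["latency_ms"])
report = {region: mean(values) for region, values in by_region.items()}
print(report)
```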
Theme 3: What benefits does Amazon gain through the usage of AWS (cloud computing) in data management?
As per the data collected for the literature review, it has been interpreted that by making use of AWS an organisation can gain substantial benefits. AWS is furnished by Amazon to provide enhanced capabilities for dealing with data in a secure way, so that knowledge can be extracted from it precisely.
Illustration 3: Benefits of AWS

Interpretation: From the figure above, it can be interpreted that the AWS cloud platform offers substantial benefits for business, including ease of use and of maintaining IT infrastructure, optimising operational cost through the pay-per-use model. AWS provides Amazon with mobile-friendly access through the Mobile Hub, which supports access to compatible application features; the console can be used to develop, test and monitor apps, and the Mobile SDK supports iOS, Android and the web, through which different AWS services can be accessed. AWS is a user-friendly platform, giving Amazon precise documentation and a convenient console. IAM is one of the crucial services, through which administrators can manage access (Benefits of AWS, 2020). AWS provides a secure, standardised infrastructure in which users pay only for the functionality they use, and it offers tools comprising software and hardware encryption, protection from DDoS attacks, filtering of harmful traffic, and transport layer security certificates.

AWS also provides Amazon with extensive storage that can be utilised either in combination or independently. Distinct storage options enable Amazon to deliver adequate services precisely (Mukherjee and Kar, 2017): EBS, Amazon Glacier, Elastic File System, S3 and storage transport devices (such as Snowmobile and Snowball). The pay-per-use model means users pay only for the resources they consume, giving firms metered functionality. Multi-region backups indicate the regions in which users
can keep copies of their data; this enables a firm like Amazon to launch EC2 instances through which users' applications can be protected. Scalability and reliability: Amazon provides an infrastructure that scales with utilisation, and if users scale down the instances they use, the cost falls accordingly. Scalability is an asset for firms like Amazon, as they do not need additional resources if they run out of storage space (Pérez and et. al, 2019). Databases: Amazon offers different database services that allow activities to be managed precisely, such as DynamoDB, Aurora, Neptune, QLDB and many others.
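The pay-per-use model described above reduces to metered arithmetic, as the hedged sketch below shows; the rates are illustrative placeholders, not real AWS prices.

```python
# Hedged sketch of metered, pay-per-use billing. The rates are ILLUSTRATIVE
# placeholders only -- real AWS pricing varies by service and region.
RATES = {
    "storage_gb_month": 0.02,   # assumed $ per GB-month
    "instance_hour": 0.10,      # assumed $ per instance-hour
}

def monthly_cost(storage_gb: float, instance_hours: float) -> float:
    """Bill only what was actually consumed during the month."""
    return round(storage_gb * RATES["storage_gb_month"]
                 + instance_hours * RATES["instance_hour"], 2)

# 500 GB stored, one instance running the whole month (~720 hours).
print(monthly_cost(500, 720))
```

The key property is that a firm consuming nothing in a month is billed nothing, which is the contrast with owning fixed on-premises capacity.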
In addition to all this, AWS gives Amazon options for managing and monitoring different tasks, including tracking of resources, cloud resource configuration and application health. Infrastructure can be automated, and records of user activity can be retained appropriately. These are the benefits rendered by AWS, giving organisations better ways to carry out their functions adequately.

Theme 4: How does AWS help in business expansion through effective utilisation of data management?
From the secondary information collected from an ample number of secondary resources, it has been identified that by making use of AWS, firms can expand their operations effectively (Pouyanfar and et. al, 2018). The reason is that AWS provides organisations with different options through which firms like Amazon can amplify their services appropriately.
Illustration 4: Amazon Web Services (AWS)

Interpretation: From the figure above, it can be interpreted that AWS is an apt option for delivering services through online mediums because of its global infrastructure. For firms like Amazon, or any other firm that serves global markets and wants to enter new ones, this is an apt option. In addition, AWS charges per utilisation, billing the exact amount for the resources used. When the business scales, capacity is available automatically, yet the firm still pays only for what it uses at a particular instant, and scaling can be increased or decreased depending on users' needs (The Definitive Benefits of AWS Cloud Adoption, 2020). This gives enhanced flexibility in business functions. Agility and speed are also on offer: Amazon can provision server space within minutes, using Elastic Load Balancing, Auto Scaling, ElastiCache and Redshift (a high-performance data warehouse), while storage performance can be enhanced by using AWS EBS, Glacier and the Simple Storage Service.
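The scale-up/scale-down behaviour described above can be sketched as simple threshold logic (the thresholds and bounds are assumed values; a real AWS Auto Scaling policy is configured in the service rather than hand-coded like this):

```python
# Hedged sketch of the scale-out / scale-in decision an autoscaler makes.
# Thresholds, bounds and instance counts are assumed values for illustration.

def desired_instances(current: int, cpu_percent: float,
                      scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                      minimum: int = 1, maximum: int = 10) -> int:
    """Add capacity under load, remove it when idle, within fixed bounds."""
    if cpu_percent > scale_out_at:
        current += 1          # busy: pay for one more instance
    elif cpu_percent < scale_in_at:
        current -= 1          # idle: stop paying for one instance
    return max(minimum, min(maximum, current))

print(desired_instances(3, 85.0))  # load high -> scale out to 4
print(desired_instances(3, 10.0))  # load low  -> scale in to 2
```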
AWS provides enterprises with strong security and compliance protocols, which is why companies such as Dow Jones and HealthCare.gov have opted for it. It can be regarded as the best option for Amazon to expand its operations or services, as the stack has expanded well beyond storage and compute (Pramod and et. al, 2020). AWS gives organisations access to robotics, AR/VR (augmented reality and virtual
reality), IoT and blockchain solutions. The tools and techniques offered by AWS comprise developer tools, engagement and management tools, database and storage solutions, app integration, compute, business productivity tools, predictive analytics and machine learning (Sarkar and Shah, 2018).

The compute services offered by AWS include EC2, Elastic Beanstalk, Lightsail, AWS Lambda and Elastic Container Service for Kubernetes (EKS). For migration, in which data is transferred between AWS and a datacentre, the options comprise Server Migration Service (SMS), Snowball and Database Migration Service (DMS). For storage, AWS provides EBS (Elastic Block Store), Glacier and Storage Gateway. For security, AWS furnishes IAM, Inspector, Web Application Firewall, Cloud Directory and many more. Similarly, by opting for AWS, firms can use database services, analytics, management tools (Config, Service Catalog, OpsWorks, etc.), Internet of Things services (IoT Core, FreeRTOS, etc.), application services (SNS, SWF, SQS, Elastic Transcoder, etc.), deployment and management services (CloudTrail, CloudWatch and CloudHSM), developer tools (CodeBuild, CodeDeploy, CodeCommit, CodeStar, etc.), mobile services (Cognito, AWS AppSync, Device Farm, etc.), business productivity tools (Chime, WorkMail, WorkDocs and Alexa for Business), and many more aspects furnished by AWS. This helps firms achieve enhanced levels of security and an adequate way to deliver their services precisely (Rafique, 2019).
Summary
This chapter has interpreted the different themes associated with the topic of the advent of AWS in data management. It has been determined that cloud computing acts as an asset for organisations, enabling them to deliver their services precisely. AWS gives organisations substantial benefits, making it easy for firms to expand their operations without any hassle in the operations associated with the integration, migration or security of their information held in the cloud. In addition, security, agility, scalability, accessibility, speed and many more features are available through which the overall functions of a firm can be amplified without issues related to data management.

The next chapter provides a concluding note on the entire dissertation, and some recommendations are also given with respect to future work.
Chapter 5
Conclusion and Recommendation
Introduction
This dissertation gives an insight into the data management challenges faced by organisations while delivering their services in the market. Data management refers to the practice of managing data as a valuable resource, unlocking its potential for the organisation. For this, firms need adequate strategies and methods for accessing, integrating, cleaning, governing, storing and preparing data for analytics.
Overview of dissertation
From the above, it can be concluded that data management refers to the comprehensive aggregation of concepts, practices, processes and procedures, along with accompanying systems, that helps an organisation keep control over its data resources. The concept of data management has evolved considerably: initially punch cards were used to store data, but it was difficult to search them for any particular piece of information. As issues emerged with each solution, new approaches were brought in to handle them adequately. With the competition that prevails in the market, data management has become a critical aspect, as the amount of data keeps increasing and becomes difficult to analyse. Thus, this is a major challenge, and if it is not handled adequately it can have a negative impact on the overall functioning of the firm.

The dissertation clearly illustrates the evolution of data management, from early programming languages (FORTRAN and LISP) to online data management (SQL and NoSQL), data warehousing and beyond. This evolution is transforming the world by avoiding conflicts among data, and it is probable that data will eventually be navigated to its destination automatically, without human interference.

While dealing with data, however, there are certain challenges. One is the lack of workforce and technologies: with advancements in technology, it becomes difficult to acquire human resources with adequate knowledge of a particular technology, and the investment in training is very high because of the restricted number of experts in the field. Another challenge experienced by Amazon is alteration in the
format as it is not necessary that the format of the data is supported by respective storage
medium. Analysis, integration and volume are some other challenges which are being
experienced by firms by delivering their services within the market. To sustain within the market
it is important that these challenges are handled in an adequate manner so that market
requirements as well as decisions related with the same can be framed in a precise way.
AWS (Amazon Web Services) refers to a platform that offers reliable, scalable, flexible, cost-effective and easy-to-use cloud computing solutions. It is offered by Amazon for dealing with the challenges faced by them and others while delivering their services within the market. The platform is developed as an aggregation of PaaS, SaaS and IaaS. AWS provides firms with enhanced features comprising scalability, reliability, multi-region backup, security features, a broad range of tools and techniques, artificial intelligence, AR/VR solutions and many more functionalities, through which a firm can address the challenges associated with data management. This also enables them to ensure that they can furnish their services in a precise manner without any kind of hassle in their ways of working. In addition to this, it further aids them in expanding their operations in a precise manner. For example, Amazon Aurora is a database that can be utilised within e-commerce and ERP applications which run as online services. Depending upon their needs, firms can scale their usage of applications up or down.
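The scale-up/scale-down behaviour described above can be sketched as a simple capacity-selection rule. This is an illustrative sketch only: the capacity steps follow the Aurora Serverless capacity-unit (ACU) ladder, but the function `next_capacity` and the 70%/30% utilisation thresholds are hypothetical tuning choices for illustration, not AWS defaults or a real AWS API.

```python
# Illustrative sketch of step-wise capacity scaling, as a firm might reason
# about Aurora Serverless usage. ACU_STEPS mirrors the Aurora Serverless
# capacity-unit ladder; the thresholds below are assumed values, not AWS's.

ACU_STEPS = [1, 2, 4, 8, 16, 32, 64, 128, 256]

def next_capacity(current_acu: int, cpu_utilisation: float) -> int:
    """Return the capacity step to move to, given the current load."""
    i = ACU_STEPS.index(current_acu)
    if cpu_utilisation > 0.70 and i < len(ACU_STEPS) - 1:
        return ACU_STEPS[i + 1]   # busy: scale up one step
    if cpu_utilisation < 0.30 and i > 0:
        return ACU_STEPS[i - 1]   # idle: scale down one step
    return current_acu            # within band: hold capacity
```

For instance, a cluster at 4 ACUs under 90% CPU load would step up to 8 ACUs, while the same cluster at 10% load would step down to 2. In practice the scaling decision is made by the AWS platform itself rather than by customer code; the sketch only makes the pay-for-what-you-use idea concrete.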
Recommendation
It is recommended that a firm like Amazon can gain enhanced benefits by making use of AWS, but while opting for this particular platform it becomes important to address its limitations. Though it makes use of a pay-per-use model, the charges associated with the technical support packages, offered at Developer, Business and Enterprise tiers, are high. Furthermore, there is a high learning curve, which means there have to be experts who can furnish precise information with respect to AWS, so that all aspects can be clearly understood and accordingly utilised without discrepancies.
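The pay-per-use model mentioned above can be made concrete with a small cost-estimation sketch. The rates used here are illustrative placeholders, not current AWS prices (which vary by service and region), and the function `monthly_cost` is a hypothetical helper, not part of any AWS tool.

```python
# Hypothetical pay-per-use cost estimate. The rates below are assumed
# placeholder values for illustration only, NOT real AWS pricing.

RATE_PER_COMPUTE_HOUR = 0.10   # assumed $/compute-hour (hypothetical)
RATE_PER_GB_MONTH = 0.023      # assumed $/GB-month of storage (hypothetical)

def monthly_cost(compute_hours: float, storage_gb: float) -> float:
    """Pay-per-use: cost scales with actual usage, with no upfront fee."""
    return round(compute_hours * RATE_PER_COMPUTE_HOUR
                 + storage_gb * RATE_PER_GB_MONTH, 2)
```

Under these assumed rates, 100 compute-hours plus 100 GB of storage would cost about $12.30 for the month, and idle months cost nothing; this is the property that makes the model attractive, even though support-plan fees sit on top of it.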
Summary
This chapter renders a concluding note on the challenges experienced by organisations before the advent of cloud computing. Data management was carried out in different ways and, whenever problems or issues were identified, new ways were
determined through which they could be resolved. It is crucial for firms to opt for the latest technologies, as this enables them to deal with the issues they are facing and to have an enhanced solution for the same. AWS is an evolving technology that furnishes a wide range of tools and techniques through which firms can enhance their capabilities by opting for the relevant method.
References
Books & Journals
Agarwal, P. and Alam, M., 2020. Open Service Platforms for IoT. In Internet of Things
(IoT) (pp. 43-59). Springer, Cham.
Baldwin, P.R. and et. al, 2018. Big data in cryoEM: automated collection, processing and
accessibility of EM data. Current opinion in microbiology, 43, pp.1-8.
Beck, M., Hao, W. and Campan, A., 2017, January. Accelerating the mobile cloud: Using
amazon mobile analytics and k-means clustering. In 2017 IEEE 7th Annual Computing
and Communication Workshop and Conference (CCWC) (pp. 1-7). IEEE.
Caesarendra, W. and et. al, 2018. An AWS machine learning-based indirect monitoring method
for deburring in aerospace industries towards industry 4.0. Applied Sciences, 8(11),
p.2165.
Chang, V.I., 2020. A proposed framework for cloud computing adoption. In Sustainable
Business: Concepts, Methodologies, Tools, and Applications (pp. 978-1003). IGI Global.
Cianfrocco, M.A. and et. al, 2018. cryoem-cloud-tools: A software platform to deploy and
manage cryo-EM jobs in the cloud. Journal of structural biology, 203(3), pp.230-235.
da Silva, F.S. and Nascimento, M.H.R., 2020. Major Challenges Facing Cloud
Migration. ITEGAM-JETIA, 6(21), pp.59-65.
Demirbas, M., 2020. The Advent of Tightly Synchronized Clocks in Distributed Systems.
Divate, R., Sah, S. and Singh, M., 2018. High performance computing and big data. In Guide to
big data applications (pp. 125-147). Springer, Cham.
Doukoure, G.A.K. and Mnkandla, E., 2018, August. Facilitating the Management of Agile and
Devops Activities: Implementation of a Data Consolidator. In 2018 International
Conference on Advances in Big Data, Computing and Data Communication Systems
(icABCD) (pp. 1-6). IEEE.
Hurwitz, J.S. and Kirsch, D., 2020. Cloud computing for dummies. John Wiley & Sons.
Jaiswal, J.K., 2018. Cloud Computing for Big Data Analytics Projects.
Kaoudi, Z., Manolescu, I. and Zampetakis, S., 2020. Cloud-Based RDF Data
Management. Synthesis Lectures on Data Management, 15(1), pp.1-103.
Kareem, M., 2018. Prevention of SQL Injection Attacks using AWS WAF.
Karim, A. and et. al, 2020. Big data management in participatory sensing: Issues, trends and
future directions. Future Generation Computer Systems, 107, pp.942-955.
Khan, S., Shakil, K.A. and Alam, M., 2018. Cloud-based big data analytics—a survey of current
research and future directions. In Big data analytics (pp. 595-604). Springer, Singapore.
Kodali, R.K. and John, J., 2020, February. Smart Monitoring of Solar Panels Using AWS.
In 2020 International Conference on Power Electronics & IoT Applications in
Renewable Energy and its Control (PARC) (pp. 422-427). IEEE.
Li, Y. and et. al, 2020. Big Data and Cloud Computing. In Manual of Digital Earth (pp. 325-
355). Springer, Singapore.
Linthicum, D.S., 2017. Cloud computing changes data integration forever: What's needed right
now. IEEE Cloud Computing, 4(3), pp.50-53.
Mbelli, T.M., 2019, August. Computational Secure ORAM (COMP SE-ORAM) with Ω(log n)
Overhead: Amazon S3 Case Study–Random Access Location. In 2019 IEEE
Cloud Summit (pp. 99-102). IEEE.
Merla, P. and Liang, Y., 2017, December. Data analysis using hadoop MapReduce environment.
In 2017 IEEE International Conference on Big Data (Big Data) (pp. 4783-4785). IEEE.
Mukherjee, R. and Kar, P., 2017, January. A comparative review of data warehousing ETL tools
with new trends and industry insight. In 2017 IEEE 7th International Advance
Computing Conference (IACC) (pp. 943-948). IEEE.
Mukherjee, S., 2019. Benefits of AWS in Modern Cloud. Available at SSRN 3415956.
Nagpure, S. and et. al, 2019, April. Data Leakage Agent Detection in Cloud Computing. In 2nd
International Conference on Advances in Science & Technology (ICAST).
Pérez, A. and et. al, 2019, July. On-premises Serverless Computing for Event-Driven Data
Processing Applications. In 2019 IEEE 12th International Conference on Cloud
Computing (CLOUD) (pp. 414-421). IEEE.
Pouyanfar, S. and et. al, 2018. Multimedia big data analytics: A survey. ACM Computing
Surveys (CSUR), 51(1), pp.1-34.
Pramod, K. and et. al, 2020. IMPLEMENTATION OF CHATBOT USING AWS AND
GUPSHUP API. Scientific and practical cyber security journal.
Rafique, A., 2019. Middleware for Data Management in Multi-Cloud.
Roe, R., 2020. Making the case for cloud: ROBERT ROE CONSIDERS THE LATEST CLOUD
AND SAAS TECHNOLOGY, AND THE BENEFITS IT CAN PROVIDE TO
LABORATORIES WITH TODAY'S WORKFLOWS AND AI
INITIATIVES. Scientific Computing World, 170(170), pp.10-13.
Roh, Y., Heo, G. and Whang, S.E., 2019. A survey on data collection for machine learning: a big
data-ai integration perspective. IEEE Transactions on Knowledge and Data
Engineering.
Sanchez, M., Beak, W. and Saxena, M., Cognitive Scale Inc, 2019. Hybrid data architecture
having a cognitive data management module for use within a cognitive environment.
U.S. Patent 10,438,122.
Sanchez, M., Beak, W. and Saxena, M., Cognitive Scale Inc, 2019. Method for using hybrid
data architecture having a cognitive data management module within a cognitive
environment. U.S. Patent 10,354,190.
Sarkar, A. and Shah, A., 2018. Learning AWS: Design, build, and deploy responsive
applications using AWS Cloud components. Packt Publishing Ltd.
SGIT, I., 2019. Exploring Apache Spark Data APIs for Water Big Data Management. Advanced
Intelligent Systems for Sustainable Development (AI2SD’2018): Vol 3: Advanced
Intelligent Systems Applied to Environment, 913, p.105.
Shakil, K. and et. al, 2018, May. Exploiting data reduction principles in cloud-based data
management for cryo-image data. In Proceedings of the 2018 International conference
on computers in management and business (pp. 61-66).
Siddiqa, A., Karim, A. and Gani, A., 2017. Big data storage technologies: a survey. Frontiers of
Information Technology & Electronic Engineering, 18(8), pp.1040-1070.
Smith, A. and et. al, 2019. Astronomy should be in the clouds. Bull. Am. Astron. Soc., 51, p.55.
Stergiou, C.L. and et. al, 2020. Secure Machine Learning scenario from Big Data in Cloud
Computing via Internet of Things network. In Handbook of Computer Networks and
Cyber Security (pp. 525-554). Springer, Cham.
Sudmanns, M. and et. al, 2019. Big Earth data: disruptive changes in Earth observation data
management and analysis?. International Journal of Digital Earth, pp.1-19.
Sun, A.Y. and Scanlon, B.R., 2019. How can Big Data and machine learning benefit
environment and water management: a survey of methods, applications, and future
directions. Environmental Research Letters, 14(7), p.073001.
Taori, P. and Dasararaju, H.K., 2019. Big Data Management. In Essentials of Business
Analytics (pp. 71-109). Springer, Cham.
Tomar, P. and Kaur, G. eds., 2017. Examining cloud computing technologies through the
internet of things. IGI Global.
Vijayakumar, T., 2018. Practical API Architecture and Development with Azure and AWS:
Design and Implementation of APIs for the Cloud. Apress.
Wadia, Y. and Gupta, U., 2017. Mastering AWS Lambda. Packt Publishing Ltd.
Wang, Z. and et. al, 2020. An empirical study on business analytics affordances enhancing the
management of cloud computing data security. International Journal of Information
Management, 50, pp.387-394.
Wijaya, T. and et. al, 2018. An AWS machine learning-based indirect monitoring method for
deburring in aerospace industries towards industry 4.0.
Wilson, E., 2018. A custom data management schema for single-cell RNA-sequencing
experiments.
Wingerath, W., Ritter, N. and Gessert, F., 2019. Real-Time & Stream Data Management: Push-
Based Data in Research & Practice. Springer.
Xu, Q. and et. al, 2019. A New Model of Cotton Yield Estimation Based on AWS. In Cloud
Computing, Smart Grid and Innovative Frontiers in Telecommunications (pp. 468-484).
Springer, Cham.
Zambrano, B., 2018. Serverless Design Patterns and Best Practices: Build, secure, and deploy
enterprise ready serverless applications with AWS to improve developer productivity.
Packt Publishing Ltd.
Online
The Continuing Evolution of Data Management. 2019. [Online]. Available through:
<https://www.eckerson.com/articles/the-continuing-evolution-of-data-management>.
What is Amazon Web Services and Why Should You Consider it?. 2020. [Online]. Available
through: <https://www.netsolutions.com/insights/what-is-amazon-cloud-its-advantages-
and-why-should-you-consider-it/>.
THE EVALUATION OF DATA MANAGEMENT CONCEPTS. 2018. [Online]. Available
through: <http://www.contemplatingdata.com/2018/01/09/evaluation-data-management-
concepts/>.
TRENDS IN DATA MANAGEMENT. 2020. [Online]. Available through:
<https://www.comptia.org/content/research/data-management-trends-survey>.
Benefits of AWS. 2020. [Online]. Available through: <https://www.educba.com/benefits-of-
aws/>.
The Definitive Benefits of AWS Cloud Adoption. 2020. [Online]. Available through:
<https://www.romexsoft.com/blog/benefits-of-aws-cloud/>.