Understanding the Data Link Layer: Functions and Importance in Network Communication
Table of Contents
Data Link Layer
Transport Layer
Cloud Computing
References
Data Link Layer
The data link layer offers a service interface for moving data between the physical layer and the layers above it. It is the second layer of the OSI model, which has seven layers: physical, data link, network, transport, session, presentation and application. Together these layers describe an idealised set of network communication protocols.
Information is encoded into frames at the data link layer before it is transmitted over the network, and decoded again on arrival. The layer also defines how devices recover when two stations send frames at the same time and a collision occurs. It has two sublayers: the logical link control (LLC) sublayer and the media access control (MAC) sublayer. The role of the LLC sublayer is to control the flow of data so that errors can be detected and acknowledged.
The main functions of the data link layer are to keep data flowing smoothly between sending and receiving devices, to handle problems that occur during the transmission of information, and to address frames so that data packets are delivered to the correct device on the local link.
Once the connection is set up, it is the responsibility of the data link layer to divide packets into data frames so that each frame can be handled and acknowledged individually. Incoming data is analysed by checking specific bits, and if an error is found the data link layer notifies the higher-level protocol. The layer also manages the flow of information between devices so that congestion can be controlled. It is one of the more complicated layers of the OSI model, and it hides the details of the data frames so that the chance of collisions is reduced (Ma, Huang, Wen, Green & Ho‐Baillie, 2016).
It is responsible for converting the data stream into signals that can be sent over the underlying hardware, and for converting received information into a format that can be passed to the upper layers. Data held in electrical form is converted so that the connection between hosts is maintained reliably.
The major functions of the data link layer are discussed below. The first is data framing: the layer takes a packet from the network layer, encapsulates it into a frame and sends it over the network bit by bit; at the receiving end the signals are reassembled into frames (Zhao, Sexton, Park, Baure, Nino & So, 2015). The other function is
addressing, which makes sure that each piece of hardware is identified by a unique address; the data frames sent on the network are then synchronised between sender and receiver. Another function is flow control, which balances transmission between networks or devices of different speed and capacity. The layer also offers multi-access capability so that a shared medium can be used by multiple systems.
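To make framing and addressing concrete, the short Python sketch below encapsulates a network-layer packet behind a header carrying destination and source hardware addresses and a length field. The frame layout is purely illustrative and does not follow any particular standard.

```python
import struct

# Hypothetical Ethernet-like layout: 6-byte destination address,
# 6-byte source address, 2-byte payload length, then the payload itself.
FRAME_HEADER = struct.Struct("!6s6sH")

def frame_packet(dst_mac: bytes, src_mac: bytes, packet: bytes) -> bytes:
    """Encapsulate a network-layer packet into a link-layer frame."""
    return FRAME_HEADER.pack(dst_mac, src_mac, len(packet)) + packet

def unframe(frame: bytes):
    """Recover the addresses and the original packet at the receiving end."""
    dst, src, length = FRAME_HEADER.unpack_from(frame)
    return dst, src, frame[FRAME_HEADER.size:FRAME_HEADER.size + length]

frame = frame_packet(b"\xaa\xbb\xcc\xdd\xee\xff",
                     b"\x11\x22\x33\x44\x55\x66",
                     b"IP packet from the network layer")
print(unframe(frame))
```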
The main task of the data link layer is to transfer the raw data and to add a checksum so that errors can be detected. The data is broken into frames which are sent sequentially so that a reliable connection is maintained (Olivieri et al., 2016). The receiver sends back an acknowledgment frame to confirm that the information was received correctly.
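A minimal sketch of this checksum-and-acknowledgment idea is given below; the frame layout and the CRC-32 trailer are illustrative assumptions rather than part of any specific link protocol.

```python
import struct
import zlib

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Header (addresses + length), payload, then a CRC-32 trailer."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, len(payload))
    return header + payload + struct.pack("!I", zlib.crc32(header + payload))

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC-32 over everything but the trailer and compare."""
    body, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(body) == struct.unpack("!I", trailer)[0]

def receive(frame: bytes) -> str:
    # A real link layer would transmit an acknowledgment frame back to the
    # sender; here we simply report the decision.
    return "ACK" if check_frame(frame) else "NAK (discard frame)"

good = build_frame(b"\xaa" * 6, b"\x11" * 6, b"network-layer packet")
bad = good[:-1] + bytes([good[-1] ^ 0xFF])   # corrupt the trailer
print(receive(good), "/", receive(bad))      # -> ACK / NAK (discard frame)
```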
The data link layer also faces several design issues. One is keeping transmission fast over the network, because a slow receiver limits the rate at which data can be sent; a traffic regulation mechanism is therefore needed so that transmission is buffered and errors are handled carefully. Another issue is broadcasting, which requires control over a shared channel; the data link layer deals with this through its medium access rules.
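The sketch below illustrates one way such traffic regulation can work, assuming a simple sliding window in which the sender never has more than a fixed number of unacknowledged frames outstanding; the window size and the simulated acknowledgments are illustrative only.

```python
from collections import deque

def send_with_window(frames, window=2):
    """Illustrative sliding-window flow control: a slow receiver paces a
    fast sender because at most `window` frames may be unacknowledged."""
    outstanding = deque()
    for seq, frame in enumerate(frames):
        if len(outstanding) == window:
            # The window is full, so the sender must wait for an ACK.
            # In this sketch we simply pretend the oldest frame is now ACKed.
            print(f"window full, waiting ... ACK {outstanding.popleft()}")
        print(f"send frame {seq} ({len(frame)} bytes)")
        outstanding.append(seq)
    while outstanding:
        print(f"ACK {outstanding.popleft()}")   # drain the remaining ACKs

send_with_window([b"payload"] * 5, window=2)
```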
In framing, the layer establishes a point-to-point connection between computers and sends the data as a stream of bits organised into frames (Xu, Li, Li, Zhang & Muntean, 2015).
The data link layer supports communication between devices on the same network or, through the higher layers, on different networks. It helps to keep information on the link from being exposed to unauthorised users, and it offers reliable communication because data is encoded and access to the medium is controlled. Its main functions here are physical addressing and access control, which make sure that a safe transfer takes place. The gap between the transmission of data packets is checked so that synchronisation is maintained, and bits are buffered so that a smooth transaction takes place (Xu, Li, Li, Zhang & Muntean, 2015).
The data link layer adds the physical addresses of both the source and the destination machine. Several issues arise during framing: one is detecting the start of a frame so that the receiver can be alerted, and another is detecting the end of a frame, for which an ending delimiter is used (Lopacinski, Nolte, Buechner, Brzozowski & Kraemer, 2015). The starting delimiter also needs to be identified so that a sequential pattern can be maintained while delivering the information.
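One common way of marking the start and end of a frame is to use a reserved flag byte and to escape any occurrence of that byte inside the payload. The sketch below shows this byte-stuffing idea; the flag and escape values are chosen purely for illustration.

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative delimiter and escape bytes

def stuff(payload: bytes) -> bytes:
    """Delimit a frame with FLAG bytes, escaping FLAG/ESC inside the payload."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape and transform the byte
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and undo the escaping at the receiver."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)

data = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert unstuff(stuff(data)) == data
print(stuff(data).hex())
```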
The data link layer sits directly above the physical layer, which transmits the raw bits over the network. It carries complete information about the data, such as the size of the packet, its source address and its destination address. It offers error-handling capacity by detecting faults so that integrity is maintained, it provides transfer across the physical link, and it offers local delivery of frames from one node to another. Some of the design issues of the data link layer are the services provided to the network layer, framing, error control and flow control. The major functions of the layer are dealing with transmission errors, regulating data flow so that slow receivers are not swamped by fast senders, and offering a well-defined service interface to the network layer while taking care of transmission errors. Its responsibility is to pack the data by encapsulating it into frames, to keep the data synchronised during transfer, and to ensure flow control while exchanging information at high speed.
Transport Layer
In the Open Systems Interconnection (OSI) model, the transport layer is responsible for end-to-end communication over an interconnected network. It provides logical communication between application processes running on different hosts, using a set of protocols and various components across the network.
It also helps in the rectification of errors and thus delivers reliability and quality to the end user. With the help of the transport layer, a host can send or receive error-corrected data, and network components can support multiplexing. In the OSI model, the transport layer is considered to be the fourth layer of the network structure (Agyapong et al., 2014).
The transport layer also distinguishes between applications running on the same machine or computer device. Its goal is to give the end user cost-effective and reliable service, working transparently for the layers above it to deliver and receive data without errors. On the sending side, application messages are broken into segments and passed to the network layer; on the receiving side, the segments are reassembled into messages and passed up to the application layer. One of the major advantages of TCP is that it uses positive acknowledgement with retransmission, in which the receiving device must
confirm to the sender that it actually received the data that was sent (Rathnayaka & Potdar, 2013). If this acknowledgment is not received, the sender assumes that the receiving device did not receive part or all of the transmission and retransmits it. The transport layer also offers major services such as the following:
- In-order data delivery with no loss of packets, supported by a checksum attribute.
- Connection-oriented communication, offered through protocols such as the Transmission Control Protocol (TCP); connectionless transfer is offered by the User Datagram Protocol (UDP).
- Congestion avoidance, which regulates traffic over the telecommunication network and thus allows data to be delivered to the appropriate place on the host.
- Multiplexing of data, including adding the source and destination port numbers to the segment header so that the correct process on each host can be identified.
- Byte orientation: some applications prefer to receive byte streams instead of packets, so the transport layer enables transmission of byte-oriented data streams where necessary.
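The positive acknowledgement with retransmission behaviour described above can be sketched as follows; the lossy channel is simulated, and the retry limit and loss probability are arbitrary assumptions.

```python
import random

def unreliable_channel(segment):
    """Hypothetical lossy channel: delivers the segment only 70% of the time."""
    return segment if random.random() < 0.7 else None

def send_with_retransmission(segment, max_tries=5):
    """Positive acknowledgement with retransmission (stop-and-wait style)."""
    for attempt in range(1, max_tries + 1):
        delivered = unreliable_channel(segment)
        if delivered is not None:               # the receiver got it and ACKs
            print(f"attempt {attempt}: ACK received")
            return True
        print(f"attempt {attempt}: timeout, retransmitting")
    return False

send_with_retransmission(b"segment 0")
```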
The two key protocols used at the transport layer are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), which offer consistent communication among hosts. The transport layer provides end-to-end communication that helps deliver data without any error (Gringeri, Bitar & Xia, 2013).
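The port numbers that enable multiplexing live in the transport-layer header itself. As a small illustration, the fixed 8-byte UDP header can be decoded with a few lines of Python; the sample segment below is made up.

```python
import struct

def parse_udp_header(segment: bytes):
    """Decode the fixed 8-byte UDP header: ports, length and checksum."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum, "payload": segment[8:]}

# Made-up segment: source port 50000, destination port 53, total length 12.
segment = struct.pack("!HHHH", 50000, 53, 12, 0) + b"abcd"
print(parse_udp_header(segment))
```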
The transport layer in networking is based on a layered architecture model and provides host-to-host communication services for applications. A logical communication channel is managed through this layer, which hides protocol issues from the applications. Between the source machine and the destination machine, the transport layer provides both connectionless and connection-oriented transmission (Sun et al., 2011).
An Advantage client can communicate with the Advantage Database Server using either the datagram or the streaming paradigm. In datagram communication the protocols used are UDP/IP, which do not guarantee reliable data delivery; this requires sophisticated communication algorithms to be written and tuned for use with Advantage. This proprietary datagram transport layer ensures that packet data is delivered
successfully and that the packets are kept in sequence between the Advantage Database Server and the Advantage client. Instead of sending one packet at a time, the Advantage datagram transport layer can also send bursts of data packets. The limit is around 512 bytes of data per single IPX packet and approximately 1450 bytes of data per single IP packet. Moreover, Advantage can deliver up to 16 packets in a single burst, so a burst of IPX packets can carry about 8K bytes of data whereas a burst of UDP/IP packets can carry about 23K bytes.
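A quick calculation, using the per-packet figures quoted above, shows where the 8K and 23K burst sizes come from.

```python
# Burst capacity implied by the figures above (the per-packet limits are the
# approximate values quoted in the text, not exact protocol constants).
IPX_PACKET, IP_PACKET, PACKETS_PER_BURST = 512, 1450, 16

print("IPX burst:", IPX_PACKET * PACKETS_PER_BURST, "bytes")    # 8192  (~8K)
print("UDP/IP burst:", IP_PACKET * PACKETS_PER_BURST, "bytes")  # 23200 (~23K)
```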
Ten table records are the most common chunk of data sent between the Advantage Database Server and the Advantage client. For such a transfer a single burst of datagram packets is generally sufficient, whereas with streaming communication the record transfer takes many round trips because roughly every 2900 bytes of data must be acknowledged. In most situations, better performance can therefore be expected from Advantage datagram communication than from the streaming communication protocol, since it significantly reduces the number of acknowledgment packets required.
There are some major differences between the transport layer and the lower layers. The transport layer should be oriented more towards user service rather than simply reflecting what the underlying layers happen to provide, and it may have to overcome service deficiencies of the lower layers. Transport-level protocols go through three phases: establishing, using and terminating a connection. At the transport layer, end-to-end retransmission is needed, which can waste resources by delivering the same packet over the same links multiple times. When the network becomes congested, transport protocols reduce the rate at which they insert packets into the subnet, since the subnet has no other way to prevent itself from becoming overloaded.
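The three phases of a transport-level connection can be seen directly in a TCP socket: establish, use, terminate. The host name in the sketch below is a placeholder and would need to be replaced by a reachable server.

```python
import socket

# "example.invalid" is a placeholder host, not a real server.
HOST, PORT = "example.invalid", 80

with socket.create_connection((HOST, PORT), timeout=5) as conn:      # 1. establish
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.invalid\r\n\r\n")  # 2. use
    reply = conn.recv(4096)
    print(reply.decode(errors="replace"))
# 3. terminate: leaving the `with` block closes the connection cleanly.
```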
For reliable communication, the packet data is sequenced and merged by the Advantage database communication layer, and if packets within a burst are never received, because of a network failure or the like, the missing packets are simply resent. When streaming communication is used, the TCP/IP protocol delivers this functionality for Advantage. The transport layer is the first layer that always resides in the end DTEs (data terminal equipment). It uses the services of the network layer and shields the upper layers from the details of the network connections and the kinds of network used (Zhang & Zhang, 2008).
Cloud Computing
Cloud computing technology has become popular in the past few decades with the
popularity of smartphones and the internet which allow users and developers to use this
technology to their advantage. Cloud computing refers to a type of computing that relies on a pool of shared computing resources rather than on local servers or personal devices dedicated to handling different applications (Xu, 2012). This technology has a huge demand in both industry and education. It is a paradigm of information technology which allows corporations and individuals to ubiquitously access their data and applications through shared pools of configurable resources. Parties must have access to the internet in order to use this technology. In this technology, a group of network elements is dedicated to providing different types of services. Parties who are using cloud
computing technology can store their relevant information on “the cloud” which they can
later access remotely through their computer system (Hashem et al., 2015). This technology
allows users to store files and applications on remote servers, and they can access their data
via the internet. Cloud computing is not a single piece of technology, and it has a number of
elements through which it operates effectively.
There are three types of clouds: public, private and community clouds. The public cloud is open for access to everyone rather than just the owner or the customer; in this model, providers charge their customers for access to and use of the cloud. The private cloud is owned by a single party and is implemented and used in a secure environment (Dinh, Lee, Niyato & Wang, 2013). It is also called an internal cloud and can only be accessed by parties who have authorised permission. The community cloud is accessible to a limited group or community of individuals; it is not necessarily owned by a particular community, and it can be accessed by third-party developers for performing different operations. Furthermore, cloud computing services are categorised into three groups which parties can choose from in order to fulfil their business or personal requirements (Xu, 2012): software as a service (SaaS), infrastructure as a service (IaaS) and platform as a service (PaaS).
SaaS focuses on licensing the software applications of cloud computing to customers, who pay for accessing these services. This is a rapidly growing market with key players leading the way such as Amazon Web Services, Microsoft Azure, IBM Cloud, Google Cloud Platform, Adobe and others. IaaS refers to infrastructure delivered on demand, covering everything from operating systems to storage through IP-based connectivity (Garrison, Kim & Wakefield, 2012). Generally, clients who want to use this technology have the option to outsource this service or access it on demand rather than investing in their own servers. PaaS is considered the most complex of the three layers of cloud-based computing. Although PaaS shares many similarities with SaaS, the service delivered is not the finished software itself; instead, parties access a platform on which they can build and run their own applications (Hashizume, Rosado, Fernandez-Medina & Fernandez, 2013). The market for this service was expected to grow strongly through 2020. Customers can choose among these three service models the one which suits their demands and allows them to improve their performance.
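To illustrate the on-demand character of IaaS, the sketch below requests a virtual machine from a purely hypothetical REST endpoint; the URL, request fields and token are invented, and real providers expose their own, provider-specific APIs.

```python
import json
import urllib.request

# Hypothetical IaaS endpoint and payload -- illustrative only, not a real API.
request = urllib.request.Request(
    "https://api.example-cloud.test/v1/instances",
    data=json.dumps({"image": "ubuntu-22.04", "size": "small",
                     "region": "eu-1"}).encode(),
    headers={"Authorization": "Bearer <token>",
             "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:   # provision on demand
    print(json.load(response))                       # e.g. the new instance id
```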
There has been a substantial rise in the use of cloud-based software which is offered
by companies in all sectors, and there are many benefits and challenges relating to the use of
cloud computing technology. One of the key advantages of using this technology in
companies is cost saving. This technology does not require corporations to invest in on-premise infrastructure, which reduces their operational costs (Aleem & Ryan Sprott, 2012). Moreover, customers only pay for the services they actually use, which does not put pressure on their budget. Organisations can easily use services such as Amazon Web Services or Microsoft Azure, which are substantially cheaper than establishing on-premise infrastructure; on this basis the use of cloud computing provides a cost advantage to companies. This technology is extremely affordable for smaller
businesses. Another benefit of using this technology is reliability since the public clouds are
handled by large corporations that have the resources to operate under significant traffic and
protect the data of users from cyber-attacks.
The Service Level Agreement (SLA) which is constructed between the customers and cloud service providers (CSPs) provides that the customers will be able to access their data 24/7, and it typically guarantees 99.99 percent availability (Avram, 2014). Manageability is another key advantage of cloud computing technology which makes processes efficient for corporations. The companies which use cloud computing technology from CSPs do not
have to invest in the resources for managing the cloud, which makes it easier for them to manage their data. They do not have to hire IT experts or invest in security software to ensure that their data is protected from unauthorised access. The use of cloud computing also provides a strategic edge to companies, allowing them to sustain themselves even in adverse market conditions (Garrison, Kim & Wakefield, 2012). However, there are many challenges of using cloud computing technology as well. Downtime is a major issue which affects all CSPs and creates challenges for clients.
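A short calculation shows what an availability guarantee such as the 99.99 percent figure mentioned above actually allows in terms of downtime.

```python
# Downtime permitted per year under a given availability guarantee.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.999, 0.9999):
    allowed = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.2%} availability -> "
          f"about {allowed:.0f} minutes of downtime per year")
```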
The CSPs can be overwhelmed by a large number of clients, due to which they face technical outages and their operations are temporarily suspended. Another issue with storing confidential data on the cloud is that it becomes vulnerable to cyber-attacks, through which the private data of companies can be accessed by cyber criminals (Tiwari & Mishra, 2012). Vendor lock-in is another issue with cloud computing; although the charges of CSPs are relatively low in the beginning, they can raise their prices in the future, and it becomes difficult for corporations to transfer their data to another CSP, leaving them facing the challenge of vendor lock-in. Lastly, the use of cloud computing limits the control of corporations over their data and operations unless they are using a private cloud, which requires substantial
investment in the technology (Rao & Selvamani, 2015). Therefore, corporations should
use cloud computing technology carefully in order to gain a competitive advantage; however, they should also assess the risks and avoid the associated challenges so that they can sustain their future growth.
References
Agyapong, P. K., Iwamura, M., Staehle, D., Kiess, W., & Benjebbour, A. (2014). Design
considerations for a 5G network architecture. IEEE Communications
Magazine, 52(11), 65-75.
Aleem, A., & Ryan Sprott, C. (2012). Let me in the cloud: analysis of the benefit and risk
assessment of cloud platform. Journal of Financial Crime, 20(1), 6-24.
Avram, M. G. (2014). Advantages and challenges of adopting cloud computing from an
enterprise perspective. Procedia Technology, 12, 529-534.
Dinh, H. T., Lee, C., Niyato, D., & Wang, P. (2013). A survey of mobile cloud computing:
architecture, applications, and approaches. Wireless communications and mobile
computing, 13(18), 1587-1611.
Garrison, G., Kim, S., & Wakefield, R. L. (2012). Success factors for deploying cloud
computing. Communications of the ACM, 55(9), 62-68.
Gringeri, S., Bitar, N., & Xia, T. J. (2013). Extending software defined network principles to
include optical transport. IEEE Communications Magazine, 51(3), 32-40.
Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Khan, S. U. (2015).
The rise of “big data” on cloud computing: Review and open research
issues. Information systems, 47, 98-115.
Hashizume, K., Rosado, D. G., Fernández-Medina, E., & Fernandez, E. B. (2013). An
analysis of security issues for cloud computing. Journal of internet services and
applications, 4(1), 5.
Lopacinski, L., Nolte, J., Buechner, S., Brzozowski, M., & Kraemer, R. (2015). 100 Gbps
wireless–data link layer VHDL implementation. Measurement Automation
Monitoring, 61.
Ma, Q., Huang, S., Wen, X., Green, M. A., & Ho‐Baillie, A. W. (2016). Hole transport layer
free inorganic CsPbIBr2 perovskite solar cell by dual source thermal
evaporation. Advanced Energy Materials, 6(7), 1502202.
Olivieri, D., Cristini, F., Monteduro, G., Pariscenti, L., Calabretta, M., Dell'Oro, R., &
Murru, F. A. (2016). U.S. Patent No. 9,378,252. Washington, DC: U.S. Patent and
Trademark Office.
Rao, R. V., & Selvamani, K. (2015). Data security challenges and its solutions in cloud
computing. Procedia Computer Science, 48, 204-209.
Rathnayaka, A. D., & Potdar, V. M. (2013). Wireless sensor network transport protocol: A
critical review. Journal of Network and Computer Applications, 36(1), 134-146.
Sun, Y., Seo, J. H., Takacs, C. J., Seifter, J., & Heeger, A. J. (2011). Inverted polymer solar
cells integrated with a low‐temperature‐annealed sol‐gel‐derived ZnO film as an
electron transport layer. Advanced Materials, 23(14), 1679-1683.
Tiwari, P. K., & Mishra, B. (2012). Cloud computing security issues, challenges and
solution. International journal of emerging technology and advanced
engineering, 2(8), 306-310.
Xu, C., Li, Z., Li, J., Zhang, H., & Muntean, G. M. (2015). Cross-layer fairness-driven
concurrent multipath video delivery over heterogeneous wireless networks. IEEE
Transactions on Circuits and Systems for Video Technology, 25(7), 1175-1189.
Xu, X. (2012). From cloud computing to cloud manufacturing. Robotics and computer-
integrated manufacturing, 28(1), 75-86.
Zhang, Q., & Zhang, Y. Q. (2008). Cross-layer design for QoS support in multihop wireless
networks. Proceedings of the IEEE, 96(1), 64-76.
Zhao, D., Sexton, M., Park, H. Y., Baure, G., Nino, J. C., & So, F. (2015). High‐Efficiency
Solution‐Processed Planar Perovskite Solar Cells with a Polymer Hole Transport
Layer. Advanced Energy Materials, 5(6), 1401855.