Energy Optimization and Management in Cloud Computing


Added on  2022/11/29


Energy Optimization and Management in
Cloud Computing
Full Name
Student ID
Submitted for unit NIT6042 (Thesis 2)
Date

Abstract
Cloud computing supports scalable resource utilization and virtualization, and offers
various services including IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and
SaaS (Software as a Service). In spite of its tremendous advantages, cloud computing still has
certain shortcomings. For instance, both public and private clouds have relatively high
operating costs, which is a major disadvantage. Apart from that, the era of green computing
essentially mandates the careful utilization of limited energy resources. Meanwhile, the
increasing number of data centers demands more computational power, which in turn raises
the demand for low-energy, low-cost solutions. The primary benefit of cloud computing
services is that they offer constant access and availability to multiple services, including
networking, data storage and computational power. However, as the cloud computing
paradigm has grown popular across the world, the amount of power consumed has also
increased dramatically.
Keywords: cloud computing, energy efficiency, energy optimization, cloud data center,
energy consumption
Table of Contents
Abstract......................................................................................................................................2
1. Introduction............................................................................................................................5
1.1 Background of the Study..................................................................................................5
1.2 Research Aim...................................................................................................................6
1.3 Research Objectives.........................................................................................................6
1.4 Research Questions..........................................................................................................7
1.5 Problem Statement...........................................................................................................7
1.6 Rationale of the Study......................................................................................................7
1.7 Structure of the Thesis.....................................................................................................8
2. Literature Review.................................................................................................................10
2.1 Energy Consumption Problem in the Cloud..................................................................11
2.2 Cloud Energy Aware Scheduling...................................................................................12
2.3 Data Center Network Topologies...................................................................13
2.4 Cloud Data Centers........................................................................................15
2.5 Adoption of energy efficient solutions...........................................................18
2.6 Cloud Federation............................................................................................19
3. Research Methodology.........................................................................................................22
3.1 Research Problem...........................................................................................................22
3.2 Research Questions........................................................................................23
3.3 Time Horizon.................................................................................................................23
4. Experiments and Results......................................................................................................25
Document Page
4.1 Presentation of the results..............................................................................................25
4.1.1 The CloudSim Simulator.........................................................................................25
4.1.2 The CloudSim architecture.....................................................................................26
4.1.3 The SoS (System of Systems) functional architecture............................................28
4.2 System Model.................................................................................................................29
4.3 Simulation tool...............................................................................................................29
4.4 Discussion......................................................................................................................32
5. Conclusion............................................................................................................................35
5.1 Limitation of the Study..................................................................................................35
5.2 Further Scope of the Study.............................................................................36
References................................................................................................................................36

1. Introduction
With the tremendous growth on the technological front, from server virtualization
to multi-tenant cloud models, the increasing number of cloud data centers is consuming an
enormous amount of power. Cloud computing essentially solves some serious problems, such
as the elimination of chronic costs, management inefficiencies and dismal CPU utilization
levels. Large server farms across the world are rapidly deploying cloud strategies to get rid of
the burden of maintaining physical servers (Heller et al., 2015). However, the growing
number of data centers and virtual machines (VMs) is creating major business data traffic
on the network. As a result, the dramatic increase in power consumption is leading to several
environmental as well as operational problems (higher power costs). In addition, there
are serious concerns about the ability of power grids to support the escalating energy
consumption requirements along with the elastic demand of the cloud environment.
1.1 Background of the Study
The energy consumption of cloud data centers, increasing at an exponential rate, is
becoming a huge challenge for cloud service providers. Cloud users are no longer tied
to physical infrastructure, as their applications, software and data are all accessed
through virtual services. Thus, the rapid migration of application platforms onto cloud
environments is resulting in an increased demand for resources by the data centers. The
severe heat dissipated by densely integrated servers and the associated cloud computing
equipment boosts the emission of harmful greenhouse gases and increases the carbon
footprint (Jalali et al., 2016). In addition, the high degree of power consumption has also
been linked to phenomena such as floods, droughts and abnormal rises in temperature.
However, energy savings can compromise the performance of the computing servers.
According to Boru et al. (2015), performance loss due to energy savings is a major and
recurrent issue, which needs to be addressed. In this context, an energy efficient model is
required for optimizing and managing energy consumption in the cloud environment. There
are several existing studies in this area, and the present study aims at unveiling the primary
aspects related to the optimization, conservation and management of cloud data center energy.
1.2 Research Aim
The study aims to investigate the energy efficient approaches and techniques
deployed in the cloud in order to address the exponential increase in power consumption by
data centers managing the enormous amount of data traffic on the Internet. A significant
number of research works have been undertaken in this area, examining ways to reduce the
power consumption of cloud data center servers. Several works have addressed virtual
machine placement (VMP), which mainly concerns the performance and energy aspects.
Thus, the primary focus of this study is to examine and study the approaches towards energy
optimization and management in cloud computing.
1.3 Research Objectives
The objectives for the present study are formulated below:
 To identify the major challenges of cloud energy consumption
 To investigate the impact of energy consumption in the cloud
 To understand the various approaches and mechanisms employed for
optimizing and managing energy consumption in cloud computing
 To propose recommendations from secondary sources to address the energy
consumption issue

1.4 Research Questions
The research questions for the present study are formulated from the research
objectives as identified in the above section:
 What are the major challenges of cloud energy consumption?
 What is the impact of energy consumption in the cloud?
 What are the various approaches and mechanisms employed for optimizing,
and managing energy consumption in cloud computing?
 What solutions are there to address the energy consumption issue?
1.5 Problem Statement
With the growing demand for cloud services across the globe, the growth of data
centers has accelerated, which in turn has led to the problem of massive energy consumption.
Virtualized data centers and the low utilization of physical servers are the prime reasons for
energy inefficiency. According to Dayarathna, Wen and Fan (2016), the issue of high power
consumption can be divided into two broad aspects: the data center and the processing area.
Due to technological evolution, the wider and more frequent use of computers has largely
expanded the scope of data centers. As a result, the number of servers is increasing
exponentially day by day, resulting in overwhelming power consumption. The low level of
server utilization also leads to the wastage of energy.
1.6 Rationale of the Study
Cloud service providers need to deal with the major challenge of managing and
optimizing the energy consumption of virtual machines and data centers in the cloud
environment. The challenges in this area encompass operational challenges as well as energy
consumption measurement and management challenges. In this context, several mechanisms
and approaches have been discussed in previous research works to deal with the real issue of
massive power consumption by cloud services (Hintemann and Clausen, 2016). Today, the
world is changing at a fast pace and people are heavily reliant on the Internet and the
advanced technologies associated with it. In this situation, the use of virtualized cloud
platforms is becoming quite necessary for everyday business. Thus, it is high time to carry
out an in-depth analysis of energy consumption management and optimization in order to
enable effective and efficient industrial development and thereby lead humankind towards
the ideal age of information technology.
1.7 Structure of the Thesis
The paper is divided into different individual sections or chapters, each of which
properly focuses on the different aspects and factors of the study in a detailed manner.
Therefore, the content of the thesis will be segregated based on these chapters.
Introduction: The introduction section describes the basic aims, objectives and
purpose of undertaking the study on the effectiveness of energy optimization and
management of cloud computing data centers.
Literature review: The literature review section will focus on the basic concepts and
theories on the particular area of study. In order to serve this purpose, various literature
sources will be collected in order to analyze and gather useful and relevant data and
information regarding the need and mechanisms of energy management and optimization in
the cloud data centers.
Methodology: The methodology section will describe the basic requirement of
selecting a method and the chosen research approach to be able to conduct this study in an
appropriate manner so that a successful outcome of the research can be achieved.
Experiments & results: Following the methodology as chosen in the previous
chapter, this chapter will explain the results and findings from executing the research. To be
more precise, the method will be followed and the derived outcomes will be analyzed and
discussed in this particular section.
Conclusion: The conclusion section will briefly summarize the entire research and
how it has been carried out. In other words, this section will present an overall view of the
thorough study, including the discussion of the results and findings. Apart from that, it will
look into the limitations of the present study as well as the future scope of extending the
research work.

2. Literature Review
The literature review section focuses on the different concepts and ideas related to
cloud computing and its energy consumption behavior. In addition, it looks into the different
energy optimization and management techniques with the aim of identifying the most
effective and appropriate solutions. Several past research works in this particular area have
uncovered some truly interesting mechanisms to solve the problem of massive energy
consumption by data centers in cloud computing. These studies are discussed in this section
as follows:
According to Rong et al. (2016), the primary benefit of cloud computing services is
that they offer constant access and availability to multiple services, including networking,
data storage and computational power. However, since the cloud computing paradigm has
grown popular across the world, the amount of power consumed has also increased
dramatically. In this context, Silva et al. (2017) suggest that cloud companies have
implemented some fine mechanisms to deal with energy costs and reduce the carbon
footprint. However, these approaches towards energy saving may lead to a loss of overall
cloud performance. Therefore, it is necessary to look for solutions that do not affect the
performance metrics and generate equal or greater profits. Mastelic et al. (2015) argue that
this may be possible through scheduling based on dynamic speed scaling, which in turn may
use machine learning and load prediction algorithms.
On the other hand, Jalali et al. (2016) identified virtual machine consolidation as a
process that comprises four individual decision-making activities: selecting virtual machines
for migration, detecting physical node overload, detecting physical node under-load, and
selecting the target node for each migrated virtual machine. Similarly, Hintemann and
Clausen (2016) have proposed separate adaptive heuristic algorithms that apply statistical
analysis to estimate the CPU utilization overload threshold based on the historical data of
virtual machines. Rong et al. (2016) have employed the CloudSim framework for virtual
machine consolidation.
2.1 Energy Consumption Problem in the Cloud
There are two main techniques for saving the energy consumed by data center
hardware and servers: DVFS (dynamic voltage and frequency scaling) and shutting down
idle servers. In dynamic voltage and frequency scaling, energy is typically saved by adjusting
the operating clock in order to scale down the supply voltage, based on an adaptive
algorithm. The DVFS approach effectively reduces power consumption even though it
depends on the hardware components to execute the scaling job. On the other hand, shutting
down servers that are not in use can conserve a greater amount of energy; however, in a
highly dynamic environment, turning systems off can create high overhead, which in turn
leads to strong performance degradation. Hence, energy optimization techniques are highly
useful for calculating the exact amount of power or energy consumed by the different
components associated with cloud data centers.
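The DVFS saving described above can be illustrated with the classic CMOS dynamic power model; this is a minimal sketch of our own, and the capacitance, voltage and frequency values are hypothetical rather than figures from the thesis:

```python
# Illustrative sketch of DVFS savings using the CMOS dynamic power model
# P_dyn = C * V^2 * f. Because voltage can be lowered together with
# frequency, scaling down yields more than proportional power savings.
# All constants below are assumed for illustration only.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic switching power in watts: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

C = 1e-9                                 # effective switched capacitance (F), assumed
full = dynamic_power(C, 1.2, 3.0e9)      # 1.2 V at 3.0 GHz
scaled = dynamic_power(C, 0.9, 1.5e9)    # 0.9 V at 1.5 GHz after DVFS

savings = 1 - scaled / full
print(f"power at full speed: {full:.2f} W")
print(f"power when scaled:   {scaled:.2f} W")
print(f"relative saving:     {savings:.0%}")
```

Halving the frequency alone would halve the power; combined with the voltage reduction, the saving here is roughly 72%, which is why DVFS is attractive despite its dependence on hardware support.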
According to Dayarathna, Wen and Fan (2016), it is high time for industrial and
governmental institutions to address the explosive increase in the amount of energy
consumed by cloud data centers. Hence, the development of highly energy efficient methods
and techniques is a pressing issue in this area of cloud computing. However, according to
Benkhelifa et al. (2015), advancements in the development of energy efficient cloud-based
solutions are highly dependent on multiple key technologies. In this context, Hintemann and
Clausen (2016) have suggested that the platform level demands energy efficient mediums,
whereas the hypervisor level requires energy efficient scheduling algorithms along with
storage and memory systems and resource management policies. Finally, the machine level
demands energy efficient communication, scheduling and applications.
In addition, the mobile phone market is growing rapidly and, at the same time,
mobile cloud computing is increasingly becoming a significant technology, supported by
mobile services. Hence, in order to overcome the potential issues related to battery life, the
integration of cloud computing and mobile computing is the newly emerging solution. In this
context, Heller et al. (2015) have suggested that in the era of wireless communication and
green networking, limited resource management and cloud-based mobile applications are to
be considered properly. Different components of the data center, such as memory, CPU and
hard disks, have the largest impact on the energy consumption of the physical servers. More
specifically, Deng et al. (2016) have noted that DVFS is a technique widely used for energy
saving purposes. Azad and Navimipour (2017) have proposed multiple algorithms to shut
down less loaded servers by transferring their load to other running servers.
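The idea of shutting down less loaded servers by transferring their load can be sketched as a simple greedy first-fit procedure; this is an illustrative toy of our own, not the algorithm proposed by Azad and Navimipour (2017), and the load fractions are hypothetical:

```python
# Toy greedy consolidation: repeatedly take the least-loaded server, move
# its load onto spare capacity elsewhere (first-fit), and power it down
# once it is empty. Loads are fractions of a server's capacity.

def consolidate(loads, capacity=1.0):
    """Return (loads of servers still active, number of servers shut down)."""
    active = sorted(loads, reverse=True)        # heaviest servers first
    shut_down = 0
    progress = True
    while progress and len(active) > 1:
        progress = False
        remaining = active.pop()                # least-loaded server's load
        for i in range(len(active)):            # first-fit over the rest
            moved = min(capacity - active[i], remaining)
            active[i] += moved
            remaining -= moved
            if remaining <= 1e-12:
                break
        if remaining <= 1e-12:
            shut_down += 1                      # fully migrated: switch off
            progress = True
        else:
            active.append(remaining)            # could not empty this server

    return active, shut_down

active, powered_off = consolidate([0.5, 0.3, 0.2])
print(active, powered_off)   # one server carries the whole load, 2 are off
```

The greedy choice of draining the lightest server first keeps the number of migrations small, which matters in practice because each migration itself costs energy and time.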
2.2 Cloud Energy Aware Scheduling
Jalali et al. (2016) have proposed different approaches targeted at autonomic, energy
aware and dynamic scheduling mechanisms, which can easily adapt to varying infrastructures
and varying types of workloads. These are discussed as follows:
Schedule solving: Linear functions of factors such as power cost, SLA terms and
revenue are fed to an Integer Linear Program (ILP) solver. Since ILP is an NP-complete
problem, a variety of heuristic search algorithms are used to find a near-optimal solution.
Mathematical model of the data center: In this case, the data center can be modeled
as a grid with a set of resources, each associated with a specific power consumption cost,
along with a set of jobs. Each job carries a profit, resource requirements and execution
penalties. At each scheduling round, it must be decided which resources are assigned to
which job. This decision depends on the conditions and requirements established by the
Service Level Agreement (SLA). The schedule can be represented as a binary H x J matrix,
where H and J index the sets of hosts and jobs respectively, and position [h, j] denotes
whether job j is placed on host h. The basic assumption in this model is that a job runs
entirely on a single host and is never split across hosts. Moreover, each job requires a
specific amount of resources to run, including memory space, CPU quota and input and
output access, as predicted by the heuristic functions.
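The binary H x J matrix model above can be sketched as follows; the constraint checker and all job and host figures are illustrative assumptions, not taken from the cited works:

```python
# Sketch of the binary scheduling matrix: rows are hosts, columns are jobs,
# matrix[h][j] == 1 means job j runs on host h. The helper checks the two
# core constraints of the model: every job is placed on exactly one host
# (jobs are never split), and no host exceeds its CPU capacity.
# All numbers are hypothetical.

def valid_schedule(matrix, job_cpu, host_cpu):
    hosts, jobs = len(matrix), len(matrix[0])
    # each job must be assigned to exactly one host
    for j in range(jobs):
        if sum(matrix[h][j] for h in range(hosts)) != 1:
            return False
    # per-host CPU demand must fit within that host's capacity
    for h in range(hosts):
        demand = sum(job_cpu[j] for j in range(jobs) if matrix[h][j])
        if demand > host_cpu[h]:
            return False
    return True

schedule = [[1, 1, 0],    # host 0 runs jobs 0 and 1
            [0, 0, 1]]    # host 1 runs job 2
print(valid_schedule(schedule, job_cpu=[2, 3, 4], host_cpu=[6, 4]))  # True
```

An ILP solver or heuristic search would explore matrices of this form, keeping only those that pass such feasibility checks while optimizing power cost and SLA revenue.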
Consolidation methodology: Power consumption typically grows sub-linearly with
resource usage, which makes consolidation an effective way of saving energy. For instance,
an Intel Xeon machine with four CPUs is reported to consume about 235 W when all CPUs
are idle, around 267.8 W with one active CPU, and approximately 317.9 W with two or more
active CPUs. This implies that running two single-processor jobs on two separate machines
consumes considerably more energy than consolidating both jobs onto one machine and
shutting the second one down.
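Using the indicative wattage figures quoted above, the consolidation argument can be checked with simple arithmetic:

```python
# Comparison based on the indicative figures in the text: 235 W idle,
# 267.8 W with one active CPU, 317.9 W with two active CPUs on a
# four-CPU Xeon server. Two single-CPU jobs on two machines are compared
# against both jobs consolidated onto one machine with the other switched off.

P_IDLE, P_ONE_CPU, P_TWO_CPU = 235.0, 267.8, 317.9  # watts

two_machines = 2 * P_ONE_CPU          # each machine runs one job
consolidated = P_TWO_CPU              # one machine runs both, other is off

print(f"two machines, one job each: {two_machines:.1f} W")
print(f"one machine, both jobs:     {consolidated:.1f} W")
print(f"saved by consolidation:     {two_machines - consolidated:.1f} W")
```

Because the second CPU adds only about 50 W while a whole second machine adds over 260 W, consolidation saves more than 200 W in this example, illustrating the sub-linear power curve.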
2.3 Data Center Network Topologies
The different types of data center networks and related topologies are discussed in this
section.
Fat tree topology: The fat tree topology customizes the amount of internal blocking:
it exploits traffic locality while keeping the option of non-blocking operation. A fat tree
organizes processors in a typical on-chip network based on a binary tree. According to
Kansal and Chana (2016), fat tree packet routing typically requires 2log(n) space for the
destination address.
Figure 2.1: Fat tree topology
(Source: Li et al. 2017)
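For context, the sizing of the standard k-ary fat tree (after Al-Fares et al., assumed here as general background rather than taken from the thesis) can be computed directly: with k-port switches there are k pods, (k/2)^2 core switches and k^3/4 hosts:

```python
# Sizing of a standard k-ary fat tree built from identical k-port switches:
# k pods, each with k/2 edge and k/2 aggregation switches, (k/2)^2 core
# switches, and k^3/4 hosts in total.

def fat_tree_sizes(k):
    assert k % 2 == 0, "k must be even"
    edge = agg = k * (k // 2)        # k pods times k/2 switches per layer
    core = (k // 2) ** 2
    hosts = k ** 3 // 4
    return {"edge": edge, "agg": agg, "core": core, "hosts": hosts}

print(fat_tree_sizes(4))   # {'edge': 8, 'agg': 8, 'core': 4, 'hosts': 16}
```

The appeal for data centers is that host count grows cubically in k while every switch stays a cheap commodity device, which is exactly the property the Google fat tree design below exploits.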
Google fat tree topology: The Google fat tree topology targets scalable, huge data
centers built from interconnected commodity Ethernet switches. Such topologies can be
built with cheap devices of uniform capacity, where each port supports the same speed as an
end host. In addition, the devices are capable of transmitting at line speed when individual
packets are distributed uniformly across the available paths. These tree-based network
topologies are the most dominant topologies in cloud data centers. In this context, Mastelic
et al. (2015) point out that the tree-based topologies deployed in operational DCs (data
centers) can be either multi-rooted (fat tree) or single-rooted.
2.4 Cloud Data Centers
Information system executives are constantly evaluating major technologies in order
to help their organizations become more efficient, flexible and agile. According to Rong et
al. (2016), the data center is the heart of a company. More specifically, the expenditure
associated with data center technologies is constantly growing. A recent study in this area
indicates that around 33% of managers across different organizations plan to increase their
data center budgets on a yearly basis. In the modern age of cloud computing, it is all the
more essential to deploy the right data center optimization techniques. Moreover, Reguri,
Kogatam and Moh (2016) suggest that managers across different organizations are also
gradually expressing interest in deploying high levels of virtualization.
In this context, Silva et al. (2017) have said that cloud-based solutions can
effectively allow companies to expand the cloud infrastructure into a properly distributed
system. A growing number of companies are expressing interest in working with big data,
which naturally leads more administrators to look for ways to outsource their IT
environment into a cloud (hybrid or public) environment in order to establish better control
over resources. In addition, the comparatively newer technologies applied in the cloud
environment are more focused on optimizing the speed, availability and flexibility of
applications. Organizations are now focusing more on reducing energy costs by increasing
the energy efficiency of their data centers (optimization of layout, cooling air flow and
cabinet densities) in order to realize greater savings of power and energy.

Idle vs. active low power modes: Energy efficiency in the cloud environment is
typically associated with two types of device operating modes: active and idle. To be more
precise, a low-power active state enables a device to continue executing useful work at an
effectively reduced rate, whereas an idle state indicates a state in which the device is not
performing any tasks. Therefore, You, Huang and Chae (2016) suggest that the process of
consolidating physical machines into virtual ones in the cloud infrastructure has a profound
impact on the number of active physical servers. Hence, by reducing the number of active
servers, Tang et al. (2016) note, the overall consumption of energy can be potentially
reduced.
Communication networks: Communication systems in the cloud computing
environment have typically evolved from the concepts of the grid and cluster computing
paradigms. They are designed for executing large, computationally intensive jobs, an
approach also typically referred to as high performance computing (HPC). In order to
achieve energy efficiency through virtual machines, the energy management functions
require direct action on the hardware resources, so that energy is shared amongst two or
more virtual machines on the same hardware. Apart from that, specific energy management
operations are dedicated to improving the utilization of the hardware resources. Another
strategy is to use mechanisms for managing hardware energy consumption.
Load balancing: One of the most effective methods to avoid excessive energy
consumption is to put idle network hardware and computing servers into sleep mode. In this
context, Zhou et al. (2016) suggest load balancing as one of the key enablers of energy
savings. However, power mode changes introduce delays, and a sleeping server cannot be
woken up instantaneously. Thus, the pool of sleeping or idle servers should be available as
and when needed, able to accommodate incoming workloads at short notice while preventing
QoS (Quality of Service) degradation. In this context, Li et al. (2016) point out that data
centers are focused on offering the best level of service, as defined in their SLAs (Service
Level Agreements), even at peak workloads.
As a result, they are inclined towards over-provisioning communication and
computing resources. A fully functioning data center typically runs at only around 30%
utilization on average. According to Gai et al. (2016), data center load is strongly correlated
with time and region, since a greater number of users are active during the daytime. Yet even
when idle, servers consume more than two thirds of their peak energy consumption.
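The over-provisioning point above can be illustrated with a back-of-the-envelope linear power model; the 300 W peak figure is a hypothetical value, while the two-thirds idle draw and the 30% utilization come from the text:

```python
# Back-of-the-envelope sketch: if an idle server still draws about two
# thirds of its peak power and the data center averages roughly 30%
# utilization, most of the energy is spent on servers doing little work.
# The peak wattage is an assumed value for illustration.

PEAK = 300.0            # watts per server at full load, assumed
IDLE = 2 / 3 * PEAK     # idle draw, per the text

def power_at(utilization):
    """Linear power model interpolating between idle and peak draw."""
    return IDLE + (PEAK - IDLE) * utilization

avg_power = power_at(0.30)
print(f"draw at 30% load: {avg_power:.0f} W of {PEAK:.0f} W peak")
print(f"that is {avg_power / PEAK:.0%} of peak power for 30% of the work")
```

Roughly 77% of peak power for 30% of the useful work is precisely the inefficiency that sleep modes and consolidation are meant to attack.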
Scheduling mechanism based on data mining: Scheduling refers to allocating
resources to users at a particular point of time, synchronized with the requirements of those
users. The key to a highly efficient data center is the scheduling mechanism, which in turn is
associated with rational resource allocation and the effective and efficient completion of
each service request. In this regard, the idle resources meeting the users' requirements are
divided into a specific number of clusters, in accordance with the structure of the data center
network, with the help of a clustering algorithm.
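The clustering step described above can be sketched very simply; the server records, the grouping key and the cluster-selection criterion below are all illustrative assumptions rather than details from the cited scheduling mechanism:

```python
# Toy sketch of clustering-based scheduling: idle servers that satisfy a
# request's resource needs are grouped by their network segment, and the
# controller then picks the best-fitting cluster (here: the largest one).

from collections import defaultdict

idle_servers = [
    {"id": "s1", "segment": "rack-A", "cpu": 8,  "mem_gb": 32},
    {"id": "s2", "segment": "rack-A", "cpu": 16, "mem_gb": 64},
    {"id": "s3", "segment": "rack-B", "cpu": 8,  "mem_gb": 16},
]

def cluster_idle(servers, need_cpu, need_mem):
    """Group the idle servers that can satisfy the request by segment."""
    clusters = defaultdict(list)
    for s in servers:
        if s["cpu"] >= need_cpu and s["mem_gb"] >= need_mem:
            clusters[s["segment"]].append(s)
    return clusters

def pick_cluster(clusters):
    # toy criterion: the cluster with the most eligible servers
    return max(clusters, key=lambda k: len(clusters[k]))

clusters = cluster_idle(idle_servers, need_cpu=8, need_mem=32)
print(pick_cluster(clusters))   # prints "rack-A" (two eligible servers)
```

A real resource information controller would refresh this classification continuously and weigh network locality and energy cost, but the structure of the decision is the same.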
The task can be completed by the cloud controller across multiple clusters, where an
optimal cluster is chosen based on the other constraints in order to accomplish the task. The
main job of the resource information controller is to collect and record the current
classification of the idle servers. This is aimed at improving the effectiveness and efficiency
of task scheduling, and it also reduces the service provider's costs. Moreover, according to
Benkhelifa et al. (2015), the reasons for such a massive increase in data center energy
consumption essentially include the rising processing demands of big data. Cloud computing
together with big data is thus largely responsible for forcing data centers to process an
explosive amount of information.
According to Boru et al. (2015), the optimization of resources essentially revolves
around several aspects and factors found within the data center environment, ranging from
power distribution management to air flow control. Truly efficient data centers typically
collaborate with physical infrastructure providers offering powerful and long-lasting
solutions. Initial requests are accepted by the energy efficient scheduling scheme. According
to Azad and Navimipour (2017), a data center's virtual servers are typically placed
depending on the precise location restrictions stated in the SLA (Service Level Agreement);
the constraints include affinity, security and migration constraints. The energy efficient
scheduling of periodic real-time jobs is done on multi-core processors with voltage islands,
where the cores are divided into one or more blocks (known as voltage islands) (Fioccola et
al. 2017).
2.5 Adoption of energy efficient solutions
Large cloud firms tend to adopt effective energy efficient practices in their data
centers. However, they account for only 5 percent of the total energy consumption across the
globe; the remaining 95 percent results from medium and small firms. In order to achieve
energy efficiency through virtual machines, the energy management functions require direct
action on the hardware resources, so that energy is shared amongst two or more virtual
machines on the same hardware. Apart from that, specific energy management operations
are dedicated to improving the utilization of the hardware resources. Another strategy is to
use mechanisms for managing hardware energy consumption. However, these have certain
drawbacks and limitations. Other components of the overall cloud infrastructure ought
likewise to be considered for energy-aware applications, with energy consumption treated as
a part of the cost functions to be applied. A few research works have built energy-efficient
cloud components individually, but building an energy-efficient cloud model does not mean
just energy-efficient host machines. In this section, we explore the areas of a typical cloud
setup that are responsible for a considerable amount of power dissipation, and we consolidate
the possible approaches to fix these issues.
2.5 Cloud Federation
A typical Cloud Federation or a Federated Cloud is composed of multiple Cloud
Service Providers or CSPs, that are individually useful, active and independent. Some of the
Cloud Federation constituent candidates are the Google Cloud Platform, Amazon Web
Service, and IBM Bluemix. These candidates typically provide the cloud computing services
outside of the Federated Cloud. According to Deng et al. (2016), the Cloud Service Providers
or the customers of the Cloud Federation are subjected to operate as independent business
unit and are able to choose the processing of their workloads with more than one candidate or
constituent CSPs or even through the Cloud Federation. On the other hand, the Clouds
Coordinator and Clouds Broker of the Federated Cloud are typically independent partially,
associated with the functionality and applications that they offer to the users along with the
Service Provider (SP) and CSPs.
A CSP candidate must be operated and managed independently of the other cloud
service provider candidates to permit a fair market for the Cloud Federation clients.
According to Dayarathna, Wen and Fan (2016), one or more Clouds-Brokers and Clouds-
Coordinators must also be managed separately from the rest of the constituents in the
Cloud Service Provider federation, to avoid potential conflicts of interest in allotting
workloads among the participating cloud service providers. The Cloud Federation aggregates
the various services offered by the CSPs and deployed across various geographic locations.
In addition to that, it also concentrates on live video streaming, which is typically hosted
by a Service Provider (SP).
In this manner, the essential objective of the Cloud Federation is to provide flexible
pricing options, expand and distribute the computing load, and limit energy use. Since the
Federated Cloud paradigm spans different regulatory jurisdictions, it must also be tailored
to adhere to local regulatory and compliance rules (Rossi et al. 2017). Federating
containerized workloads leads to improved utilization of computing resources and system
resilience. In a later section, another kind of Federated Cloud design for complex adaptive
systems, the FoS or Federation of Systems, will be investigated, and it will be shown that
Federation of Systems attributes do not fit the Cloud Federation, as argued in this section.
In this context, Hintemann and Clausen (2016) have suggested that offline workloads are
typically batch-like computing tasks, dedicated to processing information independently
through two or more online connected systems. These workloads are often processed with long
lead times, and their outputs are later served by other systems, for instance as
pre-calculated indexed search results.
Figure 2.2: Proposed Cloud Federation System of Systems, comprising CSPs and SPs,
managed by the Clouds-Broker and Clouds-Coordinator
(Source: Kansal and Chana, 2016)
Linear functions such as power cost, SLA and revenue are considered for an
Integer Linear Program (ILP) solver. This formulation is NP-complete, and hence a large
number of search algorithms exist for finding the optimal solution. On the other hand,
shutting down the servers that are not in use effectively conserves more energy; however,
in a highly dynamic environment, turning systems off can create high overhead, which in
turn leads to a strong performance degradation. Hence, the application of energy
optimization techniques is highly useful in calculating the exact amount of power or energy
consumed by the different components associated with cloud data centers. The following
section discusses the SoS management, implementation, engineering, and design
considerations. Likewise, the Cloud Federation grants autonomy to its cloud service provider
constituents to choose how to fulfil the purpose of the System of Systems (SoS) and its
business objectives. The primary enablers, the Clouds-Coordinator and the Clouds-Broker,
jointly decide how to provide or refuse service, thereby providing the means for enforcing
and maintaining the required standards and compliance. Nevertheless, in this Cloud
Federation (federated cloud) SoS, the cloud service providers cooperate intentionally to
fulfil the agreed beneficial collective purposes.
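A minimal sketch of the server on/off decision discussed above (brute force over all power states, with hypothetical capacities and power costs) illustrates why the exact formulation does not scale and heuristics are needed in practice:

```python
from itertools import product

# Hypothetical sketch: exhaustively search which servers to power on so that
# total capacity covers the demand, minimizing power cost. Feasible only for
# small n, since the search space is 2^n; the ILP formulation is NP-complete.

def best_on_off(capacities, power_costs, demand):
    best = None
    for state in product([0, 1], repeat=len(capacities)):
        cap = sum(c for c, s in zip(capacities, state) if s)
        if cap < demand:
            continue  # violates the SLA: not enough capacity left on
        cost = sum(p for p, s in zip(power_costs, state) if s)
        if best is None or cost < best[0]:
            best = (cost, state)
    return best

cost, state = best_on_off(capacities=[10, 10, 20], power_costs=[60, 60, 90], demand=25)
```

Here the optimizer keeps the second and third (illustrative) servers on and shuts down the first, showing the trade-off between conserved energy and retained capacity described in the text.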
3. Research Methodology
The research will follow a descriptive design with positivism philosophy in order to
study the different aspects, theories and concepts related to cloud computing and energy
consumption by the data centers and virtual machines. To be more precise, the research work
will solely involve data collection from secondary sources. The secondary sources used in
this research work are mainly peer-reviewed journals, articles, previous research studies and
experimental studies. Thus, the research method used in the present paper encompasses the
use of secondary sources for data collection. The gathered data will then be analyzed and
interpreted in order to reach a conclusion that establishes appropriate links to the
research objectives.
3.1 Research Problem
The research problem revolves around the massive degree of energy consumption by
different data centers in the cloud environment and the resulting high costs and increasing
carbon footprints across the globe. For this purpose, the study examines the secondary
resources to measure the depth of the research already undertaken in this area. Scheduling
procedures are intended to assign tasks in an optimal fashion for utilizing system
resources. As this industry is becoming a major resource and energy consumer, highly
efficient cloud computing data centers must be developed in order to ensure a green
computing future and meet industrial needs. Various scheduling models have been proposed
and studied thoroughly, yet most of them are unsuitable for the present cloud computing
environment, which raises the requirement for more effective and innovative scheduling
models.
3.2 Research Questions
The research questions for the present study are formulated from the research
objectives as identified in the above section:
• What are the major challenges of cloud energy consumption?
• What is the impact of energy consumption in the cloud?
• What are the various approaches and mechanisms employed for optimizing and managing energy consumption in cloud computing?
• What solutions are there to address the energy consumption issue?
3.3 Time Horizon
Main activities/ stages | Week 1 | Week 2 | Week 3 | Week 4 | Week 5 | Week 6 | Week 7
Topic Selection | •
Data collection from secondary sources | • •
Creating layout | •
Literature review | • • •
Formation of the research plan | • •
Selection of the appropriate research techniques | • •
Analysis & interpretation of data collection | • •
Findings of the data | •
Conclusion of the study | •
Formation of rough draft | • •
Submission of final work | • •
Table 1: Research Timeline
(Source: Created by the learner)
4. Experiments and Results
The present research focuses solely on secondary resources collected from
previously existing research studies. In other words, it focuses on the different aspects
that have already been examined as part of experiments and research executed by various
researchers and scientists in the past. Besides their advantages, cloud computing data
centers are confronting numerous issues, with high power consumption being one of the most
critical. Cloud computing represents the preferred option for on-demand computation and
storage, where customers can save, retrieve, and share any amount of information in the
cloud. Further, the present trend suggests that the energy consumption of the United
States’ data centers will be around 73 billion kilowatt-hours per year by 2020. In this
context, a common challenge faced by cloud service providers is balancing between limiting
energy use and the delivered performance. In a cloud data center, 90% of the power is
consumed by the servers, network, and cooling systems. If power consumption continues to
increase, power cost can easily overtake hardware cost by a large margin.
4.1 Presentation of the results
4.1.1 The CloudSim Simulator
CloudSim is a simulation-based method dedicated to modeling and simulating
DC (data center) environments. It is a well-known tool for cloud simulation, which provides
fundamental advantages to cloud users and customers, including testing of cloud services
without incurring costs and the capacity to make the required adjustments to the DC
structure before the actual deployment in the real world. The thorough and complete
development of the CloudSim simulation model took place in the CLOUDS laboratory, within
the computer science and software engineering department of the University of Melbourne.
CloudSim is a free simulator, which has been widely appreciated by research
practitioners and scholars. The CloudSim simulator offers several features to its users,
such as the creation and execution of cloudlets, energy optimization features, load
balancing, task scheduling, and provisioning of resources. Li et al. (2016) have proposed
different approaches targeted at autonomic, energy-aware and dynamic scheduling
mechanisms, which can easily adapt to varying infrastructures and varying types of
workloads. In this case, the data center can be modeled as a grid with a set of
resources, each of which is associated with a specific power consumption cost, along with a
set of jobs. These jobs are executed with a set of profits, resource requirements and
execution penalties. The decision regarding which resources will be assigned to which
job must be made at each scheduling round. This decision invariably depends on the
conditions and requirements established by the Service Level Agreement (SLA).
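The per-resource power cost mentioned above can be illustrated with the linear utilization-based power model commonly used in such simulations (the constants below are assumptions for illustration, not CloudSim's own values):

```python
# A common linear server power model used in CloudSim-style simulations:
# P(u) = P_idle + (P_max - P_idle) * u, where u is CPU utilization in [0, 1].
# p_idle and p_max are assumed example values in watts.

def server_power(u, p_idle=100.0, p_max=250.0):
    return p_idle + (p_max - p_idle) * u

def energy_wh(utilization_trace, interval_s=300):
    """Total energy in watt-hours over a trace of utilization samples."""
    joules = sum(server_power(u) * interval_s for u in utilization_trace)
    return joules / 3600.0

trace = [0.0, 0.2, 0.5, 1.0]  # one utilization sample every 5 minutes
e = energy_wh(trace)
```

Note that even at zero utilization the server draws its idle power, which is precisely why shutting down or consolidating away idle machines matters in the schemes discussed in this paper.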
4.1.2 The CloudSim architecture
The CloudSim software framework typically has a multi-layered design with
multiple architectural components. The initial releases of the CloudSim simulator used
SimJava as the discrete event simulation engine. According to Jalali et al. (2016), data
center environments that include multiple components are considered when the CloudSim
simulation layer provides support for modeling and simulating these virtual cloud-based DC
(data center) environments. In this context, the researchers note that the data center is
able to manage multiple hosts as well as multiple virtual machines during their life cycle.
For the host component, the allocation of processing cores to the virtual machines is
performed according to the host allocation policy. This policy considers different hardware
characteristics, such as the total number of CPU cores, the total amount of memory (both
physical and secondary), and the CPU share allocated to a given VM (Virtual Machine)
instance (Akhter, Othman and Naha 2018).
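As a sketch of such a host allocation policy (a hypothetical first-fit check, far simpler than CloudSim's actual allocation policies), a host qualifies only if it has enough free cores and memory for the requested VM:

```python
# Hypothetical first-fit sketch of a host allocation policy checking the
# hardware characteristics mentioned above (CPU cores and RAM in MB).

def first_fit(hosts, vm):
    """Return the index of the first host with enough free cores and RAM, else -1."""
    for i, h in enumerate(hosts):
        if h["free_cores"] >= vm["cores"] and h["free_ram"] >= vm["ram"]:
            h["free_cores"] -= vm["cores"]  # reserve the resources on placement
            h["free_ram"] -= vm["ram"]
            return i
    return -1

hosts = [{"free_cores": 2, "free_ram": 4096}, {"free_cores": 8, "free_ram": 16384}]
placed = first_fit(hosts, {"cores": 4, "ram": 8192})
```

In this illustrative run the first host lacks cores, so the VM lands on the second host and that host's free capacity is reduced accordingly.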
Figure 4.1: The types of layers in the CloudSim architecture
(Source: Reguri, Kogatam and Moh, 2016)
For distributed computing, the concept of cloud computing is still emerging. There is
a potential lack of properly defined standards, methods and tools for tackling the
application and infrastructure level complexities in an efficient and effective manner.
Therefore, it is expected that the near future will witness a number of scientific and
research efforts, both industrial and academic, aimed at properly defining the core
policies, algorithms, and application benchmarks based on the contexts of execution. In
this context, the basic functionalities are already exposed by the CloudSim model, and it
will thus be possible to perform some significant test cases depending on particular
configurations and scenarios. Hence, it will allow the development of correct methods and
best practices with respect to the critical energy saving aspects of cloud computing.
4.1.3 The SoS (System of Systems) functional architecture
Cloud Federation can also be thought of as an SoS or System of Systems, which is
focused on the desire for integrating the existing CSP systems in order to achieve the new
capabilities that are unavailable with single CSPs. The present environment essentially
requires the participation of CSPs, which are managerially and operationally independent.
Considerations regarding engineering design and implementation: The technical
engineering considerations depend on the management process. They are typically
implemented at two levels: the federation level and the CSP (cloud service provider)
level. In case of conflict, a CSP seeks to reconcile its operational and engineering
requirements with those of the federated cloud, and vice versa. For instance, a cloud
service provider may need to deploy a DC (data center) region to accommodate the needs of
customers of un-federated service providers (SPs), while also deploying multiple DCs
within the federation pool. In order to adopt the federated cloud or Cloud Federation
solution, it is essential for service providers (SPs) to have specific standardized tools
that permit the integration and deployment of software applications along with data
processing workloads. In relation to that, Rong et al. (2016) have suggested that the CSP
can offer particular platforms to the customers. However, it is crucial for the Clouds-
Broker and the Clouds-Coordinator to offer a generic approach to the SPs in terms of
deploying and integrating with the Cloud Federation. Hence, the CSP is responsible for
modifying the interfaces and boundaries of the system by exposing interfaces that do not
otherwise exist.
4.2 System Model
Mastelic et al. (2015) have researched and developed a system model, depicted below,
consisting of tasks, users, a virtual machine manager, servers, a scheduling algorithm and
an energy meter. To be more precise, the users represent the typical customers of the
cloud, who are responsible for sending tasks to the data centers (DCs) in the cloud-based
environment. The tasks are received once the VM (virtual machine) status is accepted,
along with the decision of the power saver scheduling algorithm, or PSSA. The virtual
machine manager (VM Manager) is responsible for handling each task, with each task
carrying specific elements including an ID number, a size and a maximum completion time.
Similarly, the PSSA schedules the tasks of the cloud users in accordance with the data and
information sent from the VM Manager. The servers and VMs execute the tasks owned by the
multiple users and return the results. Finally, the energy meter computes the total energy
consumed in the data center (DC).
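The flow above can be sketched as a simple deadline-aware scheduler in the spirit of the PSSA (the actual algorithm is not reproduced here; the VM speeds, task tuples and wake-up rule below are illustrative assumptions): each task carries an ID, a size and a deadline, and the scheduler prefers VMs that are already awake, waking an additional VM only when a deadline would otherwise be missed, so idle servers can stay powered down.

```python
# Hedged sketch of a power-saver style scheduler: tasks are (id, size, deadline),
# vm_speeds are relative processing rates. Awake VMs are preferred so that
# unused servers can remain off.

def schedule(tasks, vm_speeds):
    awake, plan = set(), {}
    finish = [0.0] * len(vm_speeds)  # current finish time per VM
    for tid, size, deadline in tasks:
        order = sorted(range(len(vm_speeds)), key=lambda v: v not in awake)
        for v in order:
            if finish[v] + size / vm_speeds[v] <= deadline:
                finish[v] += size / vm_speeds[v]
                awake.add(v)
                plan[tid] = v
                break
        else:
            plan[tid] = None  # deadline cannot be met on any VM
    return plan, awake

tasks = [("t1", 10, 20), ("t2", 10, 20), ("t3", 30, 25)]
plan, awake = schedule(tasks, vm_speeds=[1.0, 2.0])
```

In this illustrative run the first two tasks share one VM, and the faster second VM is woken only when the third task's deadline demands it.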
4.3 Simulation tool
The main features and characteristics of a cloud computing simulator, as studied by
Heller et al. (2015), are explained in this section. Content delivery, social networking,
real-time instrumented information processing, web hosting, etc. are the main architectural
features included in the cloud simulator suggested by Mastelic et al. (2015). The
applications typically possess different configurations, compositions and deployment
requirements. In this context, quantifying the performance of allocation and scheduling
policies in real-world cloud computing environments under different types of conditions,
related to the various service and application models, is potentially difficult for the
following reasons:
(i) Typical cloud environments have varying supply patterns, demands and system
sizes
(ii) The cloud users usually have conflicting and heterogeneous quality of service
demands/ requirements
Real cloud infrastructures such as Amazon EC2 typically limit experiments in terms of
infrastructural scale, and reproducing results may in turn become challenging. According to
Heller et al. (2015), this situation arises when the prevailing conditions in a typical
internet-based environment are controlled by the application scheduling algorithms and the
resource allocation developers. In this regard, the GreenCloud simulator can be applied to
develop model solutions for allocating resources, monitoring and scheduling workloads, as
well as optimizing the network infrastructure and communication protocols. The GreenCloud
simulator is an extension of a well-known network simulator, released under the GPL
(General Public License).
Parameters | Value
Data center type | Three tier topology
Number of core switches | 2
Number of aggregation switches | 4
Number of access switches | 8
Number of servers | 1440
Access links | 1 GB
Aggregation links | 1 GB
Core links | 10 GB
Data center load | 0.1, 0.2, 0.3, …, 0.9, 1.0
Simulation time | 60 minutes
Power management in server | DVFS and DNS
Task size | 8500 bit
Task deadline | 20 seconds
Task type | High performance computing
Table 4.1: Configuration of Green Cloud
(Source: Boru et al. 2015)
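The DVFS entry in the table can be illustrated with the usual cubic frequency-power relation (assumed constants, not GreenCloud's own model): running a task just fast enough to meet its deadline consumes far less energy than racing at full frequency.

```python
# Sketch of the DVFS idea: dynamic power scales roughly with f^3, so a task
# that finishes exactly at its deadline at a lower frequency costs less
# energy than one run at full speed. Constants are illustrative assumptions.

def task_energy(cycles, freq, p_max=200.0, f_max=1.0):
    """Energy to run `cycles` of work at relative frequency `freq` (0 < freq <= f_max)."""
    power = p_max * (freq / f_max) ** 3  # cubic power-frequency relation
    time = cycles / freq
    return power * time

def slowest_feasible_freq(cycles, deadline, f_max=1.0):
    """Lowest relative frequency that still meets the deadline."""
    return min(f_max, max(cycles / deadline, 1e-9))

cycles, deadline = 8.0, 20.0
f = slowest_feasible_freq(cycles, deadline)  # just-in-time frequency
e_dvfs = task_energy(cycles, f)              # run slowly, meet the deadline
e_full = task_energy(cycles, 1.0)            # race at full frequency
```

Because energy per task scales with f^2 under this model, the just-in-time schedule uses only a fraction of the full-speed energy, which is the effect the DVFS mode in the table exploits.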
Figure 4.2: A three tier architecture for Green Cloud Simulator
(Source: Benkhelifa et al. 2015)
4.4 Discussion
According to Azad and Navimipour (2017), the advancement of energy
efficient cloud-based solutions is highly dependent on multiple key technologies. In this
context, Dayarathna, Wen and Fan (2016) have suggested that the application platform level
highly demands energy efficient mediums, whereas the hypervisor level requires energy
efficient scheduling algorithms along with storage and memory systems and resource
management policies. As noted earlier, 95 percent of the energy consumption results from
medium and small firms. In order to achieve energy efficiency through virtual computers,
the energy management functions require direct action on the hardware resources so that
energy is shared amongst two or more virtual computer hardware instances. Apart from that,
specific energy management operations are dedicated towards improving the consistency of
the material resources. Several studies carried out in the recent past have undertaken the
responsibility of developing different power saving policies in order to mitigate
unnecessary and idle consumption of power. These policies typically consider mode
switching controls and different decision processes. The algorithms used by the CSPs
(Cloud Service Providers) are aimed at optimizing the decision-making problem on mode
switching as well as on the service rate restrictions. Potential loss of performance due to
energy savings is a major and recurrent issue, which needs to be addressed. In this
context, an energy efficient model is required for optimizing and managing the energy
consumption in the cloud environment. There are several existing studies in this area, and
the present study aims at unveiling the primary aspects related to the optimization,
conservation and management of cloud data center energy. In order to adopt a federated
cloud solution, it is essential for service providers (SPs) to have specific standardized
tools that permit the integration and deployment of software applications along with data
processing workloads. In relation to that, Dayarathna, Wen and Fan (2016) have suggested
that the CSP can offer particular platforms to the customers; however, it is crucial for
the Clouds-Broker and the Clouds-Coordinator to offer a generic approach to the SPs in
terms of deploying and integrating with the Cloud Federation.
5. Conclusion
It can be concluded that the cloud computing paradigm is growing into a highly crucial
phenomenon in the areas of IT (Information Technology), science and computing, not least
because of the ample advantages that it renders to end users. With the growing user demand
for ICT resources in the cloud-based environment, energy consumption is now one of the
most serious issues, for several economic and ecological reasons. In this context, the
paper has thoroughly discussed the various issues and problems related to the current
energy consumption of data centers. In addition to that, the paper has presented the
formulas previously derived in this regard in order to address the energy consumption
problem. These formulas concentrate on the calculation of the total amount and cost of
energy consumed by the different components of the physical servers and cloud-based data
centers (including CPU, memory and storage). They demonstrate a proper understanding of
the incentives gained by means of energy saving in the cloud environment. In other words,
the rapidly growing concern about the shortage of power has raised major concerns for the
future of cloud system designs. Several studies carried out in the recent past have
undertaken the responsibility of developing different power saving policies in order to
mitigate unnecessary and idle consumption of power. These policies typically consider mode
switching controls and different decision processes. The algorithms used by the CSPs
(Cloud Service Providers) are aimed at optimizing the decision-making problem on mode
switching as well as on the service rate restrictions.
5.1 Limitation of the Study
The research was limited to the study of existing or previously experimented
energy optimization and management techniques that can be adopted in cloud computing.
Accordingly, it did not involve any collection of primary raw data for numerical analysis.
Moreover, due to the short span of time allocated for completing this research, it could
not cover all the areas and aspects associated with energy and power consumption in cloud
computing in an in-depth, detailed and thorough manner. The limited time frame also did
not allow the participation of larger sample sizes, which would have improved the quality
of the analysis. The financial budget was a further restriction that limited the quality
of the study. The study has focused on some particular common areas related to the cloud
computing paradigm; in short, it is significantly time-constrained.
5.2 Further Scope of the Study
Due to these restrictions, the scope of the study could not be exploited to its full
potential. The topic could have been developed further with a study of specific
organizations that deploy energy efficient techniques and mechanisms. Moreover, the study
can be extended to include primary data collection and quantitative analysis in order to
gain a more current and accurate overview of the present situation and status of the
energy efficient solutions available for deploying green cloud computing services across
various organizations around the globe. Further, a comparative study would also enhance
the chances of observing the various steps applied in the organizations.
References
Akhter, N., Othman, M. and Naha, R.K., 2018. Energy-aware virtual machine selection
method for cloud data center resource allocation. arXiv preprint arXiv:1812.08375.
Alhiyari, S. and El-Mousa, A., 2015, November. A Network and Power Aware framework
for data centers using virtual machines re-allocation. In 2015 IEEE Jordan Conference on
Applied Electrical Engineering and Computing Technologies (AEECT) (pp. 1-6). IEEE.
Azad, P. and Navimipour, N.J., 2017. An energy-aware task scheduling in the cloud
computing using a hybrid cultural and ant colony optimization algorithm. International
Journal of Cloud Applications and Computing (IJCAC), 7(4), pp.20-40.
Benkhelifa, E., Welsh, T., Tawalbeh, L., Jararweh, Y. and Basalamah, A., 2015. User
profiling for energy optimisation in mobile cloud computing. Procedia Computer
Science, 52, pp.1159-1165.
Bindu, G.H. and Janet, J., 2017, July. A statistical survey on vm scheduling in cloud
workstation for reducing energy consumption by balancing load in cloud. In 2017
International Conference on Networks & Advances in Computational Technologies
(NetACT) (pp. 34-43). IEEE.
Boru, D., Kliazovich, D., Granelli, F., Bouvry, P. and Zomaya, A.Y., 2015. Energy-efficient
data replication in cloud computing datacenters. Cluster computing, 18(1), pp.385-402.
Dayarathna, M., Wen, Y. and Fan, R., 2016. Data center energy consumption modeling: A
survey. IEEE Communications Surveys & Tutorials, 18(1), pp.732-794.
Deng, R., Lu, R., Lai, C., Luan, T.H. and Liang, H., 2016. Optimal workload allocation in
fog-cloud computing toward balanced delay and power consumption. IEEE Internet of
Things Journal, 3(6), pp.1171-1181.
Farahnakian, F., Liljeberg, P. and Plosila, J., 2014, February. Energy-efficient virtual
machines consolidation in cloud data centers using reinforcement learning. In 2014 22nd
Euromicro International Conference on Parallel, Distributed, and Network-Based Processing
(pp. 500-507). IEEE.
Fioccola, G.B., Donadio, P., Canonico, R. and Ventre, G., 2016, December. Dynamic routing
and virtual machine consolidation in green clouds. In 2016 IEEE international conference on
cloud computing technology and science (CloudCom) (pp. 590-595). IEEE.
Gai, K., Qiu, M., Zhao, H., Tao, L. and Zong, Z., 2016. Dynamic energy-aware cloudlet-
based mobile cloud computing model for green computing. Journal of Network and
Computer Applications, 59, pp.46-54.
Guo, S., Xiao, B., Yang, Y. and Yang, Y., 2016, April. Energy-efficient dynamic offloading
and resource scheduling in mobile cloud computing. In IEEE INFOCOM 2016-The 35th
Annual IEEE International Conference on Computer Communications(pp. 1-9). IEEE.
He, K., Li, Z., Deng, D. and Chen, Y., 2017. Energy-efficient framework for virtual machine
consolidation in cloud data centers. China Communications, 14(10), pp.192-201.
Heller, B., Seetharaman, S., Mahadevan, P., Yiakoumis, Y., Sharma, P., Banerjee, S. and
McKeown, N., 2015, April. Elastictree: Saving energy in data center networks. In Nsdi(Vol.
10, pp. 249-264).
Hintemann, R. and Clausen, J., 2016, August. Green Cloud? The current and future
development of energy consumption by data centers, networks and end-user devices. In ICT
for Sustainability 2016. Atlantis Press.
Jalali, F., Hinton, K., Ayre, R., Alpcan, T. and Tucker, R.S., 2016. Fog computing may help
to save energy in cloud computing. IEEE Journal on Selected Areas in
Communications, 34(5), pp.1728-1739.
Kansal, N.J. and Chana, I., 2016. Energy-aware virtual machine migration for cloud
computing-a firefly optimization approach. Journal of Grid Computing, 14(2), pp.327-345.
Li, H., Zhu, G., Cui, C., Tang, H., Dou, Y. and He, C., 2016. Energy-efficient migration and
consolidation algorithm of virtual machines in data centers for cloud
computing. Computing, 98(3), pp.303-317.
Li, Y., Chen, M., Dai, W. and Qiu, M., 2017. Energy optimization with dynamic task
scheduling mobile cloud computing. IEEE Systems Journal, 11(1), pp.96-105.
Marahatta, A., Wang, Y., Zhang, F., Sangaiah, A.K., Tyagi, S.K.S. and Liu, Z., 2018.
Energy-aware fault-tolerant dynamic task scheduling scheme for virtualized cloud data
centers. Mobile Networks and Applications, pp.1-15.
Mastelic, T., Oleksiak, A., Claussen, H., Brandic, I., Pierson, J.M. and Vasilakos, A.V., 2015.
Cloud computing: Survey on energy efficiency. Acm computing surveys (csur), 47(2), p.33.
Reguri, V.R., Kogatam, S. and Moh, M., 2016, April. Energy efficient traffic-aware virtual
machine migration in green cloud data centers. In 2016 IEEE 2nd International Conference
on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High
Performance and Smart Computing (HPSC), and IEEE International Conference on
Intelligent Data and Security (IDS) (pp. 268-273). IEEE.
Rong, H., Zhang, H., Xiao, S., Li, C. and Hu, C., 2016. Optimizing energy consumption for
data centers. Renewable and Sustainable Energy Reviews, 58, pp.674-691.
Rossi, F.D., Xavier, M.G., De Rose, C.A., Calheiros, R.N. and Buyya, R., 2017. E-eco:
Performance-aware energy-efficient cloud data center orchestration. Journal of Network and
Computer Applications, 78, pp.83-96.
Shen, D., Luo, J., Dong, F., Fei, X., Wang, W., Jin, G. and Li, W., 2015. Stochastic modeling
of dynamic right-sizing for energy-efficiency in cloud data centers. Future Generation
Computer Systems, 48, pp.82-95.
Silva, J.S., Lins, F.A.A., Sousa, E.T.G., Summer, H.B. and Fernandes, C.M., 2017, October.
Invasive technique for measuring the energy consumption of mobile devices applications in
mobile cloud environments. In 2017 IEEE international conference on systems, man, and
cybernetics (SMC) (pp. 2724-2729). IEEE.
Tang, Z., Qi, L., Cheng, Z., Li, K., Khan, S.U. and Li, K., 2016. An energy-efficient task
scheduling algorithm in DVFS-enabled cloud environment. Journal of Grid
Computing, 14(1), pp.55-74.
You, C., Huang, K. and Chae, H., 2016. Energy efficient mobile cloud computing powered
by wireless energy transfer. IEEE Journal on Selected Areas in Communications, 34(5),
pp.1757-1771.
Zhou, Z., Dong, M., Ota, K., Wang, G. and Yang, L.T., 2016. Energy-efficient resource
allocation for D2D communications underlaying cloud-RAN-based LTE-A networks. IEEE
Internet of Things Journal, 3(3), pp.428-438.