This thesis examines energy optimization and management in cloud computing environments. It begins by highlighting the exponential growth of cloud data centers and their substantial power consumption, which poses a major challenge for cloud service providers (CSPs). The research identifies the key challenges, investigates the impact of rising energy consumption, and examines approaches and mechanisms for optimization, including dynamic voltage and frequency scaling (DVFS), server shutdown strategies, and energy-aware scheduling.

The study analyzes data center network topologies such as the fat tree and Google's fat tree, and discusses the trade-offs between idle and active low-power modes as well as load balancing. It also covers cloud federation, proposing a system of systems in which CSPs and service providers (SPs) are managed by cloud brokers and coordinators.

The methodology employs a descriptive design under a positivist philosophy, drawing on secondary data sources such as peer-reviewed journals and research articles. The thesis also discusses the CloudSim simulator, its architecture, and its use in modeling and simulating data center environments. The conclusion summarizes the key findings and emphasizes the growing importance of energy-efficient practices in cloud computing.
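To illustrate why DVFS is such an effective optimization lever, a minimal sketch of the commonly used power model is given below. All names and the specific wattage figures are hypothetical, chosen for illustration only; the thesis itself does not specify these values. The model assumes dynamic power scales roughly with the cube of the clock frequency (since supply voltage is lowered together with frequency), while static power is unaffected:

```python
def server_power(f_ratio, p_dynamic_max=100.0, p_static=50.0):
    """Estimate server power (watts) at a fraction f_ratio of max frequency.

    Dynamic power is proportional to C * V^2 * f; because voltage is
    reduced along with frequency under DVFS, it is often approximated
    as proportional to f^3. Static (leakage) power does not scale.
    """
    return p_static + p_dynamic_max * f_ratio ** 3

# Halving the clock cuts dynamic power to roughly one-eighth:
full_speed = server_power(1.0)  # 50 + 100 * 1.000 = 150.0 W
half_speed = server_power(0.5)  # 50 + 100 * 0.125 =  62.5 W
```

The example also shows why DVFS alone cannot eliminate idle waste: the static term persists at any frequency, which is the motivation for the server shutdown strategies and low-power modes the thesis examines alongside it.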