Understanding the Core Role of CPU Scheduling in Systems
This document provides a detailed explanation of CPU scheduling, a critical process in operating systems that optimizes CPU utilization and enables multitasking. It discusses how scheduling ensures that processes are executed efficiently and in a timely manner, enhancing system speed and overall performance. The document explores the components of CPU scheduling, including the CPU burst cycle, scheduler, and dispatcher, as well as the different queues involved in process management (job queue, ready queue, and device queue). It also differentiates between preemptive and non-preemptive scheduling schemes and outlines the circumstances when scheduling is required. Furthermore, the document delves into various scheduling algorithms like First Come First Serve (FCFS) and Shortest Job First (SJF), analyzing their advantages and disadvantages in terms of CPU utilization, throughput, turnaround time, and response time. This comprehensive analysis helps to understand the core role of CPU scheduling in ensuring a system's efficiency and responsiveness.

What is the Role of CPU Scheduling?

CPU Scheduling
CPU scheduling aims to optimize CPU utilization. It is the mechanism that allows a system to carry out multiple processes at once: while one process is being executed, other processes are kept on hold, usually because the resources they need are not yet available. Scheduling ensures that all processes are executed in a timely manner and that the system works at full capacity, which makes the system not only more efficient but also faster. When scheduling takes place, the operating system selects one of the processes in the ready queue and executes it. This selection is carried out by the short-term scheduler, also called the CPU scheduler: it picks a process that is ready to run and allocates the CPU to it.
Waiting Time in CPU
In computing, almost every program runs as an alternating cycle of computation and waiting for input or output. Some waiting is inevitable, and it stems from the speed gap between the CPU and memory: the CPU can execute an instruction in far less time than it takes to fetch data from memory, so it must wait while the data arrives. During that time the CPU sits idle, and both the waiting time and the CPU cycles are wasted, which makes processes take longer. Long waits for input and output therefore reduce the overall efficiency of the system. Scheduling solves the problem by keeping several programs in a queue so the CPU always has work available.
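As a rough illustration of how much capacity is lost to I/O waits, the sketch below compares CPU utilization for a single program that alternates a 4 ms CPU burst with a 6 ms I/O wait against two such programs whose I/O waits overlap with each other's CPU bursts. The figures are invented purely for illustration.

```python
# Hypothetical burst/wait figures, chosen only to illustrate the idea.
cpu_burst_ms = 4   # time the program actually computes
io_wait_ms = 6     # time it then waits for I/O

# One program alone: the CPU is idle during every I/O wait.
single_utilization = cpu_burst_ms / (cpu_burst_ms + io_wait_ms)

# Two programs scheduled together: while one waits for I/O, the other can use
# the CPU, so up to 2 * 4 = 8 ms of every 10 ms cycle becomes useful work.
overlapped_busy_ms = min(2 * cpu_burst_ms, cpu_burst_ms + io_wait_ms)
overlapped_utilization = overlapped_busy_ms / (cpu_burst_ms + io_wait_ms)

print(f"one program : {single_utilization:.0%} CPU utilization")      # 40%
print(f"two programs: {overlapped_utilization:.0%} CPU utilization")  # 80%
```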
When scheduling is in operation, the CPU can run one process while the input or output for the others is still pending. This prevents the loss of CPU cycles and ensures that they are used fully. Long waiting times and lost CPU cycles hurt a system's efficiency, so improving efficiency is challenging: the system should behave both fairly and efficiently, which is difficult under varying, dynamic conditions. Task prioritization is a further factor that must be considered when executing processes on the CPU.
Queues Involved in Scheduling
Three types of queues are involved in scheduling (a minimal sketch of them follows this list):
Job queue: It holds every process submitted to the system, whether it is currently being executed or still waiting. Processes are admitted from the job queue by the long-term scheduler.
Ready queue: It holds the processes that are already in main memory and in the ready state, waiting to be executed. These processes are selected by the short-term scheduler, also known as the CPU scheduler.
Device queue: It holds the processes that are waiting for an I/O device; several processes may wait for the same device. When the I/O completes, the process is moved back to the ready queue.
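A minimal sketch of these three queues, using only the Python standard library; the process names and the single-step walkthrough are invented for illustration, and a real operating system would store full process control blocks rather than plain strings.

```python
from collections import deque

job_queue = deque(["P1", "P2", "P3"])   # every submitted process
ready_queue = deque()                   # processes in memory, ready to run
device_queue = deque()                  # processes waiting on an I/O device

# Long-term scheduler: admit a job into memory.
ready_queue.append(job_queue.popleft())          # P1 becomes ready

# Short-term (CPU) scheduler: pick the next process to run.
running = ready_queue.popleft()                  # P1 is dispatched

# The running process issues an I/O request and waits in the device queue.
device_queue.append(running)

# On I/O completion the process returns to the ready queue.
ready_queue.append(device_queue.popleft())
print(ready_queue)                               # deque(['P1'])
```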
Components of CPU Scheduling
Scheduling is carried out with the help of three components: the CPU burst cycle, the scheduler, and the dispatcher.
CPU Burst Cycle: Every process alternates between CPU bursts and I/O bursts. The length of a CPU burst varies from process to process.
Scheduler: The scheduler runs whenever the processor becomes idle and chooses another ready process from the queue. Two factors determine which process is picked: how the ready queue is organized and the scheduling algorithm in use. Based on these, the scheduler selects the most appropriate process.
Dispatcher: The dispatcher is the module that gives control of the CPU to the process chosen by the short-term scheduler. It does so in the following steps:
Switching context between processes
Switching to user mode
Jumping to the proper location in the user program, i.e. the point at which the program was last interrupted
The dispatcher must be fast, because it runs on every process switch. The time it needs to stop one process and start another is known as the dispatch latency.
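The sketch below is a deliberately simplified, hypothetical picture of a scheduler handing a process to a dispatcher; the Process class, the ready queue contents, and the dispatch function are stand-ins for what the kernel really does with process control blocks and mode switches.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    program_counter: int = 0   # where the program was interrupted last time

def dispatch(next_process: Process) -> Process:
    """Stand-in for the dispatcher: switch context, switch to user mode,
    and resume the process at the point where it previously stopped."""
    print(f"switching to user mode, resuming {next_process.name} "
          f"at instruction {next_process.program_counter}")
    return next_process

ready_queue = deque([Process("editor"), Process("backup", program_counter=500)])

# Short-term scheduler: the CPU is idle, so choose the next ready process
# (here simply the head of the queue) and hand it to the dispatcher.
running = dispatch(ready_queue.popleft())
```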
Situations When Scheduling is Required
Scheduling is needed in several situations. There are four circumstances in which it becomes necessary:
Switching from the running state to the waiting state: When a process issues an I/O request, it must move from the running state to the waiting state. The same happens when a process waits for one of its child processes to terminate: the parent is suspended until the result is available. During this phase the CPU would otherwise remain idle.
Switching from the running state to the ready state: When an interrupt occurs, the running process is moved back to the ready state. During this time, too, the CPU performs no useful work for that process.
Switching from the waiting state to the ready state: This happens when an I/O operation completes; the process returns to the ready state, and the CPU must be ready to execute it again.
Switching on termination: When a process terminates, the CPU must be given a new process to execute.
The circumstances above differ in how much choice the scheduler has (the sketch after this paragraph encodes the four transitions). In the first and fourth cases there is no choice: the running process has given up the CPU or has ended, so a new process must be selected. In the second and third cases, the scheduler gets to choose among the processes available in the ready queue, including whether to resume the interrupted process or switch to another one.
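A compact sketch of the four transitions described above, using a hypothetical ProcessState enum; it records only which state changes invoke the scheduler and whether the scheduler is forced to pick a new process or merely gets the chance to.

```python
from enum import Enum

class ProcessState(Enum):
    RUNNING = "running"
    READY = "ready"
    WAITING = "waiting"
    TERMINATED = "terminated"

# The four circumstances in which the scheduler is invoked; the value notes
# whether a new process must be chosen (cases 1 and 4) or may be (cases 2 and 3).
SCHEDULING_POINTS = {
    (ProcessState.RUNNING, ProcessState.WAITING):    "must pick a new process",
    (ProcessState.RUNNING, ProcessState.READY):      "may preempt or continue",
    (ProcessState.WAITING, ProcessState.READY):      "may preempt or continue",
    (ProcessState.RUNNING, ProcessState.TERMINATED): "must pick a new process",
}

def on_transition(old: ProcessState, new: ProcessState):
    """Return what the scheduler does for this transition, if anything."""
    return SCHEDULING_POINTS.get((old, new))

print(on_transition(ProcessState.RUNNING, ProcessState.WAITING))
```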
Types of Scheduling Schemes
The scheduling schemes can be categorized as preemptive scheduling and non-preemptive
scheduling.
Non-Preemptive Scheduling: In non-preemptive scheduling, once the CPU has been allocated to a process, that process keeps the CPU until it either finishes or switches to the waiting state; only when it ends or must switch is the CPU released. Operating systems such as the classic Apple Macintosh and Microsoft Windows 3.1 used this method. Its key feature is that it needs no special hardware, such as a timer, to carry out scheduling, whereas many other schemes do; this makes the method usable on a wide range of operating systems.
Preemptive Scheduling: Prioritization is necessary for a system to operate efficiently. In this type of scheduling, tasks are prioritized before the CPU is allocated. When multiple tasks are runnable, those with higher priority must run first, so the task currently executing may have to give way: the CPU stops running the current task and switches to the higher-priority one.
Scheduling Algorithms
CPU scheduling is performed using a variety of algorithms. The choice of algorithm depends on a number of criteria:
CPU Utilization: To make the best use of the CPU, wasted CPU cycles must be avoided, which is achieved when the CPU is kept busy most of the time. Ideally the CPU would be utilized 100% of the time; in real systems, utilization ranges from about 40% on lightly loaded systems to about 90% on heavily loaded ones. The expected workload therefore plays an important part in choosing the scheduling algorithm.
Throughput: The number of processes the CPU completes per unit time; in other words, the total amount of work the CPU finishes in a unit of time.
Throughput varies with the kind of process: it might be ten processes per second for short processes, or drop to one process per hour for long-running ones.
Turnaround Time: The time required to execute a particular process; that is, the interval from the submission of a process to its completion.
Waiting Time: The cumulative time a process spends waiting in the ready queue before it gains control of the CPU.
Load Average: The average number of processes residing in the ready queue, waiting for their turn on the CPU.
Response Time: The time the CPU takes to begin responding after it receives a request. When choosing a scheduling algorithm, the aim is to maximize throughput and CPU utilization while keeping turnaround time, waiting time, and response time low; this is why scheduling plays such a key role in making a system faster and more efficient (a small worked example follows).
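The snippet below is a small sketch of how these criteria can be derived from a completed schedule, following the definitions above. The arrival, burst, and completion times (in milliseconds) are invented, and the processes are assumed to have been run back to back with no idle gaps.

```python
# (arrival, burst, completion) times in ms for three hypothetical processes.
processes = {
    "P1": {"arrival": 0, "burst": 5, "completion": 5},
    "P2": {"arrival": 1, "burst": 3, "completion": 8},
    "P3": {"arrival": 2, "burst": 8, "completion": 16},
}

total_time = max(p["completion"] for p in processes.values())  # 16 ms observed
busy_time = sum(p["burst"] for p in processes.values())        # 16 ms of actual work

for name, p in processes.items():
    turnaround = p["completion"] - p["arrival"]   # submission -> completion
    waiting = turnaround - p["burst"]             # time spent in the ready queue
    print(f"{name}: turnaround={turnaround} ms, waiting={waiting} ms")

print(f"throughput      = {len(processes) / total_time:.3f} processes/ms")
print(f"CPU utilization = {busy_time / total_time:.0%}")
```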
The benefits obtained from scheduling depend on which specific algorithm is selected:
First Come First Serve: The "first come, first serve" (FCFS) algorithm does exactly what its name suggests: the process that arrives first is executed first. In other words, the process that requests the CPU first is allocated the CPU first. FCFS behaves like the FIFO (first in, first out) discipline of the queue data structure, in which the item that enters the queue first is also the first to leave it. This approach is typically used in batch systems.
The First Come First Serve algorithm is attractive because it is simple to understand and to implement in a program. It is built on a queue data structure: a new process is added at the tail of the queue, while the CPU scheduler takes the next process from its head. Buying tickets at a ticket counter is a real-life analogue of this kind of scheduling (a minimal FCFS sketch is given below).
Despite its simplicity, the algorithm has several drawbacks. First, it is non-preemptive, so process priority has no significance: FCFS can end up running the lowest-priority processes while higher-priority requests wait. For example, routine backup jobs are typically low priority and long running; under FCFS, no switch occurs even when high-priority requests arrive, which hurts the responsiveness of the system. Second, FCFS does not achieve an optimal average waiting time. Third, it does not let the system use resources in parallel, which leads to poor resource utilization and to the convoy effect, where short processes are stuck waiting behind a long one.
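A minimal FCFS sketch, assuming all burst times are known and there is no I/O; the fcfs function and the job list are invented for illustration. The long, low-priority backup job submitted first delays everything behind it, which is the convoy effect described above.

```python
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), sorted by arrival."""
    clock, results = 0, []
    for name, arrival, burst in processes:
        start = max(clock, arrival)        # CPU may sit idle until the process arrives
        finish = start + burst
        results.append((name, start - arrival, finish - arrival))  # waiting, turnaround
        clock = finish
    return results

jobs = [("backup", 0, 100), ("editor", 1, 2), ("shell", 2, 1)]
for name, waiting, turnaround in fcfs(jobs):
    print(f"{name}: waiting={waiting}, turnaround={turnaround}")
# backup waits 0, but editor and shell wait ~100 time units each.
```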
Shortest Job First Scheduling: Reducing waiting time is one of the key goals of scheduling, and shortest job first (SJF) is one of the best approaches for minimizing the waiting time of processes. It is also used in batch systems, and it comes in two variants: non-preemptive and preemptive. To implement it, the processor must know the duration, or burst time, of each process in advance. In practice, however, the burst time cannot be known for every process, which also means the processor would have to be
aware of every process before execution, which is not feasible in every case. Shortest job first produces its optimal result when every process or job is available at the same time; in other words, the highest efficiency is achieved when all processes share the same arrival time (the sketch below makes exactly this assumption).
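A sketch of non-preemptive SJF under the assumption the text itself makes, namely that every job arrives at time 0 and its burst time is known in advance; the job names and burst times are hypothetical.

```python
def sjf(burst_times):
    """Non-preemptive SJF: all jobs arrive at time 0 with known burst times."""
    order = sorted(burst_times.items(), key=lambda item: item[1])  # shortest first
    clock, waiting = 0, {}
    for name, burst in order:
        waiting[name] = clock     # time this job spends waiting before it starts
        clock += burst
    return order, waiting

jobs = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
order, waiting = sjf(jobs)
print("run order:", [name for name, _ in order])            # ['P4', 'P1', 'P3', 'P2']
print("avg wait :", sum(waiting.values()) / len(waiting))   # 7.0
```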
Priority Scheduling: Every process is assigned a priority, and the process with the highest priority is executed first. When the CPU encounters several processes with the same priority, they are scheduled in FCFS order. A task's priority can depend on factors such as its time requirements, memory requirements, and other resource requirements.
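A small sketch of priority scheduling using a heap; ties on priority fall back to submission order, i.e. FCFS, as described above. The process names and priority values are invented, and a smaller number is taken to mean a higher priority.

```python
import heapq

# (priority, submission_order, name): smaller priority number = higher priority.
# The submission order breaks ties, giving FCFS behaviour for equal priorities.
ready = [(2, 0, "logger"), (1, 1, "compiler"), (2, 2, "backup"), (1, 3, "editor")]
heapq.heapify(ready)

while ready:
    priority, order, name = heapq.heappop(ready)
    print(f"running {name} (priority {priority})")
# -> compiler, editor, logger, backup
```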
Round Robin Scheduling: Round robin is another algorithm used to optimize CPU performance. Each process gets control of the CPU for a fixed period called the quantum. Once a process has run for its quantum it is preempted, and the other processes take their turns; a context switch saves the state of each preempted process.
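A sketch of round robin with a hypothetical 4-unit quantum; the remaining burst time carried back to the queue stands in for the saved process state, and all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """burst_times: dict of name -> total CPU time needed; all arrive at time 0."""
    queue = deque(burst_times.items())
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)              # run for at most one quantum
        clock += slice_
        if remaining > slice_:
            queue.append((name, remaining - slice_))  # preempted: back of the queue
        else:
            finish[name] = clock                      # process completed
    return finish

print(round_robin({"P1": 10, "P2": 4, "P3": 6}, quantum=4))
# {'P2': 8, 'P3': 18, 'P1': 20}
```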
Multilevel Queue Scheduling: This is another family of algorithms, designed for situations in which processes belong to different classes. For instance, batch and interactive processes fall into different classes with different response-time requirements, so their scheduling needs differ as well; foreground (interactive) processes are usually given higher priority than background (batch) processes. With a multilevel queue scheduling algorithm, the ready queue is divided into several separate queues, and each process is permanently assigned to one of them based on its properties. For instance, a process's priority, memory size, and type influence the
queue to which it is assigned. Foreground and background processes are then executed from their separate queues, as the sketch below illustrates.
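A sketch of a two-level queue, assuming a simple fixed rule that the foreground (interactive) queue is always served before the background (batch) queue; the process names and their classification are purely hypothetical.

```python
from collections import deque

# Each process is permanently assigned to one queue according to its type.
foreground = deque(["editor", "browser"])   # interactive: higher priority
background = deque(["backup", "indexer"])   # batch: lower priority

def pick_next():
    """Serve the foreground queue first; run background work only when it is empty."""
    if foreground:
        return foreground.popleft()
    if background:
        return background.popleft()
    return None

while (proc := pick_next()) is not None:
    print("running", proc)
# editor, browser, backup, indexer
```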
Multilevel Feedback Queue Scheduling: In a plain multilevel queue algorithm, processes are assigned permanently to one queue when they enter the system and never move between queues. This keeps scheduling overhead low, which is an advantage, but it is inflexible, which is its main drawback. Multilevel feedback queue scheduling addresses this by allowing processes to move between queues, for example demoting a process that consumes too much CPU time to a lower-priority queue.