BN104D: Operating Systems - Memory and Process Management

Melbourne Institute of Technology
School of Information Technology and Engineering
(SITE)
BN104/ BN104D
Submitted by -
Table of Contents
List of Tables
Introduction
B-Q1 Memory Management-Paging
    a. Start and end addresses of frame 1
    b. Page-to-frame mapping checks
    c. Pages not yet loaded into memory
    d. Paging system behaviour when page 3 is referenced
    e. Logical-to-physical address translation and protection
B-Q2 Fragmentation and Memory Mapping
    B-Q2a Frequency, advantages and disadvantages of memory compaction/relocation
B-Q3 Process Management and Scheduling
    B-Q3a Scheduling timelines and ready queue formation

    B-Q3b Waiting time and turnaround time
Conclusion
References

List of Tables
Table 1: FCFS Scheduling
Table 2: SJN Scheduling
Table 3: SRT Scheduling
Table 4: Round Robin Scheduling

Introduction
In this assignment, we apply the technical skills and operating-system concepts learned so far to solve a set of practical questions.
The questions cover memory management, paging, fragmentation, memory compaction, memory mapping and process scheduling, and working through them provides hands-on practice in solving common problems in these areas.
The assignment also helps us understand how the operating system maps physical memory to each process and how the different scheduling algorithms work.

B-Q1 Memory Management-Paging
Table 1 maps pages to frames (e.g. page 1 to frame 0, page 4 to frame 1, etc.) for a
process. The page and frame size is 1K, and the OS uses 1K (starting at address 0)
of memory.
PAGES          FRAMES
               OS (1K, starting at address 0)
0              Frame 0: Page 1
1              Frame 1: Page 4
2              Frame 2: Page 2
3              Frame 3: free
4              Frame 4: Page 8
5              Frame 5: free
6              Frame 6: Page 0
7              Frame 7: Page 5
8              Frame 8: free
a. What is the address at the start of frame 1? What is the address at the end of
frame 1?
Solution:
The address at the start of frame 1 is 1024 and the address at the end of frame 1 is 2047: each frame is 1K (1024 bytes), so frame 1 spans addresses 1 * 1024 = 1024 to (2 * 1024) - 1 = 2047.
b. i. Is page 2 mapped to frame 2? (Yes/No)
Solution:
Yes
ii. Is page 3 mapped to frame 3? (Yes/No)
Solution:
No
c. Which pages have not been loaded into memory yet?
Solution:
Pages 3, 6 and 7 have not been loaded into memory.

d. When Process A references an address on page 3, explain what the paging
system does. What happens to Process A as a consequence, according to
process state transitions?
Solution:
When process A tries to access page 3, which has not been loaded into memory, a page fault occurs. The paging system then brings the required page into main memory, swapping it with a page that is already resident if no free frame is available; loading pages only when they are actually referenced in this way is known as demand paging. For the swap, the operating system uses a page-replacement algorithm such as LRU, under which the page that has been least recently used is the one replaced. Once page 3 has been loaded into memory (RAM), process A can access it.
In terms of process state transitions, when process A references page 3 and the page is not present, the OS moves process A into the blocked (waiting) state. Once page 3 has been loaded into memory as a result of the page replacement, the OS moves process A back into the ready state.
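The behaviour described above can be sketched in a few lines of Python. This is only an illustrative toy model assuming a fixed number of frames and an LRU eviction policy; the class and method names are my own and do not come from the assignment:

from collections import OrderedDict

class DemandPager:
    """Toy demand-paging simulator with LRU replacement (illustrative only)."""

    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()   # page -> frame index, ordered from least to most recently used

    def access(self, page):
        if page in self.frames:
            self.frames.move_to_end(page)                    # page hit: mark as most recently used
            return f"hit: page {page} already in memory"
        # Page fault: at this point the referencing process would be moved to the blocked state.
        if len(self.frames) < self.num_frames:
            frame = len(self.frames)                         # a free frame is still available
        else:
            victim, frame = self.frames.popitem(last=False)  # evict the least recently used page
        self.frames[page] = frame
        # After the page has been loaded, the process is moved back to the ready state.
        return f"page fault: page {page} loaded into frame {frame}"

pager = DemandPager(num_frames=3)
for p in [1, 4, 2, 3]:            # referencing page 3 triggers a fault and an LRU eviction
    print(pager.access(p))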
e. Memory Management Unit (MMU) performs translation of logical addresses
to physical addresses. In addition, it also performs memory protection. Map
the following logical addresses to physical addresses (in decimal) and also
check the protection during translation. Show the calculations and steps
involved during logical to physical address translation process:
Solution:
We can solve this question using the following formulae:
Offset = logical address mod page size
Page no. = logical address / page size
Frame no. = frame corresponding to that page (from Table 1)
Physical address = (page size * frame no.) + offset
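As an illustration of these steps, the following minimal Python sketch translates a logical address using the page-to-frame mapping from Table 1. The function name and the zero-based floor division used to find the page are assumptions of this sketch, not part of the assignment:

PAGE_SIZE = 1024
# Page-to-frame mapping from Table 1; pages 3, 6 and 7 are not resident in memory.
PAGE_TABLE = {1: 0, 4: 1, 2: 2, 8: 4, 0: 6, 5: 7}

def translate(logical_address):
    """Translate a logical address to a physical address, with a protection check."""
    page = logical_address // PAGE_SIZE      # page the address falls in (zero-based here, by assumption)
    offset = logical_address % PAGE_SIZE
    if page not in PAGE_TABLE:               # protection check: the page is not loaded/mapped
        return None                          # the MMU would raise a page fault or protection trap
    frame = PAGE_TABLE[page]
    return (PAGE_SIZE * frame) + offset

print(translate(4120))                       # page 4 -> frame 1 -> (1024 * 1) + 24 = 1048

Running translate(4120), for example, gives page 4, frame 1 and physical address 1048, matching example iii below.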

i. Logical address 1023:
Offset = 1023 mod 1024 = 1023
Page no. = 1023 / 1024 ≈ 1
Frame mapped to page 1 = 0
Physical address = (1024 * 0) + 1023 = 1023
ii. Logical address 3000:
Offset = 3000 mod 1024 = 952
Page no. = 3000 / 1024 ≈ 3
Frame mapped to page 3 = no mapping (page 3 has not been loaded)
Physical address = not mapped; the MMU reports a protection fault
iii. Logical address 4120:
Offset = 4120 mod 1024 = 24
Page no. = 4120 / 1024 ≈ 4
Frame mapped to page 4 = 1
Physical address = (1024 * 1) + 24 = 1048
iv. Logical address 5000:
Offset = 5000 mod 1024 = 904
Page no. = 5000 / 1024 ≈ 5
Frame mapped to page 5 = 7
Physical address = (1024 * 7) + 904 = 8072

Logical address    Physical address
1023               1023
3000               Not mapped
4120               1048
5000               8072

B-Q2 Fragmentation and Memory Mapping
B-Q2a Memory compaction/relocation is done to use the memory efficiently. In your
opinion, how often should memory compaction/relocation be performed? What do
you think the advantages and disadvantages would be of performing it even more
often?
Solution:
Memory compaction is the process of collecting the small chunks of free memory scattered through main memory and combining them into one larger contiguous block. It is performed so that memory can be used more efficiently. Compaction cannot be performed with static relocation; it is only possible with dynamic relocation, because the processes that are moved must have their addresses rebound at run time.
In my opinion, memory compaction should be performed at a moderate, optimum rate rather than very frequently. The advantage of compacting more often is that the larger contiguous free block it creates can accommodate more incoming processes over time, giving better utilisation of memory. The disadvantage is that compaction consumes CPU time to find, move and merge the free memory chunks, so doing it too often reduces the time available for useful work.
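To make the idea concrete, here is a small illustrative Python sketch (the block layout, names and values are my own assumptions, not part of the assignment) that relocates allocated blocks towards address 0 so that the scattered free chunks end up as one contiguous region:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    start: int
    size: int
    owner: Optional[str]     # None means the block is free

def compact(blocks):
    """Relocate allocated blocks to the bottom of memory and merge free space into one chunk."""
    total = sum(b.size for b in blocks)
    next_free = 0
    compacted = []
    for b in sorted(blocks, key=lambda b: b.start):
        if b.owner is not None:
            compacted.append(Block(next_free, b.size, b.owner))   # block moves; its addresses must be rebound
            next_free += b.size
    if next_free < total:
        compacted.append(Block(next_free, total - next_free, None))  # single free chunk at the top
    return compacted

memory = [Block(0, 100, "P1"), Block(100, 50, None), Block(150, 80, "P2"), Block(230, 70, None)]
for b in compact(memory):
    print(b)

Every relocated block's addresses have to be rebound, which is why compaction requires dynamic relocation and why doing it too often costs CPU time.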

B-Q3 Process Management and Scheduling
B-Q3a Given the following arrival time and CPU cycle times.

Job    Arrival time    CPU cycles required
A      0               13
B      1               6
C      3               3
D      7               12
E      9               2

Draw a timeline for each of the following scheduling algorithms (3X4 = 12 Marks) and also show the details of the ready queue formation during the timeline. (2X4 = 8 Marks)
Solution:
i) FCFS
Timeline: 0 [A] 13 [B] 19 [C] 22 [D] 34 [E] 36
Ready queue formation: A | B | C | D | E
ii) SJN
Timeline: 0 [A] 13 [E] 15 [C] 18 [B] 24 [D] 36
Ready queue formation: A | E | C | B | D
iii) SRT
Timeline: 0 [A] 1 [B] 3 [C] 6 [B] 7 [B] 9 [B] 10 [E] 12 [A] 24 [D] 36
Ready queue formation: A | B | C | B | B | B | E | A | D
iv) Round Robin (quantum = 3)

Timeline: 0 [A] 3 [B] 6 [C] 9 [D] 12 [E] 14 [A] 17 [B] 20 [D] 23 [A] 26 [D] 29 [A] 32 [D] 35 [A] 36
Ready queue formation: A | B | C | D | E | A | B | D | A | D | A | D | A
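The timelines above were worked out by hand. As a cross-check, the following minimal Python sketch simulates the preemptive SRT case one time unit at a time; the job data comes from the table above, the function and variable names are my own, and consecutive slices of the same job are merged (so B appears as a single 6-10 run):

# Illustrative sketch: preemptive Shortest Remaining Time (SRT) simulation, one time unit at a time.
jobs = {"A": (0, 13), "B": (1, 6), "C": (3, 3), "D": (7, 12), "E": (9, 2)}  # name: (arrival, burst)

def srt_schedule(jobs):
    remaining = {name: burst for name, (arrival, burst) in jobs.items()}
    timeline = []                          # list of (start, end, job) slices
    t = 0
    while any(r > 0 for r in remaining.values()):
        ready = [n for n, (arr, _) in jobs.items() if arr <= t and remaining[n] > 0]
        if not ready:
            t += 1
            continue
        # Pick the ready job with the shortest remaining time; ties go to the earlier arrival.
        job = min(ready, key=lambda n: (remaining[n], jobs[n][0]))
        if timeline and timeline[-1][2] == job and timeline[-1][1] == t:
            timeline[-1] = (timeline[-1][0], t + 1, job)    # extend the current slice
        else:
            timeline.append((t, t + 1, job))
        remaining[job] -= 1
        t += 1
    return timeline

for start, end, job in srt_schedule(jobs):
    print(f"{job}: {start}-{end}")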

B-Q3b Using either internet resources or books, understand the concepts of waiting
time and turnaround time. Define those terms (waiting time and turnaround time)
in your own words. Then calculate waiting time and turnaround time for every job
for all four scheduling algorithms mentioned in B-Q3a (details of the calculations
are essential).
Solution:
Turnaround time is the total time a process takes from its submission (arrival) until its completion; it therefore includes both the time the process spends executing and the time it spends waiting. It can be calculated as:
Turnaround time = process completion time - process arrival time
Waiting time is the time a process spends in the ready queue waiting for the CPU while other processes are being executed. A process may have to wait because:
1. a resource it needs in order to execute is unavailable;
2. a process of higher priority enters the queue; or
3. under SJF/SRT scheduling, a process with a shorter CPU burst enters the queue, so the current process with the longer remaining burst has to wait.
Waiting time can be calculated as:
Waiting time = process turnaround time - process burst time
Waiting time = process completion time - (arrival time + burst time)
Abbreviations used –
TAT= Turnaround Time
WT = Waiting Time
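As a worked illustration of these formulas, the short Python sketch below computes TAT and WT for each job from the FCFS completion times in B-Q3a (the dictionary layout and names are my own):

# Waiting time and turnaround time from the FCFS timeline in B-Q3a.
# Arrival and burst come from the job table; completion times are read off the FCFS timeline.
jobs = {
    "A": {"arrival": 0, "burst": 13, "completion": 13},
    "B": {"arrival": 1, "burst": 6,  "completion": 19},
    "C": {"arrival": 3, "burst": 3,  "completion": 22},
    "D": {"arrival": 7, "burst": 12, "completion": 34},
    "E": {"arrival": 9, "burst": 2,  "completion": 36},
}

for name, j in jobs.items():
    tat = j["completion"] - j["arrival"]       # Turnaround time = completion - arrival
    wt = tat - j["burst"]                      # Waiting time = turnaround - burst
    print(f"{name}: TAT = {tat}, WT = {wt}")

The same two lines of arithmetic apply to the SJN, SRT and Round Robin completion times used in Tables 2, 3 and 4.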