Data Communication and Net-Centric Computing: Analysis and Solutions

Running Head: DATA COMMUNICATION AND NET-CENTRIC COMPUTING 1
Data Communication and Net-Centric Computing
Institution
Name of Student
Date
1.
a).
[Diagram: five 400 kbps channels (A–E) feeding a TDM MUX onto a single output link; the interleaved output bit pattern is omitted.]
b).
Five 400 kbps channels; time slot = 5 bits per channel.
Each channel delivers 400,000 bits per second in 5-bit slots, and one frame carries one slot from every channel, so
Frame rate = channel bit rate / bits per slot = 400,000 / 5 = 80,000 frames per second.
Frame duration = 1 / 80,000 = 0.0000125 seconds = 12.5 µs.
(Each frame carries 5 × 5 = 25 bits, so 80,000 frames/s × 25 bits = 2,000,000 bps, matching the MUX output rate in part (c).)
c). MUX output bit rate = N × maximum input rate
= 5 × 400 kbps = 2000 kbps
= 2,000,000 bps
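The arithmetic in (b) and (c) can be checked with a short script. This is a minimal sketch under the interpretation above (five 400 kbps inputs, one 5-bit slot per channel per frame); the function name and defaults are illustrative.

```python
# A minimal sketch of the TDM arithmetic in parts (b) and (c), assuming five
# 400 kbps inputs and one 5-bit slot per channel per frame (names are illustrative).

def tdm_parameters(num_channels=5, channel_rate_bps=400_000, slot_bits=5):
    """Return (frame_rate_fps, frame_duration_s, mux_output_bps)."""
    frame_rate = channel_rate_bps / slot_bits      # each channel fills one slot per frame
    frame_duration = 1.0 / frame_rate
    frame_bits = num_channels * slot_bits          # one slot from every channel
    mux_output = frame_rate * frame_bits           # equals num_channels * channel_rate_bps
    return frame_rate, frame_duration, mux_output

rate, duration, output = tdm_parameters()
print(f"frame rate      = {rate:,.0f} frames/s")     # 80,000 frames/s
print(f"frame duration  = {duration * 1e6:.1f} us")  # 12.5 us
print(f"MUX output rate = {output:,.0f} bps")        # 2,000,000 bps
```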
2. (i)
Given: frame size = 600 bytes, overhead = 47 bytes, ACK frame size = 78 bytes.
η = efficiency
(a) R = 1.5 Mbps
Processing time = 1.2 ms
Load size = 600 + 47 = 647 bytes
Tx = load size / R = 647 × 8 / (1.5 × 10^6) = 3.45 ms
Tack = ACK frame size / R = 78 × 8 / (1.5 × 10^6) = 0.416 ms
When RTT = 1.5 ms:
Total time = Tx + RTT + processing time + Tack
= 3.45 ms + 1.5 ms + 1.2 ms + 0.416 ms = 6.566 ms
η = Tx / total time = 3.45 / 6.566 = 52.54%
When RTT = 13 ms:
Total time = Tx + RTT + processing time + Tack
= 3.45 ms + 13 ms + 1.2 ms + 0.416 ms = 18.066 ms
η = Tx / total time = 3.45 / 18.066 = 19.09%
When RTT = 117 ms:
Total time = Tx + RTT + processing time + Tack
= 3.45 ms + 117 ms + 1.2 ms + 0.416 ms = 122.066 ms
η = Tx / total time = 3.45 / 122.066 = 2.83%
When RTT = 1.25 s:
Total time = Tx + RTT + processing time + Tack
= 3.45 ms + 1250 ms + 1.2 ms + 0.416 ms = 1255.066 ms
η = Tx / total time = 3.45 / 1255.066 = 0.275%
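The timing model above can be expressed as a small helper so that each RTT case (and the 1 Gbps cases that follow) is a single function call. This is a sketch under the values given (647-byte load, 78-byte ACK, 1.2 ms processing); the function name is illustrative.

```python
# Stop-and-Wait efficiency under the timing model used above:
# total time = Tx(frame) + RTT + processing + Tx(ACK), efficiency = Tx(frame) / total.

def stop_and_wait_efficiency(rate_bps, rtt_s, frame_bytes=600, overhead_bytes=47,
                             ack_bytes=78, processing_s=1.2e-3):
    tx_frame = (frame_bytes + overhead_bytes) * 8 / rate_bps   # 647-byte load
    tx_ack = ack_bytes * 8 / rate_bps
    return tx_frame / (tx_frame + rtt_s + processing_s + tx_ack)

# Reproduces the 1.5 Mbps figures above, e.g. about 52.5% at RTT = 1.5 ms.
for rtt_ms in (1.5, 13, 117, 1250):
    eff = stop_and_wait_efficiency(1.5e6, rtt_ms / 1000)
    print(f"RTT = {rtt_ms:>6} ms -> efficiency = {eff:.2%}")
```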
(b) R = 1 Gbps
Processing time = 1.2 ms
Load size = 600 + 47 = 647 bytes
Tx = load size / R = 647 × 8 / (1 × 10^9) = 0.0052 ms
Tack = ACK frame size / R = 78 × 8 / (1 × 10^9) = 0.00062 ms
When RTT = 1.5 ms:
Total time = Tx + RTT + processing time + Tack
= 0.0052 + 1.5 + 1.2 + 0.00062 = 2.71 ms
η = Tx / total time = 0.0052 / 2.71 = 0.19%
When RTT = 13 ms:
Total time = Tx + RTT + processing time + Tack
= 0.0052 + 13 + 1.2 + 0.00062 = 14.21 ms
η = Tx / total time = 0.0052 / 14.21 = 0.036%
When RTT = 117 ms:
Total time = Tx + RTT + processing time + Tack
= 0.0052 + 117 + 1.2 + 0.00062 = 118.20582 ms
η = Tx / total time = 0.0052 / 118.20582 = 0.0044%
When RTT = 1.25 s:
Total time = Tx + RTT + processing time + Tack
= 0.0052 + 1250 + 1.2 + 0.00062 = 1251.20582 ms
η = Tx / total time = 0.0052 / 1251.20582 = 0.0004%
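Using the same hypothetical stop_and_wait_efficiency helper sketched after the 1.5 Mbps results, the 1 Gbps cases reduce to a short sweep; the printed values should match the figures above to rounding.

```python
# Same model at R = 1 Gbps: transmission time is now negligible,
# so efficiency collapses as the RTT grows.
for rtt_ms in (1.5, 13, 117, 1250):
    eff = stop_and_wait_efficiency(1e9, rtt_ms / 1000)
    print(f"RTT = {rtt_ms:>6} ms -> efficiency = {eff:.4%}")
```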
(ii).
The Stop-and-Wait ARQ
1. Retransmission of data in the case of a lost frame:
a. The sender sends an information frame to the receiver.
b. The sender waits for an ACK before sending the next frame.
c. The receiver sends an ACK if the frame is correctly received.
d. If no ACK arrives within the time-out, the sender resends the frame.
2. When an error is found in a data frame during transit, a NAK frame is returned. The NAK informs the sender to retransmit the last frame.
3. Whenever communication is bidirectional, both parties transmit and receive data. Outstanding ACKs are carried in the header of information frames; this piggybacking saves bandwidth, since the overhead of a data frame and an ACK frame (addresses, CRC, etc.) is combined into a single frame (Malhotra, 2016).
The Selective Reject ARQ
1. This ARQ addresses the shortcomings of Go-Back-N ARQ by accepting error-free frames that arrive out of order and retransmitting only individual frames.
2. Only the selected damaged frames are retransmitted.
3. The sender resends only those frames for which a NAK has been received.
4. When a frame is corrupted in transit, a NAK is returned and the frame is resent out of sequence.
5. The sender keeps all frames that have not yet been acknowledged.
6. The receiver must sort the frames in its possession and insert the retransmitted frames into their appropriate place.
Go-Back-N ARQ
1. The receiver window size is 1.
2. Whenever a frame is lost or corrupted, every frame sent since the last acknowledged frame is transmitted again.
3. If frames 1, 2, 3 and 4 are sent but the sender only receives a NAK for frame 3, the NAK requests frame 3 and all frames sent after it to be retransmitted (see the sketch below).
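A minimal sketch of the Go-Back-N retransmission rule described above. The send and recv_feedback callables are abstract stand-ins rather than a real protocol API; the function and parameter names are illustrative.

```python
# Minimal sketch of the Go-Back-N rule above: on a NAK for frame k,
# resend frame k and every frame transmitted after it.

def go_back_n_send(frames, window_size, send, recv_feedback):
    base = 0        # oldest unacknowledged frame
    next_seq = 0    # next frame to transmit
    while base < len(frames):
        # Fill the window.
        while next_seq < min(base + window_size, len(frames)):
            send(next_seq, frames[next_seq])
            next_seq += 1
        kind, seq = recv_feedback()    # ('ACK', n) is cumulative, ('NAK', n) rejects frame n
        if kind == 'ACK':
            base = seq + 1
        else:
            next_seq = seq             # go back: resend frame seq and all later frames
```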
(iii).
(a)
Bandwidth 512kbps
Round-trip delay = 100 ms.
Bandwidth-delay product = 512 × 10^3 bps × 100 × 10^-3 s = 51,200 bits
(b)
Utilization percentage=?
Data frame length = 128 bytes = 128 × 8 = 1024 bits
Link utilization = 1024 / 51,200 = 0.02 = 2% utilization.
c).
Utilization percentage if using Go-Back-N ARQ with a window size of 9:
The system can have up to 9 frames outstanding per round trip, which is 9 × 1024 = 9216 bits.
Utilization = 9216 / 51,200 = 0.18 = 18%.
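The three figures in (iii) follow directly from the bandwidth-delay product; the short calculation below reproduces them, assuming the 512 kbps link, 100 ms round trip, and 128-byte frames used above.

```python
# Bandwidth-delay product and link utilization for part (iii).

link_rate_bps = 512_000
rtt_s = 0.100
frame_bits = 128 * 8                                  # 1024 bits

bdp_bits = link_rate_bps * rtt_s                      # 51,200 bits can be "in flight"
print(f"bandwidth-delay product: {bdp_bits:,.0f} bits")
print(f"Stop-and-Wait utilization: {frame_bits / bdp_bits:.0%}")          # 2%
print(f"Go-Back-N (W = 9) utilization: {9 * frame_bits / bdp_bits:.0%}")  # 18%
```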
3. (i) The answer depends on whether the diameter of the network is treated as dynamic or static.
If it is dynamic, then YES, the flooding approach guarantees that the packet reaches its destination, because the hop count is large enough to reach the farthest node and all available paths are used.
If the diameter is static, then NO, flooding does not guarantee delivery, since the destination may now lie further away than the current diameter allows.
(ii)
a) Dijkstra’s Algorithm
Dijkstra's algorithm computes the least-cost path from a single node to all other nodes in a network. It is a link-state algorithm.
After the k-th iteration, the least-cost paths to k destinations are known; these k paths have the k smallest costs among the least-cost paths to all destinations (Murota, 2014).
Step | N'     | D(A),p(A) | D(B),p(B) | D(C),p(C) | D(E),p(E) | D(F),p(F) | D(G),p(G)
0    | H      | ∞         | 10,H      | ∞         | ∞         | ∞         | 5,H
1    | HG     | 14,G      | 10,H      |           |           | 11,G      | 5,H
2    | HGB    | 14,G      | 10,H      | 8,G       | 10,G      | 11,G      | 5,H
3    | HGBC   | 14,G      | 10,H      | 8,G       | 10,G      | 11,G      | 5,H
4    | HGBCE  | 14,G      | 10,H      | 8,G       | 10,G      | 11,G      | 5,H
5    | HGBCEF | 14,G      | 10,H      | 8,G       | 10,G      | 11,G      | 5,H
Thus the shortest paths are: H-G-A = 14; H-B = 10; H-G-B-C = 8; H-E = 10; H-F = 11; H-G = 5.
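A generic link-state computation of these least-cost paths can be sketched as below. The exercise's topology is not reproduced here, so graph is a placeholder adjacency dictionary the reader would fill in from the assignment's figure.

```python
import heapq

# Generic Dijkstra shortest-path computation from one source node.
# `graph` is a placeholder adjacency dict such as {'H': {'B': 10, 'G': 5}, ...}
# built from the exercise's figure (not reproduced here).

def dijkstra(graph, source):
    """Return (dist, prev): least cost and predecessor for every reachable node."""
    dist = {source: 0}
    prev = {}
    visited = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, {}).items():
            if v not in visited and d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

# Usage: dist, prev = dijkstra(graph, 'H'); walking prev back from a node gives
# its path, e.g. the 14-cost H-G-A route reported in the table above.
```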
b) Bellman-Ford algorithm
Iteration | H (src) | A   | B   | C   | D   | E   | F   | G   | H
0         | 0       | inf | inf | inf | inf | inf | inf | inf | inf
1         | 0       | inf | 10  | inf | inf | inf | inf | 5   | 0
2         | 0       | 14  | inf | inf | 9   | inf | 11  | inf | 0
3         | 0       | inf | inf | inf | inf | 12  | inf | inf | 0
4         | 0       | inf | inf | 14  | inf | inf | inf | inf | 0
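The distance-vector counterpart can be sketched in the same way. Here edges is a placeholder list of (u, v, cost) links for the assignment's graph, which is not reproduced here; links are assumed bidirectional.

```python
# Bellman-Ford relaxation from source node H (distance-vector style).

def bellman_ford(nodes, edges, source):
    """Return least-cost estimates after len(nodes) - 1 relaxation rounds."""
    dist = {n: float('inf') for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):          # one round per row of the table above
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
            if dist[v] + w < dist[u]:        # treat each link as bidirectional
                dist[u] = dist[v] + w
    return dist
```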
4. a) Maximum number of stations = 23.
b) New number of stations supported after reducing R by half.
=45 Stations
c) Total nodes supported by the accountant’s proposal.
= 23 stations
5. (i).
(a). N bits of data are transferred in a constant time k, so the application transfers data continuously for a long period. A circuit-switched network is therefore suitable for this application, since it ensures the application runs without interruption.
(b). When a packet-switched network is used, there is no need for a congestion-control mechanism, because every link has a large capacity and the application can transmit data over one or several links simultaneously; there is ample bandwidth and the data rates are low. For this reason, only a small queue may form, without congestion (Walsh et al., 2014).
(ii). (a).
Constant source (transmission) rate = 32 kbps = 32,000 bps.
The packetization delay is the time needed to fill one cell.
Total cell size = 8P bits (P payload bytes).
Packetization delay = 8P / 32,000 s = 0.25P ms.
Therefore the packetization delay is 0.25 × P milliseconds.
(b). (i) For P = 1500 bytes:
Packetization delay = 0.25 × 1500 = 375 ms.
For P = 48 bytes:
Packetization delay = 0.25 × 48 = 12 ms.
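The two delays follow directly from the 0.25 × P rule above; a short check (the function name is illustrative):

```python
# Packetization delay: time to collect P payload bytes at a constant 32 kbps source rate.

def packetization_delay_ms(payload_bytes, source_rate_bps=32_000):
    return payload_bytes * 8 / source_rate_bps * 1000   # 0.25 ms per payload byte at 32 kbps

for p in (1500, 48):
    print(f"P = {p:>4} bytes -> {packetization_delay_ms(p):.0f} ms delay")  # 375 ms and 12 ms
```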