Advanced Topics in IP Networks: IP Lookup Algorithm Analysis

Advanced Topics in IP networks: Exercise 1
Submission date: 15.11.2018
1. Consider the case of IP lookup with Binary Search on Prefix Lengths (M. Waldvogel, G. Varghese, J. Turner, and B. Plattner, "Scalable High Speed IP Routing Lookups," Proceedings of ACM SIGCOMM, 1997).
(a) Give a worst case example for a lookup with markers but without pre-computations of the
bmp and calculate its complexity (assume the packet in your example is not the first
forwarded to its destination)
The search still tries to find, among the distinct prefix lengths collected in an array L, the longest prefix matching the destination address, performing one hash lookup per probed length. Without a bmp precomputed for each marker, however, a marker hit can steer the binary search into the longer half even though no longer prefix actually matches, and the search then has to backtrack into the shorter half; the sketch below counts what this costs in a bad case.
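As a concrete illustration (this is my own sketch, not the assignment's official worked example), assume a table built so that every probed length returns a hash hit that is only a marker: for every length one can insert a prefix that agrees with the destination address on all but its last bit, so each probe finds a matching marker but never a matching prefix. Without a precomputed bmp, the search must then explore the longer half and, after finding nothing there, backtrack into the shorter half. The short Python sketch below counts the resulting hash probes:

# Count the hash probes of binary search on prefix lengths with naive
# backtracking, assuming every probed length hits a misleading marker.
# The pathological table and W = 32 (IPv4) are assumptions of this sketch.

def probes(lo: int, hi: int) -> int:
    if lo > hi:                  # empty range of candidate lengths
        return 0
    mid = (lo + hi) // 2         # one hash probe at length `mid`
    # The marker at `mid` sends the search into the longer half; nothing
    # useful is found there, so the search backtracks into the shorter half.
    return 1 + probes(mid + 1, hi) + probes(lo, mid - 1)

print(probes(1, 32))             # 32: every length is probed once

With such a table the naive search ends up probing on the order of W lengths, i.e. O(W) hash lookups, instead of the log W + 1 achieved once the bmp is precomputed and stored with every marker.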
(b) Show an example where the worst complexity of lookup in the final algorithm with
markers and bmp is logW+1.

(c) Consider that the prefixes in your database are only of lengths: /31 /22 /18 /15 /14 /8 . Design
an algorithm that is optimized to this type of database. What is the complexity of your
suggested algorithm?
Given forwarding table 2:
128.17/16
128.179/9
128.15.19/23
160.3/16
160.128/9
The procedure works by repeatedly halving the set of prefix lengths that actually occur in the database. As a prerequisite for this example, markers are needed to direct the binary search toward the longer, more specific prefixes.
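To connect this back to part (c), here is a minimal sketch of one reading of that question, under the assumption that the intended optimization is simply to run the binary search over the six lengths that occur (8, 14, 15, 18, 22 and 31) rather than over all 32 possible lengths:

# Hedged sketch for part (c): restrict the binary search to the prefix
# lengths that actually occur (/31 /22 /18 /15 /14 /8). The balanced
# tree built here is an illustrative choice, not taken from the assignment.

LENGTHS = [8, 14, 15, 18, 22, 31]        # sorted distinct prefix lengths

def search_tree_depth(lens) -> int:
    """Depth of a balanced binary search tree over the given lengths; a
    lookup probes one length per level, so this is the worst-case number
    of hash probes (markers and bmp are handled as in the full algorithm)."""
    if not lens:
        return 0
    mid = len(lens) // 2
    return 1 + max(search_tree_depth(lens[:mid]),
                   search_tree_depth(lens[mid + 1:]))

print(search_tree_depth(LENGTHS))        # 3 hash probes in the worst case

Under that assumption the suggested algorithm needs at most ceil(log2 6) = 3 hash probes per lookup, compared with log 32 = 5 for a search over all IPv4 prefix lengths.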
Linear Search of Hash Tables
The simplest scheme keeps one hash table per prefix length and searches these tables one after another, as organized below.
LENGTH (L)    HASH    HASH TABLE ENTRIES
9             o       01010
16            o       01010, 0110110
23            o       011011010101
The table above depicts a small routing table of five prefixes with lengths 9, 9, 16, 16 and 23; because only three distinct lengths occur, the prefixes are spread over just three hash tables. A search for a destination address D therefore begins with the longest length, L = 23: the first L bits of D are extracted and looked up in the hash table for that length. Since no BMP is precomputed, a miss simply moves the search one position down the array of lengths, so progressively shorter lengths are tried until a matching prefix is found.

Laying the lengths out as an array of records L, L[i].length denotes the prefix length stored at position i, and L[i].hash points to the hash table holding the prefixes of length L[i].length. The original code figure is not reproduced in this document; a sketch of the linear search it describes is given below.
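Since the original figure is missing, the following is only a reconstruction sketch of the linear search; the record layout (length plus hash table) follows the text above, while the dictionary-based tables, the bit strings and the entry names are assumptions of this sketch:

# Reconstruction sketch of the linear search over per-length hash tables.
# L is sorted with the longest prefix length first, so the first hit is
# the longest matching prefix. The example data mirrors the three lengths
# (23, 16, 9) discussed in the text; the bit strings are illustrative.

from typing import Optional

# Each record: (length, hash table mapping a length-bit prefix to its entry)
L = [
    (23, {"01101101010100000000000": "P3"}),
    (16, {"0110110101010000": "P2a", "0101000000000000": "P2b"}),
    (9,  {"011011010": "P1a", "010100000": "P1b"}),
]

def linear_lookup(dst_bits: str) -> Optional[str]:
    """Return the entry of the longest matching prefix of dst_bits, or None."""
    for length, table in L:                   # longest length first
        entry = table.get(dst_bits[:length])  # hash the first `length` bits
        if entry is not None:
            return entry                      # first hit = longest matching prefix
    return None

print(linear_lookup("01101101010100000000000000000000"))  # -> "P3"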
2. Build the exact IP-lookup data structure on the given forwarding table when using the Binary
Search on Prefix Intervals IP-lookup algorithm.
Given the prefixes P1, P2 and P3 of lengths 9, 16 and 23, the first entry of the array L points to P1's hash table, the second to P2's, and the last to P3's:

1    P1 = 9     01010
2    P2 = 16    01010, 011011
3    P3 = 23    011011010101

((a), (b) and (c) are labels carried over from the original figure, which is not reproduced here.)
Binary search (a) therefore commences in the middle of the array, at length 16, probing the hash table that holds P2; in the original figure the small triangles mark which hash table is being probed. When the address whose 23-bit prefix is 011011010101 is looked up, the probe in the middle hash table lands on the marker entry (011011 in the table above), and it is this marker that directs the binary search to the lower half of the table, where the longer /23 entry is found.
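To complement the walk-through above, here is a small sketch of the binary search over these three lengths with markers carrying a precomputed bmp. The bit strings, the record fields and the prefix names are assumptions of this sketch, not the exact data structure requested in Question 2 (which asks for Binary Search on Prefix Intervals):

# Sketch of binary search on prefix lengths with markers and precomputed
# bmp, over the three lengths used in the walk-through (9, 16, 23).
# Table contents and field names are assumed purely for illustration.

LENGTHS = [9, 16, 23]                    # sorted, shortest first

# Each hash table maps the first `length` bits of an address to a record:
#   "prefix": name of a real prefix stored here (None for a pure marker)
#   "bmp":    best matching prefix precomputed for this marker/prefix
TABLES = {
    9:  {"011011010": {"prefix": "P1", "bmp": "P1"}},
    16: {"0110110101010000": {"prefix": "P2", "bmp": "P2"},   # also acts as a marker
         "0110110101011111": {"prefix": None, "bmp": "P1"}},  # pure marker
    23: {"01101101010100000000000": {"prefix": "P3", "bmp": "P3"}},
}

def lookup(dst_bits: str):
    """Return the longest matching prefix of dst_bits (or None)."""
    lo, hi, best = 0, len(LENGTHS) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        length = LENGTHS[mid]
        rec = TABLES[length].get(dst_bits[:length])   # one hash probe
        if rec is None:
            hi = mid - 1                 # miss: only shorter lengths can match
        else:
            best = rec["bmp"]            # remember the precomputed bmp
            lo = mid + 1                 # hit (prefix or marker): try longer lengths
    return best

print(lookup("01101101010100000000000000000000"))   # -> "P3" after 2 probes
print(lookup("01101101010111110000000000000000"))   # marker at /16 -> bmp "P1"

The second lookup shows why the precomputed bmp matters: the marker at length 16 sends the search to length 23, the probe there misses, but the correct answer P1 is already known, so no backtracking is needed.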
3. What is the worst case, in memory references, of the data structure that you built in Section A?
Figure 2: Standard binary search (a balanced binary search tree with nodes numbered 1 to 31; the drawing itself is not reproduced here).
Compared with the linear search algorithm, binary search over the hash tables needs only O(log2 W_dist) expected time, where W_dist is the number of distinct prefix lengths. It does, however, require markers in the tables: entries at shorter lengths that point the search toward the longer prefixes.
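As a worked instance (the count is my own illustration, using the three distinct lengths of the forwarding table above):
W_dist = 3 (lengths 9, 16, 23), so a lookup needs at most ⌈log2(3 + 1)⌉ = 2 hash probes, versus up to 3 probes for the linear scan over the same tables.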
4. What is the memory requirement of a data structure that you built in section A?
A balanced binary search tree (BST) needs N + 2 elements of memory; for the structure as implemented here this comes to 30 elements.
5. What is the worst case of the algorithm when the memory reference speed is 50 nsec?
With 50 ns DRAM this corresponds to a lookup rate high enough to forward a continuous stream of 64-byte packets arriving on an OC-192c line.
It is, however, hard to support fast incremental updates in the worst case, since the insertion or deletion of a (short) prefix can change the longest matching prefix of many basic intervals.
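To attach a number to the worst case (an illustrative calculation that assumes the balanced tree of Figure 2, with 31 nodes and hence ⌈log2(31 + 1)⌉ = 5 levels, and one memory reference per level):
worst-case lookup time ≈ 5 × 50 ns = 250 ns, i.e. about 4 × 10^6 lookups per second from a single memory bank.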
6. What is the latency for a single packet (in the worst case)?
To begin with, we construct a network scenario with propagation delay. Imagine that the source of interest generates a packet, which we refer to as the marked packet. Every interfering source in the network (that is, every source except the one whose delay we measure) generates packets in such a way that, at every router, they all arrive and get queued for transmission just before the marked packet. This means that when the marked packet arrives there are n − 1 packets queued ahead of it. Since there are n − 1 packets in the queue, it effectively takes τ·(n − 1) + τ = n·τ seconds for the packet to reach the next router, or the sink if that router was the last one on the path. Note that, because of the way the interfering sources emit their packets, by the time the marked packet reaches the next router on its path, the packets that were ahead of it at the previous router have already been transmitted further into the network, and the same situation repeats: the marked packet again finds n − 1 packets waiting ahead of it.
T = N · n · τ + τ = (N · n + 1) · τ
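For example (all three numbers below are assumed purely for illustration): with N = 3 routers on the path, n = 4 packets per queue and τ ≈ 51.2 ns (the transmission time of a 64-byte packet on a 10 Gb/s link),
T = (3 · 4 + 1) · 51.2 ns ≈ 666 ns.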
7. To build a pipeline it was decided to put each level of the tree in a separate memory bank. Now what is the worst case of the algorithm?
Even with a separate memory bank per level, a route-lookup operation on a trie remains slow for a single packet in the worst case, because it can still require up to 32 memory accesses. In addition, a lot of storage is wasted in a trie on null pointers and on long chains of 1-degree nodes.
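To make the effect of the per-level memory banks concrete (an illustrative calculation that assumes the 5-level tree of Figure 2 and 50 ns per memory access): the latency of one lookup stays 5 × 50 ns = 250 ns, but because each level lives in its own bank a new lookup can enter the pipeline every 50 ns, so throughput rises to about 20 × 10^6 lookups per second — roughly the arrival rate of back-to-back 64-byte packets on a ~10 Gb/s (OC-192c) link, each of which occupies the wire for about 51 ns.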
8. What is the latency (in the worst case) for a single packet using the Section VI algorithm?
It takes only 7 hash computations, and the search is limited to the prefix lengths that contain only one entry.
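Translating this into time (assuming, as in Question 5, one 50 ns memory reference per hash computation — an assumption, since the assignment does not state it):
worst-case latency ≈ 7 × 50 ns = 350 ns for a single packet.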