ICT101: Analysis of Sorting Algorithms - Group Assignment Report

ICT101 – Group Assignment brief
ICT 101 Discrete Mathematics for IT
Assignment # 3 - Group Assignment
Trimester 3, 2018
<Submitted by>
<Instructor name>
<Date>
1. Introduction
To sort a set of objects is to order them according to an order relation defined on those objects. Often the objects are indexed by a key, and the order relation applies to that key: for example, every student enrolled at the university receives a number that serves as the key for accessing his or her file. We sort objects when they are numerous and we must be able to reach them quickly: searching for an element in an unsorted array is an expensive operation, whereas if the array is sorted the search can be done in logarithmic time. We say that a sorting algorithm is in place if it uses only a constant amount of memory in addition to the space used to store the objects being sorted. When the objects take only a few distinct values (for example, birth years), it may even be acceptable to compare each object with every other one. However, more sophisticated sorting algorithms are needed when the number of elements is huge and comparing every item with every other, the simplest and most obvious way to sort, is no longer feasible. In this study, we will look into the following sorting algorithms (Asaad et al., 2017):
Selection Sort
Insertion Sort
Merge Sort
Quick Sort
2. Problem definition
Many real-life applications generate large amounts of data that must be sorted so that a particular element can be reached easily. For example, a medicine shop or pharmaceutical outlet deals in thousands of medicines. Many customers come in to be served, and since most of them need immediate service, the shop owner cannot afford to lose time working out where a particular medicine is stored. If the list of medicines is kept sorted and the stock is placed in that order, customers can be handed their medicine almost immediately. Similarly, physical libraries maintain a catalogue of books, and the books are shelved as per the catalogue, which is usually sorted alphabetically. If the books were not sorted, it might take days for someone to locate one; in sorted order, a book can be found almost instantly (Beer, 2017).
Thus, we can define the sorting problem as the problem underlying the real-life tasks of searching, ordering and locating items, solved so as to make our searches easier and more efficient. There are many algorithms that solve the sorting problem; we have named a few of them, which will be discussed below.
3. Real world applications based on the problem of choice
There are countless real-world problems that need sorting as a solution. For example, thousands of students may be enrolled in a year, and their list must be sorted before further action can be taken. Suppose a student is late in paying the fees: retrieving that student's details from unsorted physical records could take hours. University staff also need to preserve the physical documents for each and every student, so the physical records or files need to be maintained in sorted order. This can be done by assigning a key and then associating this key with the student's personal details. Practically everything is now stored on a computer: in a paperless office we do not want to keep any physical records, so university staff maintain everything on the computer. For big universities this means that millions of records need to be stored, and these have to be sorted and stored so as to make searching easier and more efficient (Faria, 2017).
A software developer who builds such an application needs to understand the different sorting algorithms, since one of them has to be applied to sort the records. Each algorithm has its own capacity and capability, its pros and cons, and its complexity. Some algorithms are computationally intensive but easy to code, while others are harder to code but cheaper in terms of the number of comparisons to be made. Let us take the example of sorting all university students (Faria, 2017).
Selection sort works by repeatedly selecting the smallest remaining element, removing it from the unsorted part, and iterating as long as unsorted elements remain. The sequence to be sorted is stored in an array, and the sorted portion is kept at the beginning of that same array: each time the minimum of the remaining elements is found, it is swapped with the first element of the unsorted portion and takes its place at the end of the sorted prefix. When the process stops, the array contains all the items of the sequence sorted in ascending order. Selection sort is in Θ(n²) in all cases (King and Smith, 2017).
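To make the description concrete, here is a minimal selection sort sketch in Python (an illustrative example only; the function name and sample values are our own):

```python
def selection_sort(a):
    """Sort the list a in place by repeatedly selecting the minimum."""
    n = len(a)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted suffix a[i:].
        m = i
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        # Swap it into position i; the prefix a[:i+1] is now sorted.
        a[i], a[m] = a[m], a[i]
    return a
```

The two nested loops always perform n(n − 1)/2 comparisons regardless of the input order, which is why the running time is Θ(n²) in all cases.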
Insertion sort consists of inserting the elements one after another into an initially empty sorted sequence. When the sequence is stored in an array, the sorted portion is built up within the array itself; when the sequence is represented by a linked list, the links are inserted one after another into a new, initially empty list. Sorting performs n − 1 insertions. In the worst case the algorithm has complexity Θ(n²), but the best-case complexity of insertion sort, for an already sorted input, is Θ(n). Thus, in some cases, insertion sort performs better than selection sort (Lafore, 2017).
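A minimal Python sketch of the insertion procedure (illustrative only) shows why the best case is linear: on an already sorted input the inner loop stops immediately, so only n − 1 comparisons are made in total.

```python
def insertion_sort(a):
    """Sort the list a in place by inserting each element into the sorted prefix."""
    for i in range(1, len(a)):          # n - 1 insertions in total
        key = a[i]
        j = i - 1
        # Shift the larger elements of the sorted prefix a[:i] one slot right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                  # insert the element at its place
    return a
```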
Merge sort implements a very simple divide-and-conquer approach: the sequence to be sorted is first split into two halves, which are then sorted separately before being merged (interleaved). The efficiency of this sort comes from the efficiency of the merge: the principle is to traverse the two sorted sequences simultaneously in ascending order of their elements, extracting the smaller front element each time. The merge can be done while keeping the order of elements of equal value (merge sort is stable). Merge sort is well suited to linked lists: to split the list it suffices to traverse it, linking the elements of even rank on one side and the elements of odd rank on the other, and merging two linked lists is done easily. Conversely, if the sequence to be sorted is stored in an array, it is necessary to use an auxiliary array at merge time, at least in the most common implementations.
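The split-and-merge idea can be sketched as follows in Python (an array-based illustration of our own; the linked-list variant described above splits by rank parity instead of by the midpoint):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list, keeping equal elements
    in their original relative order (the merge is stable)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # <= preserves stability
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])                # append whichever tail remains
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Divide-and-conquer sort: split in two, sort each half, merge them."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```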
The complexity of the merge is O(n) in all cases. The complexity of merge sort therefore satisfies the recurrence T(n) = 2 · T(n/2) + O(n), from which it follows that merge sort has complexity O(n log n). This can be verified by summing the numbers of comparisons made at each level of the tree that represents the execution of the function (as shown in the figure below): each node corresponds to a call of the function, its children correspond to the two recursive calls, and its label indicates the length of the subsequence. The height of the tree is therefore log₂ n, and at each level the accumulated local cost (splitting and merging) is O(n), from which the total cost is deduced (Marszałek, 2017):
O(n) · log₂ n = O(n log n)
Thus, of the three algorithms discussed so far, the most efficient in most cases appears to be merge sort, since it has the lowest complexity of the three.
Quicksort is a divide-and-conquer method that sorts an array T by partitioning it into two sub-arrays T1 and T2, containing respectively the smallest and the largest elements of T, and then recursively applying the algorithm to T1 and T2. There are several methods for partitioning an array. The general principle is to use a particular element, the pivot, as the partitioning value. In the algorithm presented in the next section the pivot is the first element of the array; one can also choose any element of the array at random. The partition algorithm takes as input an array T[m..n] (with m ≤ n) and reorders T: it puts at the beginning of the array the elements T[i] of T[m+1..n] such that T[i] ≤ T[m], then puts at the end of the array the elements T[i] of T[m+1..n] such that T[i] > T[m], and places T[m] between the two. It returns the index of T[m] in the rearranged array: T[m] is called the pivot. One can show that the complexity of quicksort is O(n log n) on average, but O(n²) in the worst case. In practice it is the most widely used algorithm and, very often, the fastest (Robert, 2017).
4. Solution to the problem (If there are many solutions then discuss one solution and also research
and discuss why it is an effective solution to the problem).
As far as the solution to the problem is concerned, there are in effect many solutions: we discussed at least four of them in brief, and we noted that quicksort is the most frequently used sorting algorithm and among the fastest. It is therefore worth discussing the quicksort algorithm in more detail, along with a detailed illustration of its procedure. As discussed, quicksort finds a pivot as it partitions an array, splitting the array according to the following algorithm (Singh, Joshi and Choudhary, 2018):
Algorithm: Partition
Input: T[m..n], an array of integers
Output: the index of the pivot in the rearranged array
begin
    pivot := T[m]
    i := m + 1                  # index where to insert the next element <= pivot
    for j := m + 1 to n do      # j is the index of the current element
        if T[j] <= pivot then
            swap T[i] and T[j]
            i := i + 1
    swap T[m] and T[i - 1]      # place the pivot between the two parts
    return i - 1
end
The above algorithm captures the core of quicksort. It can easily be converted into code, for instance C, and the list given to it as input will be returned sorted. The above algorithm is in effect a subroutine or function that is called by the main algorithm or main program, in, say, C. The main algorithm can be illustrated as follows (Sedgewick, 1977):
Algorithm: QuickSort
Input: T[m..n], an array of integers
Output: T sorted in ascending order
begin
    if m < n then
        IndPivot := Partition(T[m..n])
        QuickSort(T[m..IndPivot - 1])
        QuickSort(T[IndPivot + 1..n])
end
The beauty of this algorithm is that it uses recursion, and the code is very terse and simple to understand. The quicksort program calls itself recursively, pivoting and partitioning until we get the sorted list of elements. Both numeric and alphanumeric elements can be sorted using this algorithm.
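The pseudocode above can be transcribed almost line for line into a runnable language. Here is a minimal Python sketch of the same Partition and QuickSort routines, using 0-based indexing and inclusive bounds m..n as in the pseudocode (the function names are ours):

```python
def partition(T, m, n):
    """Rearrange T[m..n] around the pivot T[m]; return the pivot's final index."""
    pivot = T[m]
    i = m + 1                          # where to put the next element <= pivot
    for j in range(m + 1, n + 1):      # j is the index of the current element
        if T[j] <= pivot:
            T[i], T[j] = T[j], T[i]
            i += 1
    T[m], T[i - 1] = T[i - 1], T[m]    # place the pivot between the two parts
    return i - 1

def quick_sort(T, m, n):
    """Sort T[m..n] in ascending order, in place."""
    if m < n:
        ind_pivot = partition(T, m, n)
        quick_sort(T, m, ind_pivot - 1)
        quick_sort(T, ind_pivot + 1, n)
```

For example, quick_sort(L, 0, len(L) - 1) sorts the whole list L; since Python's comparison operators also apply to strings, the same routine sorts alphanumeric elements.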
5. Possible algorithm (How computing can be useful in providing the solution discussed in section 4)
Having discussed the quicksort algorithm in detail and described the programming effort involved, we can now select this algorithm, as it is both simple to code and very efficient in most cases. The complexity of an algorithm can be compared using big-O notation, and on that basis quicksort is a very efficient sorting method, certainly one of the most used in programs. The algorithm was invented by C. A. R. Hoare; Robert Sedgewick worked on it in the 1970s and improved it, and we refer to his work for a complete study of this sort. We now give the principles of this sort and its complexity on average and in the worst case (Singh, Joshi and Choudhary, 2018).
Its principle is to traverse the list L = (a1, a2, ..., an), dividing it systematically into two sub-lists L1 and L2, one of which is such that all its elements are smaller than all those of the other, and then working separately on each of the two sub-lists by reapplying the same division to each of them until only trivial sub-lists are obtained. It is a dichotomous algorithm that divides the problem into two sub-problems whose results are reused by recombination, so its complexity is O(n log n) on average. To partition a list L into two sub-lists L1 and L2 (Wilf, 2017): we choose any value from the list L (the last, for example), which we call the pivot; the sub-list L1 is then constructed from all the elements of L whose value is less than or equal to the pivot, and the sub-list L2 from all the elements whose value is greater than the pivot.
L = [4, 23, 3, 42, 2, 14, 45, 18, 38, 16]
Take as pivot the last value: pivot = 16. We obtain, for example:
L1 = [4, 14, 3, 2]
L2 = [23, 45, 18, 38, 42]
At this stage, here is the arrangement of L:
L = L1 + pivot + L2 = [4, 14, 3, 2, 16, 23, 45, 18, 38, 42]
Indeed, by working on the array itself and rearranging values, the pivot 16 is placed in its final position directly:
[4 < 16, 14 < 16, 3 < 16, 2 < 16, 16, 23 > 16, 45 > 16, 18 > 16, 38 > 16, 42 > 16]
Applying the same approach to the two sub-lists L1 (pivot = 2) and L2 (pivot = 42) of [4, 14, 3, 2, 16, 23, 45, 18, 38, 42], we obtain:
L11 = [] (empty list)
L12 = [3, 4, 14]
L1 = L11 + pivot + L12 = (2, 3, 4, 14)
L21 = [23, 38, 18]
L22 = [45]
L2 = L21 + pivot + L22 = (23, 38, 18, 42, 45)
At this stage, here is the new arrangement of L:
L = [(2, 3, 4, 14), 16, (23, 38, 18, 42, 45)]
and so on.
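The sub-list construction used in this example can be written out directly. The following Python sketch (illustrative only; the names are ours) partitions around the last element; note that the order of elements inside L1 and L2 depends on the scan order, so it may differ slightly from the hand-worked lists above:

```python
def partition_lists(L):
    """Split L around its last element, the pivot."""
    pivot = L[-1]
    L1 = [x for x in L[:-1] if x <= pivot]   # elements <= pivot
    L2 = [x for x in L[:-1] if x > pivot]    # elements > pivot
    return L1, pivot, L2

def quick_sort_lists(L):
    """Recursively sort the two sub-lists and recombine: L1 + [pivot] + L2."""
    if len(L) <= 1:
        return L
    L1, pivot, L2 = partition_lists(L)
    return quick_sort_lists(L1) + [pivot] + quick_sort_lists(L2)
```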
So gradually, by subdividing the problem into two sub-problems, at each step we get one pivot well placed. The sequence (a1, a2, ..., an) is stored in an array T[...] in central memory. The partition process described above (also called segmentation) is the central point of quicksort. We
construct a Partition function carrying out this action. As we re-apply the same action to the two sub-lists obtained after partitioning, the method is recursive, so quicksort is a recursive procedure.
Here is a general specification of the quicksort procedure:
QuickSort on [a..b]:
    Partition [a..b]: returns the pivot's position p, with [a..b] = [a..p-1] + [pivot] + [p+1..b]
    QuickSort on [p+1..b]
    QuickSort on [a..p-1]
It is proposed to choose arbitrarily the pivot that one seeks to place, and then to scan the list from both directions (left and right), building on the left a sub-list whose elements have values less than that of the pivot and on the right a sub-list whose elements have values greater than that of the pivot. In the scan from the left, an element is left untouched if its value is less than the pivot (such elements are considered to be in the right sub-list already); the scan stops as soon as an element whose value is greater than that of the pivot is found. In the scan from the right, an element is left untouched if its value is greater than the pivot; the scan stops as soon as an element whose value is smaller than that of the pivot is found. In the latter case the element is not in its place in this sub-list but belongs in the other one, so the two out-of-place elements found by the scans are swapped. A screenshot below shows the running of the program and the results:
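The two-way scan just described corresponds to what is commonly known as Hoare's partition scheme. Here is a minimal Python sketch of it, taking the first element as the arbitrary pivot (an assumption on our part, since the text leaves the choice open):

```python
def hoare_partition(T, lo, hi):
    """Two-way scan: the left scan skips values already smaller than the
    pivot, the right scan skips values already greater, and each pair of
    out-of-place elements found by the two scans is swapped."""
    pivot = T[lo]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while T[i] < pivot:            # left scan
            i += 1
        j -= 1
        while T[j] > pivot:            # right scan
            j -= 1
        if i >= j:
            return j                   # T[lo..j] <= pivot <= T[j+1..hi]
        T[i], T[j] = T[j], T[i]

def hoare_quick_sort(T, lo, hi):
    """Sort T[lo..hi] in place using the two-way-scan partition."""
    if lo < hi:
        p = hoare_partition(T, lo, hi)
        hoare_quick_sort(T, lo, p)     # note: with this scheme the pivot
        hoare_quick_sort(T, p + 1, hi) # may still move in later calls
```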
6. Conclusion
There is no doubt that quicksort is one of the fastest-running sorting algorithms, as has been demonstrated by many computer scientists. The above work draws on research by the eminent computer scientist Robert Sedgewick, an American computer science researcher, professor of computer science at Princeton University, and a member of the board of directors of Adobe Systems. One real-world example was taken and the run-time results were also shown.
References
Axtmann, M. and Sanders, P., 2017. Robust massively parallel sorting. In Proceedings of the Nineteenth Workshop on Algorithm Engineering and Experiments (ALENEX) (pp. 83-97). Society for Industrial and Applied Mathematics.
Asaad, S.W., Min, H., Sukhwani, B. and Thoennes, M.S., International Business Machines Corp, 2017.
Tunable hardware sort engine for performing composite sorting algorithms. U.S. Patent 9,710,503.
Beer, D., 2017. The social power of algorithms.
Faria, B., 2017. Visualizing Sorting Algorithms.
King, B. and Smith, C.P., 2017. A fraction activity to understand sorting algorithms. Teaching Children
Mathematics, 23(9), pp.560-563.
Lafore, R., 2017. Data structures and algorithms in Java. Sams Publishing.
Marszałek, Z., 2017. Parallelization of modified merge sort algorithm. Symmetry, 9(9), p.176.
Robert, S., 2017. Algorithms.
Singh, D.P., Joshi, I. and Choudhary, J., 2018. Survey of GPU based sorting algorithms. International Journal of Parallel Programming, 46(6), pp.1017-1034.
Wilf, H.S., 2017. Algorithms and complexity.