Towards a Realistic Analysis of Some Popular Sorting Algorithms

A general framework for the realistic analysis of sorting and searching algorithms. Application to some popular algorithms

We describe a general framework for realistic analysis of sorting and searching algorithms, and we apply it to the average-case analysis of five basic algorithms: three sorting algorithms (QuickSort, InsertionSort, BubbleSort) and two selection algorithms (QuickMin and SelectionMin). Usually, the analysis deals with the mean number of key comparisons, but, here, we view keys as words produced by the same source, which are compared via their symbols in the lexicographic order. The realistic cost of the algorithm is now the total number of symbol comparisons performed by the algorithm, and, in this context, the average-case analysis aims to provide estimates for the mean number of symbol comparisons used by the algorithm. For sorting algorithms, and with respect to key comparisons, the average-case complexity of QuickSort is asymptotic to 2n log n, InsertionSort to n²/4 and BubbleSort to n²/2. With respect to symbol comparisons, we prove that their average-case complexity becomes Θ(n...

Performance Comparison of Sorting Algorithms On The Basis Of Complexity

Abstract: When using the visualizer to compare algorithms, never forget that the visualizer sorts only very small arrays. The effect of quadratic complexity (either a quadratic number of moves or a quadratic number of exchanges) is dramatic as the size of the array grows. For instance, dichotomic insertion, which is only marginally slower than quicksort on 100 items, becomes 10 times slower on 10,000 items. We have investigated the complexity values researchers have obtained and observed that there is scope for fine-tuning in the present context. Strong evidence to that effect is also presented. We aim to provide a useful and comprehensive note to researchers about how the complexity aspects of sorting algorithms can best be analyzed.
Keywords: Algorithm analysis, Sorting algorithm, Empirical analysis, Computational complexity notations.
Authors: Mr. Niraj Kumar, Mr. Rajesh Singh. International Journal of Computer Science and Information Technology Research, ISSN 2348-120X (online), ISSN 2348-1196 (print), Research Publish Journals.
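The "dichotomic insertion" mentioned above is binary insertion sort. A minimal sketch (an assumed implementation, not the authors' code) shows why binary search alone cannot escape quadratic behavior: comparisons drop to O(log i) per element, but shifting elements to make room still costs O(i) moves.

```python
import bisect

def binary_insertion_sort(a):
    """Dichotomic (binary) insertion sort: comparisons are logarithmic
    per element, but element moves remain linear, so the overall
    running time is still quadratic."""
    a = list(a)
    for i in range(1, len(a)):
        x = a[i]
        # Binary search finds the insertion point in O(log i) comparisons.
        pos = bisect.bisect_right(a, x, 0, i)
        # Shifting the tail still takes O(i) moves in the worst case.
        a[pos + 1:i + 1] = a[pos:i]
        a[pos] = x
    return a
```

This is consistent with the observation above: the move count, not the comparison count, dominates as the array grows.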

A Comprehensive Note on Complexity Issues in Sorting Algorithms

2009

Since the dawn of computing, the sorting problem has attracted a great deal of research. In the past, many researchers have attempted to optimize it using empirical analysis. We have investigated the complexity values researchers have obtained and observed that there is scope for fine-tuning in the present context. Strong evidence to that effect is also presented. We aim to provide a useful and comprehensive note to researchers about how the complexity aspects of sorting algorithms can best be analyzed. We also intend current researchers to consider whether their own work might be improved by the suggested fine-tuning. Our work is based on knowledge gained from a literature review of the experimentation and survey-paper analysis carried out on the performance improvements of sorting algorithms. Although written from the perspective of a theoretical computer scientist, it is intended to be of use to researchers from all fields who want to study sorting algorithms rigorously.

Breaking the O(n^2) Barrier: Novel Approaches to Time Complexity Analysis in Quick Sort

International Journal of Computer Science and Mobile Computing (IJCSMC), 2020

The most common sorting algorithm in use is quicksort. As the name suggests, it is one of the fastest sorting algorithms in use since the invention of sorting. However, this sort has often been criticised for its worst-case time complexity of O(n^2). In practice, quicksort tends to follow its average-case and best-case behavior most of the time, so it is the most efficient practical sorting algorithm. In this paper we fine-tune this algorithm, remove its worst-case time complexity of O(n^2), and make it the best sorting algorithm both theoretically and practically. Various techniques will be discussed, from pivot fixing to randomisation and, further, the use of median of medians.
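One of the techniques the abstract names, randomisation of the pivot, can be sketched as follows (an illustrative implementation, not the authors' code). A random pivot does not eliminate the O(n^2) worst case outright, the way median of medians does, but it makes it vanishingly unlikely on any fixed input.

```python
import random

def quicksort(a, lo=0, hi=None):
    """Quicksort with a randomised pivot and Lomuto partitioning.
    The random pivot choice makes adversarial O(n^2) inputs
    astronomically unlikely."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    # Pick a random pivot and move it to the end of the range.
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]  # place the pivot in its final position
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a
```

Median of medians, by contrast, guarantees a good pivot deterministically and yields a true O(n log n) worst case, at the price of larger constant factors.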

Analysis and Performance Measurement of Sorting Algorithms

Proceedings of the National Conference on Convergent Innovative Technologies & Management (CITAM-2011), 2nd & 3rd December 2011, Cambridge Institute of Technology, Bangalore, India, 2011

Any number of practical applications in computing require things to be in order. The performance of many computations depends upon the performance of sorting algorithms. Like all complicated problems, there are many solutions that can achieve the same results. One sorting algorithm can sort data faster than another. A lot of sorting algorithms have been developed to enhance performance in terms of computational complexity, memory and other factors. This paper chooses three sorting algorithms (heap sort, merge sort and quick sort) and measures their performance for the realization of time complexity with respect to the theory, which is normally represented using asymptotic notation.
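A minimal timing harness in the spirit of the paper's measurements might look like this (illustrative code with assumed input sizes and repetition counts, not the authors' experimental setup). It pits textbook versions of the three algorithms against each other on the same random data.

```python
import heapq
import random
import timeit

def heap_sort(a):
    """Heap sort via the standard-library binary heap."""
    h = list(a)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]

def merge_sort(a):
    """Top-down merge sort on a copy of the input."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def quick_sort(a):
    """Simple out-of-place quicksort with a middle-element pivot."""
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

if __name__ == "__main__":
    data = [random.random() for _ in range(10_000)]
    for f in (heap_sort, merge_sort, quick_sort):
        t = timeit.timeit(lambda: f(data), number=3)
        print(f"{f.__name__}: {t:.3f}s")
```

Measured wall-clock times like these reflect constant factors (cache behavior, allocation) on top of the shared O(n log n) asymptotics, which is exactly the gap between theory and measurement the paper examines.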

Performances of Sorting Algorithms in Popular Programming Languages

In modern days, disciplines like data science, data analytics, and machine learning are emerging to promote Information Technology and to bring up the Internet of Things. Algorithms like sorting and searching play an important role, as efficient sorting with searching is important for other algorithms. The emerging areas use Python and Java for implementation, and these languages have become more popular than C and C++. Yet little asymptotic analysis of popular sorting algorithms has been made using Python and Java, while sufficient asymptotic analysis and results using C/C++ are seen in research articles. This paper makes performance measurements by implementing Popular Sorting Algorithms (PSAs) in the Java, Python and C++ languages. We conducted our experiments on random data for average-case analysis and on ordered data for worst-case analysis. Our results show Merge sort performing better than Quick sort in the Python implementation. Quick sort remains excellent in performance in the Java implementation for random data. The C++ implementation verified the behaviors of the PSAs. Finally, we indicate the anomalous behaviors of Java and Python as observed in the results.

The Folklore of Sorting Algorithms

2009

The objective of this paper is to review the folklore knowledge seen in research work devoted to the synthesis, optimization, and effectiveness of various sorting algorithms. We examine sorting algorithms along these folklore lines and try to discover the tradeoffs between folklore and theorems. Finally, the folklore knowledge on the complexity values of the sorting algorithms is considered, verified and subsequently converged into theorems.

Striving for Efficiency in Algorithms: Sorting

Sorting is the fundamental algorithmic problem in computer science. It is the first step in solving many other algorithmic problems. Donald Knuth, a world-famous computer scientist and author of the book "The Art of Computer Programming, Volume 3: Sorting and Searching" ([6]), wrote: "I believe that virtually every important aspect of programming arises somewhere in the context of searching or sorting". Quicksort is a comparison sorting algorithm that, on average, makes O(n log n) comparisons to sort n items. This is as efficient as a comparison sorting algorithm can be ([1]). Quicksort is often faster in practice than other O(n log n) sorting algorithms, and it has another advantage: it sorts in place, that is, the items are rearranged within the array, so it does not require a lot of additional space ([1]). Quicksort was invented by a British computer scientist, C.A.R. Hoare, in 1960. Sir Charles Antony Richard Hoare describes how he invented Quicksort in his interview published in [10]. After graduating from the University of Oxford in 1956, Hoare did his national service in the Royal Navy studying Russian. In 1958 he took a course in Mercury Autocode, which was the programming language used on a computer at Oxford University. Later, he was a visiting student at Moscow State University in the Soviet Union for a year. That is when he developed the Quicksort algorithm. The following is a quote from the interview with Hoare ([10]), where he describes his invention of Quicksort: "The National Physical Laboratory was starting a project for the automatic translation of Russian into English, and they offered me a job. I met several of the people in Moscow who were working on machine translation, and I wrote my first published article, in Russian, in a journal called Machine Translation. In those days the dictionary in which you had to look up in order to translate from Russian to English was stored on a long magnetic tape in alphabetical order. 
Therefore it paid to sort the words of the sentence into the same alphabetical order before consulting the dictionary, so that you could look up all the words in the sentence on a single pass of the magnetic tape. I thought with my knowledge of Mercury Autocode, I'll be able to think up how I would conduct this preliminary sort. After a few moments I thought of the obvious algorithm, which is now called bubble sort, and rejected that because it was obviously rather slow. I thought of Quicksort as the second thing. It didn't occur to me that this was anything very difficult. It was all an interesting exercise in programming. I think Quicksort is the only really interesting algorithm that I ever developed." Hoare described the algorithm in his papers in 1961 and 1962 ([2], [3], [4], [5]). After its invention by Hoare, Quicksort underwent extensive analysis by Robert Sedgewick in 1975, 1977, 1978 ([7], [8], [9]). Sedgewick in his paper "Implementing Quicksort programs" ([9]) presented "a practical study of how to implement the Quicksort sorting algorithm and its best variants on real computers". The paper contains the original version of Quicksort and presents step-by-step modifications to the algorithm which, as Sedgewick says, make its implementation on real computers more efficient.
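The in-place behavior described above, using a partition scheme close to Hoare's original two-index design, can be sketched as follows (an illustrative version, not Hoare's Mercury Autocode program or Sedgewick's tuned variants).

```python
def hoare_partition(a, lo, hi):
    """Hoare-style partition: two indices walk inward from the ends
    and swap out-of-place elements until they cross."""
    pivot = a[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]

def quicksort(a, lo=0, hi=None):
    """In-place Quicksort: items are rearranged within the array,
    so no auxiliary array is needed."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quicksort(a, lo, p)       # note: p is included on the left side
        quicksort(a, p + 1, hi)
    return a
```

Besides the two recursive calls, the only extra space used is the recursion stack, which is the in-place property the text highlights.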

An Analytical Comparison of Different Sorting Algorithms in Data Structure

2015

Sorting is considered a very basic operation in computer science. Sorting is used as an intermediate step in many operations. Sorting refers to the process of arranging a list of elements in a particular order, either ascending or descending, using a key value. A lot of sorting algorithms have been developed so far. This research paper presents different sorting algorithms of data structures, namely Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Heap Sort and Quick Sort, and gives their performance analysis with respect to time complexity. These six algorithms are important and have been an area of focus for a long time, but the question remains the same, "which to use when?", which is the main reason to perform this research. Each algorithm solves the sorting problem in a different way. This research provides a detailed study of how all six algorithms work and then compares them on the basis of various parameters apart from time c...