A Review on Divide and Conquer Sorting Algorithm

MAHEEN SHAHID 17-MS-CSc-26
SANA NASEER 17-MS-CSc-33
ANEELA ASLAM 17-MS-CSc-44
DR. JAVED IQBAL
University of Engineering and
Abstract— The organization of data in a database system needs special operations such as insertion, deletion, sorting, searching and traversing. Sorting arranges the data in a specific order, which makes the other operations easier; hence a better sorting algorithm increases the effectiveness of each succeeding operation. Among the numerous sorting techniques, Divide and Conquer algorithms hold particular potential, since most of them place less load on both space and processor time. Like all sorting methods, Divide and Conquer algorithms follow both iterative and recursive approaches. In this paper we chart some of the conventional and newly proposed Divide and Conquer sorting algorithms.

Key words— DBMS, Divide and Conquer, Greedy, Hierarchical Bubble Sorting, Nested Grid Files, non-Manhattan Channel Routing
INTRODUCTION

Research in several database management systems has given rise to various sorting algorithms, among which Divide and Conquer techniques hold the most potential in terms of both space complexity and time complexity. Sorting can be done either iteratively or recursively. The recursive method is mostly based on inductive steps, which can easily be formulated in terms of the Divide and Conquer approach. The Divide and Conquer approach is employed in conventional sorting algorithms such as Merge Sort and Quick Sort. These generally follow a time complexity of O(n log₂ n).
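As a concrete illustration of the divide-and-conquer pattern discussed above, the following minimal sketch shows recursive Merge Sort, the canonical O(n log₂ n) example; it is illustrative only and is not taken from any of the surveyed papers.

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # append whichever side still has elements
    out.extend(right[j:])
    return out

def merge_sort(items):
    """Divide: split in half; conquer: sort halves; combine: merge."""
    if len(items) <= 1:            # base case: trivially sorted
        return items
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))
```

Each level of recursion does O(n) merging work across O(log n) levels, which is where the O(n log₂ n) bound comes from.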
[2] Thomas A. Mueck and Manfred J. Schauer offered two order-query execution algorithms for balanced and nested grid files, using Greedy and Divide and Conquer methods respectively. The algorithms reduce the length of the read and write sequences and pass disk access plans to the DBMS for further execution.
[3] J.-T. Yan studied the hierarchical bubble-sorting-based non-Manhattan channel routing problem and proposed an optimal algorithm with a complexity of O(hn), where n is the number of terminals and h is the number of routing tracks. The algorithm performs sequences of left-swap and right-swap passes, following a top-down approach in which the swap operations are mapped to a particular track.
[1] Ellis Horowitz and Alessandro Zorat articulated a divide-and-conquer model to enhance computational speed through parallel processing. If the number of processors grows with the number of elements N, the time complexity is reduced to O(N).
COMPARISON OF RECENTLY PROPOSED DIVIDE AND CONQUER SORTING ALGORITHMS
Paper (1): Order Query Execution Algorithms for Balanced and Nested Grid Files
In this framework, we elaborate on the construction of disk access plans for sort order queries in balanced and
nested grid files. The key idea is to use the order information contained in
the directory of the multi attribute search structure. The presented
algorithms are shown to yield a significant decrease in the number of disk
I/O operations by appropriate use of the order information.
The two algorithms use Greedy and Divide and Conquer approaches respectively. They reduce the read/write sequences and pass the disk access plans to the DBMS query processor for further execution.
The Divide and Conquer approach requires fewer I/O operations and buffer pages, and its storage consumption is low. The worst-case run time is O(nh) for the construction phase and O(n + th) for the processing phase.

If the buffer page pool is not controlled by the query processor, the algorithm fails. The approach also does not carry over to commercial DBMSs that rely on B-tree or single-key hash structures for indexing.
Paper (2): Divide and Conquer for Parallel Processing
In this paper a realistic model for divide-and-conquer based algorithms is postulated; the efficiency of some algorithms is then analyzed, taking into account all relevant parameters of the model (time, data movement and number of processors).
The model combines dynamically expressed parallelism with the Divide and Conquer principle; the amount of parallelism depends on the data being processed and varies while the program executes. The programmer need not keep track of the instantiations of recursive procedures. If the number of processors P grows with the number of elements n, the time complexity is reduced to O(n).
The process is highly efficient only if a large number of processors is available.
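The parallelism exposed by divide and conquer can be sketched in code: the two recursive halves are independent, so they can be handed to concurrent workers. The sketch below uses a thread pool purely for illustration (in CPython the GIL limits true CPU parallelism, and Horowitz and Zorat's model assumes dedicated processors, which this does not reproduce).

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_merge_sort(items, depth=2):
    """Merge sort; the top `depth` recursion levels run as concurrent tasks."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    if depth > 0:
        # the two subproblems are independent: solve them concurrently
        with ThreadPoolExecutor(max_workers=2) as pool:
            f_left = pool.submit(parallel_merge_sort, items[:mid], depth - 1)
            f_right = pool.submit(parallel_merge_sort, items[mid:], depth - 1)
            left, right = f_left.result(), f_right.result()
    else:
        left = parallel_merge_sort(items[:mid], 0)
        right = parallel_merge_sort(items[mid:], 0)
    # standard sequential two-way merge of the sorted halves
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

With one worker per subproblem at every level (P growing with n), the span of the computation shrinks accordingly, which is the intuition behind the O(n) figure quoted above.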
Paper (3): Algorithm for the Hierarchical Bubble-Sorting-Based non-Manhattan Channel Routing Problem
In the paper, based on an optimality-oriented swap-direction selection and a 'divide-and-conquer' technique, a hierarchical BSNMCR (HBSNMCR) problem is formulated and an O(hn) optimal algorithm is proposed, where h is the number of routing tracks in an HBSNMCR solution, for h ≤ k.
The algorithm performs the operation by dividing the overall pass into a left-swap pass and a right-swap pass, each of which is further subdivided. The top-to-bottom swap operations are mapped onto a single track.
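The alternating left-swap and right-swap passes are reminiscent of bidirectional bubble sort (cocktail shaker sort), sketched below as an analogy only; the actual HBSNMCR algorithm additionally maps the passes onto routing tracks, which is not reproduced here.

```python
def cocktail_sort(items):
    """Bubble sort with alternating rightward and leftward swap passes."""
    a = list(items)
    if len(a) < 2:
        return a
    lo, hi = 0, len(a) - 1
    swapped = True
    while swapped:
        swapped = False
        for i in range(lo, hi):          # right-swap pass: bubble max up
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        hi -= 1
        for i in range(hi, lo, -1):      # left-swap pass: bubble min down
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        lo += 1
    return a
```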
It has a complexity of O(hn), where n is the number of terminals and h is the number of routing tracks.
It requires more
Paper (4): Spatial Divide and Conquer with Motion Cues for Tracking through Clutter
(Zhaozheng Yin and Robert Collins, Department of Computer Science and Engineering, The Pennsylvania State University)
A spatial divide and conquer approach is used to subdivide the foreground and background into smaller regions, with different features being selected to distinguish between different pairs of object and background regions. The weight images produced by the divide and conquer approach are quite good for distinguishing the motorcycle from its surrounding background; motion detection results alone would not succeed, due to motion clutter.
It requires more
Different spatial dividing methods: different subdivisions of the scene background will generate different weight images.
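One simple way to realize such a spatial subdivision is a quadtree-style recursive split, sketched below as a hypothetical illustration: a rectangle is divided into quadrants until the regions are small. The paper's actual feature-selection step per region is omitted, and the `min_size` threshold is an assumption of this sketch.

```python
def subdivide(x, y, w, h, min_size=16):
    """Return the leaf regions of a quadtree over a w-by-h rectangle."""
    if w <= min_size or h <= min_size:
        return [(x, y, w, h)]            # small enough: keep as a leaf
    hw, hh = w // 2, h // 2              # divide into four quadrants
    regions = []
    for dx, dy, rw, rh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                           (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
        regions += subdivide(x + dx, y + dy, rw, rh, min_size)
    return regions
```

A 64x64 scene with `min_size=16` yields sixteen 16x16 leaf regions; changing the split rule changes the subdivision, which is exactly why different dividing methods generate different weight images.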
Paper (5): Bit-width Optimization of Fixed-point Digital Signal Processing Systems
(Jaeyong Chung and Lok-Won Kim)
We define a special class of designs called innerfork-free designs, and, unlike greedy-based methods, we guarantee them the optimal solutions even for non-convex objective functions in the discrete solution space. For general designs, our method is applied after partitioning and transforms the solution space of N variables to that of Nf variables, where N is the number of signals and Nf is the number of fork signals. Our algorithm generates the efficient frontier in a single run and allows designers to explore the trade-off between area and error at a negligible runtime cost. Unlike methods relying on optimization packages, our optimization method is transparent.
The maximal vector problem can be solved in Θ(n log n) time, where n is the size of the given set of d-dimensional vectors. The next subsection introduces pruning techniques to keep the size of the Pareto sets small.
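For intuition, the two-dimensional case of the maximal vector (Pareto/skyline) problem can be solved in Θ(n log n) by sorting and a single scan, as sketched below. This is illustrative only; the bit-width paper works with higher-dimensional Pareto sets and adds its own pruning.

```python
def maxima_2d(points):
    """Return the points not dominated by any other point
    (p dominates q if p >= q in both coordinates and p != q)."""
    # sort by first coordinate descending; break ties by second descending
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    best_y = float("-inf")
    out = []
    for x, y in pts:
        if y > best_y:           # nothing seen so far dominates (x, y)
            out.append((x, y))
            best_y = y
    return out
```

The sort costs Θ(n log n) and the scan Θ(n), matching the bound quoted above for the planar case.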
1) Heuristics play a big role, and the quality of results is not guaranteed.
2) With the exception of greedy methods, the runtime is prohibitively long, or we need to run it for a long time in the hope of obtaining better results.
3) Many existing methods rely on generic optimization packages, and the optimization process is not transparent.
The five designs include a degree-4 polynomial approximation to log(1 + x), the RGB-to-YCbCr color space converter, 2×2 matrix multiplication using Strassen's algorithm, cubic B-splines, and the 8×8 Discrete Cosine Transform.
Paper (6): A Divide-and-Conquer Bound for Aggregate's Quality and Algebraic Connectivity
We establish a divide-and-conquer bound for
aggregate’s quality and algebraic connectivity, as defined for weighted
undirected graphs. Aggregate’s quality is used in the context of
aggregation-based multigrid methods as a tool for the design of robust
multigrid solvers. Although initially introduced for discretized partial differential equations, aggregate's quality is now also used for graph Laplacian systems. Aggregate's quality is defined on a set of vertices, also called an aggregate, and measures the maximal impact on the multigrid convergence of representing this set of vertices by a single vertex.
Recursive application requires overall O(|V| + |E|) operations. With only a few (or even no) recursive steps, the resulting estimate is accurate when both arguments of the max operation are close to 1.
Optimization is weaker with the popular multiple pairwise aggregation approach, in which unknowns are merged into pairs, then the pairs are merged again into groups of size at most 4, and so on, until the average size of the aggregates reaches a prescribed value. The quality one wants to optimize is the quality of the final aggregate; however, for reasons of cost the construction algorithm is based on the quality of the original pairs and the quality parameter for the merged pairs of aggregates. It is therefore important to know that by optimizing these latter quantities one implicitly tends to optimize the final quality through the upper bound.
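The multiple pairwise aggregation scheme described above can be sketched as follows. This is a hedged structural illustration only: real multigrid codes choose merge partners by connection strength and the quality measures discussed above, none of which appears in this sketch.

```python
def pairwise_merge(groups):
    """Merge adjacent groups pairwise; an odd trailing group stays as is."""
    return [groups[i] + groups[i + 1] if i + 1 < len(groups) else groups[i]
            for i in range(0, len(groups), 2)]

def aggregate(n_unknowns, target_size):
    """Repeat pairwise merging until the average aggregate size
    reaches the prescribed target (pairs, then groups of <= 4, ...)."""
    groups = [[i] for i in range(n_unknowns)]
    while len(groups) > 1 and n_unknowns / len(groups) < target_size:
        groups = pairwise_merge(groups)
    return groups
```

Starting from 8 singleton unknowns with a target average size of 4, the scheme first forms 4 pairs and then 2 groups of 4, mirroring the pairs-then-quadruples progression in the text.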
The divide-and-conquer bound can be used in several ways. First, its recursive application provides a quick albeit likely imprecise procedure for estimating the quality of a vertex set, or the algebraic connectivity of a graph. Another possibility is to use the bound with only a few (or even no) recursive steps; the resulting estimate of aggregate's quality or algebraic connectivity is then accurate enough if both arguments of the max operation are close to 1.
Paper (7): Divide and Conquer Partitioning Techniques for Smart Water Networks
The authors developed a software tool, called SWANP (Smart Water Network Partitioning), that automatically finds the optimal layout of District Meter Areas based on a multi-level algorithm, and the paper compares SWANP with other procedures. With reference to the specific results obtained in this study, the SWANP software showed better results than the HGP and WSC procedures in terms of both hydraulic performance and minimization of the number of edge cuts, while the MAS procedure provided a better result only in terms of edge cuts.
Simulation results obtained with SWANP are not comparable with their layout, because the number of nodes and the flow to assign to each source in a WNS depend on network topology and hydraulic characteristics. If the algorithm tries to match the balance constraint, as happens in a WNP, hydraulic performance can be significantly worsened.

Future work should provide a clear definition of the "divide and conquer" aims, possibly using the classification into Water Network Partitioning (WNP) and Water Network Sectorization (WNS).
Paper (8): A Simple Expected Running Time Analysis for "Divide and Conquer" Algorithms

The authors present a simple combinatorial method for analyzing the expected running time of such algorithms, and prove that under very weak assumptions this expected running time will be asymptotically equivalent to the running time obtained when problems are always split evenly into two subproblems of size n/2.
If f(k) = Θ(k), as with randomized Quicksort, then it will take Θ(n log n) time. It takes Θ(log n) expected time to perform a randomized binary search for a random element in an n-element sorted array (here, f(k) = Θ(1)).
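The randomized Quicksort case with f(k) = Θ(k) per-level work (the partition step) can be sketched as below. Even though random pivots split the subproblems unevenly, the expected total work is Θ(n log n), asymptotically matching the even n/2-split recurrence the analysis compares against.

```python
import random

def randomized_quicksort(a):
    """Quicksort that picks a uniformly random pivot at every call;
    the three-way partition is the Theta(k) work per recursion level."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)                 # random split point
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```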
In this situation, the decision of which subproblem to recursively solve is no longer a random event, and in order to obtain a bound on the worst-case running time we must conservatively assume that the algorithm always recurses on the larger of its two subproblems. Here our methods break down, since there no longer seems to be a simple expression for E[X_k].
The method applies to many randomized "divide and conquer" algorithms in which subproblems are solved recursively.
CONCLUSION

Divide and Conquer sorting methods address the problem of complexity, as they can follow both recursive and iterative paths while handling the data. Their complexity for sorting the data is much lower than that of the usual iterative approaches, generally O(n log₂ n). Divide and Conquer sorting algorithms have an extensive array of uses in DBMSs and in numerous record-keeping systems, for instance telephone directories, dictionaries and bank records.
REFERENCES

[1] E. Horowitz and A. Zorat, "Divide-and-Conquer for Parallel Processing", IEEE Transactions on Computers, vol. C-32, no. 6, pp. 582-585, June 1983.
[2] T. A. Mueck and M. J. Schauer, "Optimizing Sort Order Query Execution in Balanced and Nested Grid Files", IEEE Transactions on Knowledge and Data Engineering, vol. 7, no. 2, pp. 246-260, April 1995.
[3] J.-T. Yan, "Hierarchical bubble-sorting-based non-Manhattan channel routing", IEE Proc.-Comput. Digit. Tech., vol. 147, no. 4, pp. 215-220, July 2000.