**Introduction to Algorithms**

Computer Science can be defined as the *study of algorithms*. This study encompasses four distinct areas:

· Machines for executing algorithms.

· Languages for describing algorithms.

· Foundations of algorithms.

· Analysis of algorithms.

It can be easily seen that "algorithm" is a fundamental notion in computer science, so it deserves a precise definition. The dictionary's definition, "any mechanical or recursive computational procedure," is not entirely satisfying, since these terms are not basic enough.

**Definition:** An *algorithm* is a finite set of instructions that, if followed, accomplishes a particular task. In addition, every algorithm must satisfy the following criteria:

1. Input: there are zero or more quantities which are externally supplied;

2. Output: at least one quantity is produced;

3. Definiteness: each instruction must be clear and unambiguous;

4. Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm

will terminate after a finite number of steps;

5. Effectiveness: every instruction must be sufficiently basic that a person using only pencil

and paper can in principle carry it out. It is not enough that each operation is definite, but

it must also be feasible.
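Euclid's algorithm for the greatest common divisor is a standard example that satisfies all five criteria; the C++ rendering below is our own illustration.

```cpp
// Euclid's algorithm: a classic example satisfying all five criteria.
// Input: two non-negative integers (criterion 1); Output: one quantity,
// their gcd (criterion 2); each step is unambiguous (3); the loop
// terminates because b strictly decreases toward 0 (4); and every
// operation is basic enough to carry out with pencil and paper (5).
int gcd(int a, int b)
{
    while (b != 0)
    {
        int r = a % b;   // remainder of a divided by b
        a = b;
        b = r;           // b strictly decreases, ensuring termination
    }
    return a;
}
```

For example, gcd(12, 18) follows the steps (12, 18) → (18, 12) → (12, 6) → (6, 0) and returns 6.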

In formal computer science, one distinguishes between an algorithm and a program. A

program does not necessarily satisfy the fourth condition. One important example of such a

program for a computer is its operating system, which never terminates (except for system

crashes) but continues in a wait loop until more jobs are entered.

An algorithm can be described in several ways. One way is to use a graphical notation such as a flowchart. This form places each processing step in a "box" and uses arrows to indicate the next step. Different shaped boxes stand for different kinds of operations.

If a computer is merely a means to an end, then the means may be an algorithm but the

end is the transformation of data. That is why we often hear a computer referred to as a data

processing machine. Raw data is input and algorithms are used to transform it into refined data.

So, instead of saying that computer science is the study of algorithms, alternatively, we might say

that computer science is the study of data.


**1.2 Abstract Data Type:**

An ADT is a kind of data abstraction where a type's internal form is hidden behind a set of access functions. Values of the type are created and inspected only by calls to the access functions. This allows the implementation of the type to be changed without requiring any changes outside the module in which it is defined. Objects and ADTs are both forms of data abstraction, but objects are not ADTs: objects use procedural abstraction (methods), not type abstraction.

A classic example of an ADT is a stack data type for which functions might be provided to

create an empty stack, to push values onto a stack and to pop values from a stack.


Abstract data types are central to object-oriented programming where every class is an

ADT.
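The stack example above can be sketched as a C++ class whose representation is hidden behind access functions. This is our own minimal illustration; all names and the fixed capacity are our choices.

```cpp
// A minimal stack ADT sketch: the representation (a fixed array) is
// private, so callers can only use the access functions. Swapping the
// array for a linked list later would require no changes outside this
// class -- the essence of an ADT.
class IntStack
{
private:
    int data[100];   // hidden internal form
    int top;         // index of the next free slot
public:
    IntStack() : top(0) { }                 // create an empty stack
    bool Empty() const { return top == 0; }
    void Push(int x) { data[top++] = x; }   // no overflow check in this sketch
    int Pop() { return data[--top]; }       // caller must ensure non-empty
};
```

Client code that uses only `Push`, `Pop`, and `Empty` never depends on the array inside.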

**1.3 The Running Time of a Program**

When solving a problem we are frequently faced with a choice among algorithms. On what basis should we choose? There are two often-contradictory goals.

1. We would like an algorithm that is easy to understand, code, and debug.

2. We would like an algorithm that makes efficient use of the computer's resources; especially, one that runs as fast as possible.

Measuring the running time of a program: The running time of a program depends on factors

such as:

1. The input of a program.

2. The quality of the code generated by the compiler used to create the object program.

3. The nature and speed of the instructions on the machine used to execute the program.

4. The time complexity of the algorithm underlying the program.

**Calculating the Running Time of a Program**

Some important formulas for finding the running time of a program are the following:

When finding the running time of a program that has deeply nested loops, it is a good idea to know how summations work and why they are used.

We use summations because what a summation stands for is very similar to how a loop runs.
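For instance (our own illustration), the body of a simple doubly nested loop executes 1 + 2 + ... + n = n(n+1)/2 times, which is exactly a summation, and can be checked directly:

```cpp
// Counts how many times the inner statement of a doubly nested loop
// executes. The inner loop runs i times for each outer value of i,
// so the total is the summation 1 + 2 + ... + n.
long countedSteps(int n)
{
    long steps = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i; j++)
            steps++;              // the statement being counted
    return steps;
}

// Closed form of the same summation, n(n+1)/2, for comparison.
long closedForm(int n)
{
    return (long) n * (n + 1) / 2;
}
```

Both functions agree for every n, which is why summation identities translate directly into running-time formulas for loops.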


Some important properties of summations are good to review, in order to make it easier to work with them. One useful kind of property shows how to manipulate summations to your advantage in certain questions. Another important skill is manipulating an equation so that it fits the standard summation forms.

Some rules for finding the running time are the following:


A good way to understand how Big Oh works is to look at the following table of common expressions.

| Expression | Name |
| --- | --- |
| O(1) | constant |
| O(log n) | logarithmic |
| O(log² n) | log squared |
| O(n) | linear |
| O(n log n) | n log n |
| O(n²) | quadratic |
| O(n³) | cubic |
| O(2ⁿ) | exponential |

Table: The Names of Common Big Oh Expressions


**1.4 Good Programming Practice:**

There are a substantial number of ideas we should bear in mind when designing an algorithm

and implementing it as a program. These ideas often appear platitudinous, because by and large

they can be appreciated only through their successful use in real problems, rather than by

development of theory.

1. **Plan the design of a program.** This strategy of sketch-then-detail tends to produce a more organized final program that is easier to debug and maintain.

2. **Encapsulate.** Use procedures and ADTs to place the code for each principal operation and type of data in one place in the program listing. Then, if changes become necessary, the section of code requiring change will be localized.

3. **Use or modify an existing program.** One of the chief inefficiencies in the programming process is that usually a project is tackled as if it were the first program ever written. One should first look for an existing program that does all or a part of the task.

4. **Program at the command level.** A well-designed operating system will allow us to connect a network of available programs together without writing any programs at all, except for one list of operating system commands. To make commands composable, it is generally necessary that each behave as a filter, a program with one input file and one output file.


**UNIT 2**

**Basic Data Type**

2.1 The data type “list”

2.2 Static and Dynamic Memory Allocation

2.3 Pointers

2.4 Linear Linked List

2.4.1 Array implementation of list

2.4.2 Pointer implementation of list

2.4.3 Doubly-linked lists

2.5 Stack

2.6 Queues

2.7 Mapping

**2.1 The Data Type “List”**

Using ADTs allows the data in a specific piece of code to be hidden from other pieces of

code that don't need and shouldn't have access to it. This is often called modular programming or

encapsulation. The idea behind the ADT is to hide the actual implementation of the code, leaving

only its abstraction visible. In order to do this, one needs to find a clear division between the

linked list code and the surrounding code. When the linked list code is removed, what remains is

its logical abstraction. This separation makes the code less dependent on any one platform. Thus,

programming using the ADT method is usually considered a necessity for cross-platform

development as it makes maintenance and management of the application code much easier.

The ADT concept is developed here via the example of how the doubly linked list

*Application Programming Interface *(API) was created. The doubly linked list (DLL) can also be an

independent C module and compiled into a larger program.

Figure: An abstract data type (ADT).

**2.2 Static and Dynamic Memory Allocation: -**

In static memory allocation memory is allocated at compile time. If we declare a stack

through an array of 100 elements (all integers) then the statement will be:

int stk[100];

This declaration would typically be used if 100 records are to be stored in memory. The moment we make this declaration, 200 bytes (assuming 2-byte integers) are reserved in memory for storing the 100 integers. However, it may happen that when we actually run the program we are interested in storing only 60 records; even then, all 200 bytes remain reserved, resulting in wastage of memory. There may also be cases when we need to store more than 100 records; in that case the array would fall short in size.

We can overcome these problems by allocating memory at run time (instead of compile time); this is called Dynamic Memory Allocation. It is done using the standard library functions malloc() and calloc(). The syntax for allocating memory through malloc is:

p = (datatype *) malloc(n * sizeof(datatype));
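For example (our own sketch), space for a run-time number of integers can be requested like this, so only as much memory as is actually needed gets reserved:

```cpp
#include <cstdlib>

// Allocates space for n integers at run time instead of fixing the
// size at compile time. Returns NULL if the allocation fails.
// calloc(n, sizeof(int)) would do the same but also zero the memory.
int *makeStack(int n)
{
    int *stk = (int *) malloc(n * sizeof(int));
    return stk;
}
```

The caller is responsible for releasing the block with free() when it is no longer needed.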

**2.3 Pointers**

A pointer is a variable that gives the address (location, link, or reference) of some other variable. Like other variables, a pointer cannot be used before it is declared. We can declare a pointer variable as:

datatype *p;

This declaration specifies that p is a pointer, which will store the address of a variable of a particular datatype. '*' stands for 'value at address'. Thus if we declare int *p; then it means that the value at the address contained in p is an integer.
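A short illustration (ours) of declaring a pointer and reading and writing the value at the address it holds:

```cpp
// Demonstrates that a pointer stores an address and that *p yields
// the value at that address. Assigning through *p changes the
// original variable.
int valueThroughPointer()
{
    int x = 10;
    int *p = &x;   // p now holds the address of x
    *p = *p + 5;   // "value at address in p" becomes 15, i.e. x == 15
    return x;
}
```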

**Implementation of Lists: -**

In implementation of lists we shall describe data structures that can be used to represent

lists. We shall consider array, pointer and cursor implementation of list.

**Linked Linear List**

We saw in previous chapters how the static representation of a linear ordered list through an array leads to wastage of memory and, in some cases, overflow. Now we do not want to assign memory to a linear list in advance; instead we want to allocate memory to elements as they are inserted into the list. This requires dynamic allocation of memory, which can be achieved by using the malloc() or calloc() function.

But the memory assigned to the elements will not be contiguous, which is a requirement for a linear ordered list and was provided by the array representation. How can we achieve this?

We have to consider a logically ordered list: the elements are stored in different memory locations, but they are linked to each other and form a logical list, as in the figure below. The link means that each element has the address of its logical successor in the list.

Figure: Logical list (A1 → A2 → A3 → … → An)

We can understand this concept through a real-life example. Suppose there is a list of 8 friends, x1, x2, ..., x8, each residing at a different location in the city. x1 knows the address of x2, x2 knows the address of x3, and so on; x7 has the address of x8. If one wants to reach the house of x8 and does not know its address, one starts at x1 and follows the chain of addresses.

The concept of a linked list is like a family dispersed, yet still bound together. From the above discussion it is clear that a linked list is a collection of elements called nodes, each of which stores two items of information:

· An element of the list

· A link, i.e., the address of another node

Figure: x1 [Add. of x2] → x2 [Add. of x3] → … → x8 [NULL]

Link can be provided through a pointer that indicates the location of the node containing the

successor of this list element. The NULL in the last node indicates that this is the last node in the

list.


**2.4 Linked List**

**2.4.1 Array Implementation of List: -**

This type of implementation of a linked list is sometimes useful. It suffers from the usual

array problem of having to estimate the needed size of the array in advance. There are some ways

around this problem (such as using a vector instead of an array, allocating space for the array

dynamically, or even allocating space for additional arrays as needed), but they have their own

complications. We will assume here that the maximum size needed can be foretold. The array will

hold a fixed number of records (structures), each of which represents a node.

In this implementation, a linked list object contains an array of nodes, perhaps called NodeArray, and four integer fields labelled Avail, Front, Rear, and Count. Each record in NodeArray contains an Info field and an integer Next field. Avail, Front, Rear, and each Next field all contain fake pointers. What we use as a fake pointer to a node is the array index of that node.

To represent the NULL pointer we can use an illegal array index, such as -1. We will not start our

list with a dummy node, though it is possible to do so by making a few small changes. The picture

of an empty list is as follows:

The idea is that Front points to the first node of the list. Since it is -1, our imitation NULL

pointer, this means that we have an empty list. The Count of zero also indicates the same thing.

Rear is -1, indicating that there is no node at the rear of the list (as the list is empty). The Avail

field is intended to point to a list of free nodes. The programmer must arrange to manage free space within the array with this method! In our picture above, all of the nodes are free and are linked together in a list, with Avail containing zero and thus pointing at the first free node, NodeArray[0]. This node then has a Next value of 1, showing that it points to NodeArray[1], etc.

Finally, NodeArray[4].Next contains our imitation NULL pointer to mark the end of the list.

After items have been dynamically inserted into the list and perhaps items have been

deleted as well, we might end up with a picture such as the following.


Can you read what is on this list? All you do is follow the (fake) pointers. Since Front contains 1,

the first node contains the string "Ann". This node contains a pointer to node 0, so the second

node contains "Sam". This node contains a pointer to node 4, so the third node contains "Mary".

Finally, this node has -1 in the Next field, marking the end of the list. So the three data items in

the list are "Ann", "Sam", and "Mary". If you need a quick way to get to the last node on the list,

you use the 4 found in the Rear field. The list of available (free) nodes contains node 2 and node 3.

Note that such a list object always contains two linked lists: the list of data nodes and the list of

available (free) nodes. It is left as an exercise to the reader to implement a linked list class using

this array-based scheme.
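As a starting point for that exercise, here is a compact sketch of the array-based scheme (our own simplified version: a five-node capacity, string data, and only InsertFront; the field names follow the text, everything else is our choice):

```cpp
#include <string>

// Index -1 plays the role of the NULL pointer, as described above.
const int MaxNodes = 5;

struct NodeRec
{
    std::string Info;
    int Next;          // fake pointer: index of the next node, or -1
};

struct CursorList
{
    NodeRec NodeArray[MaxNodes];
    int Avail, Front, Rear, Count;

    CursorList() : Avail(0), Front(-1), Rear(-1), Count(0)
    {
        // Link all nodes into the free list: 0 -> 1 -> ... -> -1.
        for (int i = 0; i < MaxNodes - 1; i++)
            NodeArray[i].Next = i + 1;
        NodeArray[MaxNodes - 1].Next = -1;
    }

    // Take a node from the free list and insert it at the front.
    bool InsertFront(const std::string &item)
    {
        if (Avail == -1)
            return false;                 // no free nodes left
        int node = Avail;
        Avail = NodeArray[node].Next;     // unlink from the free list
        NodeArray[node].Info = item;
        NodeArray[node].Next = Front;
        Front = node;
        if (Count == 0)
            Rear = node;
        Count++;
        return true;
    }
};
```

Note that the object always holds the two lists the text mentions: the data list reached from Front and the free list reached from Avail.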

**2.4.2 Pointer Implementation of List: -**

It is very common to implement a linked list using pointers and either structures or

objects for the nodes. We will choose objects for the nodes and set up the class ListNodeClass as

shown below.

class ListNodeClass
{
private:
    ItemType Info;
    ListNodeClass * Next;
public:
    // First, the constructor:
    ListNodeClass(const ItemType & Item, ListNodeClass * NextPtr = NULL):
        Info(Item), Next(NextPtr)
    {
    }
    void GetInfo(ItemType & TheInfo) const;
    friend class ListClass; // very convenient to allow this
};

typedef ListNodeClass * ListNodePtr;

Note that the two data fields are private, as usual. The type ItemType would have to be set

up to handle whatever data we want to hold in the list. The Next field is a pointer to another

ListNodeClass object, since the * refers to whatever Next points to.

The most important function is the constructor. It takes a data item and a pointer as its

parameters and uses them to fill in the fields of the object being constructed. Note that the pointer

parameter has a default value of NULL. Recall that Info(Item), Next(NextPtr) means to copy Item into Info and NextPtr into Next. The GetInfo function simply sends back a copy of the data from the object.


The ListNodeClass is also told that it has a friend class named ListClass. Functions of the

friend class have direct access to the private data fields (Info and Next) of ListNodeClass objects. It

is not necessary to allow this direct access, but it is easier than the alternative: using get and set

class functions every time a ListClass function wants to look at or change the private fields of a

ListNodeClass object. Some people refuse to use friend classes, however, because they give more

access than is really necessary, thus partially defeating the information hiding that classes provide.

Finally, note that the above code sets up ListNodePtr as a convenient type name for a

pointer to a node. This means that we won't have to use ListNodeClass * from here on in the rest

of the code, but can instead use ListNodePtr.

Next, let's set up a class declaration for list objects. Each list will have three private data

fields: a pointer to the first node on the list, a pointer to the last node on the list, and a count of

how many data nodes are in the list.

The Front pointer will point to what is called a **dummy node**. This is a node containing no data. It

is easier to code the insertion and deletion operations if the first node on the list contains no data.

Otherwise, the code to insert a new first node or to delete the current first node must be different

from the code to handle the normal cases.

The following is the class declaration for ListClass. Helping functions, which are only used by other ListClass functions, are made private so that they cannot be accessed otherwise. (This is

another instance of **information hiding**.) There is a comment that the three data fields are

sometimes made into protected fields. The purpose of this is that if we ever created a subclass

(using inheritance), functions of the subclass would be able to directly access these fields. With

private fields, the fields would be inherited, but not directly accessible by the subclass. Since we

do not plan to inherit a new class from ListClass, this is not a concern.

class ListClass
{
private:
    ListNodePtr GetNode(const ItemType & Item,
        ListNodePtr NextPtr = NULL);
    void FreeNode(ListNodePtr NodePtr);
    void ClearList(void);
    // Next 3 are sometimes made into protected fields:
    ListNodePtr Front, Rear;
    int Count;
public:
    // constructor:
    ListClass(void);
    // destructor:
    ~ListClass(void);
    int NumItems(void) const;
    bool Empty(void) const;
    void InsertFront(const ItemType & Item);
    void InsertRear(const ItemType & Item);
    void InsertInOrder(const ItemType & Item);
    ItemType RemoveFront(void);
    ListNodePtr Find(const ItemType & Item) const;
};

The public functions above provide the usual linked list operations. The above class

declaration thus shows that, from an abstract data type point of view, the list operations are list

creation, list destruction, getting the number of items in the list, checking if the list is empty,

inserting an item at the front of the list, inserting an item at the rear of a list, inserting an item

into an ordered list, removing an item from the front of a list, and finding an item in the list.

Certainly other operations are possible. For example, why not have an operation to remove the

item from the rear of a list? (One practical reason not to do so is that it is not easy to adjust the

Rear pointer after deleting the last node and to get a NULL into the Next field of the new last node.

The only way to do so appears to be to traverse the list from the first to the last node, before removing the final node, in order to get a pointer to the next-to-last node!)


A ListClass object, named List is pictured below. Functions are not shown in the objects in

order to keep the diagram from becoming cluttered, but if they were, the private functions should

be shown as completely boxed in, whereas the public functions should be shown in open boxes to

indicate that they are available for public use. Note that a ListClass object doesn't really contain

the data in the list. Rather, the data is contained in node objects that are outside of the List object

itself. The List object simply contains pointers to the first and last nodes of the list.

The complete code files for this example are given below. The last file contains a simple

test program to try out several linked list operations. This test program is straightforward and will

not be commented upon here. Read through these files to get the general idea, and then go on to

the notes that follow.

In the next to the last file above, you will note that the constructor initializes a ListClass

object to contain a Count of zero, and to have Front and Rear pointers that point to a dummy

node. Of course, this means that it has to create a dummy node. This is handled by the helping

function GetNode. In turn, GetNode uses new with the ListNodeClass constructor to dynamically

allocate a new node.

Since a ListClass object points to other objects outside of it, it is important to have a

destructor to clean up the space used by these ListNodeClass objects. The destructor uses the

ClearList function to reclaim the space used by the data nodes and then calls FreeNode(Front) to

get rid of the dummy node. The ListClass object itself is automatically removed.

The FreeNode function is trivial in that it just uses delete to remove the node pointed to.

The ClearList function, however, is more interesting. Let's look at it in more detail since it is a

good example of a list processing function.

/* Given: Nothing.
   Task: Clear out all nodes on the list except the dummy.
         The list is thus set to an empty list.
   Return: Nothing directly, but the implicit ListClass object is modified.
*/
void ListClass::ClearList(void)
{
    ListNodePtr Current, Temp;

    Current = Front->Next;
    while (Current != NULL)
    {
        Temp = Current;
        Current = Current->Next;
        FreeNode(Temp);
    }
    Front->Next = NULL;
    Rear = Front;
    Count = 0;
}

In essence the idea is to traverse the list, freeing up each data node as it is encountered.

The Current pointer is used to point to the successive nodes on the list. Note that it is initialized

to point where the dummy node's Next field points, that is, to point to the first data node. Then we

have a while loop. Inside of it, the Current pointer is copied into Temp, Current is advanced by

placing in it a copy of the Next field of the node we have been pointing at, and FreeNode is used to

free up the node that Temp points to. Notice that we can't simply free the node that Current

points to each time around the loop. If we did that we would lose our pointer into the linked list

and have no way to advance to the next node. The loop stops when Current is advanced to pick


up the NULL value marking the end of the list. After the loop ends, the ListClass object is adjusted so that the dummy node's Next field is NULL, Rear points back to the dummy node, and the Count is zero. In other words, the list object is changed into the typical picture of an empty list.

Let's look at a few more of the list manipulation functions, starting with InsertFront.

/* Given: Item A data item.
   Task: To insert a new node containing Item at the front of the implicit
         ListClass object.
   Return: Nothing directly, but the implicit object is modified.
*/
void ListClass::InsertFront(const ItemType & Item)
{
    ListNodePtr NodePtr;

    NodePtr = GetNode(Item, Front->Next);
    Front->Next = NodePtr;
    if (Count == 0)
        Rear = NodePtr;
    Count++;
}

It is clear that the GetNode function is being used to create a node to hold the new data item. The

second parameter to this function is used to initialize the Next field of this new node so that it

points to the first data node. At this point we have the following picture:

To finish things off, the function adjusts the Next field of the dummy node so that it points

to the new node. The Count field for the list also needs to be incremented. That finishes things for

the example in our picture.

Unfortunately, it is not always quite that simple. There is a special case lurking in the

background. What happens if we insert at the front of an empty list? Remember that an empty list

looks like this:


The steps we previously did to insert a new node at the front of the list are still fine, but

they leave one thing out: they do not adjust the Rear pointer, currently pointing at the dummy

node, to point to the newly inserted node. That is handled by the if in the code above.

You may not have noticed it, but the ListClass functions are directly accessing the private data

fields of the ListNodeClass objects. That would not normally be allowed, but it is legal here

because we made ListClass to be a friend of ListNodeClass.

Next, let's look at the InsertInOrder function. Note that it assumes that the list is already

in order.

/* Given: Item A data item.
   Task: Insert Item into a new node added to the implicit ListClass object
         (assumed to already be in ascending order based on the Info field)
         so as to maintain the ascending order.
   Return: Nothing directly, but the implicit object is modified.
*/
void ListClass::InsertInOrder(const ItemType & Item)
{
    ListNodePtr Current, Last, Temp;
    bool Proceed;

    Temp = GetNode(Item);
    Current = Front->Next;
    Last = Front;
    Proceed = true;
    while (Proceed && (Current != NULL))
        if (Item > Current->Info)
        {
            Last = Current;
            Current = Current->Next;
        }
        else
            Proceed = false;
    Last->Next = Temp;
    Temp->Next = Current;
    if (Current == NULL) // item was inserted at rear of list
        Rear = Temp;
    Count++;
}

Suppose that we have an ordered list containing 4, 6, 9, and that we want to insert 8

using the above function. Note that it moves a pair of pointers, Current and Last, through the list,

with Last always pointing to the node before the one that Current points to. Current starts out by

being initialized to point to the first data node, while Last points to the dummy node. Each time


around the while loop a check is made to see if the 8 is greater than the item contained in the

node that Current points to. If so, the pointers are advanced so that each points to the next node.

If not, then the place to insert has been found, so the loop is stopped by setting the Proceed flag to

false. The picture at the point where the insertion location has been found is shown below:

All that remains to be done is to link in the new node, up the Count by 1, and check for any

special cases. It turns out that the only unique case is when the new item ends up going at the

very end of the list. In such a case the Rear pointer must be adjusted (as shown in the code

above).

The RemoveFront function is simpler. It exits with an error message if there is no data item to remove. Otherwise, it sets up NodePtr to point to the first data node and extracts the value from this node's Info field. It then adjusts the Next field of the dummy node so that it points to the second data node, skipping around the first one. The Count is decremented and the first data node is freed up. Not surprisingly, there is a special case: when the list has only one data node, the Rear field must be adjusted to point to the dummy node after removing it.
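RemoveFront itself is not listed above; the following self-contained sketch follows that description (ItemType is simplified to int, the error exit is replaced by returning 0, and InsertRear is included so the sketch stands alone; all of these are our choices):

```cpp
#include <cstddef>

// A dummy-headed singly linked list of ints, just enough to show
// RemoveFront as described in the text.
struct Node
{
    int Info;
    Node *Next;
};

struct MiniList
{
    Node *Front, *Rear;   // Front points to the dummy node
    int Count;

    MiniList()
    {
        Front = new Node;
        Front->Next = NULL;
        Rear = Front;
        Count = 0;
    }

    void InsertRear(int item)
    {
        Node *p = new Node;
        p->Info = item;
        p->Next = NULL;
        Rear->Next = p;
        Rear = p;
        Count++;
    }

    int RemoveFront()
    {
        if (Count == 0)
            return 0;                  // underflow: simplified handling
        Node *NodePtr = Front->Next;   // the first data node
        int item = NodePtr->Info;
        Front->Next = NodePtr->Next;   // skip around the first data node
        if (Count == 1)
            Rear = Front;              // special case: list becomes empty
        delete NodePtr;
        Count--;
        return item;
    }
};
```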

**2.4.3 Doubly-Linked Lists: -**

A doubly linked list has two pointer fields in each node, often named Left and Right. If the

linked list is pictured as a horizontal sequence of nodes, the Right pointers are used to traverse

the lists from left to right (that is, from beginning to end). However, the Left pointers can be used

to back up to the left whenever that is desired. The following is one possible picture of a doubly

linked list. Sometimes dummy nodes are added at the front and rear as well.


One can thus traverse a doubly linked list in both the forward and backward directions.

This can make coding simpler, for example, when writing a sequential search function to find a

given node in the list, with the plan that we will use this to find the node after which to insert a

new node. With a singly linked list we needed to return both a pointer to the node found and a

pointer to the previous node. With a doubly linked list it suffices to return a pointer to the

matching node, since we can always follow its Left field to get at the previous node.
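A minimal sketch (ours) of that point: the search returns only the matching node, and the caller reaches the previous node through the node's own Left field.

```cpp
#include <cstddef>

// A doubly linked node with Left and Right pointers, as described above.
struct DNode
{
    int Info;
    DNode *Left;
    DNode *Right;
};

// Sequential search from the first node; returns only a pointer to the
// matching node (or NULL). No second "previous node" pointer is needed,
// because the caller can follow the result's Left field.
DNode *find(DNode *first, int target)
{
    for (DNode *p = first; p != NULL; p = p->Right)
        if (p->Info == target)
            return p;
    return NULL;
}
```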

**Layout of Doubly Linked List in memory**

The arrows indicate to what the Prior and Next pointers point. The Current pointer can

point to any Node Struct, so it is open-ended.

In order to fulfill the first requirement, I decided to adhere strictly to the ANSI C standard, and, with the possible exception of how one sets up one's data and uses the DLL's input/output functions, there should be no endian (byte order) problems. The second requirement was met with the creation of a top-level structure. There is only one of these structures per linked list. It keeps track of the node pointers, the size of the application's data in bytes, how many nodes are in the list, whether or not the list has been modified since it was created or loaded into memory, where searching starts from, and what direction a search proceeds in. Figure 1 illustrates how the top-level structure is integrated into the DLL.

typedef struct list
{
    Node *head;
    Node *tail;
    Node *current;
    Node *saved;
    size_t infosize;
    unsigned long listsize;
    DLL_Boolean modified;
    DLL_SrchOrigin search_origin;
    DLL_SrchDir search_dir;
} List;

This and the next *typedef* structure remain hidden from the application program. The

node pointers mentioned above are defined in the next structure, which includes the pointers to

the application's data and the pointers to the next and prior nodes. One of these structures is

created for each node in the list.

typedef struct node
{
    Info *info;
    struct node *next;
    struct node *prior;
} Node;

The last definition is a dummy *typedef* of the user's data. It is defined as type *void* so that

the DLL's functions will be able to handle any of C's or an application's data types.

typedef void Info;

As you can see, if the two structures mentioned above are hidden from the application, all

of the ugly innards of how the linked list operates will by default be hidden from the application.

Thus, we have an abstract data type.

**2.5 STACK: -**

A **stack **is an abstract data structure that consists of an ordered collection of items with

two basic operations called **push **and **pop**. The **push **operation appends a single new item into the

**stack **collection. The **pop **operation removes a single item off the **stack **collection, but in the

reverse order that the item was added using the **push **operation.


We can model a **stack** data structure as a pipe that has one end open and the other closed. The **push** operation places items into the pipe from the open end. The **pop** operation removes an item only from the open end. Assuming the items do not juggle around in the pipe, the items can only be removed in the reverse of the order in which they were placed into the pipe. Although in theory the stack is like an infinite pipe that can hold an endless number of items, in practice, due to the finite memory available on a computer system, the stack has a maximum size, called the **maximum stack size**. The number of elements currently in the stack collection is called the **current stack size**. Performing a **pop** on an empty stack produces a special error called a **stack underflow**. Since in practice a stack can only contain a finite number of elements, an error called a **stack overflow** occurs when the stack is full and a **push** operation is performed. A **stack** thus works on the LIFO (last in, first out) principle.

**Adding into stack**

**procedure** add(item : items)

{add item to the global stack stack;

top is the current top of stack and n is its maximum size}

**begin**

**if** top = n **then** stackfull;

top := top + 1;

stack(top) := item;

**end;** {of add}

**Deletion in stack**

**procedure** delete(**var** item : items)

{remove the top element from the stack stack and put it in item}

**begin**

**if** top = 0 **then** stackempty;

item := stack(top);

top := top - 1;

**end;** {of delete}

These two procedures are so simple that they perhaps need no further explanation. Procedure delete actually combines the functions TOP and DELETE. stackfull and stackempty are procedures that are left unspecified, since they depend upon the particular application. Often a stackfull condition will signal that more storage needs to be allocated and the program re-run. Stackempty is often a meaningful condition.
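The add/delete procedures above translate directly into runnable form. The following is a minimal Python sketch (the class and method names are our own, not from the text); the stackfull and stackempty conditions are reported here as exceptions:

```python
class Stack:
    """A bounded stack: push and pop at one end, in LIFO order."""

    def __init__(self, max_size):
        self.items = []           # stack contents; the end of the list is the top
        self.max_size = max_size  # maximum stack size

    def push(self, item):
        if len(self.items) == self.max_size:
            raise OverflowError("stack overflow")   # the stackfull condition
        self.items.append(item)

    def pop(self):
        if not self.items:
            raise IndexError("stack underflow")     # the stackempty condition
        return self.items.pop()

s = Stack(3)
s.push(1); s.push(2); s.push(3)
print(s.pop(), s.pop(), s.pop())  # 3 2 1 -- reverse of insertion order
```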


**2.6 QUEUE**

A **queue** is an ordered list of items with two basic operations: adding new items to the queue, and removing items from the queue. New items are added only at one end of the queue, called the **rear** (or **back**). Items are removed only at the other end, called the **front**. The following shows a queue that holds five integers:

**front: 23 33 -23 90 2 : rear**

Adding two new items, 5 and 4, to the queue occurs at the rear, after which the queue looks like this:

**front: 23 33 -23 90 2 5 4 :rear**

Removing three items from the queue would then yield the queue

**front: 90 2 5 4 : rear**

The first item in the queue, the one that will be removed by the next remove operation, is called the **front** of the queue; the last item is called the **rear**. The number of items in the queue is called the **queue size**. If the queue size is zero, an attempted remove operation produces a **queue underflow error**. In theory there does not exist a **queue overflow**, but in practice implementations of a queue have a maximum **queue size**.

**Addition into a queue**

**procedure **addq (item : items)

{add item to the queue q}

**begin**

**if **rear=n **then **queuefull

**else begin**

rear :=rear+1;

q[rear]:=item;

**end;**

**end;**{of addq}

**Deletion in a queue**

**procedure **deleteq (**var **item : items)

{delete from the front of q and put into item}

**begin**

**if **front = rear **then **queueempty

**else begin**

front := front+1;

item := q[front];

**end;**

**end; **{of deleteq}
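The addq/deleteq procedures can be sketched in Python as follows (names are illustrative). As in the pseudocode above, array positions are not reused, so the queue can report "full" even after deletions; a circular queue would remedy this:

```python
class Queue:
    """A bounded queue mirroring addq/deleteq: add at the rear, remove at the front."""

    def __init__(self, n):
        self.q = [None] * (n + 1)  # index 0 is unused, as in the pseudocode
        self.n = n                 # capacity
        self.front = 0             # front sits just before the first item
        self.rear = 0              # rear is the position of the last item

    def addq(self, item):
        if self.rear == self.n:
            raise OverflowError("queue full")   # the queuefull condition
        self.rear += 1
        self.q[self.rear] = item

    def deleteq(self):
        if self.front == self.rear:
            raise IndexError("queue empty")     # the queueempty condition
        self.front += 1
        return self.q[self.front]

q = Queue(10)
q.addq(23); q.addq(33); q.addq(-23)
print(q.deleteq(), q.deleteq())  # 23 33 -- items leave in the order they arrived
```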

**2.7 Mapping**

A mapping, or associative store, is a function from elements of one type, called the domain type, to elements of another type, called the range type. We express the fact that the mapping *M* associates element *r* of the range type with element *d* of the domain type by *M(d) = r*.

Certain mappings, such as square(i) = i^2, can be implemented easily by giving an arithmetic expression or other simple means of calculating *M(d)* from *d*. For many mappings, however, there is no apparent way to describe *M(d)* other than to store, for each *d*, the value of *M(d)*.
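Both kinds of mapping can be illustrated in Python: a computed mapping is just a function, while a stored mapping records M(d) for each d explicitly, here using a built-in dictionary (the example data is invented for illustration):

```python
# A computed mapping: M(d) is calculated from d, so nothing is stored.
def square(i):
    return i * i

# A stored mapping: for each domain element d we record M(d) explicitly.
capital = {}
capital["France"] = "Paris"      # capital("France") = "Paris"
capital["Japan"] = "Tokyo"

print(square(4))          # 16
print(capital["Japan"])   # Tokyo
```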



**UNIT 3**

**Basic Operations and Sets**

3.1 Sets

3.2 An ADT with union, intersection and difference

3.3 Bit vector implementation of sets

3.4 Linked-list implementation of sets

3.5 The data dictionary

**3.1 Sets**

A set is a collection of members (or elements); each member of a set either is itself a set or is a primitive element called an *atom*. All members of a set are different, which means no set can contain two copies of the same element.

When used as a tool in algorithm and data structure design, atoms usually are integers, characters or strings, and all elements in any one set are usually of the same type. We shall often assume that a relation, usually denoted < and read "less than" or "precedes", linearly orders the atoms. A linear order < on a set S satisfies two properties:

1. For any a and b in S, exactly one of a < b, a = b, or b < a is true.

2. For all a, b and c in S, if a < b and b < c, then a < c (transitivity).

**3.2 An ADT with union, intersection and difference**

A **set** is an unordered collection of particular objects. These objects are referred to as **elements**. An example is shown as:

A = {1, 2, 3, 4, 5, 9, 10}

where A is the set and the elements are identified within the braces. Generally a set is denoted by a capital letter while lower-case letters represent the elements. For a large series with a pattern, the set could be designated as:

N = {1, 2, 3, 4, 5, … }

The symbol ∈ in set theory means that an object is an element of a set. For example, a ∈ S means that a is an element of the set S. More than one element can be identified, such as a, b ∈ S, which indicates that both a and b are elements of the set S. Conversely, ∉ means that an element is not in a set. For example, a ∉ S states that a is not an element of the set S.

A **subset** is a set whose every element belongs to another set. The symbols ⊂ and ⊃ are used to designate subsets, depending upon the syntax. If all of the elements of V are also elements of T, then V is a subset of T; this can be written as V ⊂ T, which can be interpreted as saying V is contained in T, or as T ⊃ V, which can be used to imply that T contains V. For example, if

T = {all letters in the English alphabet}

V = {all vowels}

then V ⊂ T.

If there exists one element of V that is not in T, then V is not a subset of T, and this is designated as V ⊄ T.

If two sets, S and T, contain exactly the same elements, then they are equal and S = T. It is also true that S ⊂ T and T ⊂ S. If, on the other hand, S ⊂ T and S ≠ T, then S is called a **proper subset** of T. This means that there is at least one element of T which is not an element of S.

It is important to recognize the syntax utilized in set theory. For example, if the set S = {a, b, c}, then a ∈ S is the correct form that means a is an element of the set S. But {a} ∈ S is incorrect, because the braces denote a set. S can have subsets that contain only one element: for example, {a} can be used to designate a subset of S containing only a. Then it is proper to state that {a} ⊂ S, which means that the set {a} is contained within S.

It is also possible to form the set of those elements of one set that have a certain property. This is written as:

S = {x ∈ T | x has the property p}

or, if the set T is clear from context, this can be shortened to the form:

S = {x | x has the property p}

The following example shows the set S containing all of those elements from the set of natural numbers N for which x + 2 = 7:

S = {x ∈ N | x + 2 = 7}

Since S contains only one element, 5, it is called a **unit set**. Another example shows that a particular set can be defined in different fashions. In this first definition, the set E contains all even natural numbers:

E = {x ∈ N | x is even}

hence,

E = {2, 4, 6, 8, … }

Another way of defining this set is as follows:

E = {2n | n ∈ N }

By convention, certain letters are reserved for particular sets. The **null set**, ∅, is the empty set, which contains no elements; ∅ is a subset of every set. The **universal set**, U, is the set consisting of all the elements that are of interest for a problem.

N is generally used for the natural numbers (N = {1, 2, 3, 4, ...}; sometimes 0 is considered a natural number), Z is used for the integers (Z = {..., -3, -2, -1, 0, 1, 2, 3, ...}), and R represents the real numbers.

The **union** of sets defines another set, which consists of elements from two or more sets. It can be shown mathematically as A ∪ B, whose elements are {x | x ∈ A or x ∈ B}. This operation is often called a Boolean OR operation, since an element must be in either set A or set B to be included in the newly formed set. For example, if

A = {0, 2, 3, 5, 7, 8, 9, 11}

and

B = {1, 2, 4, 6, 7, 9, 11}

then

C = A ∪ B = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11}

A figure called a **Venn diagram** is often used to depict the different relationships in set theory. If two sets have at least one common element, they are called **conjoint**. If, on the other hand, there are no common elements in the two sets, they are said to be **disjoint**. For example, with E = {all male students at Ferris} and F = {all female students at Ferris}, the sets are disjoint because a student cannot be a member of both groups. In the example above, the sets A and B are conjoint, since there is at least one common element.

**Venn diagram showing the union of two sets.** (figure omitted)

The **intersection** of two sets is denoted C = A ∩ B, where the elements of this new set are defined by {x | x ∈ A and x ∈ B}. This is sometimes called a Boolean AND operation, since an element must be in both sets to be included in the new set. If the sets are disjoint, then A ∩ B = ∅. In the example above, where:

A = {0, 2, 3, 5, 7, 8, 9, 11}

and

B = {1, 2, 4, 6, 7, 9, 11}

then

C = A ∩ B = {2, 7, 9, 11}

The **complement** of a set consists of the elements that are part of the universal set but not in the subset. If A is a subset, the complement is written Aᶜ or comp(A). As an example, if

U = {all male and female students at FSU}

and the subset

A = {all male students at FSU}

then

comp(A) = {all female students at FSU}

Another useful set operation is A\B, which means A minus B. The new set contains all of those elements of A that are not in B, or {x | x ∈ A and x ∉ B}. Using the examples of A and B from above,

C = A\B = {0, 3, 5, 8}

**Venn diagram showing the A minus B set operation.** (figure omitted)
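The union, intersection and difference examples above can be checked directly in Python, whose built-in set type provides these operations as operators:

```python
A = {0, 2, 3, 5, 7, 8, 9, 11}
B = {1, 2, 4, 6, 7, 9, 11}

print(A | B)   # union: every x in A or in B
print(A & B)   # intersection: every x in both A and B
print(A - B)   # difference A\B: every x in A but not in B
```

These reproduce the three sets C computed in the text.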

Sets also follow the distributive law. Thus,

A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)

and

A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

While not a proof, this property can be depicted using Venn diagrams.

**Venn diagram showing the intersection of two sets.** (figure omitted)


**3.3 A Bit-Vector Implementation of Sets**

The best implementation of a SET ADT depends on the operations to be performed and on the size of the set. When all sets in our domain of discourse are subsets of a small "universal set" whose elements are the integers 1, 2, 3, …, N for some fixed N, then we can use a bit-vector (boolean array) implementation. A set is represented by a bit vector in which the *i*th bit is true if *i* is an element of the set. The major advantage of this representation is that MEMBER, INSERT and DELETE operations can be performed in constant time by directly addressing the appropriate bit. Operations such as UNION, INTERSECTION and DIFFERENCE, on the other hand, take time proportional to the size of the universal set.
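A minimal Python sketch of the bit-vector representation (class and method names are our own): MEMBER, INSERT and DELETE touch one bit, while UNION must scan the whole universal set:

```python
class BitVectorSet:
    """A subset of {1, ..., N} stored as a boolean array."""

    def __init__(self, n):
        self.bits = [False] * (n + 1)  # bits[i] is true iff i is in the set

    def insert(self, i):
        self.bits[i] = True            # O(1): address the i-th bit directly

    def delete(self, i):
        self.bits[i] = False           # O(1)

    def member(self, i):
        return self.bits[i]            # O(1)

    def union(self, other):
        """O(N): combines the two vectors bit by bit over the universal set."""
        result = BitVectorSet(len(self.bits) - 1)
        result.bits = [a or b for a, b in zip(self.bits, other.bits)]
        return result
```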

**3.4 Linked-list implementation of sets**

It should also be evident that linked lists can represent sets, where the items in the list are the elements of the set. Unlike the bit-vector representation, the list representation uses space proportional to the size of the set represented, not the size of the universal set.

When we have operations like INTERSECTION on sets represented by linked lists, we have several options. If the universal set is linearly ordered, then we can represent a set by a sorted list. That is, we assume all set members are comparable by a relation "<" and the members of a set appear on a list in the order e1, e2, e3, …, en, where e1 < e2 < e3 < … < en. One advantage of a sorted list is that we do not need to search the entire list to determine whether an element is on the list.
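For example, INTERSECTION on two sets kept as sorted lists can be computed in a single merge-like scan, as in this Python sketch (the function name is illustrative):

```python
def intersect_sorted(l1, l2):
    """Intersection of two sets stored as sorted lists, in one linear scan.
    Because both lists are sorted, we never need to search a whole list."""
    result, i, j = [], 0, 0
    while i < len(l1) and j < len(l2):
        if l1[i] == l2[j]:
            result.append(l1[i])   # common element: emit it, advance both
            i += 1
            j += 1
        elif l1[i] < l2[j]:
            i += 1                 # l1[i] cannot appear later in l2
        else:
            j += 1                 # l2[j] cannot appear later in l1
    return result

print(intersect_sorted([1, 3, 5, 7], [3, 4, 5, 8]))  # [3, 5]
```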

**Graphical "proof" of the distributive law.** (Venn diagrams omitted)


**3.5 The data dictionary**

“A data dictionary is used to describe all aspects of a computer-based information system: in particular its data and its processes, i.e. data about data.”

“To DOCUMENT for all posterity the data elements, structures, flows, stores, processes, and external entities in the system to be designed.”

**Data Structures**

Specific arrangements of data attributes (elements) that define the organization of a single instance of a data flow. Data structures belong to a particular data store. For example, you may desire to collect information about an instance of an "employee." Employees have payroll information, perhaps work address information, and human resources information. Because this information serves several systems or data flows, you might want to store the whole of the employee information in several data structures.

**Data Elements**

The descriptive property or characteristic of an entity. In database terms, this is an "attribute" or a "field." In Microsoft and other vendor terms, this is a "property." The data element is part of a data structure, which is part of a data store. It has a source, and there may be data-entry rules you wish to enforce on this element.

**Data Flows**

Represent an input of data to a process, or the output of data (or information) from a process. A data flow is also used to represent the creation, deletion, or updating of data in a file or database (called a "data store" in the Data Flow Diagram (DFD)). Note that a data flow is not a process, but the passing of data only. Data flows are represented in the DFD, each as part of a DFD explosion of a particular process.

When we use a set in the design of an algorithm, we may not need powerful operations like union and intersection. Often, we only need to keep a set of "current" objects, with periodic insertions and deletions from the set. Also, from time to time we may need to know whether a particular element is in the set. A set ADT with the operations INSERT, DELETE and MEMBER has been given the name *dictionary*. We shall also include MAKENULL as a dictionary operation, to initialize whatever data structure is used in the implementation.
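The dictionary ADT can be sketched in Python as follows (here it is simply backed by a built-in set; the class and method names are our own):

```python
class Dictionary:
    """The dictionary ADT: MAKENULL, INSERT, DELETE and MEMBER."""

    def __init__(self):
        self.makenull()

    def makenull(self):
        self.items = set()        # initialize whatever structure is used

    def insert(self, x):
        self.items.add(x)

    def delete(self, x):
        self.items.discard(x)     # no error if x is absent

    def member(self, x):
        return x in self.items
```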


**UNIT 4**

**Algorithms Analysis Techniques**

4.1 Efficiency of algorithms

4.2 Analysis of recursive programs

4.3 Solving recurrence equation

**4.1 Efficiency of algorithms**

1. Algorithms: an algorithm is a well-defined sequence of steps which leads to the solution of a certain problem; the steps should be unambiguous, and there should be finitely many of them.

2. Computational resources: time and space.

3. Best, worst and average case performance.

4. How to compare algorithms: machine-independent measure of efficiency.

5. Growth rate.

6. Complexity measure O( ).

**Best, worst and average case**

For a sequential search of an array:

• Best performance: the item we search for is in the first position; we examine one position.

• Worst performance: the item is not in the array, or is in the last position; we examine all positions.

• Average performance (given that the item is in the array): we examine half of the array.

**Rate of Growth**

We don't know how long each step actually takes; we only know it is some constant time. We can just lump all the constants together and forget about them.

What we are left with is the fact that the time for sequential search grows linearly with the input, while for binary search it grows logarithmically, which is much slower.

**Big Oh complexity measure**

Big O notation gives an asymptotic upper bound on the actual function, which describes

time/memory usage of the algorithm.

The complexity of an algorithm is O(f(N)) if there exist a constant factor K and an input size N0 such that the actual usage of time/memory by the algorithm on inputs larger than N0 is always less than K·f(N).

**Big O**

"Big O" refers to a way of rating the efficiency of an algorithm. It is only a rough estimate of the

actual running time of the algorithm, but it will give you an idea of the performance relative to the

size of the input data.


An algorithm is said to be of order O(*expression*), or simply of order *expression* (where *expression* is some function of n, like n^2, and n is the size of the data), if there exist numbers p, q and r such that the running time always lies below p·*expression* + q for n > r. Generally *expression* is made as simple and as small as possible.

For example, the following piece of code executes its innermost line (n^2 - n)/2 times, so the whole expression for the running time is quadratic. Since (n^2 - n)/2 lies below n^2 for all n ≥ 1, the algorithm is said to have order O(n^2).

for i := n downto 2 do
begin
h := 1;
for j := 2 to i do
if list[j] > list[h] then h := j;
temp := list[h];
list[h] := list[i];
list[i] := temp;
end;

(this is the algorithm for Selection Sort)
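The same selection sort can be written as runnable Python with 0-based indexing:

```python
def selection_sort(lst):
    """Selection sort, as in the Pascal fragment above: on each pass, find
    the largest element among lst[0..i] and move it to position i."""
    for i in range(len(lst) - 1, 0, -1):   # i := n downto 2, 0-based
        h = 0
        for j in range(1, i + 1):
            if lst[j] > lst[h]:
                h = j                      # h indexes the largest so far
        lst[h], lst[i] = lst[i], lst[h]    # swap it into its final position
    return lst

print(selection_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```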

**Choosing an algorithm**

Choosing an algorithm isn't a case of simply picking the fastest one and implementing it.

Especially in a competition, you will also have to balance this against the time it will take you to

code the algorithm. In fact you should use the **simplest **algorithm that will run in the time

provided, since you generally get no more points for a faster but more complex algorithm. In fact it

is sometimes worth implementing an algorithm that you know will not be fast enough to get

100%, since 90% for a slow, correct algorithm is better than 0% for a fast but broken one.

When choosing an algorithm you should also plan what type of data structures you intend

to use. The speed of some algorithms depends on what types of data structures they are combined

with.

**4.2 Analysis of recursive programs**

The techniques for solving difference equations are sometimes subtle and bear considerable resemblance to the methods for solving differential equations. Consider the sorting program; there the procedure mergesort takes a list of length n as input and returns a sorted list as its output. The procedure merge(L1, L2) takes as input two sorted lists L1 and L2 and scans them, element by element, from the front. At each step, the smaller of the two front elements is deleted from its list and emitted as output. The result is a single sorted list containing the elements of L1 and L2. The time taken by merge on lists of length n/2 is O(n).

**function **mergesort(L : LIST, n : integer) : LIST;

{ L is a list of length n. A sorted version of L is

returned. We assume n is a power of 2. }

**var**

L1,L2: LIST

**begin**

**if **n=1 **then**

**return**(L);

**else begin**

Break L into two halves L1 and L2, each of length n/2;

**return**(merge(mergesort(L1,n/2), mergesort(L2,n/2)));

**end**

**end**; { mergesort }
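A runnable Python version of merge and mergesort (unlike the pseudocode, this version handles any list length, not just powers of 2):

```python
def merge(l1, l2):
    """Merge two sorted lists: repeatedly emit the smaller front element."""
    out, i, j = [], 0, 0
    while i < len(l1) and j < len(l2):
        if l1[i] <= l2[j]:
            out.append(l1[i]); i += 1
        else:
            out.append(l2[j]); j += 1
    return out + l1[i:] + l2[j:]   # one list is exhausted; append the rest

def mergesort(lst):
    """Sort by breaking the list into two halves, sorting each, and merging."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    return merge(mergesort(lst[:mid]), mergesort(lst[mid:]))

print(mergesort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```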


Let T(n) be the worst-case running time of procedure mergesort. We can write a recurrence (or difference) equation that upper-bounds T(n), as follows:

T(n) <= c1 if n = 1

T(n) <= 2·T(n/2) + c2·n if n > 1

where c1 and c2 are constants.

**4.3 Solving recurrence equations**

There are three different approaches we might take to solving a recurrence equation.

1. Guess a solution f(n) and use the recurrence to show that T(n) <= f(n). Sometimes we guess only the form of f(n), leaving some parameters unspecified (e.g. guess f(n) = a·n^2 for some a), and deduce suitable values for the parameters as we try to prove T(n) <= f(n) for all n.

2. Use the recurrence itself to substitute for any T(m), m < n, on the right-hand side, until all terms T(m) for m > 1 have been replaced by formulas involving only T(1). Since T(1) is always a constant, we have a formula for T(n) in terms of n and constants. This formula is what we have referred to as a "closed form" for T(n).

3. Use a general solution for recurrence equations of certain common forms.


**UNIT 5**

**Algorithms Design Technique**

5.1 Divide and conquer algorithms

5.2 Dynamic programming

5.3 Greedy algorithm

5.4 Minimum-cost spanning trees

5.5 Minimum Spanning Tree

5.6 Prim’s Algorithm

5.7 Kruskal’s Algorithm

5.8 Shortest Paths

5.9 Dijkstra’s Algorithm

5.10 Backtracking

**5.1 Divide-and-Conquer algorithms**

This is a method of designing algorithms that (informally) proceeds as follows:

Given an instance of the problem to be solved, split this into several smaller sub-instances (*of the same problem*), independently solve each of the sub-instances, and then combine the sub-instance solutions so as to yield a solution for the original instance. This description raises the question:

By what methods are the sub-instances to be independently solved?

The answer to this question is central to the concept of a Divide-and-Conquer algorithm and is a key factor in gauging their efficiency.

Consider the following: we have an algorithm, alpha say, which is known to solve all problem instances of size n in at most c·n^2 steps (where c is some constant). We then discover an algorithm, beta say, which solves the same problem by:

· Dividing an instance into 3 sub-instances of size *n/2*.

· Solving these 3 sub-instances.

· Combining the three sub-solutions, taking *d·n* steps to do this.

Suppose our original algorithm, *alpha*, is used to carry out the 'solve these sub-instances' step (step 2).

Let

*T(alpha)( n ) = Running time of alpha*

*T(beta)( n ) = Running time of beta*

Then,

*T(alpha)( n ) = c n^2 (by definition of alpha)*

But

*T(beta)( n ) = 3 T(alpha)( n/2 ) + d n*

*= (3/4)(cn^2) + dn*

So if *dn < (cn^2)/4* (i.e. *n > 4d/c*), then *beta* is *faster than alpha*.

In particular for all large enough *n*, (*n > 4d/c = Constant*), *beta *is faster than *alpha*.

This realisation of *beta *improves upon *alpha *by just a constant factor. But if the problem

size, *n*, is large enough then

*n > 4d/c*

*n/2 > 4d/c*

*...*

*n/2^i > 4d/c*


This suggests that using *beta* instead of *alpha* for the '*solve these*' stage repeatedly, until the sub-sub-sub...sub-instances are of size *n0 <= (4d/c)*, will yield a still faster algorithm.

So consider the following new algorithm for instances of size *n*

**Procedure ***gamma *(*n *: **problem size **) **is**

**Begin**

**if ***n <= n0 ***then**

Solve problem using Algorithm *alpha*;

**else**

Split into 3 sub-instances of size *n/2*;

Use *gamma *to solve each sub-instance;

Combine the 3 sub-solutions;

**end if**;

**end ***gamma*;

Let *T(gamma)(n) *denote the running time of this algorithm.

*T(gamma)(n) = cn^2 if n <= n0*

*T(gamma)(n) = 3 T(gamma)(n/2) + dn otherwise*

We shall show how relations of this form can be estimated later in the course. With these methods

it can be shown that

*T(gamma)( n ) = O( n^{log2 3} ) (= O( n^{1.59...} ))*

This is an *asymptotic improvement *upon algorithms *alpha *and *beta*.

The improvement that results from applying algorithm *gamma* is due to the fact that it maximises the savings achieved by *beta*: the (relatively) inefficient method *alpha* is applied only to *"small"* problem sizes.

The precise form of a divide-and-conquer algorithm is characterised by:

· The *threshold* input size, *n0*, below which the problem size is not sub-divided.

· The *size* of sub-instances into which an instance is split.

· The *number* of such sub-instances.

· The algorithm used to combine sub-solutions.

It is more usual to consider the *ratio* of initial problem size to sub-instance size. The *threshold* in the first item is sometimes called the *(recursive) base value*. In summary, the generic form of a divide-and-conquer algorithm is:

**Procedure ***D-and-C *(*n *: **input size**) **is**

**Begin**

**if ***n < = n0 ***then**

Solve problem without further

sub-division;

**else**

Split into *r *sub-instances

each of size *n/k*;

for each of the *r *sub-instances do

*D-and-C (n/k)*;

Combine the *r *resulting

sub-solutions to produce

the solution to the original problem;

**end if**;

**end ***D-and-C*;
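The generic template can be instantiated in Python. As an illustration (the choice of problem is ours), here is finding the maximum of a non-empty list, with threshold n0 = 2 and r = 2 sub-instances of size n/2:

```python
def dc_max(lst):
    """Divide-and-conquer skeleton: threshold n0, split into r = 2
    sub-instances of size n/2, then combine the sub-solutions.
    Assumes a non-empty list."""
    n0 = 2                        # threshold (recursive base value)
    if len(lst) <= n0:
        return max(lst)           # solve small instances directly ("alpha")
    mid = len(lst) // 2
    left = dc_max(lst[:mid])      # solve each sub-instance recursively
    right = dc_max(lst[mid:])
    return left if left >= right else right   # combine the sub-solutions

print(dc_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```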

**5.2 Dynamic Programming**

The term ``Dynamic Programming" (DP) refers to a collection of algorithms that can be

used to compute optimal policies given a perfect model of the environment as a Markov decision

process (MDP). Classical DP algorithms are of limited utility in reinforcement learning both


because of their assumption of a perfect model and because of their great computational expense,

but they are still very important theoretically. DP provides an essential foundation for the understanding of reinforcement-learning methods generally: in fact, such methods can be viewed as attempts to achieve much the same effect as DP, only with less computation and without assuming a perfect model of the environment.

In what follows, we usually assume that the environment is a finite MDP. That is, we assume that its state and action sets, S and A(s) for s ∈ S, are finite, and that its dynamics are given by a set of transition probabilities P(s' | s, a) and expected immediate rewards R(s, a, s'), for all s ∈ S, a ∈ A(s), and s' ∈ S⁺ (S⁺ is S plus a terminal state if the problem is episodic). Although DP

ideas can be applied to problems with continuous state and action spaces, exact solutions are

possible only in special cases. A common way of obtaining approximate solutions for continuous

state and action tasks is to quantize the state and action spaces and then apply finite-state DP

methods.

The key idea of DP, and of reinforcement learning generally, is the use of value functions to organize and structure the search for good policies. We can easily obtain optimal policies once we have found the optimal value functions V* or Q*, which satisfy the Bellman optimality equations:

V*(s) = max_a Σ_{s'} P(s' | s, a) [ R(s, a, s') + γ V*(s') ]

or

Q*(s, a) = Σ_{s'} P(s' | s, a) [ R(s, a, s') + γ max_{a'} Q*(s', a') ]

for all s ∈ S, a ∈ A(s), and s' ∈ S⁺. As we shall see, DP algorithms are obtained by turning Bellman equations such as these into assignment statements, that is, into update rules for improving approximations of the desired value functions.
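Turning a Bellman optimality equation into an update rule gives value iteration. The following Python sketch runs it on a tiny two-state MDP whose transition probabilities and rewards are invented purely for illustration:

```python
# Value iteration on a tiny two-state MDP (states 0 and 1).
# P[s][a] is a list of (probability, next_state, reward) triples;
# the numbers below are made up for illustration.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9                       # discount factor
V = {s: 0.0 for s in P}           # initial approximation of V*
for _ in range(200):              # each sweep applies the Bellman update once
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}
print(V)
```

After enough sweeps V converges to the optimal value function; here state 1's value approaches 2/(1 - 0.9) = 20, since repeatedly choosing "stay" earns reward 2 per step.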

**5.3 Greedy Algorithm**

An algorithm is a step-by-step recipe for solving a problem. A **greedy algorithm **might also

be called a "single-minded" algorithm or an algorithm that gobbles up all of its favorites first. The

idea behind a greedy algorithm is to perform a single procedure in the recipe over and over again

until it can't be done any more and see what kind of results it will produce. It may not completely


solve the problem, or, if it produces a solution, it may not be the very best one, but it *is *one way of

approaching the problem and sometimes yields very good (or even the best possible) results.

**5.4 Minimum-Cost spanning trees**

Let G=(V,E) be an undirected connected graph. A sub-graph t = (V,E1) of G is a spanning

tree of G if and only if t is a tree.

The figure (omitted) shows the complete graph on four nodes together with three of its spanning trees.

Spanning trees have many applications. For example, they can be used to obtain an independent set of circuit equations for an electric network. First, a spanning tree for the electric network is obtained. Let B be the set of network edges not in the spanning tree. Adding an edge from B to the spanning tree creates a cycle. Kirchhoff's second law is used on each cycle to obtain a circuit equation.

Another application of spanning trees arises from the property that a spanning tree is a minimal sub-graph G' of G such that V(G') = V(G) and G' is connected. A minimal sub-graph with n vertices must have at least n-1 edges, and all connected graphs with n-1 edges are trees. If the nodes of G represent cities and the edges represent possible communication links connecting two cities, then the minimum number of links needed to connect the n cities is n-1. The spanning trees of G represent all feasible choices.

In practical situations, the edges have weights assigned to them. These weights may represent the cost of construction, the length of the link, and so on. Given such a weighted graph, one would then wish to connect the cities with minimum total cost or minimum total length. In either case the links selected have to form a tree. If this is not so, then the selection of links contains a cycle, and removal of any one of the links on this cycle results in a link selection of lesser cost connecting all cities. We are therefore interested in finding a spanning tree of G with minimum cost. Since the identification of a minimum-cost spanning tree involves the selection of a subset of the edges, this problem fits the subset paradigm.

**5.5 Minimum Spanning Tree**

Let G = (V, E) be an undirected connected graph. A sub-graph t = (V, E1) of G is a spanning tree of G if *t* is a tree.

Example: the figure (omitted) shows the complete graph on four nodes together with three of its spanning trees.


Spanning trees have many applications; for example, they can be used to obtain an independent set of circuit equations for an electric network. In practical situations, the edges have weights assigned to them. These weights may represent the cost of construction, the length of the link, and so on. Given such a weighted graph, one wishes to connect the vertices with minimum total cost or minimum total length, and in either case the links selected have to form a tree. We are therefore interested in finding a spanning tree of G with minimum cost; figure (b) below shows a minimum spanning tree of the graph in figure (a). Since finding a minimum-cost spanning tree involves the selection of a subset of the edges, this problem fits the subset paradigm.

Figure (a): Graph. Figure (b): Minimum spanning tree. (figures omitted)

We present two algorithms for finding a minimum spanning tree of a weighted graph: Prim’s

algorithm and Kruskal’s algorithm.

**5.6 Prim’s Algorithm**

A greedy method to obtain a minimum spanning tree is to build the tree edge by edge. The next edge to include is chosen according to some optimization criterion. The simplest such criterion is to choose an edge that results in a minimum increase in the sum of the costs of the edges so far included. There are two possible ways to interpret this criterion. In the first, the set of edges selected so far always forms a tree. Thus, if A is the set of edges selected so far, then A forms a tree. The next edge (*u*, *v*) to be included in A is a minimum-cost edge not in A with the property that A ∪ {(*u*, *v*)} is also a tree. The following example shows that this selection criterion results in a minimum spanning tree. The corresponding algorithm is known as Prim's algorithm.

The example shows the working of Prim's method on the graph of figure (a). The spanning tree obtained is shown in figure (b) and has a cost of 99.

Figures (a)-(f): stages in Prim's algorithm. (figures omitted)

Having seen how Prim's method works, let us obtain an algorithm to find a minimum-cost spanning tree using this method. The algorithm starts with a tree that includes only a minimum-cost edge of G. Then, edges are added to this tree one by one. The next edge (i, j) to be added is such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i, j), cost[i][j], is minimum among all edges (k, l) such that vertex k is in the tree and vertex l is not in the tree. To determine this edge (i, j) efficiently, we associate with each vertex j not yet included in the tree a value near[j]: near[j] is a vertex in the tree such that cost[j][near[j]] is minimum among all choices for near[j]. We define near[j] = 0 for all vertices j that are already in the tree. The next edge to include is defined by the vertex j such that near[j] ≠ 0 (j not already in the tree) and cost[j][near[j]] is minimum.

Prim(E, cost, n, t)
// E is the set of edges in G. cost[n][n] is
// the cost adjacency matrix of an n-vertex
// graph, such that cost[i][j] is either
// a positive real number, or is infinity if
// no edge (i, j) exists.
// A minimum spanning tree is computed and
// stored as a set of edges in the array
// t[n-1][2]. The final cost is returned.
{
Let (k, l) be an edge of minimum cost in E;
mincost = cost[k][l];
t[1][1] = k; t[1][2] = l;
for (i = 1; i <= n; i++)
if (cost[i][l] < cost[i][k])
near[i] = l;
else
near[i] = k;
near[k] = near[l] = 0;
for (i = 2; i <= n-1; i++)
{ // Find n-2 additional edges for t.
Let j be an index such that near[j] != 0 and
cost[j][near[j]] is minimum;
t[i][1] = j; t[i][2] = near[j];
mincost = mincost + cost[j][near[j]];
near[j] = 0;
for (k = 1; k <= n; k++) // update near[]
if (near[k] != 0 && cost[k][near[k]] > cost[k][j])
near[k] = j;
}
return (mincost);
}

The time required by algorithm Prim is O(n^2), where n is the number of vertices in the graph G.
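A runnable Python version of Prim's method (0-based vertices, math.inf for missing edges; the function name is ours) that keeps the near[] idea from the algorithm above:

```python
import math

def prim(cost):
    """Prim's algorithm on an n x n cost matrix (math.inf where no edge).
    Returns (tree_edges, total_cost); runs in O(n^2)."""
    n = len(cost)
    in_tree = [False] * n
    in_tree[0] = True            # start the tree at vertex 0
    near = [0] * n               # near[j]: tree vertex with cheapest edge to j
    edges, total = [], 0
    for _ in range(n - 1):
        # choose the vertex j outside the tree with the cheapest edge in
        j = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: cost[v][near[v]])
        edges.append((near[j], j))
        total += cost[j][near[j]]
        in_tree[j] = True
        for k in range(n):       # update near[] for the remaining vertices
            if not in_tree[k] and cost[k][j] < cost[k][near[k]]:
                near[k] = j
    return edges, total
```

On a connected graph this selects n-1 edges; a disconnected graph would show up as an infinite total cost.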

**5.7 Kruskal’s Algorithm**

There is a second possible interpretation of the optimization criterion mentioned earlier, in which the edges of the graph are considered in non-decreasing order of cost. This interpretation is that the set t of edges selected so far for the spanning tree must be such that it is possible to complete t into a tree. Thus t may not be a tree at all stages of the algorithm. In fact, it will generally only be a forest, since the set of edges t can be completed into a tree only if there are no cycles in t. This method is due to Kruskal.

Example: Consider the graph of figure (*a*). We begin with no edges selected; figure (*a*) shows the current graph with no edges selected. Edge (1,6) is the first edge considered. It is included in the spanning tree being built. This yields the graph of figure (*b*). Next the edge (3,4) is selected and included in the tree (fig. (*c*)). The next edge to be considered is (2,7). Its inclusion in the tree being built does not create a cycle, so we get the graph of figure (*d*). Edge (2,3) is considered next and included in the tree (figure (*e*)). Of the edges not yet considered, (7,4) has the least cost. It is considered next. Its inclusion in the tree results in a cycle, so this edge is discarded. Edge (5,4) is the next edge to be added to the tree being built. This results in the configuration of figure (*f*). The next edge to be considered is the edge (7,5). It is discarded, as its inclusion creates a cycle. Finally edge (6,5) is considered and included in the tree being built. This completes the spanning tree. The resulting tree has cost 99.

Fig. (a)–(f): the successive stages of Kruskal's algorithm on the example graph (vertices 1–7; the edge costs visible in the figures include 10, 12, 14, 16, and 22).

For clarity, Kruskal’s method is written out more formally in the following algorithm.

1. *t *= Ø;

2. while [(*t *has fewer than *n*–1 edges) && (E != Ø)]

3. {

4. Choose an edge (*v, w*) from E of lowest cost;

5. Delete (*v, w*) from E;

6. if ((*v, w*) does not create a cycle in *t*)

add (*v, w*) to *t*;

7. else

discard (*v, w*);

8. }

Initially E is the set of all edges in g. The only functions we wish to perform on this set are (1) determine an edge with minimum cost (line 4) and (2) delete this edge (line 5). Both these functions can be performed efficiently if the edges in E are maintained as a sorted sequential list. It is not essential to sort all the edges so long as the next edge for line 4 can be determined easily. If the edges are maintained as a min heap, then the next edge to consider can be obtained in O(log |E|) time. The construction of the heap itself takes O(|E|) time. To be able to perform step 6 efficiently, the vertices in g should be grouped together in such a way that one can easily determine whether the vertices v and w are already connected by the earlier selection of edges. If they are not, the edge (v, w) is added to t; otherwise it is rejected. One possible grouping is to place all vertices in the same connected component of t into a set. For example, when the edge (2,6) is to be considered, the sets are {1,2,3}, {4,6}, and {5}. Vertices 2 and 6 are in different sets, so these sets are combined to give {1,2,3,4,6} and {5}. The next edge to be considered is (1,4). Since vertices 1 and 4 are in the same set, the edge is rejected. The edge (3,5) connects vertices in different sets and results in the final spanning tree.

**5.8 Shortest Paths**

Graphs can be used to represent the highway structure of a state or country, with vertices representing cities and edges representing sections of highway. The edges can then be assigned weights, which may be, for example, the distance along that section of highway. A motorist wishing to drive from city A to B would be interested in answers to the following questions:

· Is there a path from A to B?

· If there is more than one path from A to B, which is the shortest path?

The problems defined by these questions are special cases of the path problems we study in this section. The length of a path is now defined to be the sum of the weights of the edges on that path. The starting vertex of the path is referred to as the source, and the last vertex the destination. The graphs are digraphs to allow for one-way streets. In the problem we consider, we are given a directed graph g = (V,E), a weighting function cost for the edges of g, and a source vertex v0. The problem is to determine the shortest paths from v0 to all the remaining vertices of g. It is assumed that all the weights are positive.

**5.9 Dijkstra’s Algorithm**

This algorithm determines the lengths of the shortest paths from v0 to all other vertices in g.

Dijkstra (*v*, cost, dist, n)

//dist [j], 1 <= *j *<= *n*, is set to the length

//of the shortest path from vertex *v *to

//vertex j in a digraph *g *with *n*

//vertices; dist [*v*] is set to zero. G is

//represented by its cost adjacency matrix

//cost [*n*] [*n*].

{

for (*i *= 1; *i *<= *n*; *i*++)

{ //initialize

S [*i*] = false; dist [*i*] = cost [*v*][*i*];

}

S [*v*] = true; dist [*v*] = 0.0; //put *v *in S

for (num = 2; num < *n*; num++)

{

//Determine *n*–1 paths from *v*.

Choose u from among those vertices not in S such that dist [*u*] is minimum;

S[*u*] = true; //put *u *in S

for (each *w *adjacent to u with S[*w*] = false)

//update distances

if (dist [*w*] > dist [*u*] + cost [*u*] [*w*])

dist [*w*] = dist [*u*] + cost [*u*] [*w*];

}

}

Example: Consider the eight-vertex digraph with the cost adjacency matrix of figure (b). The table below gives the values of dist and the vertex selected at each iteration of the for loop of the previous algorithm, for finding all the shortest paths from vertex 5 (Boston). At each iteration, the vertex u not in S whose dist [u] is minimum is identified and enters S, and only the dist values of vertices adjacent from u are altered. The algorithm terminates when only seven of the eight vertices are in S. By the definition of dist, the distance of the last vertex, in this case vertex 1 (LA), is already correct, so it need not enter S.

Figure (a): an eight-vertex digraph (the edge weights visible in the figure include 250, 300, 800, 1000, 1200, 1500, and 1700)

      1     2     3     4     5     6     7     8
1     0
2   300     0
3  1000   800     0
4              1200     0
5                    1500     0   250
6                    1000           0   900  1400
7                                         0  1000
8  1700                                        0

Figure (b): the cost adjacency matrix (blank entries denote +∞)

Iteration  S                 Vertex     Distance
                             selected   LA     SF     DEN    CHI    BOST   NY     WA     NO
                                        [1]    [2]    [3]    [4]    [5]    [6]    [7]    [8]
Initial    --                --         +∞     +∞     +∞     1500   0      250    +∞     +∞
1          {5}               6          +∞     +∞     +∞     1250   0      250    1150   1650
2          {5,6}             7          +∞     +∞     +∞     1250   0      250    1150   1650
3          {5,6,7}           4          +∞     +∞     2450   1250   0      250    1150   1650
4          {5,6,7,4}         8          3350   +∞     2450   1250   0      250    1150   1650
5          {5,6,7,4,8}       3          3350   3250   2450   1250   0      250    1150   1650
6          {5,6,7,4,8,3}     2          3350   3250   2450   1250   0      250    1150   1650
           {5,6,7,4,8,3,2}

Figure

**5.10 Backtracking**

Backtracking is a systematic way to go through all the possible configurations of a search space.

These configurations may be all possible arrangements of objects (permutations) or all possible

ways of building a collection of them (subsets). Other applications may demand enumerating all

spanning trees of a graph, all paths between two vertices, or all possible ways to partition the

vertices into color classes.

What these problems have in common is that we must generate each one of the possible

configurations exactly once. Avoiding both repetitions and missing configurations means that we

must define a systematic generation order among the possible configurations. In combinatorial search, we represent our configurations by a vector a = (a1, a2, ..., an), where each element ai is selected from an ordered set Si of possible candidates for position *i*. As shown below, this representation is general enough to encode most any type of combinatorial object naturally.

The search procedure works by growing solutions one element at a time. At each step in the search, we will have constructed a partial solution with elements fixed for the first *k *elements of the vector, where k <= n. From this partial solution (a1, a2, ..., ak), we will construct the set Sk+1 of possible candidates for the (*k*+1)st position. We will then try to extend the partial solution by adding the next element from Sk+1. So long as the extension yields a longer partial solution, we continue to try to extend it.

However, at some point, Sk+1 might be empty, meaning that there is no legal way to extend the current partial solution. If so, we must *backtrack*, and replace ak, the last item in the solution value, with the next candidate in Sk. It is this backtracking step that gives the procedure its name:

Backtrack(*A*)

compute S1, the set of candidate first elements of solution *A*.

*k *= 1

while *k *> 0 do

while Sk != Ø do (*advance*)

ak = the next element from Sk

if (a1, a2, ..., ak) is a solution, report it.

*k *= *k *+ 1

compute Sk, the set of candidate *k*th elements of solution *A*.

*k *= *k *- 1 (*backtrack*)

Backtracking constructs a tree of partial solutions, where each vertex is a partial solution.

There is an edge from *x *to *y *if node *y *was created by advancing from *x*. This tree of partial

solutions provides an alternative way to think about backtracking, for the process of constructing


the solutions corresponds exactly to doing a depth-first traversal of the backtrack tree. Viewing

backtracking as depth-first search yields a natural recursive implementation of the basic

algorithm:

Backtrack-DFS(*A, k*)

if (a1, a2, ..., ak) is a solution, report it.

else

*k *= *k *+ 1

compute Sk

while Sk != Ø do

ak = an element in Sk

Sk = Sk - {ak}

Backtrack-DFS(*A, k*)

Although a breadth-first search could also be used to enumerate all solutions, depth-first

search is greatly preferred because of the amount of storage required. In depth-first search, the

current state of the search is completely represented by the path from the root to the current

search node, which requires space proportional to the *height *of the tree. In breadth-first search,

the queue stores all the nodes at the current level, which is proportional to the *width *of the

search tree. For most interesting problems, the width of the tree will grow exponentially in its

height.


**UNIT 6**

**Trees and Sorting**

6.1 Basic terminology

6.2 Implementation of tree

6.3 An Array Representation of Trees

6.4 Representation of Trees by List of Children

6.5 Binary trees

6.6 The internal sorting model

6.7 Bubble sort

6.8 Insertion sort

6.9 Selection sort

6.10 Heap sort

6.11 Divide and conquer

6.11.1 Merge sort

6.11.2 Quick sort

6.11.3 Binary search

**6.1 Basic terminology of Trees**

· Trees are a way of organizing data so that branches relate the data items. Examples:

· Family Tree

· Genealogical Chart

**Definition**

· A tree is a finite set of one or more nodes such that:

· There is a specially designated node called the root

· The remaining nodes are partitioned into n>= 0 disjoint sets, T1,…Tn where each of these

sets is a tree. (T1,…Tn ) are subtrees of the root

**Root Node**

Node at the "top" of a tree - the one from which all operations on the tree commence. The

root node may not exist (a NULL tree with no nodes in it) or have 0, 1 or 2 children in a

binary tree.

**Leaf Node**

Node at the "bottom" of a tree - farthest from the root. Leaf nodes have no children.

**Complete Tree**

Tree in which each leaf is at the same distance from the root.

**Height**

Number of nodes which must be traversed from the root to reach a leaf of a tree.

**6.2 Implementation of tree**

The basic implementations for trees and discuss their capabilities for supporting the

various tree operations are defined in this section.


**6.3 An Array Representation of Trees**

A complete binary tree has a simple array representation. Suppose we number the nodes

from left to right, beginning at the top and ending at the bottom. Then we can store the various

data items in the corresponding elements of an array.

For example, a complete binary tree can be represented by an array holding its node values in that left-to-right, top-to-bottom order.

This in fact corresponds to the level order enumeration of the tree. Note that we only use

an initial segment of the array. Provided the array is long enough, and we know the number of

tree nodes, it doesn't matter how many unused components there are at the end.

**6.4 Representation of Trees by List of Children**

Again, the idea is simple. A node in the tree has

· A data field

· A left child field with a pointer to another tree node

· A right child field with a pointer to another tree node

· Optionally, a parent field with a pointer to the parent node

The most important thing to remember about the linked representation is this:

· The pointer to the root node, not a node, represents a tree.

· The empty tree is simply the NULL pointer, not an empty node.

An important and useful way of representing trees is to form for each node a list of its children.

Like linked lists, trees are made up of nodes. A common kind of tree is a **binary tree**, in

which each node contains a reference to two other nodes (possibly null). These references are

referred to as the left and right subtrees. Like list nodes, tree nodes also contain cargo. A state diagram for a tree shows each node with arrows to its two children. To avoid cluttering up the picture, we often omit the Nones.

The top of the tree (the node tree refers to) is called the **root**. In keeping with the tree

metaphor, the other nodes are called branches and the nodes at the tips with null references are

called **leaves**. It may seem odd that we draw the picture with the root at the top and the leaves at

the bottom, but that is not the strangest thing.


To make things worse, computer scientists mix in another metaphor: the family tree. The top node is sometimes called a **parent **and the nodes it refers to are its **children**. Nodes with the

same parent are called **siblings**.

Finally, there is a geometric vocabulary for talking about trees. We already mentioned left

and right, but there is also "up" (toward the parent/root) and "down" (toward the children/leaves).

Also, all of the nodes that are the same distance from the root comprise a **level **of the tree.

We probably don't need three metaphors for talking about trees, but there they are.

Like linked lists, trees are recursive data structures because they are defined recursively.

A tree is either:

· The empty tree, represented by None, or

· A node that contains an object reference and two tree references.

**6.5 Binary Trees**

A Binary tree is a finite set of elements that is either empty or is partitioned into three disjoint subsets. The first subset contains a single element called the root of the tree. The other two subsets are themselves binary trees, called the left and right subtrees of the original tree. The elements making up the tree are called its nodes.

A conventional method of picturing a binary tree is shown in figure. This tree consists of nine

nodes with A as its root. Its left subtree is rooted at B and its right subtree is rooted at C. This is

indicated by the two branches emanating from A to B on the left and to C on the right. The

absence of a branch indicates an empty subtree for example, the left subtree of the binary tree

rooted at C and the right subtree of the binary tree rooted at E are both empty. The binary trees

rooted at D, G, H and I have empty right and left subtrees.

A

B C

D E F

G H I

A Binary Tree

If A is the root of a binary tree and B is the root of its left or right subtree, then A is said to be the

**father **of B and B is said to be the **left **or **right **son of A. A node that has no sons (such as D, G,

H, and I of figure) is called a leaf. Node n1 is an ancestor of node n2 (and n2 is a descendent of n1) if n1 is either the father of n2 or the father of some ancestor of *n*2. For example, in the tree of fig.,

A is an ancestor of G and H is a descendent of C, but E is neither an ancestor nor a descendent of

C. A node n2 is a left descendent of node n1 if n2 is either the left son of n1 or a descendent of the

left son of n1. A right descendent may be similarly defined. Two nodes are brothers if they are left

and right sons of the same father.

The following figures illustrate some structures that are not binary trees.

Structures that are not binary trees

The simplest form of tree is a **binary tree**. A binary tree consists of

a. A *node *(called the **root **node) and

b. Left and right sub-trees.

Both the sub-trees are themselves binary trees.

A binary tree

The nodes at the lowest levels of the tree (the ones with no sub-trees) are called **leaves**.

In an *ordered binary tree*,

1. The keys of all the nodes in the left sub-tree are less than that of the root,

2. The keys of all the nodes in the right sub-tree are greater than that of the root, and

3. The left and right sub-trees are themselves ordered binary trees.

**Properties of Binary Trees**

· The maximum number of nodes on level i is 2^(i-1), i >= 1

· The maximum number of nodes in a binary tree of depth k is 2^k - 1, k >= 1

· For any non-empty binary tree T, if n0 is the number of leaf nodes and n2 is the number of nodes of degree 2, then n0 = n2 + 1

**6.6 The internal sorting model**

The simplest algorithms usually take O(n²) time to sort n objects and are only useful for sorting short lists. One of the most popular sorting algorithms is quick sort, which takes O(n log n) time on average. Quick sort works well for most common applications, although, in the worst case, it can take O(n²) time. There are other methods, such as heap sort and merge sort, that take O(n log n) time in the worst case, although their average-case behavior may not be quite as good as that of quick sort. Merge sort, however, is well suited for external sorting. We shall consider several other algorithms called “bin” or “bucket” sorts. These algorithms work only on special kinds of data, such as integers chosen from a limited range, but when applicable, they are remarkably fast, taking only O(n) time in the worst case.

We assume that the objects to be sorted are records consisting of one or more fields. One of the fields, called the key, is of a type for which a linear ordering relationship <= is defined. Integers, reals, and arrays of characters are common examples of such types, although we may generally use any key type for which a “less than” or “less than or equal to” relation is defined. The sorting problem is to arrange a sequence of records so that the values of their key fields form a nondecreasing sequence. That is, given records r1, r2, ..., rn with key values k1, k2, ..., kn, respectively, we must produce the same records in an order ri1, ri2, ..., rin such that ki1 <= ki2 <= ... <= kin. The records need not all have distinct key values, nor do we require that records with the same key value appear in any particular order.

We shall use several criteria to evaluate the running time of an internal sorting algorithm. The

first and most common measure is the number of algorithm steps required to sort n records.

Another common measure is the number of comparisons between keys that must be made to sort

n records. This measure is often useful when a comparison between a pair of keys is a relatively

expensive operation, as when keys are long strings of characters. If the size of records is large, we

may also want to count the number of times a record must be moved. The application at hand

usually makes the appropriate cost measure evident.

**6.7 Bubble sorting**

The bubble sort is the oldest and simplest sort in use. Unfortunately, it's also the slowest. The

bubble sort works by comparing each item in the list with the item next to it, and swapping them

if required. The algorithm repeats this process until it makes a pass all the way through the list

without swapping any items (in other words, all items are in the correct order). This causes larger

values to "bubble" to the end of the list while smaller values "sink" towards the beginning of the

list.

The bubble sort is generally considered to be the most inefficient sorting algorithm in

common usage. Under best-case conditions (the list is already sorted), the bubble sort can

approach a constant *O*(*n*) level of complexity. The general case is an abysmal *O*(*n*²).

While the insertion, selection, and shell sorts also have *O*(*n*²) complexities, they are

significantly more efficient than the bubble sort.

**Pros: **Simplicity and ease of implementation.

**Cons: **Horribly inefficient.

**Source Code**

Below is the basic bubble sort algorithm.

**void **bubbleSort(**int **numbers[], **int **array_size)

{

**int **i, j, temp;

**for **(i = (array_size - 1); i >= 0; i--)

{

**for **(j = 1; j <= i; j++)

{

**if **(numbers[j-1] > numbers[j])


{

temp = numbers[j-1];

numbers[j-1] = numbers[j];

numbers[j] = temp;

}

}

}

}

· Unsorted array:

· 45 67 12 34 25 39

· Effective array of size 6:

· 45 12 34 25 39 67

· Effective array of size 5:

· 12 34 25 39 45

· Effective array of size 4:

· 12 25 34 39

· Effective array of size 3:

· 12 25 34

· Effective array of size 2:

· 12 25

· Effective array of size 1:

· 12

· Sorted array:

· 12 25 34 39 45 67

**6.8 Insertion Sort**

The insertion sort works just like its name suggests - it inserts each item into its proper

place in the final list. The simplest implementation of this requires two list structures - the source

list and the list into which sorted items are inserted. To save memory, most implementations use

an in-place sort that works by moving the current item past the already sorted items and

repeatedly swapping it with the preceding item until it is in place.

Like the bubble sort, the insertion sort has a complexity of *O*(*n*²). Although it has the same

complexity, the insertion sort is a little over twice as efficient as the bubble sort.

**Pros: **Relatively simple and easy to implement.

**Cons: **Inefficient for large lists.

The insertion sort is a good middle-of-the-road choice for sorting lists of a few thousand

items or less. The algorithm is significantly simpler than the shell sort, with only a small trade-off

in efficiency. At the same time, the insertion sort is over twice as fast as the bubble sort and


almost 40% faster than the selection sort. The insertion sort shouldn't be used for sorting lists

larger than a couple thousand items or repetitive sorting of lists larger than a couple hundred

items.

**Source Code**

Below is the basic insertion sort algorithm.

**void **insertionSort(**int **numbers[], **int **array_size)

{

**int **i, j, index;

**for **(i = 1; i < array_size; i++)

{

index = numbers[i];

j = i;

**while **((j > 0) && (numbers[j-1] > index))

{

numbers[j] = numbers[j-1];

j = j - 1;

}

numbers[j] = index;

}

}

**6.9 Selection Sort**

The selection sort works by selecting the smallest unsorted item remaining in the list, and

then swapping it with the item in the next position to be filled. The selection sort has a complexity

of *O*(*n*²).

**Pros: **Simple and easy to implement.

**Cons: **Inefficient for large lists, so similar to the more efficient insertion sort that the insertion

sort should be used in its place.

The selection sort is the unwanted stepchild of the *O*(*n*²) sorts. It yields a 60% performance

improvement over the bubble sort, but the insertion sort is over twice as fast as the bubble sort

and is just as easy to implement as the selection sort. In short, there really isn't any reason to use

the selection sort - use the insertion sort instead.

If you really want to use the selection sort for some reason, try to avoid sorting lists of more than 1000 items with it, or repetitively sorting lists of more than a couple hundred items.

**Source Code**

Below is the basic selection sort algorithm.

**void **selectionSort(**int **numbers[ ], **int **array_size)

{

**int **i, j;

**int **min, temp;


**for **(i = 0; i < array_size - 1; i++)

{

min = i;

**for **(j = i + 1; j < array_size; j++)

{

**if **(numbers[j] < numbers[min])

min = j;

}

temp = numbers[i];

numbers[i] = numbers[min];

numbers[min] = temp;

}

}

**6.10 Heap sorting**

The heap sort is the slowest of the *O*(*n *log n) sorting algorithms, but unlike the merge and

quick sorts it doesn't require massive recursion or multiple arrays to work. This makes it the most

attractive option for *very *large data sets of millions of items.

The heap sort works as it name suggests - it begins by building a heap out of the data set,

and then removing the largest item and placing it at the end of the sorted array. After removing

the largest item, it reconstructs the heap and removes the largest remaining item and places it in

the next open position from the end of the sorted array. This is repeated until there are no items

left in the heap and the sorted array is full. Elementary implementations require two arrays - one

to hold the heap and the other to hold the sorted elements.

To do an in-place sort and save the space the second array would require, the algorithm

below "cheats" by using the same array to store both the heap and the sorted array. Whenever an

item is removed from the heap, it frees up a space at the end of the array that the removed item

can be placed in.

**Pros: **In-place and non-recursive, making it a good choice for extremely large data sets.

**Cons: **Slower than the merge and quick sorts.

**Source Code**

Below is the basic heap sort algorithm. The siftDown() function builds and reconstructs the heap.

**void **heapSort(**int **numbers[ ], **int **array_size)

{

**int **i, temp;

**for **(i = (array_size / 2) - 1; i >= 0; i--)

siftDown(numbers, i, array_size - 1);

**for **(i = array_size - 1; i >= 1; i--)

{

temp = numbers[0];

numbers[0] = numbers[i];

numbers[i] = temp;

siftDown(numbers, 0, i - 1);

}

}

**void **siftDown(**int **numbers[ ], **int **root, **int **bottom)

{

**int **done, maxChild, temp;

done = 0;

**while **((root * 2 + 1 <= bottom) && (!done))

{

**if **(root * 2 + 1 == bottom)

maxChild = root * 2 + 1;

**else if **(numbers[root * 2 + 1] > numbers[root * 2 + 2])

maxChild = root * 2 + 1;

**else**

maxChild = root * 2 + 2;

**if **(numbers[root] < numbers[maxChild])

{

temp = numbers[root];

numbers[root] = numbers[maxChild];

numbers[maxChild] = temp;

root = maxChild;

}

**else**

done = 1;

}

}

A complete program of heap sort

#include <stdio.h>

#include <stdlib.h>

#include <unistd.h>


#define NUM_ITEMS 100

void heapSort(int numbers[], int array_size);

void siftDown(int numbers[], int root, int bottom);

int numbers[NUM_ITEMS];

int main()

{

int i;

//seed random number generator

srand(getpid());

//fill array with random integers

for (i = 0; i < NUM_ITEMS; i++)

numbers[i] = rand();

//perform heap sort on array

heapSort(numbers, NUM_ITEMS);

printf("Done with sort.\n");

for (i = 0; i < NUM_ITEMS; i++)

printf("%i\n", numbers[i]);

}

void heapSort(int numbers[], int array_size)

{

int i, temp;

for (i = (array_size / 2) - 1; i >= 0; i--)

siftDown(numbers, i, array_size - 1);

for (i = array_size - 1; i >= 1; i--)

{

temp = numbers[0];

numbers[0] = numbers[i];

numbers[i] = temp;

siftDown(numbers, 0, i - 1);

}

}

void siftDown(int numbers[], int root, int bottom)

{

int done, maxChild, temp;

done = 0;

while ((root * 2 + 1 <= bottom) && (!done))

{

if (root * 2 + 1 == bottom)

maxChild = root * 2 + 1;

else if (numbers[root * 2 + 1] > numbers[root * 2 + 2])

maxChild = root * 2 + 1;

else

maxChild = root * 2 + 2;

if (numbers[root] < numbers[maxChild])

{

temp = numbers[root];

numbers[root] = numbers[maxChild];


numbers[maxChild] = temp;

root = maxChild;

}

else

done = 1;

}

}

**6.11 Divide and conquer**

**6.11.1 Merge sort**

The merge sort splits the list to be sorted into two equal halves, and places them in

separate arrays. Each array is recursively sorted, and then merged back together to form the final

sorted list. Like most recursive sorts, the merge sort has an algorithmic complexity of *O*(*n *log *n*).

Elementary implementations of the merge sort make use of three arrays - one for each half

of the data set and one to store the sorted list in. The below algorithm merges the arrays in-place,

so only two arrays are required. There are non-recursive versions of the merge sort, but they don't

yield any significant performance enhancement over the recursive algorithm on most machines.

**Pros: **Marginally faster than the heap sort for larger sets.

**Cons: **At least twice the memory requirements of the other sorts; recursive.

The merge sort is slightly faster than the heap sort for larger sets, but it requires twice the

memory of the heap sort because of the second array. This additional memory requirement makes

it unattractive for most purposes - the quick sort is a better choice most of the time and the heap

sort is a better choice for very large sets.

Like the quick sort, the merge sort is recursive which can make it a bad choice for

applications that run on machines with limited memory.

**Source Code**

Below is the basic merge sort algorithm.

**void **mergeSort(**int **numbers[], **int **temp[], **int **array_size)

{

m_sort(numbers, temp, 0, array_size - 1);

}

**void **m_sort(**int **numbers[], **int **temp[], **int **left, **int **right)

{

**int **mid;

**if **(right > left)

{

mid = (right + left) / 2;

m_sort(numbers, temp, left, mid);

m_sort(numbers, temp, mid+1, right);


merge(numbers, temp, left, mid+1, right);

}

}

**void **merge(**int **numbers[], **int **temp[], **int **left, **int **mid, **int **right)

{

**int **i, left_end, num_elements, tmp_pos;

left_end = mid - 1;

tmp_pos = left;

num_elements = right - left + 1;

**while **((left <= left_end) && (mid <= right))

{

**if **(numbers[left] <= numbers[mid])

{

temp[tmp_pos] = numbers[left];

tmp_pos = tmp_pos + 1;

left = left +1;

}

**else**

{

temp[tmp_pos] = numbers[mid];

tmp_pos = tmp_pos + 1;

mid = mid + 1;

}

}

**while **(left <= left_end)

{

temp[tmp_pos] = numbers[left];

left = left + 1;

tmp_pos = tmp_pos + 1;

}

**while **(mid <= right)

{

temp[tmp_pos] = numbers[mid];


mid = mid + 1;

tmp_pos = tmp_pos + 1;

}

**for **(i = 0; i < num_elements; i++)

{

numbers[right] = temp[right];

right = right - 1;

}

}

**6.11.2 Quick sort**

Quick sort is a very efficient sorting algorithm invented by C.A.R. Hoare. It has two phases:

· The partition phase and

· The sort phase.

As we will see, most of the work is done in the partition phase - it works out where to divide the

work. The sort phase simply sorts the two smaller problems that are generated in the partition

phase.

This makes quick sort a good example of the **divide and conquer **strategy for solving

problems. (You've already seen an example of this approach in the binary search procedure.) In

quick sort, we divide the array of items to be sorted into two partitions and then call the quick

sort procedure recursively to sort the two partitions *i.e. *we *divide *the problem into two smaller

ones and *conquer *by solving the smaller ones. Thus the conquer part of the quick sort routine

looks like this:

void quicksort( int a[], int low, int high )

{

int pivot;

/* Termination condition! */

if ( high > low )

{

pivot = partition( a, low, high );

quicksort( a, low, pivot-1 );

quicksort( a, pivot+1, high );

}

}

Initial Step – First Partition

Sort Left Partition in the same way


For the strategy to be effective, the *partition *phase must ensure that all the items in one part (the lower part) are less than all those in the other (upper) part.

To do this, we choose a *pivot *element and arrange that all the items in the lower part are

less than the pivot and all those in the upper part greater than it. In the most general case, we

don't know anything about the items to be sorted, so that any choice of the pivot element will do -

the first element is a convenient one.

As an illustration of this idea, consider a partition algorithm in which items to be sorted are copied from the original array to a new one: items *smaller *than the pivot are placed to the left of the new array and items *greater *than the pivot are placed on the right. In the final step, the pivot is dropped into the remaining slot in the middle.
