Examples of Algorithms which have O(1), O(n log n) and O(log n) complexities

If you want examples of algorithms/groups of statements with the time complexities given in the question, here is a small list –

O(1) time
1. Accessing an array index (int a = ARR[5];)
2. Inserting a node into a linked list (given a reference to the insertion point)
3. Pushing onto and popping off a stack
4. Insertion into and removal from a queue
5. Finding the parent or left/right child of a node in a tree stored in an array
6. Jumping to the next/previous element in a doubly linked list
and you can find a million more such examples…

O(n) time
1. Traversing an array
2. Traversing a linked list
3. Linear Search
4. Deleting a specific element from an (unsorted) linked list
5. Comparing two strings
6. Checking whether a string is a palindrome
7. Counting/Bucket Sort
and here too you can find a million more such examples….
In a nutshell, most brute-force or otherwise straightforward algorithms that make a single pass over the input run in O(n) time.

O(log n) time
1. Binary Search
2. Finding the largest/smallest value in a balanced binary search tree
3. Certain Divide and Conquer algorithms that discard a constant fraction of the input at each step
4. Calculating Fibonacci numbers with matrix exponentiation / fast doubling (the best known method)
The basic premise here is NOT touching the complete data, but reducing the problem size with every iteration, as the binary-search sketch below illustrates.
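For instance, here is a minimal binary-search sketch in Java (assuming a sorted int array); the remaining range is halved on every pass, so at most about log2(n) iterations run:

// Binary search over a sorted array: the range shrinks by half each pass -> O(log n).
static int binarySearch(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;              // midpoint, written to avoid overflow
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;    // discard the left half
        else hi = mid - 1;                         // discard the right half
    }
    return -1;                                      // not found
}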

O(n log n) time
1. Merge Sort
2. Heap Sort
3. Quick Sort (average case)
4. Certain Divide and Conquer algorithms obtained by optimizing O(n^2) algorithms
The factor of 'log n' usually comes from divide and conquer (the input is split a logarithmic number of times) or from a structure of logarithmic height such as a heap, while O(n) work is done per level. Some of these algorithms are among the best optimized and are used very frequently.

O(n^2) time
1. Bubble Sort
2. Insertion Sort
3. Selection Sort
4. Traversing a simple 2D array
These are generally the less efficient choices when an O(n log n) counterpart exists; they are often straightforward brute-force approaches. A bubble-sort sketch follows below.
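To see where the n^2 comes from, here is a minimal bubble-sort sketch in Java: two nested loops over the data perform roughly n * n comparisons in the worst case.

// Bubble sort: nested loops over the array give O(n^2) comparisons.
static void bubbleSort(int[] a) {
    for (int pass = 0; pass < a.length - 1; pass++) {      // about n passes
        for (int i = 0; i < a.length - 1 - pass; i++) {    // about n comparisons per pass
            if (a[i] > a[i + 1]) {                          // swap out-of-order neighbours
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
            }
        }
    }
}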

I hope this answers your question well. If users have more examples to add, I will be more than happy 🙂

http://stackoverflow.com/questions/1592649/examples-of-algorithms-which-has-o1-on-log-n-and-olog-n-complexities

Determining if an Algorithm is O(log n)

I know that with O(n) you usually have a single loop, O(n^2) a double loop, O(n^3) a triple loop, and so on. How about O(log n)?

You’re really going about it the wrong way here. You’re trying to memorize which big-O expression goes with a given algorithmic structure, but you should really just count up the number of operations that the algorithm requires and compare that to the size of the input. An algorithm that loops over its entire input has O(n) performance because it runs the loop n times, not because it has a single loop. Here’s a single loop with O(log n) performance:

// Note that this loop body executes only about log2(n) times, where n = input.count.
for (i = 0; i < log2(input.count); i++) {
    doSomething(...);
}

So, any algorithm where the number of required operations is on the order of the logarithm of the size of the input is O(log n). The important thing that big-O analysis tells you is how the execution time of an algorithm changes relative to the size of the input: if you double the size of the input, does the algorithm take 1 more step (O(log n)), twice as many steps (O(n)), four times as many steps (O(n^2)), etc.

Does it help to know from experience that algorithms that repeatedly partition their input typically have ‘log n’ as a component of their performance? Sure. But don’t look for the partitioning and jump to the conclusion that the algorithm’s performance is O(log n) — it might be something like O(n log n), which is quite different.

http://programmers.stackexchange.com/questions/146021/determining-if-an-algorithm-is-o-log-n

How do you rotate a two dimensional array?

O(n^2) time and O(1) space algorithm (without any workarounds and hanky-panky stuff!). A Java sketch of the +90 case follows the list below.
Rotate by +90:
  1. Transpose
  2. Reverse each row
Rotate by -90:
  1. Transpose
  2. Reverse each column
Rotate by +180:
Method 1: Rotate by +90 twice
Method 2: Reverse each row and then reverse each column
Rotate by -180:
Method 1: Rotate by -90 twice
Method 2: Reverse each column and then reverse each row
Method 3: Rotate by +180, since +180 and -180 are the same rotation
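As a concrete sketch of the +90 case (assuming a square n x n int matrix; the method name is mine), transposing and then reversing each row both touch every cell a constant number of times, which gives the O(n^2) time and O(1) extra space claimed above:

// Rotate an n x n matrix by +90 degrees (clockwise), in place.
static void rotatePlus90(int[][] m) {
    int n = m.length;
    // 1. Transpose: swap m[i][j] with m[j][i] above the diagonal.
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            int tmp = m[i][j];
            m[i][j] = m[j][i];
            m[j][i] = tmp;
        }
    }
    // 2. Reverse each row.
    for (int i = 0; i < n; i++) {
        for (int left = 0, right = n - 1; left < right; left++, right--) {
            int tmp = m[i][left];
            m[i][left] = m[i][right];
            m[i][right] = tmp;
        }
    }
}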

Find smallest integer value in array list in Java without Arrays.sort

To answer your question, what would you do on paper?
  1. Create and initialize the min value at tenIntArray[0]
  2. Create a variable to hold the index of the min value in the array and initialize it to 0 (because we said in 1. to initialize the min at tenIntArray[0])
  3. Loop through the elements of your array
  4. If you find an element smaller than the current min, update the minimum value with this element and update the index with the corresponding index of this element
  5. You’re done
Writing the algorithm should be straightforward now; a minimal sketch follows.
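A minimal sketch of those steps in Java (assuming the array is called tenIntArray, as in the question):

// One O(n) pass: track the smallest value seen so far and its index.
int minValue = tenIntArray[0];                      // step 1: start with the first element
int minIndex = 0;                                   // step 2: its index
for (int i = 1; i < tenIntArray.length; i++) {      // step 3: scan the rest of the array
    if (tenIntArray[i] < minValue) {                // step 4: found a smaller element
        minValue = tenIntArray[i];
        minIndex = i;
    }
}
System.out.println("Smallest value is " + minValue + " at index " + minIndex);  // step 5: done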

Detecting if a string has unique characters: comparing my solution to “Cracking the Coding Interview?”

There are two separate questions here: what’s the efficiency of your solution, and what is the reference solution doing? Let’s treat each independently.
First, your solution:
public static boolean checkForUnique(String str) {
    boolean containsUnique = false;

    for (char c : str.toCharArray()) {
        if (str.indexOf(c) == str.lastIndexOf(c)) {
            containsUnique = true;
        } else {
            containsUnique = false;
        }
    }

    return containsUnique;
}
Your solution essentially consists of a loop over all characters in the string (let's say there are n of them), checking on each iteration whether the first and last index of the character are the same. The indexOf and lastIndexOf methods each take time O(n), because they have to scan across all the characters of the string to determine whether any of them match the one you're looking for. Therefore, since your loop runs O(n) times and does O(n) work per iteration, its runtime is O(n^2).
However, there’s something iffy about your code. Try running it on the string aab. Does it work correctly on this input? As a hint, as soon as you determine that there are two or more duplicated characters, you’re guaranteed that there are duplicates and you can return that not all characters are unique.
Now, let’s look at the reference:
public static boolean isUniqueChars(String str) {
    if (str.length() > 256) { // NOTE: Are you sure this isn't 26?
        return false;
    }
    int checker = 0;
    for (int i = 0; i < str.length(); i++) {
        int val = str.charAt(i) - 'a';
        if ((checker & (1 << val)) > 0) return false;
        checker |= (1 << val);
    }
    return true;
}
This solution is cute. The basic idea is the following: imagine that you have an array of 26 booleans, each one tracking whether a particular character has appeared in the string already. You start with all of them false. You then iterate across the characters of the string, and each time you see a character you look into the array slot for that character. If it’s false, this is the first time you’ve seen the character and you can set the slot to true. If it’s true, you’ve already seen this character and you can immediately report that there’s a duplicate.
Notice that this method doesn’t allocate an array of booleans. Instead, it opts for a clever trick. Since there are only 26 different characters possible and there are 32 bits in an int, the solution creates an int variable where each bit of the variable corresponds to one of the characters in the string. Instead of reading and writing an array, the solution reads and writes the bits of the number.
For example, look at this line:
if ((checker & (1 << val)) > 0) return false;
What does checker & (1 << val) do? Well, 1 << val creates an int value that has all bits zero except for the valth bit. It then uses bitwise AND to AND this value with checker. If the bit at position val in checker is already set, then this evaluates to a nonzero value (meaning we've already seen the character) and we can return false. Otherwise, it evaluates to 0, and we haven't seen the character.
The next line is this:
checker |= (1 << val);
This uses the “bitwise OR with assignment” operator, which is equivalent to
checker = checker | (1 << val);
This ORs checker with a value that has a 1 bit set only at position val, which turns the bit on. It’s equivalent to setting the valth bit of the number to 1.
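As a small worked example (lowercase letters only), consider the input "abca": for the first 'a', val is 0 and 1 << 0 is binary 001; the AND with checker (still 0) is zero, so the bit is set and checker becomes 001. 'b' sets bit 1 (checker becomes 011) and 'c' sets bit 2 (checker becomes 111). When the second 'a' arrives, checker & 001 is nonzero, so the method returns false immediately.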
This approach is much faster than yours. First, since the function starts off by checking if the string has length greater than 26 (I’m assuming the 256 is a typo), the function never has to test any string of length 27 or greater. Therefore, the inner loop runs at most 26 times. Each iteration does O(1) work in bitwise operations, so the overall work done is O(1) (O(1) iterations times O(1) work per iteration), which is significantly faster than your implementation.
If you haven’t seen bitwise operations used this way, I’d recommend searching for “bitwise operators” on Google to learn more.
Hope this helps!

Resources and tutorials to get started on algorithms

MIT has a course on algorithms in their Open Courseware Program with video, audio and PDF lectures.

There is an online course, also with video lectures, at ArsDigita University.

At the University of Florida there is the course Data Structures and Algorithms in Java, which, like the above, has video lectures available online.

At freescienceonline.blogspot.com you can find a whole lot of video lectures on algorithms, as well as a lot of other interesting videos.

http://stackoverflow.com/questions/366418/resources-and-tutorials-to-get-started-on-algorithms

Plain English explanation of Big O

Disclaimer, stolen from Jon Skeet’s answer:

Quick note, this is almost certainly confusing Big O notation (which is an upper bound) with Theta notation (which is a two-side bound). In my experience this is actually typical of discussions in non-academic settings. Apologies for any confusion caused.

The simplest definition I can give for Big-O notation is this:

Big-O notation is a relative representation of the complexity of an algorithm.

There are some important and deliberately chosen words in that sentence:

  • relative: you can only compare apples to apples. You can’t compare an algorithm to do arithmetic multiplication to an algorithm that sorts a list of integers. But two algorithms that do arithmetic operations (one multiplication, one addition) will tell you something meaningful;
  • representation: Big-O (in its simplest form) reduces the comparison between algorithms to a single variable. That variable is chosen based on observations or assumptions. For example, sorting algorithms are typically compared based on comparison operations (comparing two nodes to determine their relative ordering). This assumes that comparison is expensive. But what if comparison is cheap but swapping is expensive? It changes the comparison; and
  • complexity: if it takes me one second to sort 10,000 elements how long will it take me to sort one million? Complexity in this instance is a relative measure to something else.
Come back and reread the above when you’ve read the rest.

The best example of Big-O I can think of is doing arithmetic. Take two numbers (123456 and 789012). The basic arithmetic operations we learnt in school were:

  • addition;
  • subtraction;
  • multiplication; and
  • division.
Each of these is an operation or a problem. A method of solving these is called an algorithm.

Addition is the simplest. You line the numbers up (to the right) and add the digits in a column writing the last number of that addition in the result. The ‘tens’ part of that number is carried over to the next column.

Let’s assume that the addition of these numbers is the most expensive operation in this algorithm. It stands to reason that to add these two numbers together we have to add together 6 digits (and possibly carry a 7th). If we add two 100 digit numbers together we have to do 100 additions. If we add two 10,000 digit numbers we have to do 10,000 additions.

See the pattern? The complexity (being the number of operations) is directly proportional to the number of digits n in the larger number. We call this O(n) or linear complexity.
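A rough sketch of that idea in Java (purely illustrative; digits are stored least-significant first in int arrays, and both numbers are assumed to have n digits): the single loop runs once per digit, so the work grows linearly with n.

// Grade-school addition: one pass over the digits -> O(n) single-digit additions.
static int[] add(int[] a, int[] b) {        // digits, least-significant first
    int n = a.length;                       // assume both numbers have n digits
    int[] sum = new int[n + 1];
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int s = a[i] + b[i] + carry;        // one single-digit addition (plus carry)
        sum[i] = s % 10;
        carry = s / 10;
    }
    sum[n] = carry;                         // possible extra leading digit
    return sum;
}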

Subtraction is similar (except you may need to borrow instead of carry).

Multiplication is different. You line the numbers up, take the first digit in the bottom number and multiply it in turn against each digit in the top number and so on through each digit. So to multiply our two 6 digit numbers we must do 36 multiplications. We may need to do as many as 10 or 11 column adds to get the end result too.

If we have two 100-digit numbers we need to do 10,000 multiplications and 200 adds. For two one-million-digit numbers we need to do one trillion (10^12) multiplications and two million adds.
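A matching sketch of long multiplication (same illustrative digit-array representation as the addition sketch above): the nested loops perform n * n single-digit multiplications, which is exactly where the quadratic cost comes from.

// Long multiplication: n * n single-digit multiplications -> O(n^2).
static int[] multiply(int[] a, int[] b) {   // digits, least-significant first
    int n = a.length;                       // assume both numbers have n digits
    int[] product = new int[2 * n];
    for (int i = 0; i < n; i++) {           // each digit of the bottom number...
        int carry = 0;
        for (int j = 0; j < n; j++) {       // ...against each digit of the top number
            int p = product[i + j] + a[j] * b[i] + carry;
            product[i + j] = p % 10;
            carry = p / 10;
        }
        product[i + n] += carry;            // carry out of this row
    }
    return product;
}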

As the algorithm scales with n-squared, this is O(n^2) or quadratic complexity. This is a good time to introduce another important concept:

We only care about the most significant portion of complexity.

The astute may have realized that we could express the number of operations as n^2 + 2n. But as you saw from our example with two numbers of a million digits apiece, the second term (2n) becomes insignificant (accounting for 0.0002% of the total operations by that stage).

The Telephone Book

The next best example I can think of is the telephone book, normally called the White Pages or similar but it’ll vary from country to country. But I’m talking about the one that lists people by surname and then initials or first name, possibly address and then telephone numbers.

Now if you were instructing a computer to look up the phone number for “John Smith” in a telephone book that contains 1,000,000 names, what would you do? Ignoring the fact that you could guess how far in the S’s started (let’s assume you can’t), what would you do?

A typical implementation might be to open up to the middle, take the 500,000th and compare it to “Smith”. If it happens to be “Smith, John”, we just got real lucky. Far more likely is that “John Smith” will be before or after that name. If it’s after we then divide the last half of the phone book in half and repeat. If it’s before then we divide the first half of the phone book in half and repeat. And so on.

This is called a binary search (also known as a bisection search) and is used every day in programming whether you realize it or not.

So if you want to find a name in a phone book of a million names you can actually find any name by doing this at most 20 times (a quick sketch below checks the numbers). In comparing search algorithms we decide that this comparison is our ‘n’.

For a phone book of 3 names it takes 2 comparisons (at most).
For 7 it takes at most 3.
For 15 it takes 4.

For 1,000,000 it takes 20.

That is staggeringly good isn’t it?
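A quick way to check those numbers (a throwaway sketch, not part of the original answer): repeatedly halve the remaining portion of the book and count how many halvings are needed before only one entry is left.

// Count the halvings a binary search needs for a phone book of the given size.
static int comparisonsNeeded(long entries) {
    int steps = 0;
    while (entries > 1) {
        entries = (entries + 1) / 2;        // keep the larger half (worst case)
        steps++;
    }
    return steps;
}
// comparisonsNeeded(3) == 2, comparisonsNeeded(7) == 3,
// comparisonsNeeded(15) == 4, comparisonsNeeded(1_000_000) == 20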

In Big-O terms this is O(log n) or logarithmic complexity. Now the logarithm in question could be ln (base e), log10, log2 or some other base. It doesn’t matter; it’s still O(log n), just like O(2n^2) and O(100n^2) are still both O(n^2).

It’s worthwhile at this point to explain that Big O can be used to determine three cases with an algorithm:

  • Best Case: In the telephone book search, the best case is that we find the name in one comparison. This is O(1) or constant complexity;
  • Expected Case: As discussed above this is O(log n); and
  • Worst Case: This is also O(log n).
Normally we don’t care about the best case. We’re interested in the expected and worst case. Sometimes one or the other of these will be more important.

Back to the telephone book.

What if you have a phone number and want to find a name? The police have a reverse phone book but such lookups are denied to the general public. Or are they? Technically you can reverse lookup a number in an ordinary phone book. How?

You start at the first name and compare the number. If it’s a match, great; if not, you move on to the next. You have to do it this way because the phone book is unordered (by phone number, anyway). A sketch of this scan follows the list below.

So to find a name:

  • Best Case: O(1);
  • Expected Case: O(n) (for 500,000); and
  • Worst Case: O(n) (for 1,000,000).
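A sketch of that reverse lookup (the Entry class and field names are made up for illustration): one linear scan, so on average half the book is examined and in the worst case all of it.

// Reverse lookup: a linear scan, because the book is not ordered by number -> O(n).
class Entry { String name; String number; }

static String findNameByNumber(Entry[] book, String number) {
    for (Entry e : book) {                  // walk the entries in order
        if (e.number.equals(number)) {
            return e.name;                  // best case: it is the very first entry
        }
    }
    return null;                            // worst case: all n entries were checked
}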
The Travelling Salesman

This is quite a famous problem in computer science and deserves a mention. In this problem you have N towns. Each of those towns is linked to 1 or more other towns by a road of a certain distance. The Travelling Salesman problem is to find the shortest tour that visits every town.

Sounds simple? Think again.

If you have 3 towns A, B and C with roads between all pairs then you could go:

A -> B -> C
A -> C -> B
B -> C -> A
B -> A -> C
C -> A -> B
C -> B -> A

Well actually there’s less than that because some of these are equivalent (A -> B -> C and C -> B -> A are equivalent, for example, because they use the same roads, just in reverse).

In actuality there are 3 possibilities.

Take this to 4 towns and you have (iirc) 12 possibilities. With 5 it’s 60. 6 becomes 360.

This is a function of a mathematical operation called a factorial. Basically:

5! = 5 * 4 * 3 * 2 * 1 = 120
6! = 6 * 5 * 4 * 3 * 2 * 1 = 720
7! = 7 * 6 * 5 * 4 * 3 * 2 * 1 = 5040

25! = 25 * 24 * … * 2 * 1 = 15,511,210,043,330,985,984,000,000

50! = 50 * 49 * … * 2 * 1 = 3.04140932… × 10^64
So the Big-O of the Travelling Salesman problem is O(n!) or factorial or combinatorial complexity.

By the time you get to 200 towns there isn’t enough time left in the universe to solve the problem with traditional computers.

Something to think about.

Polynomial Time

Another point I wanted to make quick mention of is that any algorithm that has a complexity of O(n^a) is said to have polynomial complexity or is solvable in polynomial time.

Traditional computers can solve polynomial-time problems. Certain things are used in the world because of this. Public Key Cryptography is a prime example. It is computationally hard to find the two prime factors of a very large number (the product of two large primes). If it wasn’t, we couldn’t use the public key systems we use.

Anyway, that’s it for my (hopefully plain English) explanation of Big O (revised).

http://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o

Big ‘O’ notation and Java Constant time performance

Constant time performance for a collection means, in one line: “the same amount of time is taken for a certain operation (method) regardless of the number of elements in the collection”.

Big ‘O’ notation

Big O notation is used to indicate how the running time of an algorithm grows with the number of elements:

For example:
(N is the number of elements)

O(N): The time taken is linearly dependent on the number of elements.

O(log N): The time taken grows logarithmically with the number of elements.

O(1): The time taken is constant, regardless of the number of elements.

Please read here to find out more about Big O notation:

http://en.wikipedia.org/wiki/Big_O_notation

In java collections

Sets

1) The HashSet offers constant time performance for basic operations (add, remove, contains and size).

2) The TreeSet provides guaranteed log(n) time cost for the basic operations (add, remove and contains).

3) Like HashSet, the LinkedHashSet provides constant-time performance for the basic operations (add, contains and remove)

Lists

4) The ArrayList offers constant time performance for the size, isEmpty, get, set, iterator, and listIterator operations. The add operation runs in amortized constant time (adding n elements requires O(n) time). All of the other operations run in linear time (roughly speaking).

Maps

5) The HashMap provides constant-time performance for the basic operations (get and put).

6) The TreeMap provides guaranteed log(n) time cost for the containsKey, get, put and remove operations.

7) Like HashMap, the LinkedHashMap provides constant-time performance for the basic operations (add, contains and remove), assuming the hash function disperses elements properly among the buckets.

If you need a List you could use the ArrayList.

If you need a Set, use the HashSet; for a sorted set use the TreeSet; for an insertion-ordered set use the LinkedHashSet.

If you need a Map, use the HashMap; for a key-sorted map use the TreeMap; for a key-insertion-ordered map use the LinkedHashMap. A short usage sketch follows.
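A small illustration of those trade-offs (the class name and values are made up; the collection classes are the java.util ones quoted above):

import java.util.*;

public class CollectionChoices {
    public static void main(String[] args) {
        // contains() on a HashSet is expected O(1); a TreeSet gives O(log n)
        // operations but keeps its elements in sorted order.
        Set<String> names = new HashSet<>(Arrays.asList("b", "a", "c"));
        Set<String> sortedNames = new TreeSet<>(names);
        System.out.println(names.contains("a"));   // true, expected constant time
        System.out.println(sortedNames);           // [a, b, c] -- sorted order

        // get()/put() on a HashMap are expected O(1); a TreeMap gives O(log n)
        // operations and iterates its keys in sorted order.
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);                     // expected constant time
        Map<String, Integer> sortedAges = new TreeMap<>(ages);
        System.out.println(sortedAges.firstKey()); // smallest key: "alice"
    }
}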

http://objectissues.blogspot.com/2006/11/big-o-notation-and-java-constant-time.html

Selection Sort Code

// selectSort.java
// demonstrates selection sort
// to run this program: C>java SelectSortApp
////////////////////////////////////////////////////////////////
class ArraySel {
    private long[] a;                     // ref to array a
    private int nElems;                   // number of data items
    //------------------------------------------------------------
    public ArraySel(int max) {            // constructor
        a = new long[max];                // create the array
        nElems = 0;                       // no items yet
    }
    //------------------------------------------------------------
    public void insert(long value) {      // put element into array
        a[nElems] = value;                // insert it
        nElems++;                         // increment size
    }
    //------------------------------------------------------------
    public void display() {               // displays array contents
        for (int j = 0; j < nElems; j++)  // for each element,
            System.out.print(a[j] + " "); // display it
        System.out.println("");
    }
    //------------------------------------------------------------
    public void selectionSort() {
        int out, in, min;
        for (out = 0; out < nElems - 1; out++) {       // outer loop
            min = out;                                 // minimum
            for (in = out + 1; in < nElems; in++) {    // inner loop
                if (a[in] < a[min])                    // if min greater,
                    min = in;                          // we have a new min
            }
            swap(out, min);                            // swap them
        }  // end for(out)
    }  // end selectionSort()
    //------------------------------------------------------------
    private void swap(int one, int two) {
        long temp = a[one];
        a[one] = a[two];
        a[two] = temp;
    }
    //------------------------------------------------------------
}  // end class ArraySel
////////////////////////////////////////////////////////////////
class SelectSortApp {
    public static void main(String[] args) {
        int maxSize = 100;                // array size
        ArraySel arr;                     // reference to array
        arr = new ArraySel(maxSize);      // create the array

        arr.insert(77);                   // insert 10 items
        arr.insert(99);
        arr.insert(44);
        arr.insert(55);
        arr.insert(22);
        arr.insert(88);
        arr.insert(11);
        arr.insert(00);
        arr.insert(66);
        arr.insert(33);

        arr.display();                    // display items
        arr.selectionSort();              // selection-sort them
        arr.display();                    // display them again
    }  // end main()
}  // end class SelectSortApp
////////////////////////////////////////////////////////////////

Invariant
In the selectSort.java program, the data items with indices less than or equal to out are always sorted.