Big O

Basics

Big O notation is a way of describing how an algorithm's running time grows as the amount of data it processes grows.

O(n) (pronounced "big oh of en") means that the time it takes the algorithm to run increases linearly with the amount of data being processed.

O(n^2) means that the time it takes the algorithm to run grows with the square of the amount of data: doubling the input roughly quadruples the running time.

Big O sets an upper limit for how long an algorithm will run. If an algorithm is O(n) then it is also, by definition, O(n log n), O(n^2), O(2^n), and O(n!). It would still be most accurate to say the algorithm is O(n).
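As a concrete illustration (a hypothetical sketch, not taken from the source), a single pass over the data is O(n), while a nested loop over every pair of elements is O(n^2):

```python
def sum_values(data):
    # One pass over the data: O(n)
    total = 0
    for x in data:
        total += x
    return total

def count_pairs_with_sum(data, target):
    # Nested loops compare every pair of elements: O(n^2)
    count = 0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            if data[i] + data[j] == target:
                count += 1
    return count
```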

Fastest
- O(1) = constant speed regardless of the data
- O(log n)
- O(n)
- O(n log n)
- O(n^2)
- O(2^n)
- O(n!)
Slowest
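To get a feel for how quickly these functions diverge, a small sketch (the input sizes chosen here are arbitrary) that evaluates each growth rate at a few values of n:

```python
import math

def growth_table(sizes):
    # Evaluate each common complexity function at the given input sizes.
    rows = []
    for n in sizes:
        rows.append({
            "n": n,
            "log n": round(math.log2(n), 1),
            "n log n": round(n * math.log2(n)),
            "n^2": n ** 2,
            "2^n": 2 ** n,
        })
    return rows

for row in growth_table([8, 16, 32]):
    print(row)
# Even at these tiny sizes, 2^n pulls away from the rest dramatically.
```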

Diagram of speeds [source: Big O Cheat Sheet]

O Θ Ω

Big O specifies an upper bound.
(pronounced Big Oh)

Big Ω specifies a lower bound.
(pronounced Big Omega)

Big Θ specifies that the upper bound and lower bound are the same. This is called a tight bound.
(pronounced Big Theta)
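Stated formally (the standard textbook definitions, included here for reference):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 > 0 : f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 : f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```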

These bounds are all asymptotic: they describe how running time grows as the input gets arbitrarily large, so they ignore constant factors and any constant overhead in your algorithm (often written as a constant "k"). For example, if you have to open a database connection, that operation takes a fixed amount of time unrelated to the amount of data you are processing. For small data sets, this constant k may be large enough that your O(n) algorithm runs slower in practice than a competing O(n^2) one. But the algorithm is still O(n), because on large enough data sets the constant k becomes insignificant.

So an asymptotic bound is a bound for data sets that are large enough that constant overhead can be ignored.
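A numeric sketch of that crossover (the setup cost of 1000 steps is an arbitrary stand-in for something like opening a connection):

```python
def cost_with_setup(n, k=1000):
    # O(n) algorithm that pays a fixed setup cost k before touching the data
    return k + n

def cost_quadratic(n):
    # O(n^2) algorithm with no setup cost
    return n ** 2

for n in (10, 100, 1000):
    print(n, cost_with_setup(n), cost_quadratic(n))
# For small n the quadratic algorithm does less total work;
# as n grows, the O(n) algorithm wins and the constant k stops mattering.
```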

Lookup Tables

[source: Big O Cheat Sheet]

Time and space complexity of standard data structures:

``````
Data Structure     | Time Complexity                                                                       | Space Complexity
                   | Average                                   | Worst Case                                | Worst Case
                   | Access   | Search   | Insert   | Delete   | Access   | Search   | Insert   | Delete   |
Array              | Θ(1)     | Θ(n)     | Θ(n)     | Θ(n)     | O(1)     | O(n)     | O(n)     | O(n)     | O(n)
Stack              | Θ(n)     | Θ(n)     | Θ(1)     | Θ(1)     | O(n)     | O(n)     | O(1)     | O(1)     | O(n)
Queue              | Θ(n)     | Θ(n)     | Θ(1)     | Θ(1)     | O(n)     | O(n)     | O(1)     | O(1)     | O(n)
Linked List        | Θ(n)     | Θ(n)     | Θ(1)     | Θ(1)     | O(n)     | O(n)     | O(1)     | O(1)     | O(n)
Doubly Linked List | Θ(n)     | Θ(n)     | Θ(1)     | Θ(1)     | O(n)     | O(n)     | O(1)     | O(1)     | O(n)
Skip List          | Θ(log n) | Θ(log n) | Θ(log n) | Θ(log n) | O(n)     | O(n)     | O(n)     | O(n)     | O(n log n)
Hash Table         | N/A      | Θ(1)     | Θ(1)     | Θ(1)     | N/A      | O(n)     | O(n)     | O(n)     | O(n)
Binary Search Tree | Θ(log n) | Θ(log n) | Θ(log n) | Θ(log n) | O(n)     | O(n)     | O(n)     | O(n)     | O(n)
Cartesian Tree     | N/A      | Θ(log n) | Θ(log n) | Θ(log n) | N/A      | O(n)     | O(n)     | O(n)     | O(n)
B-Tree             | Θ(log n) | Θ(log n) | Θ(log n) | Θ(log n) | O(log n) | O(log n) | O(log n) | O(log n) | O(n)
Red-Black Tree     | Θ(log n) | Θ(log n) | Θ(log n) | Θ(log n) | O(log n) | O(log n) | O(log n) | O(log n) | O(n)
Splay Tree         | N/A      | Θ(log n) | Θ(log n) | Θ(log n) | N/A      | O(log n) | O(log n) | O(log n) | O(n)
AVL Tree           | Θ(log n) | Θ(log n) | Θ(log n) | Θ(log n) | O(log n) | O(log n) | O(log n) | O(log n) | O(n)
KD Tree            | Θ(log n) | Θ(log n) | Θ(log n) | Θ(log n) | O(n)     | O(n)     | O(n)     | O(n)     | O(n)
``````

Array sorting algorithms:
``````
Algorithm      | Time Complexity                            | Space Complexity
               | Best       | Average       | Worst         | Worst
Quick Sort     | Ω(n log n) | Θ(n log n)    | O(n^2)        | O(log n)
Merge Sort     | Ω(n log n) | Θ(n log n)    | O(n log n)    | O(n)
Tim Sort       | Ω(n)       | Θ(n log n)    | O(n log n)    | O(n)
Heap Sort      | Ω(n log n) | Θ(n log n)    | O(n log n)    | O(1)
Bubble Sort    | Ω(n)       | Θ(n^2)        | O(n^2)        | O(1)
Insertion Sort | Ω(n)       | Θ(n^2)        | O(n^2)        | O(1)
Selection Sort | Ω(n^2)     | Θ(n^2)        | O(n^2)        | O(1)
Tree Sort      | Ω(n log n) | Θ(n log n)    | O(n^2)        | O(n)
Shell Sort     | Ω(n log n) | Θ(n(log n)^2) | O(n(log n)^2) | O(1)
Bucket Sort    | Ω(n + k)   | Θ(n + k)      | O(n^2)        | O(n)
Radix Sort     | Ω(n k)     | Θ(n k)        | O(n k)        | O(n + k)
Counting Sort  | Ω(n + k)   | Θ(n + k)      | O(n + k)      | O(n)
Cube Sort      | Ω(n)       | Θ(n log n)    | O(n log n)    | O(n)
``````
Lists

Lists vs Arrays:

- Lists are stored as individually allocated nodes on the heap; Array elements occupy one contiguous block of memory.
- List length is dynamic, Array length is static.
- You can insert and remove nodes in a List quickly. With an Array, you have to shift the following elements over (or reallocate the Array entirely).
- Binary search works on Arrays because any index can be accessed in constant time. It doesn't work efficiently on Lists because reaching a given index means walking the nodes sequentially.
- Lists take more memory to store the same data as an Array because each node in the List requires a pointer to the next node.
- Arrays can be cached more easily because all the memory locations are right next to each other.

A linked list is an ordered collection of nodes. Unlike an array, each node of a linked list may be stored at any location in memory. Each node includes a pointer to the location of the next node in memory.

``````
variable → █ → █ → █ → █ → null
``````

``````
class ListNode
{
    Object Data;              // the value stored in this node
    ListNode NextNode = null; // the next node in the list, or null at the tail
}
``````
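The same structure as a runnable Python sketch (the names mirror the class above; `traverse` is a hypothetical helper showing how the pointers are followed):

```python
class ListNode:
    def __init__(self, data):
        self.data = data
        self.next_node = None  # the next node, or None at the tail

def traverse(head):
    # Follow next_node pointers until reaching the end of the list.
    values = []
    node = head
    while node is not None:
        values.append(node.data)
        node = node.next_node
    return values

# Build: variable → 1 → 2 → 3 → None
head = ListNode(1)
head.next_node = ListNode(2)
head.next_node.next_node = ListNode(3)
```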

Circular Linked List

The tail node points back to the head node, forming a full circle.

``````
variable → █ → █ → █ → █
           ↑           │
           └───────────┘
``````
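Because no node points to null, traversing a circular list needs a different stopping condition; a common approach (sketched here, assuming a non-empty list) is to stop when the walk arrives back at the starting node:

```python
class ListNode:
    def __init__(self, data):
        self.data = data
        self.next_node = None

def traverse_circular(head):
    # Stop when we arrive back at the head, instead of looking for None.
    # Assumes head is not None.
    values = []
    node = head
    while True:
        values.append(node.data)
        node = node.next_node
        if node is head:
            break
    return values

# Build a circle: 1 → 2 → 3 → back to 1
head = ListNode(1)
head.next_node = ListNode(2)
head.next_node.next_node = ListNode(3)
head.next_node.next_node.next_node = head
```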

Doubly Linked List

Each node includes a pointer forward to the next node and another backward to the previous node.

If the tail node also connects to the head node, it makes a Circular Doubly Linked List.

``````
variable → █ ↔ █ ↔ █ ↔ █ → null
``````

``````
class ListNode
{
    Object Data;                  // the value stored in this node
    ListNode NextNode = null;     // the next node, or null at the tail
    ListNode PreviousNode = null; // the previous node, or null at the head
}
``````
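The backward pointer is what makes splicing nodes in and out an O(1) operation once you hold a reference to a neighbor; a hypothetical sketch of inserting after a given node:

```python
class ListNode:
    def __init__(self, data):
        self.data = data
        self.next_node = None
        self.previous_node = None

def insert_after(node, data):
    # Splice a new node between `node` and node.next_node: O(1),
    # because both neighbors are reachable directly from `node`.
    new_node = ListNode(data)
    new_node.previous_node = node
    new_node.next_node = node.next_node
    if node.next_node is not None:
        node.next_node.previous_node = new_node
    node.next_node = new_node
    return new_node

head = ListNode(1)
insert_after(head, 3)
insert_after(head, 2)  # list is now 1 ↔ 2 ↔ 3
```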

Sort

Counting Sort

Appropriate when you know the exact range of values that might be in the list, the values are discrete (like integers, not decimals), and the range isn't too big.

Sample unsorted array, with possible value range 0-9:
``````
1, 4, 1, 2, 7, 5, 2
``````

Algorithm:
- Create an array whose length covers the range of possible values. This is the counting array.
- Iterate through the unsorted data once, incrementing the corresponding slot in the counting array for each value. You're counting up how many times each value occurs.
``````
Counting array: 0, 2, 2, 0, 1, 1, 0, 1, 0, 0
``````
- If you're sorting plain integers or characters, you can stop here: iterate through the counting array, outputting each value the counted number of times.

- If you're dealing with more complex objects, continue.
- Iterate through the counting array, adding to each slot the running total of all previous slots. Each slot now holds the number of elements less than or equal to that value, which is one past the last index that value will occupy in the sorted array.
``````
Counting array: 0, 2, 4, 4, 5, 6, 6, 7, 7, 7
``````
- Iterate through the unsorted array (in reverse, if you want the sort to be stable). For each value, decrement its slot in the counting array, then place the value at the index now stored in that slot.
``````
Unsorted array: 1, 4, 1, 2, 7, 5, 2
Counting array: 0, 2, 4, 4, 5, 6, 6, 7, 7, 7
Sorted array: 1, 1, 2, 2, 4, 5, 7
``````

Time complexity: O(N + K)
Space complexity: O(N + K)
Where N is the number of elements in the unsorted array, and K is the number of possible values.
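The steps above can be put together as a runnable sketch (assuming integer values in the range 0 to K-1):

```python
def counting_sort(data, value_range):
    # Step 1: count how many times each value occurs.
    counts = [0] * value_range
    for value in data:
        counts[value] += 1

    # Step 2: turn counts into running totals; counts[v] is now the number
    # of elements <= v, i.e. one past the last index for value v.
    for v in range(1, value_range):
        counts[v] += counts[v - 1]

    # Step 3: decrement each value's slot, then place the value at that index.
    # Iterating in reverse keeps the sort stable.
    result = [None] * len(data)
    for value in reversed(data):
        counts[value] -= 1
        result[counts[value]] = value
    return result

print(counting_sort([1, 4, 1, 2, 7, 5, 2], 10))  # the sample array above
# → [1, 1, 2, 2, 4, 5, 7]
```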