
# Algorithmic Analysis

One could say this is more of an advanced topic, maybe even an unnecessary one, but it's definitely an interesting one. As a programmer you will be concerned with whether one algorithm is faster than another, and that is determined by analyzing the algorithm. I will tell you how fast and efficient the algorithms in this book are compared to similar ones, and I will also go into detail as to why. If all you care about is whether A or B is faster, then you can probably skip over these sections and maybe come back to them later.

One thing you do need to know regardless is how the running time of an algorithm is expressed. There are two popular ways, Big Oh notation and tilde notation, but they both express pretty much the same idea.

Say you have an algorithm that brute-force searches for duplicates in an array of N objects. That is to say, you manually compare every object to every other object. The running time of this algorithm will theoretically be something like:

**c(N^{2}) + z**

Where _c_ is the amount of time it takes the computer to perform each calculation, and _z_ is some arbitrary initial setup time.
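To make the N^{2} term concrete, here is a minimal sketch of that brute-force duplicate search (the function name `has_duplicates` is mine, and I assume the objects can be compared with `==`):

```python
def has_duplicates(items):
    """Brute-force duplicate search: compare every object to
    every other object. The nested loops perform on the order
    of N^2 comparisons for a list of N items."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):  # skip pairs we've already compared
            if items[i] == items[j]:
                return True
    return False

print(has_duplicates([3, 1, 4, 1, 5]))  # True: 1 appears twice
print(has_duplicates([2, 7, 18, 28]))   # False: all distinct
```

Note the inner loop starts at `i + 1`, so each pair is checked once, about N^{2}/2 comparisons; the constant factor of 1/2 is exactly the kind of detail the analysis below throws away.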

The first thing we can do is completely disregard _c_ and _z_. Why? Because we can't control them. Both are completely dependent on things like the speed of the computer the program is run on, the programming language used, how well the programmer coded up the algorithm, and so on. Things that, as an algorithm designer, are completely out of your hands.

The next thing we can say is that we only care about the highest order term. Say you have two algorithms. Algorithm A has a running time (after stripping away the constants) of **N^{2} + N** and Algorithm B has a running time of **N^{2} + 10N**. Objectively speaking the first one is faster, and for small inputs, noticeably faster. If you're only working with 10 items, that is to say N = 10, then Algorithm A has a running time proportional to

A = 10^{2} + 10 = 110

While B is proportional to

B = 10^{2} + 100 = 200, or almost twice as long.

But we don't care about small inputs. For small inputs the operation is going to be super fast regardless of what algorithm you use. In fact, when you're only dealing with small inputs, it's usually easier to code up a 'slower' but simpler algorithm than one that runs faster but takes four times as many lines of code. What we care about is large inputs. Let's revisit the previous algorithms, but change the input to 10,000, and now we have

A = 100,000,000 + 10,000 = 100,010,000

B = 100,000,000 + 100,000 = 100,100,000
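The arithmetic above is easy to reproduce yourself. A quick sketch, where `A` and `B` are just my labels for the two stripped-down running-time formulas:

```python
def A(n):
    return n**2 + n       # Algorithm A: N^2 + N

def B(n):
    return n**2 + 10 * n  # Algorithm B: N^2 + 10N

# As n grows, the ratio B/A shrinks toward 1: the highest
# order term N^2 dominates both formulas.
for n in (10, 10_000, 1_000_000):
    print(n, A(n), B(n), round(B(n) / A(n), 6))
```

At N = 10 the ratio is nearly 2, but by N = 1,000,000 it is within a hundredth of a percent of 1.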

Still a difference, but not nearly as much, and this difference will only decrease as the input gets larger. So we say that both algorithms have the same proportional running time of N^{2}, which is expressed as either **~N^{2}** or **O(N^{2})**. The O() notation, pronounced Big Oh, comes from calculus and its analysis of the growth of functions, so I won't go into all the details of it here. All you need to know is that O(N^{2}) is much faster than O(N^{4}).

In this text we will use the ~N^{2} notation, but if you look at other algorithm resources and see the O() notation, just remember it basically means the same thing for our purposes.
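One way to convince yourself that the brute-force duplicate search really grows like ~N^{2} is to count its comparisons rather than time it, which keeps the machine-dependent constants _c_ and _z_ out of the picture entirely. A minimal sketch (the helper name is mine): doubling N should roughly quadruple the count.

```python
def count_comparisons(n):
    """Count the comparisons the brute-force duplicate search
    makes on n items: n * (n - 1) / 2, which is ~n^2 / 2."""
    comparisons = 0
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
    return comparisons

prev = count_comparisons(250)
for n in (500, 1000):
    cur = count_comparisons(n)
    print(n, cur, round(cur / prev, 2))  # ratio hovers around 4
    prev = cur
```

This "doubling test" is a handy sanity check: for an ~N^{2} algorithm the work grows by a factor of about 4 each time the input doubles, no matter what computer runs it.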
