# Notes on Iteration & Recursion

`Iteration` is the repetition of a process to produce a sequence of outcomes. Each repetition of the process is a single iteration, and the outcome of each iteration is the starting point of the next.

In mathematics, one can use iteration to find approximate solutions to equations. Repeatedly solving an equation, feeding each result into the next calculation, is referred to as an `iterative process`. Many root-finding algorithms use iterative methods, producing a sequence of numbers that should converge toward a root with each iteration. Newton's method and Brent's method are example root-finding algorithms.
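As an illustration of an iterative root-finding method, here is a minimal sketch of Newton's method applied to `f(x) = x² - a`, which approximates the square root of `a`. The function name and tolerance are my own choices, not from any particular library:

```python
def newton_sqrt(a, tolerance=1e-10):
    """Approximate sqrt(a) by iterating Newton's method on f(x) = x^2 - a."""
    x = a if a > 1 else 1.0  # initial guess
    while abs(x * x - a) > tolerance:
        # Newton's update: x_next = x - f(x)/f'(x), which simplifies
        # to the average of x and a/x for f(x) = x^2 - a.
        x = (x + a / x) / 2
    return x

print(newton_sqrt(2))  # close to 1.41421356...
```

Each pass through the loop is one iteration: the current estimate `x` is the starting point for computing the next, better estimate.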

In programming we use iteration quite often. Many programming languages have loop constructs that let you iterate over a given set, or we can write iterative functions that loop to repeat some piece of code. We also have the concept of `recursion`, which is similar to iteration, but instead of repeating some piece of code in a loop, a recursive function calls itself to repeat the code.

Using a simple for loop to run through all the integers from 1 to `n`, multiplying as we go along, would be an iterative process. Writing a function `fn(n)` that calls itself with `n - 1` and returns `n * fn(n - 1)`, unless `n = 0` (in which case it returns 1), would be a recursive process. Both compute the factorial of `n`.
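The two factorial styles described above can be sketched side by side; the function names are illustrative:

```python
def factorial_iterative(n):
    # Iterative: loop from 1 to n, multiplying as we go.
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    # Recursive: base case 0! = 1, otherwise n * (n-1)!
    if n == 0:
        return 1
    return n * factorial_recursive(n - 1)

print(factorial_iterative(5))  # -> 120
print(factorial_recursive(5))  # -> 120
```

Both produce the same result; the difference is that the loop keeps its running state in a local variable, while the recursive version keeps it implicitly on the call stack.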

Most runtimes implement recursion with a call stack: each recursive call pushes a new stack frame, which is popped when the call returns. Recursion therefore usually carries the overhead of function calls (unless you're using tail recursion and a compiler that optimizes it away), but it often requires less code than iteration. Infinite recursion eventually overflows the stack and crashes the program, while infinite iteration merely consumes CPU cycles.
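The stack-overflow risk is easy to demonstrate. Python, for example, guards the stack with a configurable depth limit and raises `RecursionError` rather than crashing the process outright; the helper below is a sketch:

```python
import sys

def count_down(n):
    # Each call adds a stack frame; deep enough recursion exhausts the stack.
    if n == 0:
        return "done"
    return count_down(n - 1)

# Within the interpreter's recursion limit this succeeds...
print(count_down(100))  # -> done

# ...but exceeding the limit raises RecursionError.
try:
    count_down(sys.getrecursionlimit() + 1)
except RecursionError:
    print("stack exhausted")
```

An equivalent `while` loop with a counter would run forever on the same input without ever touching the stack, which is exactly the trade-off described above.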

Next: Function Trampoline