Project 1

Problem 1

Suppose I want to implement my own function for computing the square root of a number $a$. One way to do this is by using Newton's method.

If you write out Newton's method for $f(x) = x^2 - a$ and simplify it a bit, it tells you to start with a (reasonable) initial guess $x_0$ and then perform the following "average damping" iteration:

$$x_{n+1} = \frac{1}{2} \left( x_n + \frac{a}{x_n} \right)$$

This should converge to a fixed point, which is the square root of $a$. (Though we won't be concerned with issues of convergence here.)

  1. Check that $\sqrt{a}$ is actually a fixed point of the iteration.

  2. Turn this iteration into a function square_root for computing square roots. Leave it up to the user to decide what accuracy is sufficient. You may also want to add a maximum number of iterations before giving up. (One possible sketch follows after this list.)
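
Here is a minimal sketch of what such a function might look like in Python. The parameter names `tolerance` and `max_iterations` are one choice of interface, not part of the problem statement:

```python
def square_root(a, tolerance=1e-10, max_iterations=100):
    """Approximate sqrt(a) using the average damping iteration."""
    x = 1.0  # any positive initial guess works
    for _ in range(max_iterations):
        next_x = 0.5 * (x + a / x)       # the average damping step
        if abs(next_x - x) < tolerance:  # accuracy chosen by the caller
            return next_x
        x = next_x
    return x  # give up after max_iterations and return the best estimate so far
```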

Problem 2

The famous Collatz conjecture asks whether iterating the function

$$ f(n) = \begin{cases} n/2 & \text{if } n \text{ even} \\ 3n + 1 & \text{if } n \text{ odd} \end{cases} $$

always reaches $1$ for any initial natural number $n$.

We'll say that the Collatz length of $n$ is the number of steps it takes to reach $1$ for the first time. Compute the $n$ with the longest Collatz length for $1 < n \le 20$.
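
A straightforward way to do this in Python, as a sketch (the name `collatz_length` is just one choice):

```python
def collatz_length(n):
    """Number of applications of f needed to reach 1 for the first time."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# The n in (1, 20] with the longest Collatz length:
print(max(range(2, 21), key=collatz_length))
```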

Problem 3

Try implementing a function which computes the $n$-th Fibonacci number in two ways: recursively and iteratively.

Which computation takes longer for small values such as $n = 10, 15, 20, 25, 30$?

Can you figure out why? Try following the computation through by hand for a couple of steps.
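
For reference, the two implementations might look like this (a sketch; timing them, e.g. with the `timeit` module, is left as part of the exercise):

```python
def fib_recursive(n):
    """Direct translation of the recurrence; recomputes many subproblems."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """Keep only the last two Fibonacci numbers; one pass, n additions."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Tracing `fib_recursive(5)` by hand already shows `fib_recursive(3)` being computed twice.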

Problem 4

Suppose we want to implement Simpson's rule to do numerical integration.

First, implement the basic three-point (left point + midpoint + right point) version. To do this, write a function which takes the left endpoint $a$, the right endpoint $b$, and a function $f$ to integrate.

Once that works, extend it to the composite Simpson's rule where the number of subintervals is specified by an additional argument $n$.
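
One possible shape for the two functions, as a sketch. The basic rule approximates $\int_a^b f(x)\,dx \approx \frac{b-a}{6}\left(f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right)$; the composite version here simply applies it on each of the $n$ subintervals:

```python
def simpson_basic(f, a, b):
    """Three-point Simpson approximation of the integral of f over [a, b]."""
    midpoint = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(midpoint) + f(b))

def simpson_composite(f, a, b, n):
    """Apply the basic rule on each of n equal subintervals and sum."""
    h = (b - a) / n
    return sum(simpson_basic(f, a + i * h, a + (i + 1) * h) for i in range(n))
```

For example, `simpson_composite(lambda x: x**3, 0, 1, 4)` returns $1/4$ (up to floating-point roundoff), since Simpson's rule is exact for cubics.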

Problem 5

A well-known problem is computing large integer powers of a number in a small number of steps, using the following observations:

$$ a^0 = 1, \qquad a^{2n} = (a^n)^2, \qquad a^{2n+1} = a \cdot (a^n)^2 $$

Notice that the last two rules cut the size of the exponent roughly in half. This is frequently used when working modulo $m$, since every intermediate result can be reduced modulo $m$ along the way.

Write a function which takes integers $a$, $n$ and $m$ and computes $a^n$ modulo $m$ efficiently using this observation.

Roughly, how does the number of steps depend on $n$?
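
A direct recursive transcription of the three rules into Python, as a sketch (Python's built-in `pow(a, n, m)` computes the same thing):

```python
def power_mod(a, n, m):
    """Compute a**n modulo m by repeated squaring."""
    if n == 0:
        return 1 % m                 # a^0 = 1 (the % m handles m = 1)
    half = power_mod(a, n // 2, m)   # the exponent is halved at every call
    square = (half * half) % m
    if n % 2 == 0:
        return square                # a^(2n) = (a^n)^2
    return (a * square) % m          # a^(2n+1) = a * (a^n)^2
```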