# MoreLinear \\ Fourier

## Table of Contents

## Goal

Using the multiplicative properties of complex numbers we construct an easily-invertible orthogonal basis in which the periodic components of our input become apparent. This basis allows us to carry out a class of operations that are numerically intensive in the standard basis but much quicker in the frequency basis.

## Complex numbers

Complex numbers are of the form **a+ib** where **i = √(-1)** and **a** and **b** are known as the real and imaginary components of the complex number. The pair (a,b) is often referred to as a point on the complex plane. However the implicit analogy to 2D coordinates is a bit problematic.

To envision a complex plane we treat the *real* component **a** as acting as the *x* and the *imaginary* component **b** as acting as the *y*.
Like 2D coordinates, we can add/translate them and they can be scaled:

- (a+ib) + (c+id) = (a+c) + i(b+d)
- c * (a+ib) = ca +icb

However unlike 2D [x,y] coordinates, complex numbers can be multiplied:

- (a+ib) * (c+id) = (ac-bd) + i(ad+bc)

Normally if we are given 2 coordinates [x_{0},y_{0}] and [x_{1},y_{1}] we never ask ourselves to multiply them because multiplication between two points is simply not a defined operation. However in the complex-number case this operation comes naturally and it generates a new complex number.

Note: You can do a dot and cross product between two [x,y] coordinates but:

- [x_{0},y_{0}] • [x_{1},y_{1}] → scalar
- [x_{0},y_{0}] × [x_{1},y_{1}] → requires an extra `z` dimension

So points in the complex plane have this extra "feature", and it's important to not stretch the 2D coordinate interpretation too far. As far as I can tell there is no easy way to visualize what the result of a multiplication looks like in the general case.
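Python's built-in `complex` type implements exactly these rules, so we can sanity-check them numerically (the specific values here are arbitrary):

```python
a, b = 2.0, 3.0    # z1 = a + ib
c, d = -1.0, 4.0   # z2 = c + id
z1, z2 = complex(a, b), complex(c, d)

# addition: (a+c) + i(b+d)
assert z1 + z2 == complex(a + c, b + d)
# scaling: c*(a+ib) = ca + icb
assert 2 * z1 == complex(2 * a, 2 * b)
# multiplication: (ac - bd) + i(ad + bc)
assert z1 * z2 == complex(a*c - b*d, a*d + b*c)
```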

## Complex Conjugates

Just like points and vectors, complex numbers have a "length" or magnitude:

- ||a+ib|| = √(a^{2}+b^{2})

And just like for points and vectors, this norm is a real scalar value

The square of the norm is of course just *a^{2}+b^{2}* and this often comes up in things like the inner product. For instance given a vector [x y] we can take its inner product with itself, [x y][x y]^{T}, to get its norm-squared. In the trivial `1x1` case this will be [z][z]^{T} which is z^{2}.

In the complex case where *z* is a complex number *a+ib* we'd get *zz^{T} = z^{2} = (a+ib)^{2} = a^{2}+i2ab-b^{2}*, which is non-real and doesn't match our magnitude-squared. So using the transpose no longer works. But this norm-squared is still easy to express as a product:

- a^{2}+b^{2} = (a+ib)(a-ib)

With *a-ib* known as the *complex conjugate* of *a+ib*

And what's convenient is that this actually still works for real values. Since a real value *a* can be written as the complex number a+i0, we can write its complex conjugate as a-i0 and see that its norm-squared is

- a^{2}+0^{2} = (a+i0)(a-i0) = a^{2}

Which is just the absolute value of *a* squared

So instead of doing a transpose we need to introduce a new transposition called the Hermitian transpose

- [a+ib c+id]^{*} = [a-ib c-id]^{T}

The Hermitian transpose is denoted with a star. It simply takes a normal transpose and replaces each term with its complex conjugate.

When the vector's values are all real it behaves exactly like a transpose; when the values are complex, taking the inner product of a vector with its Hermitian transpose gives us a real scalar: the sum of each complex value's magnitude-squared.
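As a sketch in NumPy (where the conjugate of a vector is `.conj()`, so the Hermitian inner product is `v @ v.conj()`; the vector values are arbitrary):

```python
import numpy as np

v = np.array([2 + 3j, 1 - 1j])

# a plain transpose inner product sums z^2 terms - generally not real
plain = v @ v
assert plain.imag != 0

# the Hermitian inner product sums |z|^2 terms - always real
herm = v @ v.conj()
assert abs(herm.imag) < 1e-12
assert abs(herm.real - (4 + 9 + 1 + 1)) < 1e-12   # = 15
```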

## The Unit Circle

Notice that when the *norm-squared* is equal to *1* the magnitude is also equal to *1*. All these special points whose length is equal to *1* lie on what's called the *unit circle*. If we start with two complex numbers *a+ib* and *c+id* on the unit circle then

- (a^{2}+b^{2})^{1/2} = 1 *and therefore* a^{2}+b^{2} = 1
- (c^{2}+d^{2})^{1/2} = 1 *and therefore* c^{2}+d^{2} = 1

As we saw, their product is the point *(ac-bd) + i(ad+bc)* and its length is also equal to 1. Proof:

- [ (ac-bd)^{2} + (ad+bc)^{2} ]^{1/2}
- [ a^{2}c^{2} - 2abcd + b^{2}d^{2} + a^{2}d^{2} + 2abcd + b^{2}c^{2} ]^{1/2}
- [ a^{2}c^{2} + b^{2}d^{2} + a^{2}d^{2} + b^{2}c^{2} ]^{1/2}
- [ c^{2}(a^{2}+b^{2}) + d^{2}(a^{2}+b^{2}) ]^{1/2}

Through some substitution (a^{2}+b^{2} = 1 and c^{2}+d^{2} = 1) we see that the length of the product is also equal to *1*.

This tells us that while the general complex product is hard to think about, we know that at least the product of any two points on the unit circle gives us a new point on the unit circle.
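A minimal numeric check of this closure property (the two angles are arbitrary):

```python
import cmath

# two arbitrary points on the unit circle
z1 = cmath.exp(1j * 0.7)
z2 = cmath.exp(1j * 2.1)
assert abs(abs(z1) - 1) < 1e-12
assert abs(abs(z2) - 1) < 1e-12

# their product lies on the unit circle too
assert abs(abs(z1 * z2) - 1) < 1e-12
```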

## Euler's Formula

To make the multiplicative rotations more natural we would like to start describing the points in terms of polar coordinates. The initial conversion is straightforward:

- radius = r = (a^{2}+b^{2})^{1/2}
- angle = θ = atan2(b, a) *(atan2 rather than atan(b/a), so all four quadrants are handled)*

Going back is also easy

- a = rcos(θ)
- b = rsin(θ)

So the complex number *z = a+ib* can now be defined in terms of *r* and *θ*:

- z = rcos(θ) + irsin(θ)

A more compact representation of this is given by **Euler's formula**:

- re^{iθ} = rcos(θ) + irsin(θ)

A complete proof of this identity is a bit long, but the steps are:

- Show that the derivative of e^{x} is e^{x} (it's the only function of the form a^{x} that has this "stable" property)
- Using the chain rule we see that the n^{th} derivative of e^{ix} will be equal to i^{n}e^{ix}
- i^{n} is cyclic, with half its terms imaginary and half real
- We then describe e^{iθ} around *0* in terms of the Taylor/Maclaurin series
- Half the terms of the Taylor/Maclaurin series will be imaginary and half real
- The real terms will match the Maclaurin series of the cosine function and the imaginary terms will match those of the sine function

Using the trigonometric properties (which are self evident if you think of the graph and even/odd functions)

- sin(-θ)=-sin(θ)
- cos(-θ)= cos(θ)

We can rewrite the complex conjugate as

- a - ib
- = r cos(θ) - *i* r sin(θ)
- = r cos(-θ) + *i* r sin(-θ) = **re^{-iθ}**

As we just saw in the previous section, the radius of the product of points on the unit circle is always equal to one. So a polar coordinate system brings the problem down from 2 variables, *a* and *b*, to just one constant *radius* of size *1* and one variable *angle* θ. The points on the unit circle can all be written down as:

- e^{iθ}

And we can say the magnitude of the product of two of these is always equal to *1*

- || e^{iθ} * e^{iω} || = 1

Now notice that if *ω=2π-θ* that:

- e^{i(θ+ω)} = e^{i2π} = cos(2π) + *i*sin(2π) = 1 + *i*0 = 1
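These identities are easy to check with Python's `cmath` (θ = 0.9 is an arbitrary choice):

```python
import cmath
import math

theta = 0.9
z = cmath.exp(1j * theta)                # e^{i theta}

# Euler's formula: e^{i theta} = cos(theta) + i sin(theta)
assert abs(z.real - math.cos(theta)) < 1e-12
assert abs(z.imag - math.sin(theta)) < 1e-12

# the complex conjugate is e^{-i theta}
assert abs(z.conjugate() - cmath.exp(-1j * theta)) < 1e-12

# choosing omega = 2*pi - theta makes the product land on 1
omega = 2 * math.pi - theta
assert abs(z * cmath.exp(1j * omega) - 1) < 1e-12
```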

## The roots of unity

Furthermore if *α=2π/n* then:

- [e^{iα}]^{n} = e^{inα} = e^{i2π} = 1

This tells us that taking the n^{th} root of *1* has a complex-number solution! (in addition to the trivial solution of *1*)

- 1^{1/n} = e^{i2π/n}

The typical notation here is to say

- ξ = e^{-i2π/n} = cos(2π/n) - isin(2π/n)
- ξ^{n} = 1

This ξ is called the **n^{th} root of unity**.
By extension we can also see that:

- ξ^{n+j} = ξ^{n}ξ^{j} = ξ^{j}
- ξ^{nk} = [ξ^{n}]^{k} = [e^{-i2π}]^{k} = 1^{k} = 1
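A short sketch verifying these identities (n = 8 is an arbitrary choice):

```python
import cmath
import math

n = 8
xi = cmath.exp(-2j * math.pi / n)             # xi = e^{-i 2pi/n}

assert abs(xi ** n - 1) < 1e-12               # xi^n = 1
assert abs(xi ** (n + 3) - xi ** 3) < 1e-12   # xi^{n+j} = xi^j
assert abs(xi ** (n * 5) - 1) < 1e-12         # xi^{nk} = 1
```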

## Fourier series

The Fourier series is a very special sum of the exponents of an n^{th} root of unity **ξ**

- 1 + ξ^{k} + ξ^{2k} + … + ξ^{(n-2)k} + ξ^{(n-1)k}

Here the *k* can be any integer value, but the sum will always maintain the property that if it's multiplied by *ξ^{k}* we get the same sequence back. The last term goes to *ξ^{nk} = 1* and the remaining terms in effect shift places, giving us:

- ξ^{k} + ξ^{2k} + ξ^{3k} + … + ξ^{(n-1)k} + 1

So we can write

- ξ^{k} * *fourier-series* = *fourier-series*

Therefore

- ξ^{k} * *fourier-series* - *fourier-series* = 0
- *fourier-series* * (ξ^{k} - 1) = 0

And therefore, as long as ξ^{k} ≠ 1 (i.e. *k* is not a multiple of *n*), we can divide both sides by (ξ^{k} - 1):

*fourier-series* = 0

.. or to write it out again in full form - for all integer values of *k* that are not multiples of *n*:

- 1 + ξ^{k} + ξ^{2k} + … + ξ^{(n-2)k} + ξ^{(n-1)k} = 0
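A numeric sanity check of this cancellation (n and the k values are arbitrary choices):

```python
import cmath
import math

n = 8
xi = cmath.exp(-2j * math.pi / n)     # n-th root of unity

# k not a multiple of n: the series sums to 0
for k in (1, 2, 3, 5, 9):
    s = sum(xi ** (j * k) for j in range(n))
    assert abs(s) < 1e-9

# k a multiple of n: every term is 1, so the series sums to n
s = sum(xi ** (j * n) for j in range(n))
assert abs(s - n) < 1e-9
```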

## Fourier Vectors

It turns out that Fourier series, when placed in a vector with different values of *k*, form mutually orthogonal vectors - here I choose *r* and *s* for *k* and carry out the Hermitian inner product:

- [ 1 ξ^{r} ξ^{2r} … ξ^{(n-1)r} ] [ 1 ξ^{s} ξ^{2s} … ξ^{(n-1)s} ]^{*}
- [ 1 ξ^{r} ξ^{2r} … ξ^{(n-1)r} ] [ 1 ξ^{-s} ξ^{-2s} … ξ^{-(n-1)s} ]^{T}
- 1*1 + ξ^{r}ξ^{-s} + ξ^{2r}ξ^{-2s} + … + ξ^{(n-2)r}ξ^{-(n-2)s} + ξ^{(n-1)r}ξ^{-(n-1)s}
- 1 + ξ^{r-s} + ξ^{2(r-s)} + … + ξ^{(n-2)(r-s)} + ξ^{(n-1)(r-s)}

Now *r-s* is just another *k* value from our Fourier series and so the sum will go to *0* **except** when *r=s*:

- [ 1 ξ^{k} ξ^{2k} … ξ^{(n-1)k} ] [ 1 ξ^{k} ξ^{2k} … ξ^{(n-1)k} ]^{*}
- 1*1 + ξ^{k}ξ^{-k} + ξ^{2k}ξ^{-2k} + … + ξ^{(n-1)k}ξ^{-(n-1)k}
- 1 + ξ^{0} + ξ^{0} + … + ξ^{0}
- 1 + 1 + 1 + … + 1 = n

You could also just view the *r=s* case as the sum of the 2-norms of the roots of unity. There are *n* terms and each one is necessarily of length *1*. Either way, the answer will always be *n* no matter what *k* you choose, and therefore the 2-norm/length of all Fourier vectors is *n^{1/2}*.

It's important to note that if you drop all the imaginary terms none of this works! You can't construct *n* mutually orthogonal vectors out of the real parts alone - for instance the real parts of the *k* and *n-k* vectors are identical.
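A quick check of all three claims with NumPy (`fourier_vector` is my own helper name; n = 8 is arbitrary):

```python
import numpy as np

n = 8
xi = np.exp(-2j * np.pi / n)
idx = np.arange(n)

def fourier_vector(k):
    # [1, xi^k, xi^2k, ..., xi^(n-1)k]
    return xi ** (idx * k)

# distinct k: the Hermitian inner product is 0
assert abs(fourier_vector(1) @ fourier_vector(3).conj()) < 1e-9
# same k: the product is n
assert abs(fourier_vector(2) @ fourier_vector(2).conj() - n) < 1e-9
# dropping the imaginary parts breaks this: the real parts of the
# k and n-k vectors are identical, so they can't all be orthogonal
assert np.allclose(fourier_vector(1).real, fourier_vector(7).real)
```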

## Fourier Matrix

We've just shown that the Fourier vectors are orthogonal as long as the *k*'s are different. The only caveat is that when *k=n* the result is identical to *k=0*, when *k=n+1* it's identical to *k=1* .. etc. So there are actually only *n* distinct Fourier vectors. The next step is self-evident: we just take the *n* distinct vectors and slap them together into a Fourier matrix **F** and end up with an orthogonal basis:

**F** =

| 1 | 1 | 1 | 1 | .. |
|---|---|---|---|---|
| 1 | ξ | ξ^{2} | ξ^{3} | .. |
| 1 | ξ^{2} | ξ^{4} | ξ^{6} | .. |
| 1 | ξ^{3} | ξ^{6} | ξ^{9} | .. |
| 1 | … | … | … | .. |
| 1 | ξ^{n-1} | ξ^{n-2} | … | .. |

Looking at the real components of the columns - there is one constant component (the first column) and then *n* samples of the *cosine* function over one oscillation for the second column, *n* samples over 2 periods for the 3rd column, *n* samples over 3 periods.. etc.

ℜ(**F**) =

| 1 | 1 | 1 | 1 | .. |
|---|---|---|---|---|
| 1 | cos(1·2π/n) | cos(2·2π/n) | cos(3·2π/n) | .. |
| 1 | cos(2·2π/n) | cos(4·2π/n) | cos(6·2π/n) | .. |
| 1 | cos(3·2π/n) | cos(6·2π/n) | cos(9·2π/n) | .. |
| 1 | … | … | … | .. |
| 1 | cos([n-1]·2π/n) | cos(2[n-1]·2π/n) | cos(3[n-1]·2π/n) | .. |

If your input is over 1 second then this maps to a sampled cosine function of 1Hz, 2Hz, 3Hz, etc..

**TODO**: Add a proof that these don't form an orthogonal basis

Since each series (irrespective of the *k* exponent) has length *n^{1/2}* (remember that the self-inner products equal *n*), all the columns of **F** can be normalized in one go by dividing the matrix by *n^{1/2}*. The resulting matrix **(1/n^{1/2})F** is now even better b/c it's orthonormal/unitary. Therefore its inverse is just its Hermitian transpose.

- [(1/n^{1/2})F]^{-1} = [(1/n^{1/2})F]^{*}
- F^{-1} = (1/n)F^{*}

In **F^{-1}F** the diagonal elements will be the column magnitudes, which have been normalized to *1*, and the off-diagonal elements will be inner products of orthogonal vectors and therefore *0* - so the product will give us the identity matrix **I**.
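The whole construction fits in a few lines of NumPy (a sketch; n = 8 is arbitrary):

```python
import numpy as np

n = 8
k = np.arange(n)
xi = np.exp(-2j * np.pi / n)          # n-th root of unity
F = xi ** np.outer(k, k)              # F[j,k] = xi^{jk}

# (1/sqrt(n)) F is unitary, so F^{-1} = (1/n) F^*
F_inv = F.conj().T / n
assert np.allclose(F_inv @ F, np.eye(n))
assert np.allclose(F @ F_inv, np.eye(n))
```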

## Frequency Space?

We constructed a very convenient basis that's easily invertible and independent of the input, and we can now easily move to the basis and back. But it's not exactly what one would imagine as "frequency space", and a few things are unresolved.

### How does a sinusoidal look in this basis?

Since each column is a mixture of real and imaginary samples of sine/cosine functions, we at first glance don't have a straight correspondence between frequencies and the column number.

Reusing the trigonometric identities:

- sin(-θ) = -sin(θ)
- cos(-θ) = cos(θ)

We can carefully pair Euler functions to get back bare sine/cosine functions with no imaginary parts

- [e^{iθ} + e^{-iθ}]/2 = [cos(θ) + isin(θ) + cos(-θ) + isin(-θ)]/2 = **cos(θ)**
- [e^{iθ} - e^{-iθ}]/2i = [cos(θ) + isin(θ) - cos(-θ) - isin(-θ)]/2i = **sin(θ)**

If we put in *-2πφ/n* for *θ* we get back our familiar roots of unity:

- [ξ^{-φ} + ξ^{φ}]/2 = cos(2πφ/n)
- [ξ^{-φ} - ξ^{φ}]/2i = sin(2πφ/n)

Note: b/c these are cyclical functions:

- sin(-θ) = sin(2π - θ)
- cos(-θ) = cos(2π - θ)
- e^{-iθ} = e^{i(2π-θ)}
- ξ^{-k} = ξ^{n-k}

With some rearranging we can rewrite these as:

- cos(2πφ/n) = [ξ^{n-φ} + ξ^{φ}]/2
- sin(2πφ/n) = [ξ^{n-φ} - ξ^{φ}]/2i

and if *φ* is equal to a *k* value, then these combinations pick out basis vectors of our Fourier matrix.

For instance if *φ* = 3 we can pick an *x* whose *n/2* entries sit at the φ^{th} and (n-φ)^{th} coordinates:

- x = [ 0 0 n/2 0 0 .. 0 0 n/2 0 0 ]_{1,n}

So that (recalling F^{-1} = (1/n)F^{*}):

- cos(2π·3·t/n) = F^{-1}x^{T} = (1/n)F^{*}x^{T}

Similarly with the sine function, except with a factor of *i* and opposite signs on the two entries:

- y = [ 0 0 n/2i 0 0 .. 0 0 -n/2i 0 0 ]_{1,n}

So that:

- sin(2π·3·t/n) = F^{-1}y^{T} = (1/n)F^{*}y^{T}
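We can check the coefficient picture numerically - a NumPy sketch, using the scaling convention F^{-1} = (1/n)F^{*} derived earlier (n = 16 and φ = 3 are arbitrary):

```python
import numpy as np

n = 16
t = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(t, t) / n)   # F[j,k] = xi^{jk}

phi = 3
signal = np.cos(2 * np.pi * phi * t / n)       # a pure cosine input

# forward transform: two real peaks of height n/2 at phi and n-phi
coeffs = F @ signal
x = np.zeros(n, dtype=complex)
x[phi] = x[n - phi] = n / 2
assert np.allclose(coeffs, x)

# inverse transform F^{-1} = (1/n) F^* recovers the sampled wave
assert np.allclose(F.conj().T @ x / n, signal)
```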

Both of these matrix-vector products always produce real values. Each complex coordinate value has a real and an imaginary part, so it stands for both a cosine and a sine at its corresponding basis-vector frequency.

Furthermore there is a consistent symmetry between the φ^{th} and (n-φ)^{th} coordinates. Because of this symmetry in both the real and imaginary terms, the first n/2 coordinates tell us everything about the oscillating components of our input. You can completely disregard the second n/2 coordinates.

Note: This doesn't represent a loss of information. The input had *n* real values and the output has *n/2* complex coordinates (each made of 2 values).

Note: If the input is complex then this symmetry won't hold. But in most applications a complex input is not meaningful.

### How does a phase shift look in this basis?

Notice how the real part of each coordinate is making *cosine* waves and the *imaginary* part is making *sine* waves. Both are present in the final complex coordinate value, and an arbitrary coordinate value will generate one of each.

- v = ℜ(v) + ℑ(v)
- vF^{*} = ℜ(v)F^{*} + ℑ(v)F^{*} = α sin(…) + β cos(…)

Sinusoids have an amazing property: if you add sinusoids that have the same frequency but different amplitudes and/or phases, you always get back just one sinusoid of that frequency.

In our case we are simply adding a *sine* and a *cosine*, and this additive property is a consequence of the trig identity:

- sin(A+B) = sin(A)cos(B)+cos(A)sin(B)

Now given any *sine* wave with a phase shift we can decompose it into the sum of a *sine* and *cosine*

- Asin(ωt + φ)
- = Asin(φ)cos(ωt) + Acos(φ)sin(ωt)
- = A_{1}cos(ωt) + A_{2}sin(ωt) *.. b/c φ is a constant*

Working back we get the general rules for **harmonic addition**

- α sin(ωt) + β cos(ωt) = γ sin(ωt+θ)
- γ = √(α^{2}+β^{2})
- θ = atan2(β, α)

So these sine/cosine functions of the same frequency in conjunction represent a phase-shifted sine function.
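A quick numeric check of the harmonic-addition rule (α, β, ω are arbitrary):

```python
import math

alpha, beta, omega = 2.0, 3.0, 5.0

# gamma and theta per the harmonic addition rule
gamma = math.hypot(alpha, beta)        # sqrt(alpha^2 + beta^2)
theta = math.atan2(beta, alpha)

for t in (0.0, 0.1, 0.7, 2.5):
    lhs = alpha * math.sin(omega * t) + beta * math.cos(omega * t)
    rhs = gamma * math.sin(omega * t + theta)
    assert abs(lhs - rhs) < 1e-12
```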

It also means that if your phase shifts (say your input is delayed) then the real and imaginary coordinate components will fluctuate, but the norm/length will remain constant b/c √(α^{2}+β^{2}) will always be equal to the sine wave's amplitude γ.

If you take the element-wise norm of **Fx** (or more simply multiply **Fx** element-wise with its conjugate **(Fx)^{*}**) you will get the amplitude at each frequency component (at the expense of losing all phase information) and you can then draw a spectrogram.
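A sketch of this phase invariance (the bin φ and the phases are arbitrary; the input is a unit-amplitude sine, so the peaks have magnitude n/2):

```python
import numpy as np

n = 32
t = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(t, t) / n)

phi = 4
for phase in (0.0, 0.4, 1.3):          # delayed copies of the same wave
    s = np.sin(2 * np.pi * phi * t / n + phase)
    amps = np.abs(F @ s)               # element-wise magnitude of Fx
    # the real/imaginary parts move with the phase, but the magnitude
    # at bins phi and n-phi stays fixed at n/2
    assert abs(amps[phi] - n / 2) < 1e-9
    assert abs(amps[n - phi] - n / 2) < 1e-9
```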

### How does a frequency whose period isn't a whole fraction of the sample rate come out in this basis?

All the previous math assumed for simplicity that the input was at a frequency that matches one of the basis vectors - that it in effect produced one coordinate point. But what about when *φ* is not equal to a *k* value?