## Computational Neuroscience :: Week 2 - Linear systems

### Linear algebra refresher

Unsurprisingly, you’ll need to know at least a little linear algebra to do linear systems analysis. Specifically, you need to know what scalars, vectors, and matrices are, and the kinds of operations you can do with them. If these concepts are unfamiliar, read through the first chapter or two of any basic text on linear algebra.

The good news is that computations with vectors and matrices are relatively straightforward in modern programming languages. Rather than having to write loops or call specialized functions, you can write straightforward expressions that resemble standard vector and matrix notation. The following is a brief introduction to linear algebra operations in numpy:

#### Creating scalars, vectors, and matrices

```python
import numpy as np

# a scalar
a = 1.23
# a 1D array with 5 elements
b = np.array([1, 2, 3, 4, 5])
# a column array (m x 1), made by adding a second axis
bc = np.array([1, 2, 3, 4, 5])[:, np.newaxis]
# easily create sequences
c = np.arange(5)
# an array of zeros
d = np.zeros(6)

# a 2D array (3 x 5)
M = np.ones((3, 5))
```

Note that in numpy, arrays can have any number of dimensions. Thus, a vector is a 1D array, and a matrix is a 2D array. However, as you’ll see in the next section, in order to treat arrays like vectors and matrices, you usually have to give numpy some hints.
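For instance, a plain 1D array has a single-entry shape, and you can add an axis with `np.newaxis` to make numpy treat it as an explicit row or column — a quick sketch:

```python
import numpy as np

b = np.array([1, 2, 3, 4, 5])
print(b.shape)         # (5,) -- a 1D array, neither row nor column
bc = b[:, np.newaxis]  # add an axis to get a 5 x 1 column
print(bc.shape)        # (5, 1)
br = b[np.newaxis, :]  # or a 1 x 5 row
print(br.shape)        # (1, 5)
```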

#### Array and matrix math

The following operations are the same for arrays, vectors, and matrices.

```python
# a scalar times a scalar
a * 20

# adding or multiplying by a scalar applies the operation to each element
b + 10, b * 20

# adding two arrays of the same shape is element-wise
b + c

# if dimensions don't match, you'll get an error
b + d

# numpy does broadcasting: adding a column to a row array creates a 2D array
# this is a useful way to create a mesh of values
b + bc
```

The following operations are not the same for arrays and vectors:

```python
# array multiplication is element-wise
b * c

# to compute the dot product, sum up the element-wise products
np.sum(b * c)
# or use a dedicated function
np.dot(b, c)
```

Question 1: What’s the angle between $\vec{b}$ and $\vec{c}$?

These operations are not the same for arrays and matrices:

```python
# multiplying a 1D array by a 2D array broadcasts
b * M

# why doesn't this work?
bc * M

# in matrix math, multiplying a matrix by a vector yields a vector
np.dot(M, b)
# the number of columns in the first operand must match
# the number of elements in the second
np.dot(M, d)   # error!
# matrix multiplication is not commutative: order matters
np.dot(b, M)   # error! vector size doesn't match the number of rows

# multiplying two matrices yields another matrix
A = np.array([[1., 2], [3, 4], [5, 6], [7, 8]])
B = np.array([[1., 2, 3], [4, 5, 6]])
# note that the number of columns in A equals the number of rows in B
np.dot(A, B)
# these won't work:
A * B           # error! not matrix multiplication, and the shapes don't broadcast
np.dot(B, A)    # error! dimensions don't match
```
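If you're on Python 3.5+ with a recent numpy, the `@` operator is an equivalent, more readable way to write matrix multiplication — a quick sketch:

```python
import numpy as np

A = np.array([[1., 2], [3, 4], [5, 6], [7, 8]])  # 4 x 2
B = np.array([[1., 2, 3], [4, 5, 6]])            # 2 x 3
C = A @ B                                        # same result as np.dot(A, B)
print(C.shape)                                   # (4, 3)
```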

### Linear filters, convolution and Fourier transforms

#### Problem 1

We will investigate the properties of two different filters (impulse response functions):

For both filters, $\tau_1 = 50$ ms and $\tau_2 = 100$ ms.

1. Plot $h_1(t)$ and $h_2(t)$. A bin size of 1 ms is good.

2. Generate three input signals:

Let $\omega_1 = 0.3$ Hz and $\omega_2 = 3$ Hz. Note that $s_3$ is a square wave.

Use a bin size of 1 ms and plot about 10 s of data for each signal.
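As a sketch of how these signals might be generated — assuming $s_1$ and $s_2$ are sinusoids at $\omega_1$ and $\omega_2$ (check this against the definitions above), with `np.sign` giving a convenient square wave:

```python
import numpy as np

dt = 0.001                                 # 1 ms bin size
t = np.arange(0, 10, dt)                   # 10 s of data
w1, w2 = 0.3, 3.0                          # frequencies in Hz
s1 = np.sin(2 * np.pi * w1 * t)            # assumed: sinusoid at omega_1
s2 = np.sin(2 * np.pi * w2 * t)            # assumed: sinusoid at omega_2
s3 = np.sign(np.sin(2 * np.pi * w1 * t))   # a square wave via the sign function
```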

3. Compute and plot the response of the system defined by $h_1(t)$ to $s_1(t)$, $s_2(t)$, and $s_3(t)$. Recall that this is the convolution of $h_1$ and $s_n$ times the time step. Repeat for $h_2$.

What do you notice about the response amplitudes? What differences do you see between the outputs of the two filters?
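A minimal sketch of the convolution step, with a stand-in exponential filter where your $h_1$ would go:

```python
import numpy as np

dt = 0.001
th = np.arange(0, 0.5, dt)            # time axis for the filter
h = np.exp(-th / 0.05)                # stand-in filter; substitute your h_1 here
t = np.arange(0, 10, dt)
s = np.sin(2 * np.pi * 0.3 * t)       # stand-in input; substitute s_1, s_2, s_3
# full convolution, trimmed to the input length and scaled by the time step
r = np.convolve(s, h)[:len(s)] * dt
```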

4. Generate 3 seconds of white noise at a resolution of 1 ms (in numpy, use np.random.randn()) and calculate the output of the systems defined by $h_1$ and $h_2$. Compute and plot the power spectrum of the input signal and the two output signals. Hint: calculate the fast Fourier transform, then use abs to get the modulus and square it to get the power.

What do you notice about the spectra? Do they seem noisy? You can get a better estimate of the power spectrum by generating a much longer sample of white noise (say around 1000 s), dividing it into 10 s segments, and averaging the power spectra from each segment.
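The averaging procedure can be sketched like this (the segment length and count are illustrative):

```python
import numpy as np

dt = 0.001
n_seg, seg_len = 10, 10000                 # ten 10 s segments at 1 ms resolution
x = np.random.randn(n_seg * seg_len)       # a long white-noise sample
segments = x.reshape(n_seg, seg_len)
# power spectrum of each segment: squared modulus of the FFT
P = np.abs(np.fft.fft(segments, axis=1)) ** 2
P_avg = P.mean(axis=0)                     # average across segments
freqs = np.fft.fftfreq(seg_len, d=dt)      # frequency axis in Hz
```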

5. Compute the cross-correlation (np.correlate() with mode='full') between the white noise input and the two filtered outputs. Try using 10 seconds of data and 100 seconds of data.

What’s the effect of the amount of data? Compare the results of the cross-correlation with the filters you used to generate the response. What does this suggest about the relationship between convolution and cross-correlation?
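A sketch of the cross-correlation computation, again with a stand-in filter (note that np.correlate defaults to mode='valid', so pass mode='full' to get all lags):

```python
import numpy as np

dt = 0.001
n = 10000                                   # 10 s of white noise
x = np.random.randn(n)
h = np.exp(-np.arange(0, 0.5, dt) / 0.05)   # stand-in filter; use your h_1
y = np.convolve(x, h)[:n] * dt              # filtered output
# cross-correlate the output against the input; normalize by sample count
xc = np.correlate(y, x, mode='full') / n
lags = np.arange(-(n - 1), n) * dt          # lag axis in seconds
```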

#### Problem 2

1. Using the definition of the Fourier transform ($\tilde{h}(\omega) = \int_{-\infty}^\infty h(t) \exp(i \omega t) dt$), calculate the frequency response of $h_1(t)$ above. Do this analytically first, then verify that you get the same result from a fast Fourier transform.

2. Plot the magnitude and phase of $\tilde{h}_1(\omega)$ (in numpy, use np.abs and np.angle, respectively). Explain what the filter does to its input.

3. Use the frequency response function to filter $s_1$, $s_2$, and $s_3$ in the frequency domain, then calculate the inverse Fourier transform to get back to the time domain.
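A sketch of the frequency-domain approach, using the same stand-in filter and signal as before; defining both on the same time axis keeps the FFT lengths equal (circular wrap-around effects are ignored here):

```python
import numpy as np

dt = 0.001
n = 10000
t = np.arange(n) * dt
s = np.sin(2 * np.pi * 0.3 * t)    # stand-in for s_1
h = np.exp(-t / 0.05)              # stand-in filter on the same time axis
H = np.fft.fft(h) * dt             # frequency response of the filter
S = np.fft.fft(s)                  # transform the signal
r = np.real(np.fft.ifft(H * S))    # multiply, then invert back to the time domain
```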

This exercise is based in part on an assignment from MCB 262, Theoretical Neuroscience, at UC Berkeley