## Computational Neuroscience :: Week 1 - Basic Plotting and Statistics

### Getting Started

To complete the exercises in this course, you’ll need access to at least one scientific programming environment. Here are some suggestions:

• Python/NumPy/Matplotlib: on OS X or Linux, install through your OS package manager, or install Anaconda. Pros: strong user base; dynamic, mature programming language; free. Cons: less stable than commercial alternatives, fewer specialized statistical packages
• R/RStudio: Download and install R for Windows or Mac OS X. On Linux, install through OS package manager. Download and install the latest stable version of RStudio Desktop. Install, at a minimum, dplyr, ggplot2, and plyr. Pros: strong user base, broad support for specialized statistical models, free. Cons: idiosyncratic programming language
• MATLAB: available for purchase from the bookstore for a reduced rate. Pros: well-supported, mature, integrated development environment; many labs have released code for public use. Cons: expense, poor support in language for advanced programming idioms

You may also want to consider using Jupyter, which is a web notebook (code, text, and graphics can all be combined in a single document) that supports a broad range of programming languages, including R, Python, and MATLAB. If you write your assignments as notebooks, make sure to export as PDF or HTML before submitting.

As noted in the syllabus, unless otherwise indicated, you must complete this assignment without using third-party toolkits or packages for neural analysis. You will likely need to do significant online research to determine how to implement some of the computations; if you do, cite your sources. You may work in teams, but each team member must submit their own work.

### Question 1

Generate Poisson spike trains with a time-varying probability function

where $r(t)$ represents the probability of spiking in each bin, with a bin size of 10 msec, $\omega = 3$ Hz, and a duration of … sec.

A. First, plot $r(t)$. Then generate 20 independently simulated spike trains and plot them as rasters (hint: set the y position of the data points equal to the trial number)
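A minimal sketch of the simulation, approximating the inhomogeneous Poisson process with one Bernoulli draw per 10 ms bin. The specific form of $r(t)$ and the 2 s duration used here are placeholders; substitute the rate function and duration given in the problem:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.010                    # 10 ms bins
omega = 3.0                   # modulation frequency, Hz
t = np.arange(0, 2.0, dt)     # assumed 2 s duration

# Assumed sinusoidally modulated rate in spikes/s -- replace with the
# r(t) from the assignment.
r = 20.0 * (1.0 + np.sin(2 * np.pi * omega * t))

# Probability of a spike in each bin is r(t) * dt; draw Bernoulli samples
# for 20 independent trials at once.
p = r * dt
trains = rng.random((20, t.size)) < p   # 20 trials x n_bins boolean array
```

For the raster, plot `t[train]` against a constant y equal to the trial number for each row of `trains` (or use Matplotlib's `eventplot`).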

B. Show two PSTHs, each averaged from 10 independently simulated spike trains, and two PSTHs, each averaged from 1000 spike trains. How do these PSTHs relate to $r(t)$? What do you learn by comparing and contrasting these PSTHs?
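A PSTH is just the across-trial mean spike count per bin, rescaled to a rate. A sketch, again assuming a sinusoidal $r(t)$ for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.010
t = np.arange(0, 2.0, dt)
r = 20.0 * (1.0 + np.sin(2 * np.pi * 3.0 * t))   # assumed r(t)

def simulate(n_trials):
    """Bernoulli-per-bin approximation of the inhomogeneous Poisson process."""
    return rng.random((n_trials, t.size)) < r * dt

def psth(trains):
    """Mean spike count per bin across trials, converted to spikes/s."""
    return trains.mean(axis=0) / dt

psth_10 = psth(simulate(10))      # noisy estimate of r(t)
psth_1000 = psth(simulate(1000))  # converges toward r(t)
```

Comparing the two should show how the trial-averaged PSTH converges to the underlying rate as the number of trials grows.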

C. Compute the variance of the spike trains as a function of time. Does the noise look multiplicative or additive?
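The across-trial variance at each time bin can be computed directly from the boolean trial matrix. A sketch (same assumed $r(t)$ as above):

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.010
t = np.arange(0, 2.0, dt)
r = 20.0 * (1.0 + np.sin(2 * np.pi * 3.0 * t))   # assumed r(t)
trains = rng.random((1000, t.size)) < r * dt

mean_t = trains.mean(axis=0)   # mean spike count per bin
var_t = trains.var(axis=0)     # across-trial variance per bin
# For per-bin Bernoulli counts, var = p(1 - p) ~= p when p is small, so
# the variance rises and falls with r(t) rather than sitting at a fixed
# level -- the signature of multiplicative, Poisson-like noise.
```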

D. Change the bin size to 150 ms. What does the PSTH look like? How about 300 ms?
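One way to change the bin size without re-simulating is to merge adjacent bins of the existing trial matrix. A sketch (the helper name `rebin_psth` is ours, not from the assignment):

```python
import numpy as np

def rebin_psth(trains, dt, new_binsize):
    """Re-bin boolean spike trains (trials x bins) to a coarser bin size.

    dt and new_binsize are in seconds; returns a rate in spikes/s.
    """
    k = int(round(new_binsize / dt))            # number of bins to merge
    n = (trains.shape[1] // k) * k              # drop a partial trailing bin
    counts = trains[:, :n].reshape(trains.shape[0], -1, k).sum(axis=2)
    return counts.mean(axis=0) / new_binsize
```

Note that 150 ms is not an integer multiple of the 2 s duration's bin grid in every case, so a partial final bin is simply trimmed here.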

### Question 2

Now you’ll load some real spiking data stored in JSON format. Although not ideally suited for spiking data, JSON is a well-established method of storing different kinds of data structures that can be read by almost every programming language.

In python/numpy, you can load the data with the following command. The result is a list of numpy arrays. Each array contains the spike times in each trial. Note that because different trials may contain different numbers of events, spike data doesn’t fit well into tabular formats.

```python
import numpy as np
import json

# Each element of the list is one trial; each array holds that trial's
# spike times.
with open("st82_2_4_1_st468_song_7.json") as f:
    spikes = [np.asarray(trial) for trial in json.load(f)]
```

A. Calculate some basic statistics. How many trials? What’s the average number of spikes per trial? What’s the standard deviation?
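The per-trial statistics fall out of the spike counts. A sketch, using a small hypothetical `spikes` list in place of the real file's contents:

```python
import numpy as np

# Hypothetical stand-in for the loaded data: three trials of spike times.
spikes = [np.asarray(trial) for trial in [[1.0, 2.5], [0.5], [1.2, 3.3, 4.1]]]

n_trials = len(spikes)
counts = np.array([trial.size for trial in spikes])  # spikes per trial
mean_spikes = counts.mean()
std_spikes = counts.std(ddof=1)   # sample standard deviation
```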

B. Plot the trials as a raster. What patterns do you see in the response?

C. Plot the trials as a PSTH. Try adjusting the bin size between 1 ms and 50 ms. What bin size seems best for resolving the peaks of activity?
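Since the data are spike times rather than binned counts, `np.histogram` does the binning. A sketch (the helper name is ours; time units are assumed to match the data's, i.e. ms):

```python
import numpy as np

def psth_from_times(spikes, binsize, t_min, t_max):
    """PSTH from a list of spike-time arrays.

    binsize, t_min, t_max share units with the spike times; the returned
    rate is in spikes per time unit.
    """
    edges = np.arange(t_min, t_max + binsize, binsize)
    counts = sum(np.histogram(trial, edges)[0] for trial in spikes)
    return counts / (len(spikes) * binsize), edges
```

Re-running this with `binsize` between 1 and 50 ms and plotting against `edges[:-1]` shows the trade-off between temporal resolution and noise.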

D. Consider the spikes that have negative times; these correspond to spontaneous activity. Calculate the interspike interval (ISI) histogram for these spikes. What are the mean and standard deviation? What’s the coefficient of variation ($\sigma/\mu$) and the Fano factor (variance over the mean)?
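Intervals are only meaningful within a trial, so diff each trial separately before pooling. A sketch, applying the document's variance-over-mean definition of the Fano factor to the pooled intervals (the helper name is ours):

```python
import numpy as np

def isi_stats(spikes):
    """Mean, CV, and Fano factor of pooled interspike intervals."""
    # Sort defensively, diff within each trial, keep trials with >= 2 spikes.
    isis = np.concatenate([np.diff(np.sort(trial))
                           for trial in spikes if trial.size > 1])
    mean = isis.mean()
    cv = isis.std() / mean            # coefficient of variation, sigma/mu
    fano = isis.var() / mean          # variance over the mean
    return mean, cv, fano
```

For a homogeneous Poisson process the ISIs are exponential, so the CV should be close to 1.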

E. Now do the same for the spikes between 0 and 10000 ms. Which epoch is better described by a homogeneous Poisson process?

Bonus question: calculate the PSTH for the data by convolving the spike trains with a 5 ms Gaussian kernel.
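One way to sketch the kernel-smoothed PSTH: bin the spikes finely, then convolve with a normalized Gaussian. The helper name and the 1 ms base bin are our choices; `sigma` and `dt` are assumed to share units with the spike times (ms):

```python
import numpy as np

def smoothed_psth(spikes, sigma, dt, t_min, t_max):
    """PSTH from convolving binned spike counts with a Gaussian (sd = sigma)."""
    edges = np.arange(t_min, t_max + dt, dt)
    counts = sum(np.histogram(trial, edges)[0] for trial in spikes)
    # Truncate the kernel at +/- 4 sigma and normalize so that the total
    # spike count is preserved.
    k = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-k**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(counts / len(spikes), kernel, mode="same")
```

With `sigma=5` and `dt=1` this gives a smooth trial-averaged rate estimate in spikes per ms.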

This exercise is based in part on an assignment from MCB 262, Theoretical Neuroscience, at UC Berkeley.