Friday 22 August 2014

Contra Oscillations


I recently wrote a criticism of the use of network theory in neuroscience.
Unfortunately I have similar problems with oscillations.

1) Is anything oscillating? 

In physics, oscillations are defined by the presence of simple harmonic motion, in which a restoring force (proportional to the deviation from a set-point) produces a circular trajectory of the system in the complex plane. The projections onto measurable quantities are generally sinusoidal. Do such things exist in the brain? It is not clear. Alpha waves, for example, do not really look sinusoidal. Well, this could be due to additional resonances in the system -- for example, if there were two oscillating elements A and B, in which A is kept in phase with B but oscillates at an exact whole-number multiple of B's frequency. For an alpha wave, perhaps A has three times the frequency of B. This is the natural explanation for an oscillation theorist, as it accounts for the two spectral peaks, but will it do for a neurobiologist? A pair of synchronized harmonic oscillators?
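
To make the idea concrete, here is a minimal sketch (frequencies, amplitudes and phase lag all invented for illustration) of what such a pair would produce: a repetitive but distinctly non-sinusoidal waveform, with exactly two spectral peaks.

  fs = 1000; t = 0:1/fs:2;               % 2 s sampled at 1 kHz
  B = sin(2*pi*10*t);                    % oscillator B at 10 Hz
  A = 0.4*sin(2*pi*30*t + pi/4);         % oscillator A: three times B's frequency, fixed lag
  x = A + B;                             % non-sinusoidal 'alpha-like' waveform
  subplot(2,1,1); plot(t, x); xlim([0 0.5])
  subplot(2,1,2); f = (0:numel(x)-1)*fs/numel(x);
  plot(f, abs(fft(x))); xlim([0 50])     % two peaks: 10 Hz and 30 Hz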

A more sophisticated version could be that, like a violin string, there is a large number of coupled oscillators, which can be out of phase in different subpopulations. Due to the coupling, perhaps they can be excited at different frequencies, leading to superimposed harmonic modes that could give the unusually shaped waveform seen in alpha.

But notice also that the oscillations described have varying phases. If you look at alpha, it changes phase very often! In a real oscillator, a phase-change is evidence of a sudden perturbation to the system. But if those phase-changes are basically part-and-parcel of your steady-state signal, as is perhaps the case most of the time in EEG, what does the Fourier phase component represent? I think it is unwise to call this kind of behaviour an 'oscillation': it cannot be described by what we normally mean by the word. Perhaps we need a better model of how this unusual kind of signal could be generated in the brain, for example using synaptic biophysics or neuronal mass-models. Anyhow, even if alpha-band power can metaphorically be tied to an oscillation, I think calling gamma an 'oscillation' is probably hugely overstepping the mark, given what we currently know.
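
As a toy illustration (all parameters invented), here is a signal with a crisp 10 Hz spectral peak that is nevertheless never a steady oscillation, because its phase resets at random moments -- much as alpha appears to do:

  fs = 250; t = (0:10*fs-1)/fs;                         % 10 s of 'recording'
  slips = rand(size(t)) < 0.004;                        % occasional random reset events
  phi = 2*pi*10*t + cumsum(2*pi*rand(size(t)).*slips);  % 10 Hz phase with jumps
  x = cos(phi);
  plot(t, x); xlim([0 3])                               % alpha-ish, but no stable phase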

It may be more parsimonious to dump the oscillation theory of EEG Fourier components altogether.

2) Masked fallacies

We may be making the same error that the network theorists made: we found a number to quantify, and this number correlates with lots of behaviour; therefore it must explain how the brain operates. Just because we can examine different frequencies by performing an FFT doesn't mean that these frequency bands are really relevant to brain function. Any signal whatsoever is amenable to spectral analysis, even Morse code or digital signals; but this on its own does not mean that those frequency components bear any relation to what is being signalled.
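
A toy example (bit rate and message invented): a random digital signal has a perfectly well-defined spectrum, with structure around the bit rate, yet none of its frequency components has anything to do with the message being sent.

  fs = 1000; bits = randi([0 1], 1, 100);       % a random 'morse-like' message
  x = kron(bits, ones(1, fs/10));               % transmit at 10 bits per second
  f = (0:numel(x)-1)*fs/numel(x);
  plot(f, abs(fft(x - mean(x)))); xlim([0 50])  % structure near 10 Hz, but no 'oscillation'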

More importantly, the correlation of frequencies to behaviours does not carry any explanatory weight. For example, let us imagine that an overexcited scientist discovers that occipital alpha waves are found both when the eyes are closed, and when people are not paying attention to a visual scene (but not when attending). What is she to conclude from this? Is it justified to conclude that alpha "does" anything? Syllogism:
   P1: When performing well,  alpha is always absent.
   P2: When performing badly, alpha is always present.
Now which conclusion do you think follows logically from the premises?
   C1: Absent  alpha is necessary  for performing well.
   C2: Absent  alpha is sufficient for performing well.
   C3: Present alpha is necessary  for performing badly.
   C4: Present alpha is sufficient for performing badly.
In fact, all four follow: C1 and C3 simply restate the premises, and C2 and C4 are their contrapositives (given that performance is either good or bad). Most scientists want to conclude C1. Some distraction theorists may want to conclude C3. Nobody really glances at C2 and C4. The dangerous consequence is that this encourages us to attribute causality where there is none, or even in the wrong direction.
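
If you don't trust the contrapositives, a brute-force enumeration of the possible worlds confirms it (a sketch; the encoding of 'well' and 'alpha' as booleans is mine):

  well  = [0; 0; 1; 1] == 1;               % all four possible worlds
  alpha = [0; 1; 0; 1] == 1;
  ok = (~well | ~alpha) & (well | alpha);  % worlds consistent with P1 and P2
  C1 = all(~well(ok) | ~alpha(ok));        % performing well  -> alpha absent
  C2 = all( alpha(ok) |  well(ok));        % alpha absent     -> performing well
  C3 = all(  well(ok) | alpha(ok));        % performing badly -> alpha present
  C4 = all(~alpha(ok) | ~well(ok));        % alpha present    -> performing badly
  disp([C1 C2 C3 C4])                      % prints 1 1 1 1: all four hold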

We have developed a new language for frequency-domain analysis -- a language that may be significantly more opaque than we have realised when it comes to describing events at the neuronal level. We may just be re-describing things that are better seen in a standard ERP, i.e. an averaged, event-locked time-domain trace.

People have started to realise this, and now commonly mix frequency- and time-domain analyses freely. Oscillations tend to be relatively transient -- if that's not an oxymoron! And although Fourier spectra are entirely capable of representing transients, as well as the temporal evolution of envelopes, thinking has shifted towards wavelet transforms. But natural physical systems that generate transient wavelet components are much less common and more contrived -- so drawing interpretations from them is a recipe for disaster. This is clear enough for transients and envelopes. But, I argue, we have no real reason to believe this problem vanishes for non-transient EEG data.

As a minimum, you must be clear in your mind about how your signal-processing pathway will represent a step or a gradual change in the signal. Moreover, the pathway must generate a representation that matches some theory of how signals are generated at the source.
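
As a concrete check (window length and sampling rate invented), feed a pure step -- which nobody would call an oscillation -- through a sliding-window spectral analysis, and a burst of broadband 'power' appears at the step:

  fs = 250; x = [zeros(1, fs) ones(1, fs)];    % 1 s off, then 1 s on: a pure step
  w = 64; h = hamming(w);                      % 64-sample Hamming window
  P = zeros(w, numel(x) - w);
  for t = 1:numel(x) - w
    P(:, t) = abs(fft(x(t:t+w-1).' .* h)).^2;  % power in each sliding window
  end
  imagesc(P(1:w/2, :)); axis xy                % broadband 'burst' at the step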

3) Power and phase should not logically be separated

Until recently, most studies examined spectral power. This entails performing a time-windowed Fourier transform and discarding the phase information. We think we have an intuitive grasp of what this spectral power means. But what does it really mean? Can power in different frequency bands tell us how the brain is working?

If you take a sound and discard the phase information, you get an unrecognisable mess. This is true however you window or filter the sound. Similarly, if you take an image and discard the phase of its 2D Fourier spectrum, you are left with garbage. Here are a few lines of MATLAB to let you hear de-phased sound:

  load handel                                   % loads y (the signal) and Fs (8192 Hz)
  sound(y, Fs)                                  % normal
  w = 64; h = hamming(w); y2 = zeros(size(y));  % 64-sample analysis window
  for t = 1:length(y) - w
    z  = fft(y(t:t+w-1) .* h);                  % windowed spectrum
    z2 = ifft(z .* exp(2i*pi*rand(w,1)));       % randomise the phase of every bin
    y2(t) = real(z2(w/2));                      % keep the centre sample of each window
  end
  sound(y2, Fs)                                 % dephased


As you can hear, the sound retains some of its rhythmic character, but it really bears no other resemblance to the original! You can try other transforms: if you replace 'rand(w,1)' with '0.1*rand(w,1)', you keep most of the phase information and merely jitter it -- but it still sounds pretty alien.

Well, it turns out that most early 'oscillation' work discarded phase. I'm not arguing that sound or pictures are a good model for what EEG activity is like. But I would like to say that it's really quite unclear what these 'band powers' mean on their own, in terms of interpreting a signal.

4) Linear assumptions give artefactual couplings

Phase information has been used to study phase-amplitude coupling (PAC) in a single channel. What does that mean in terms of the signal? Aficionados will tell you that PAC occurs when the power at some higher frequency correlates with the phase of an (independent) lower frequency. For example, gamma power might be higher whenever theta is in a trough than when theta is at a peak. In noise, different frequencies are independently-extractable components of the signal, so the presence of any correlation is meaningful. So what causes it?

The simplest explanation is that the two signals are not independent. There might be some saturation or nonlinearity in your recording electrode, amplifier, or skull -- for example, making fast oscillations appear smaller in amplitude at the peak of theta than at the trough. Any kind of nonlinearity can generate 'cross-frequency coupling', because it breaks the independence between the frequencies.
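
A toy demonstration (the frequencies and the nonlinearity are invented): pass a theta wave plus an entirely independent gamma component through a static, asymmetric nonlinearity, and the recovered gamma envelope tracks theta phase even though nothing at the 'source' couples them.

  fs = 1000; t = (0:20*fs-1)/fs;
  theta = sin(2*pi*6*t);                       % 6 Hz 'theta'
  gam   = 0.2*sin(2*pi*60*t);                  % independent 60 Hz 'gamma'
  x = exp(theta + gam);                        % static asymmetric 'electrode' nonlinearity
  X = fft(x); f = (0:numel(x)-1)*fs/numel(x);
  X(~(f > 40 & f < 80)) = 0;                   % one-sided band-pass gives the analytic signal
  g = 2*abs(ifft(X));                          % gamma-band envelope
  fprintf('gamma envelope at theta peak %.3f vs trough %.3f\n', ...
          mean(g(theta > 0.95)), mean(g(theta < -0.95)))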

However, we now have enough single-cell electrophysiology to convince us there might be some truth to phase-amplitude coupling: a cell is more likely to fire when the LFP is at particular phases. This 'preferred phase' might differ between brain areas and between tasks, but is similar for many neurones in a region. In theory, this might cause EEG gamma to increase in amplitude at certain theta phases, but artefactual causes must still be ruled out. This is especially concerning when gamma appears strongest in theta troughs.

5) Synchrony buys nothing over ERP


Studies often report cross-frequency coupling in a single channel. What does this mean? The aficionados will tell you that independent frequency components of the signal have consistent phase differences across trials. For example, on any given trial, the phase of gamma sits at a given lag relative to alpha. The consistency of that lag across trials gives you CFC.

A similar conclusion can be drawn by measuring synchrony across all frequencies -- for example, "synchrony increases" at key moments during a trial. The usual meaning of synchrony, at a particular frequency, is that across trials the phase of the Fourier component at that frequency is similar. A recent paper showed "increased gamma synchrony" around the time of a decision.

But this is obviously going to be the case! In fact, it follows directly from the ERP. Something happens at a fixed moment in time; therefore, whatever shape the wave takes, the fact that it has a distinct spectrum will produce CFC. The specific phase differences between frequencies are determined purely by the shape of the ERP surrounding the stimulus.
If the ERP shows a readiness potential, then we should expect to see 'synchrony' (phase coupling) increasing before the event -- since this is exactly what is required to produce an ERP that survives averaging across trials.
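
A quick simulation makes the point (trial count, noise level and ERP shape all invented): give every trial an identical time-locked bump plus independent noise, and 'inter-trial phase synchrony' duly appears across a range of frequencies, with fixed phase relationships between them.

  fs = 250; n = 200; L = fs;                     % 200 one-second trials
  erp = 2*exp(-((1:L) - fs/2).^2 / (2*5^2));     % the same brief bump at 0.5 s in every trial
  trials = repmat(erp, n, 1) + randn(n, L);      % ERP plus independent noise
  Z = fft(trials, [], 2);                        % per-trial spectra
  itc = abs(mean(Z ./ abs(Z), 1));               % inter-trial phase coherence
  plot(0:39, itc(1:40))                          % 'synchrony' at many frequencies at once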

6) The usual culprits

I have deliberately left out the obvious, well-known problems with oscillation analysis, such as arbitrariness in the choice of filtering, incomplete eye-movement and blink removal, the compromise between time resolution and windowing artefacts, and so on.
