Thursday, November 04, 2010

Simons Science Series: David Heeger

David Heeger, the speaker on Oct 13, is a researcher at the intersection of engineering, psychology, and neuroscience, and was introduced against the backdrop of some stunning snow-covered mountains.

He started with vision systems, framing the task of motion perception as "unconscious inference." We can begin understanding vision from single neurons, which can be modeled simply as linear weighting functions that respond to a particular speed or direction. For example, each neuron has an orientation or direction selectivity that can be seen as a tuning curve. Q: Do these selectivities arise at birth or later? A: Some arise even before the eyes open (he referred to many cat-and-strobe-light experiments). He mentioned half-wave rectification and a spiking threshold to convert the linear outputs into firing rates. Also, there is a small bias among neurons toward certain directions (preferred orientations).

Next David wanted to extend this model to motion: neurons average over a window in space and time. Q: What is the fastest speed one can measure? A: Distance is in units of degrees of visual angle, and, if I recall right, he said 100 degrees/sec. A distributed representation of speed is obtained from multiple sensors with different orientation preferences.

A failure of this model is that linearity of the responses does not hold, so he proposed normalizing each neuron's output by the L_2 norm over all neurons in its neighborhood (a sketch of this pipeline appears below). Q: Does it matter at what stage the normalization happens? A: No, it happens many times, at many stages. Q: What mechanisms implement it? A: Many, such as inhibition, synaptic depression, and so on. Q: Does normalization occur in other areas of the brain? A: Yes, for example in olfaction in the fruit fly. The advantage of this proposal is that the output can be interpreted as a probability and has certain invariances. Q (from a theoretical computer scientist, Baruch? Boaz?): Why L_2? A: Some do L_1.

Next David attacked the problem of attention. Even when the visual input does not change, attention can change the response. He extended the model using matrices and a function based on pointwise multiplication (second sketch below), and pointed out the neatness of the approach.
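As a concrete illustration of the pipeline described above (linear weighting, half-wave rectification, squaring, and divisive normalization over an L_2-style pool), here is a minimal sketch in Python/NumPy. The toy filter bank, the parameter sigma, and the exponent are my assumptions for illustration, not details given in the talk.

```python
import numpy as np

def linear_responses(stimulus, filters):
    """Linear stage: each model neuron is a weighting function
    (a row of `filters`) applied to the stimulus."""
    return filters @ stimulus

def rectify(drive):
    """Half-wave rectification: negative drive yields no spikes."""
    return np.maximum(drive, 0.0)

def normalize(drive, sigma=0.1):
    """Divisive normalization: each neuron's squared, rectified drive
    is divided by the summed squared drive of its neighborhood (here,
    the whole population), i.e., an L_2-style pool. sigma keeps the
    denominator positive and sets the contrast gain."""
    energy = rectify(drive) ** 2
    return energy / (sigma ** 2 + energy.sum())

# Toy example: 8 "neurons" tuned to different directions of a 2D
# motion vector (a stand-in for a real spatiotemporal filter bank).
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
filters = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (8, 2)

stimulus = np.array([1.0, 0.3])  # a hypothetical motion direction
responses = normalize(linear_responses(stimulus, filters))
print(responses.round(3))                  # peaked at the best-matching direction
print(round(float(responses.sum()), 3))    # sums to less than 1: probability-like
```

Note that the normalized outputs sum to less than 1, which matches the probability-like interpretation mentioned above, and that scaling the stimulus up or down changes the responses much less than it would in the purely linear model.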
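For the attention extension, the description above ("matrices and pointwise multiplication") suggests a sketch along the following lines, in which an attention gain multiplies the rectified drive elementwise before normalization. The shapes and gain values here are hypothetical.

```python
import numpy as np

def attend_and_normalize(drive, gain, sigma=0.1):
    """Attention as a pointwise (elementwise) multiplicative gain on the
    rectified drive, applied before divisive normalization. The gain
    enters the normalization pool too, so attended neurons win at the
    expense of unattended ones."""
    attended = np.maximum(drive, 0.0) * gain
    energy = attended ** 2
    return energy / (sigma ** 2 + energy.sum())

drive = np.array([0.9, 0.6, 0.3, 0.1])    # hypothetical linear drives
uniform = np.ones(4)                      # no attention
focused = np.array([1.0, 2.0, 1.0, 1.0])  # attend to neuron 1
print(attend_and_normalize(drive, uniform).round(3))
print(attend_and_normalize(drive, focused).round(3))
```

Because the attentional gain also enters the normalization pool, boosting one neuron suppresses the normalized responses of the others rather than simply scaling the whole population up, which may be part of the "neatness" David pointed out.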

In the second part of his talk, he connected his work to clinical questions by looking at failures of these computational models in conditions like epilepsy, autism, and schizophrenia (patients with schizophrenia, for example, are not fooled by certain contrast illusions). He then broadened the message from neural circuit mechanisms to behavior.

Scientists and mathematicians may be shy, but not when it comes to asking questions; there were many at the end. Q: What is the role of "defective" parts in the computation? A: There is a lot of redundancy, so averaging works for primates, but not for insects, where the failure rates of neurons are very small. Q: What is the computational model for a "small" organism? For example, is attention a factor in small organisms (Sylvain Cappell)? A: Attention does not need new resources beyond what the model already has.

I am sure many of us theoretical computer scientists in the audience were distracted by the mathematical, algorithmic, and complexity questions underlying the neuron model and had a similar reaction to the talk: so, the model of the brain is a parallel collection of thresholds, with a linear weighting function implementing each threshold? To what extent does this, with L_2 normalization, fit reality? Is there a nonlinear computational model? What are its powers and limitations? And so on. But the rest of the audience didn't seem focused on these questions.

Disclaimer: I scribed what I could; David and the audience made many interesting comments that I didn't have the scientific background to parse in real time during the talk, though they sounded important and insightful.
