October 2, 2012 / neurograce

Standing on the shoulders of… people of roughly average height. A review of failed theories in Neuroscience

The history of scientific progress is occasionally portrayed as an epic tale, made up of some combination of serendipity and destiny. Great scientists of the past are held up as heroes and pioneers who are meant to inspire the current generation. Focusing on the successes and breakthroughs is an effective technique for drawing people into the field. But it’s far from an accurate portrayal of how science usually advances. The fact is, there are a lot of wrong turns on the path to understanding, and we sometimes continue on those wrong paths for a (retrospectively) embarrassingly long time. This may seem like a side of scientific research that we want to ignore or cover up, but I find it can actually be quite helpful to investigate it. For anyone involved in research, it is important to realize that false results and incorrect conclusions can look just as legitimate, and be supported by just as many smart people, as correct ones. It reinforces the “question everything” mentality that we should all have and helps to develop a critical eye. Plus, with 20/20 hindsight, some of those failed theories, especially in neuroscience, can be pretty entertaining. So I’ve compiled a list of a few that made their way into the field over the past few hundred years. Even if you don’t learn anything factual about the brain from them, you can at least go away knowing that you’re smarter than Descartes.

The flow of fluids controls the actions of the brain, nerves, and muscles. The notion of “animal spirits” flowing through and controlling the body originated with the ancient Greeks, but stuck around (despite evidence against it) until the late 1700s. The theory places the ventricles (the fluid-filled chambers in the middle of the brain) at the center of the action, describing them as repositories of the spirits, which are sent out to the peripheral nerves as needed. The spirits flow through the peripheral nerves and then drive muscle fibers via hydraulic power. Descartes expanded on this by incorporating his belief that the pineal gland was the seat of the soul. He posited that the pineal gland’s soul-induced movements could alter the flow of the spirits, and thus alter thought and behavior.

Of course, we know now that motor neurons work through electrical activity, not hydraulics. And that their activity stems from that of the cortex and other brain structures, not the neuron-less ventricles. Though to his credit, Descartes did have a neat idea of how memory works that is a pretty good analogy for our current understanding:

The pores or gaps lying between the tiny fibers of the substance of the brain may become wider as a result of the flow of animal spirits through them. This changes the pattern in which the spirits will later flow through the brain and in this way figures may be “preserved in such a way that the ideas which were previously on the gland can be formed again long afterwards without requiring the presence of the objects to which they correspond. And this is what memory consists in” 

Descartes 

That squares more or less with the notion of activity-dependent synaptic plasticity. So we’ll just say he broke even.
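In fact, you can caricature Descartes’ widening pores with the simplest modern learning rule. Here is a minimal toy sketch of Hebbian, activity-dependent plasticity (everything in it, from the pattern to the threshold, is made up purely for illustration): repeated co-activation strengthens connections, and a degraded cue can later re-evoke the stored pattern.

```python
import numpy as np

# Toy Hebbian learning: connections strengthen when pre- and post-synaptic
# units are active together -- the modern version of pores "widening" with use.
pattern = np.array([1.0, 0.0, 1.0, 1.0, 0.0])  # one made-up activity pattern
n = pattern.size
W = np.zeros((n, n))                            # synaptic weights, start unwired

# Repeated co-activation widens the "pores" between co-active units.
for _ in range(10):
    W += 0.1 * np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)                        # no self-connections

# Later, a degraded cue re-evokes the full stored pattern without the
# original stimulus -- roughly Descartes' account of recall.
cue = pattern.copy()
cue[0] = 0.0                                    # lose part of the input
recalled = (W @ cue > 0.5).astype(float)
print(recalled)                                 # [1. 0. 1. 1. 0.] -- the pattern
```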

But it looks so scientific…

Phrenology, aka ‘you are your skull shape’. This is a classic example of brain science gone bad from the early 19th century. It is based on three basic principles: (1) cognitive functions are localized in the brain; (2) the size of the brain area devoted to a function is proportional to that function’s prominence in a person; and (3) the shape of the skull is an accurate measure of the shape of the brain. When you put those together, you get doctors massaging your head and then telling you that you have an abnormally high “love of home” but a deficit in “agreeableness.” The problem with the science of phrenology is that those three principles have decreasing levels of accuracy. The first one is generally still agreed upon. Certain functions can be associated with specific brain areas and will disappear if there is a lesion there. However, phrenologists were more concerned with character traits than concrete cognitive functions, as the phrenology map shows, and the notion that personality traits are neatly localized and parceled out across the brain in this way is not supported. As for the second, the importance of size is only true in a gross sense. Patients with significantly degraded hippocampi, for example, will have memory problems. But a little extra mass in primary visual cortex probably won’t mean much. Finally, the third principle is simply wrong. Skull shapes vary from person to person, but have little relation to brain shape.

Sleep is like nightly hydrocephalus. Hydrocephalus is bad. It’s caused by excessive fluid pressure on the brain and has symptoms that can include low pulse, inability to process sensory stimuli, and low muscle tone. Hey, those happen during sleep too! And as this article from 1841 points out, that is pretty good evidence that they’re caused by the same thing. The specific mechanism the author suggests is that there is an increase in blood flow to the brain at night, and this puts excess pressure on the brain, causing the symptoms of sleep. The mechanisms of sleep are tricky to figure out, but luckily the blood-induced hydrocephalus theory died out pretty quickly. Research nowadays credits the sleep cycle to the ability of hypothalamus cells to control the dynamics of the cortex.

The brain is one giant, connected cell. Microscope technology wasn’t great in the late 19th century. And neurons are pretty densely packed in the brain. So looking at a slide full of tangled cell bodies, axons, and dendrites may not be terribly informative. Many people thought protoplasm was shared amongst cell bodies via micro-bridges. But cell theory suggested that neurons should be independent, membrane-bound cells, and the evidence was inconclusive. So the debate raged over the turn of the century. Camillo Golgi, a prominent scientist studying the nervous system, opposed the so-called neuron doctrine, favoring the old syncytium view instead. Another now-famous researcher, Santiago Ramon y Cajal, held the opposite view. He used, ironically enough, the stain that Golgi invented to label neurons in a way that showed them as separate entities. We now accept that in the vast majority of cases, two neurons are separated by a synapse. And the world of neuroscience would be very different if that were not true. But don’t feel too bad for Golgi: he got the Nobel Prize in 1906 for his work. He just had to share it with Ramon y Cajal.

A single neuron can be both excitatory and inhibitory. Whether a neuron increases or decreases the activity of its post-synaptic target depends on two things: what neurotransmitter it releases and what receptors the post-synaptic cell has. And in the middle of the 20th century, there was much speculation about both of those. Work on how ascending neurons excite their target motor neuron while inhibiting the antagonist muscle’s motor neuron was hot at the time, and it suggested that the firing of a single neuron was causing both the excitation and inhibition. At that time the notion of multi-transmitter production was only recently being investigated in most cell types, but acetylcholine (ACh) was well-established as the transmitter of motor neurons. So, it was posited, ACh must have a different effect at different synapses. As it turns out, the inhibition that was seen in the antagonist fibers was caused by small inhibitory interneurons called Renshaw cells that the motor neuron excited. These interneurons release a different neurotransmitter (glycine) that is responsible for the inhibition. No manic-depressive motor neurons after all.
There are always exceptions to the rule, but on the whole, we now believe that neurons produce one class of neurotransmitter and are either excitatory or inhibitory at all of their synapses. It’s what we call Dale’s Principle (not so much because Henry Dale came up with it, but more because John Eccles said so). Interestingly, there’s no reason, biologically speaking, that this needs to be true. It’s conceivable that cells could produce a variety of neurotransmitters (they do a lot more complicated things already), although perhaps segregating those into different synapses could be tough. And cells already have many different receptors, so inhibition versus excitation could be determined on a synapse-by-synapse basis at the post-synaptic end. But evolution did not design it so. And it turns out that the reality of Dale’s principle has a huge impact on the computational abilities of the brain. As this paper shows, if Dale’s principle is violated (i.e., the sign of each synaptic weight is randomized from synapse to synapse rather than being constant for a cell, while the absolute values are kept the same), spiking correlations can decrease and firing rate fluctuations can increase. This can have big consequences for information processing. So let’s all be glad for the consistency of our neurons.
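To make that manipulation concrete, here’s a rough sketch of the comparison as I understand it (a generic random network invented for illustration, not the actual model from the paper): keep the same absolute weights, but either fix one sign per presynaptic cell (Dale) or re-draw the sign independently at every synapse (violation).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
strength = rng.exponential(scale=1.0, size=(n, n))  # |weights|, arbitrary units

# Dale's principle: each presynaptic cell has one sign at all of its synapses.
# Column j holds neuron j's outgoing weights; give the whole column one sign
# (80% excitatory, 20% inhibitory -- a common cortical ballpark).
cell_sign = np.where(rng.random(n) < 0.8, 1.0, -1.0)
W_dale = strength * cell_sign[np.newaxis, :]

# Violation: identical absolute weights, but the sign is re-drawn
# independently at every synapse.
syn_sign = np.where(rng.random((n, n)) < 0.8, 1.0, -1.0)
W_mixed = strength * syn_sign

# One crude consequence: under Dale's principle a neuron's outgoing synapses
# all push its targets the same way; with per-synapse signs they partly cancel.
print(np.abs(W_dale.sum(axis=0)).mean())   # large: coherent output per neuron
print(np.abs(W_mixed.sum(axis=0)).mean())  # smaller: mixed signs cancel
```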

No new neurons! Up until quite recently, neuroscientists would warn you to be very careful with your brain cells, because they’re the only ones you’re ever gonna get. Even our old friend Ramon y Cajal was a proponent of the fixed nature of the nervous system, and with his support the idea stuck for over 50 years. But unfortunately this was mostly the result of an absence of evidence being used as evidence of absence for decades. A smattering of studies throughout the 1960s and 70s hinted at the possibility of adult neurogenesis but were none too convincing. Then came the 80s, and along with the Rubik’s cube and pocket calculators, scientists were gifted with BrdU. A synthetic nucleoside that can label new cells, BrdU was the perfect tool to investigate neurogenesis. And as this lovely review shows, they found it! But before you start bare-knuckle boxing or cracking walnuts with your skull, you should probably know that new neurons are still more the exception than the rule. They’re found only in the olfactory bulb and hippocampus. The latter is involved with learning and memory, but the exact role of neurogenesis there is still unclear.

These are some of the major missteps of the field over the ages. And I’m sure there are millions of smaller ones scattered throughout the literature. I find it fun to study these, but it does make me wonder which currently entrenched neuro-theories will someday be proved utterly false. I’d like to know which of my conceptions about the brain will be mocked in whatever the future equivalent of a blog is (hoverblog?). But sadly, without foresight we can only progress through doing careful studies and asking critical questions. Luckily, studying the past so as to not repeat it is a great way to learn how to do just that.

 

September 27, 2012 / neurograce

Quick!!! Help fund a great idea for Neuroscience education!

This Kickstarter project has 2 days left to fund the development of a kid-friendly interactive App to help teach neuroscience. This stuff is important, so you should fund it if you can! Let Ned the Neuron live!

From http://www.kizoomlabs.com/

September 24, 2012 / neurograce

Conscious Unawareness

I was at a neuroscience retreat a few months ago when some senior neuroscientists and I were talking about ways of quantifying and measuring perceptions and cognitive functions. I mentioned that I felt this was a big problem in the study of consciousness, and one of the professors there replied, “Consciousness? That’s a dirty word. Neuroscientists should never talk about consciousness.”

To me, such a notion seemed positively absurd. But I also knew it to be a fairly common sentiment in the field. It is rare to find a well-respected and prominent neuroscientist devoting their time to the study of consciousness openly and directly. Christof Koch is a notable exception, but even he gets a majority of his peer-reviewed publications through his work on visual attention. This absence is noticeable on a large scale by looking at the distribution of abstracts for this year’s Society for Neuroscience (SfN) conference. Of the 17,253 (woah) poster and talk abstracts, only 44 of them are tagged with the keyword “consciousness.”

I understand the aversion to this topic. For one, it has a bit of a stigma for being too far on the philosophical end of the spectrum. Real scientists don’t bother with such semantic nonsense. But I would argue that all science once belonged to the realm of philosophy. Ancient Greek philosophers described love and strife as the forces behind the attraction and repulsion of objects, but luckily that didn’t stop the pursuit of more accurate and analytical explanations. If science fails to provide the data, philosophy and myth are the only paths left. And with a topic like consciousness, which is in equal measure an integral and mysterious part of human experience, you can be sure that people will explore whatever path is available. And so it will remain a hot topic of philosophical debate until factual evidence can cool it down.

There is another, perhaps stronger, motive for avoiding the study of consciousness: it’s hard. It’s just incredibly difficult. It is a challenge to even decide how to approach it. There is nothing close to a uniformly-accepted definition of consciousness (I haven’t even attempted one here; that will be a topic for another post). Nor are there clear ways of testing and quantifying many of the current definitions available. And the usual approach of neuroscientists—discovering or creating an animal model—isn’t entirely feasible given the nature of this subject. But none of this constitutes a valid excuse for not studying consciousness. Part of the scientific process involves defining your terms and deciding what questions can reasonably be asked about them. And we haven’t been halted by such difficulties in the past. Remember those SfN abstracts? Well, 285 of them are devoted to the equally ill-defined and un-model-able disorder of schizophrenia. And that only affects 1% of the population. So rather than say that we can’t study consciousness because we don’t even know what it is, I would say we need to study consciousness in order to find out what it is.

Finally, the notion of scientific “dirty words” or off-limits topics for neuroscientists goes against our whole mantra. I feel that in order to be in this field, you have to believe that everything that is seen, felt, remembered, experienced, etc., is a product of neural activity, and can be explained as such once we understand it. To say that such a key aspect of thought is not even approachable for scientific study is to acknowledge a crippling weakness in our field that I don’t think is there. Furthermore, if not us, then who? What field is better suited to tackle this problem? Plenty have tried, including theology, mathematics, and physics. And from that we’ve gotten circular (and thus meaningless) answers such as ‘God is the source of consciousness and consciousness proves the existence of God.’ Or the nonsense that has come from trying to apply what is known about quantum mechanics to explain the workings of the brain (when the only tool you have is a hammer, every problem looks like a nail). Just because neuroscience might shy away from the issue due to the lack of a solid foundation doesn’t mean everyone will. Rather than have the void filled by others, I think it is best that neuroscience claim its role as the rightful owner of this very tricky problem and work on solving it.

Luckily, there are some people out there who agree. The Mind Science Foundation (MSF) keeps this impressive database of people working in the field of consciousness research. It includes a fair number of philosophers, writers, and other not-technically-scientists. But that’s because the goal of the MSF is to bring together all people who support the notion of a biological basis of consciousness and a scientific approach to the study of it. In this way, we can make the study of consciousness an interdisciplinary pursuit that is fueled primarily by neuroscientific findings. This pursuit doesn’t have to hide or ignore the above-mentioned difficulties inherent in the task. There just needs to be honesty about the current limitations and an effort to work around or remove them. And if this effort succeeds, then I look forward to the day when that terrible c-word can proudly become a part of every neuroscientist’s vocabulary.

September 18, 2012 / neurograce

Neuroinformatics: Cleaning up the mess we’re all making

Despite what we like to tell ourselves, doing neuroscience is…not an exact science. We say that reproducibility of results is key. But the truth is that there’s a nauseating amount of variability across labs in every step of the scientific process. Data is collected with different brands of equipment, or tools made in-house. Different algorithms for cleaning and filtering results can turn even identical raw data into vastly different datasets after only one stage of processing. Assorted file formats, unique record keeping procedures, and a lack of commonly accepted nomenclature can make sharing data a technical nightmare. This makes the notion of any lab exactly replicating another’s results a near impossibility.
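As a toy illustration of the processing problem (a made-up signal and two arbitrary but defensible filter choices; the sketch just assumes numpy and scipy are available):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# The same raw trace, run through two perfectly reasonable preprocessing
# choices, yields two different "clean" datasets.
rng = np.random.default_rng(0)
fs = 1000                                   # sampling rate (Hz), made up
t = np.arange(0, 1, 1 / fs)
raw = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

# Lab A: 4th-order Butterworth low-pass at 20 Hz, zero-phase filtered.
b, a = butter(4, 20, btype="low", fs=fs)
clean_a = filtfilt(b, a, raw)

# Lab B: 50-sample moving average.
clean_b = np.convolve(raw, np.ones(50) / 50, mode="same")

# Identical raw data, two defensible pipelines, measurably different output.
print(np.sqrt(np.mean((clean_a - clean_b) ** 2)))  # nonzero RMS difference
```

Multiply that by every choice of amplifier, spike sorter, file format, and naming scheme, and exact replication starts to look hopeless.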

Enter Neuroinformatics, a sort of meta-field of neuroscience that tries to standardize the way we collect, label, and share the fruits of our labor. The boundary between neuroinformatics and computational neuroscience isn’t always clear, since according to the International Neuroinformatics Coordinating Facility, computational neural modeling falls under the banner of “integrating research findings”. Suffice it to say, neuroinformaticists are trying to battle the data deluge and disorganization that currently plague our field. And I, for one, couldn’t be more grateful. In fact, I just spent some time hanging out with these fine people at the INCF 2012 conference in Munich. Here are some of the great tools I found out about there:

Data and Code Sharing

CARMEN  – With a clean and simple interface, this data-storing space from the UK is a platform for neurophysiologists to keep and share their data and analysis code.

CRCNS – This site collects and shares neurophysiological and eye movement data for the purpose of informing computational models. They are also working on a standardized methodology for describing and sharing stimuli, both visual and audio.

Information Databases

NIF – The motherlode. This NIH-backed super-database has a search tool that allows simultaneous searching across an array of biochemical, genetic, and imaging databases, both for literature and primary data. Their NeuroLex initiative also helps determine synonyms for brain areas, which can lead to more accurate search results. Its size makes it a little unwieldy to use, but it definitely provides a lot.

BrainInfo – Primarily a neuroanatomy source, this easy-to-use site allows you to search a specific brain area and get a variety of information on it. They also provide mapping for primate and rodent data.

INV-BF – This Japanese initiative includes imaging, anatomical and systems data for a variety of invertebrates, which makes it a good resource for lesser-studied organisms.

NeuroElectro  – The goal of this newly-launched site is to compile the neurophysiological properties of a variety of neuron types into one central database.

Spike-sorting Evaluation

G-node  – I am hugely supportive of the current push to share, standardize, and actually test spike-sorting algorithms. This project provides simulated raw data which can be run through your spike-sorter of choice and then compared to the ground truth spike data. It doesn’t actually provide different algorithms, but hopefully the successful ones will get published and compiled elsewhere.
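For the curious, the comparison against ground truth can be as simple as matching spike times within a tolerance window. Here is a hypothetical sketch (the function, tolerance, and spike times below are all invented for illustration; they are not part of G-node):

```python
import numpy as np

def hit_rate(detected, truth, tol=0.001):
    """Fraction of ground-truth spikes matched by a detected spike within
    +/- tol seconds, with greedy one-to-one matching."""
    detected = np.sort(np.asarray(detected, dtype=float))
    used = np.zeros(detected.size, dtype=bool)
    hits = 0
    for t in np.sort(np.asarray(truth, dtype=float)):
        idx = np.searchsorted(detected, t)
        for j in (idx - 1, idx):               # nearest candidates on each side
            if 0 <= j < detected.size and not used[j] and abs(detected[j] - t) <= tol:
                used[j] = True
                hits += 1
                break
    return hits / len(truth)

# Made-up spike times (seconds); in practice `truth` comes from the simulated
# dataset and `detected` from your spike-sorter of choice.
truth = [0.010, 0.052, 0.110, 0.300]
detected = [0.0102, 0.0528, 0.110, 0.450]
print(hit_rate(detected, truth))  # 0.75 -- three of four true spikes matched
```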

Online Atlases/Visualizations

Virtual Fly Brain  – A wealth of imaging data from everyone’s favorite insect is collected and displayed in a very attractive format in this virtual atlas.

3-D Brain* – This site offers a variety of brain atlases to choose from, which can then be viewed through their 3-D constructor.

OpenWorm* – This is just fun, I don’t care who you are. The knowledge of C. elegans’ entire neural wiring diagram is utilized to make an interactive platform for exploring anything you could want to do with the little worm’s anatomy.

*These sites tend to not cooperate with Firefox. I suggest Chrome.

It may seem like neuroinformatics is mainly concerned with housekeeping details, and perhaps is only of passing interest to the rest of the neuroscience community. I would argue that this is a very misguided view. How we choose to store and share our data will determine the pace of progress in this field. New discoveries are all about connecting previously disconnected ideas. Ignoring the need for a proper infrastructure just adds extra obstacles to this path. Only with the right tools can we make our work as efficient and accurate as possible. And for that reason, I think we should all take up the banner of neuroinformatics by utilizing the means that are currently provided, and working to shape them for the future.

September 9, 2012 / neurograce

The elephantress in the room

If I had to guess, I would say that most first-year graduate students feel insecure. We’re tossed into a completely new environment, with new peers, responsibilities, and challenges. Most of us are also at a new university, in a whole new city. In many ways, we’re each building up our reputation from scratch. No one wants to be the one to mess up. As a result, in many cases, we’re probably all watching ourselves more closely than anyone is watching us. However, I read an article recently that made me think perhaps one group may be acting even more cautiously: women.

Hold your eye-rolling, post-feminists. My focus here is not actually on sexism in the workplace, or preconceptions about women per se. Rather, I’m hoping to address the issues that anyone who is part of an underrepresented group in their field may face. And for women in the STEM fields, that is certainly the situation, as this (slightly-outdated) table shows. With 41.3% of PhDs going to women (who were 50.9% of the population in 2000), neuroscience is actually doing comparatively well. However, this does not take into account the notorious “pipeline problem,” whereby women tend to leave academia at a significantly higher rate than men at each successive stage of their career. Also, the specific field that I’m interested in, theoretical neuroscience, is not specified on that chart but is notoriously male-dominated. To roughly quantify it, I looked at the ratios in Columbia’s labs. Across the six groups doing theory here, the percentages of women are: grad students, 12.5%; post-docs, 21.4%; and professors, 0%. I believe the scientific term for that is sausage fest.

So what are the issues that come with being a girl in the boys’ club? Well, in the particular fields of math and science there are a lot of traditional stereotypes that may need to be fought: women can’t do math, women don’t think analytically enough, we’re too emotional, or gossipy. Having to work against those is obviously an uphill battle. But even ignoring any specific preconceptions, I think what it comes down to is the problem of forced representation. As the only woman in the room, you can become the de facto mouthpiece of all womankind. What you say, think, or do, the mistakes you make or opinions you express, can be taken (whether consciously or not on the part of your colleagues) as the stances or shortcomings of women in general. Representing a large portion of the population is a heavy job, especially for someone who didn’t apply for it. That kind of pressure can affect your performance.

What’s worse, as the article points out, the problem is there even if your specific colleagues don’t actually hold any stereotypes:

XKCD knows all.

“When there’s a stereotype in the air and people are worried they might confirm the stereotype by performing poorly, their fears can inadvertently make the stereotype become self-fulfilling.”[emphasis added]

I suspect the same is true of the forced representation problem alone. That is, even without your colleagues projecting your qualities on to all women (stereotypes aside), just the worry that they may is enough to affect your performance.

I know that, for both my failures and successes, I want to be judged individually. I did not volunteer to be the delegate from the great state of womanhood (I’ve been watching too much convention coverage…). If I mess up, then I messed up. There are no further conclusions to be drawn; my performance and my sex are unrelated.

It gets complicated, however, when we consider the numerous measures out there that are meant to draw more women into the STEM fields. There are special scholarships, awards, and societies dedicated to “women in science.” They put special emphasis on funding, educating, and promoting women. This seems to work against the notion that we should all be judged solely by the quality of our work. Furthermore, it can lead to women being pushed ahead in their careers before they’re ready, solely for the purpose of battling the pipeline problem. But a woman in a position she is not adequately trained for does not make for a good role model. These kinds of initiatives can also increase doubt and self-consciousness: “Did I get this position just because I’m a woman? Does he think I got this position just because I’m a woman?” And yet many proclaim that getting and keeping more women in the sciences is essential to battle the stereotypes and the forced representation problem in the first place. And indeed, if I walked into a theoretical neuroscience meeting half-full of women, I wouldn’t feel that the burden of representation falls on me. But there are clearly problems with how we’re trying to balance the population. The whole thing is complicated, circular, and I have no solution.

All I know is that being a grad student is tough enough. No one should have to feel an extra burden based on their demographic information. Everyone has the right to speak for themselves, and only themselves. So let’s all do the mature thing and judge, disparage, and pigeonhole each other for our science, and not our sex.

September 3, 2012 / neurograce

What is the goal of neuroscience?

I’ve just completed the neuroscience bootcamp that precedes the start of classes at Columbia. The bootcamp consists of some lectures, some lab tours, and some more hands-on lab demonstrations. Through it, I have been exposed to a vast variety of methods and studies, everything from genetic sequencing to extracellular physiology and behavior. As a result, I couldn’t help but think: what is the goal of neuroscience? Conveniently, that question makes for a pretty nice first blog post topic.

So, what is the purpose of this field that my fellow first-years and I are subscribing to for at least the next 5-6 years and presumably far beyond? Webster’s dictionary defines… no, I’m not going to go there. But there is a general consensus that the study of neuroscience is the pursuit of “understanding the brain.” But what would that really entail? What do we have to achieve to say that we’ve understood the brain? Many labs are working on the seemingly simple problem of learning the basics of neuroarchitecture: what connects to what. Other labs are looking at what proteins are associated with certain neuro-degenerative diseases. There are people investigating topics such as dendritic spines, neural correlates of visual attention, the dynamics of brain blood flow, and computational models of olfactory processing, to name a few. But even if we somehow got to the point of ‘understanding’ one of these topics, would we say that we understand the brain? What if we understood all of them? Are these seemingly disparate lines of research working towards one cohesive theory of the brain?

I’m inclined to say that at present they are not, and for a variety of reasons, two main ones being a difference of level and a difference of purpose. By difference of level I am referring to Marr’s levels, as he describes in his book, Vision. While Marr was referring specifically to the study of visual information processing, I think his distinctions can be applied to the brain’s processes at large. To Marr, information processing is done on three levels: computational, algorithmic, and physical. This ordering represents a funneling, from the very broad description of the goal of the information processing, to the specific approach used to tackle that goal, and finally to the implementation of that approach in the real world. To borrow an example from computer science: let’s say you have a list of unordered numbers. At the computational level, the goal is to put the numbers in order. Algorithmically, a mergesort approach may be used to put the elements in order. That mergesort algorithm can be implemented physically by writing and executing a program in Python on your MacBook. The distinction between the levels is clear when you consider how a different sorting algorithm could just as well have been executed on the same computer, or how the mergesort algorithm could have been implemented on your dad’s old Hewlett-Packard desktop from the 90s. Since much the same distinctions can be made about the brain’s information processing, it’s unsurprising that we’ve ended up studying it in different ways on different levels. Large-scale imagers and psychologists may be more interested in what the goal of a specific area is, whereas theorists and systems researchers are more likely to care about the algorithms being implemented, and cellular and molecular people are focused on the physical details. The level of study determines the method used, the questions asked, and the value of findings. Of course the distinctions aren’t completely clear, as many forms of research can span levels, or perhaps belong to their own “half-level” somewhere in between. But in any case, to say that we are all working toward the same understanding of the brain, when the brain itself is operating on these different levels that each need to be understood, seems incorrect. We are all working towards an understanding of the brain, but not necessarily the same one.
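To make the sorting example concrete, here is the algorithmic level written out (a standard mergesort; nothing here is specific to Marr beyond the labels in the comments):

```python
def mergesort(xs):
    """Algorithmic level: one particular strategy for the computational-level
    goal of putting the numbers in order."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    # Merge the two sorted halves back together.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# Computational level: the goal can be stated without naming any algorithm.
numbers = [5, 2, 9, 1]
assert mergesort(numbers) == sorted(numbers)  # same goal, different algorithms

# Physical level: this same algorithm runs unchanged on a MacBook, your dad's
# old Hewlett-Packard, or (in principle) any machine that can carry it out.
print(mergesort(numbers))  # [1, 2, 5, 9]
```

Three levels, one task: the goal, the strategy, and whatever hardware happens to run it.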

The other problem, the difference of purpose, is related to how we define success. The way in which we frame the motivation for our research informs how we know when progress is made. In a lab studying Parkinson’s, any experiment that successfully slows or reverses the progression of symptoms would certainly be considered successful, and rightly so. Equivalently, a lab working on prosthetic limb development is happy to find any recording method or decoding algorithm that allows for more accurate control by the user. But do these advances mean an increase in our understanding of the brain? Not necessarily. For instance, treatments of depression (both pharmaceutical and extreme methods such as deep brain stimulation) have been black boxes in terms of how and why they are effective. Certainly these mysteriously effective treatments have led some to investigate their methods, which can produce knowledge about the brain. But in many translational labs the purpose of the research is only tangentially related to understanding the brain. A deep understanding of the brain is only helpful insofar as it leads to new treatments, but treatments without an understanding can be just as beneficial. I wouldn’t go so far as to say this kind of research isn’t neuroscience, but it does demonstrate the diversity of the field, and why it can be so hard to define.

However, I do feel that the varied, irregular, and disjointed terrain of this field is merely a product of our present (very limited) knowledge of the brain. We don’t have enough knowledge to see how a cohesive theory of the brain could arise from all our disparate branches of research. Of course, if we did reach a full understanding of how the brain works, it would cover all possible levels and serve any purpose. Our knowledge of the computational, algorithmic, and physical workings of Alzheimer’s and the brain areas involved with it, for instance, would make the production of treatments straightforward. The field will be united. But for now, we are all working on separate chunks of a puzzle whose end picture none of us knows for certain. The best we can do is try to add one more piece onto our chunk in the hopes that they’ll all come together some day. Until then, I think the goal of neuroscience will continue to vary from lab to lab, from researcher to researcher, and maybe even from day to day. Until we’ve all worked hard enough to realize that we’re working on the same thing.