Standing on the shoulders of… people of roughly average height. A review of failed theories in Neuroscience
The history of scientific progress is occasionally portrayed as an epic tale, made up of some combination of serendipity and destiny. Great scientists of the past are held up as heroes and pioneers meant to inspire the current generation. Focusing on the successes and breakthroughs is an effective technique for drawing people into the field. But it's a far from accurate portrayal of how science usually advances. The fact is, there are a lot of wrong turns on the path to understanding, and we sometimes continue on those wrong paths for a (retrospectively) embarrassingly long time. This may seem like a side of scientific research that we'd rather ignore or cover up, but I find it can actually be quite helpful to investigate it. As someone involved in doing research, it is important to realize that false results and incorrect conclusions can look just as legitimate and be supported by just as many smart people as correct ones. It reinforces the "question everything" mentality that we should all have and helps to develop a critical eye. Plus, with 20/20 hindsight some of those failed theories, especially in neuroscience, can be pretty entertaining. So I've compiled a list of a few that made their way into the field over the past few hundred years. Even if you don't learn anything factual about the brain from them, you can at least go away knowing that you're smarter than Descartes.
The flow of fluids controls the actions of the brain, nerves, and muscles. The notion of "animal spirits" flowing through and controlling the body originated with the ancient Greeks, but stuck around (despite evidence against it) until the late 1700s. The theory places the ventricles (the fluid-filled chambers in the middle of the brain) at the center of the action by describing them as repositories of the spirits, which are sent out to the peripheral nerves as needed. The spirits flow through the peripheral nerves and then affect muscle fibers via hydraulic power. Descartes expanded this theory by incorporating his belief that the pineal gland was the seat of the soul. He posited that the pineal gland's soul-induced movements could alter the flow of the spirits, and thus alter thought and behavior.
Of course we know now that motor neurons work through electrical activity, not hydraulics. And that their activity stems from that of the cortex and other brain structures, not the neuron-less ventricles. Though to his credit, Descartes did have a neat idea of how memory works that is a pretty good analogy of our current understanding:
"The pores or gaps lying between the tiny fibers of the substance of the brain may become wider as a result of the flow of animal spirits through them. This changes the pattern in which the spirits will later flow through the brain and in this way figures may be preserved in such a way that the ideas which were previously on the gland can be formed again long afterwards without requiring the presence of the objects to which they correspond. And this is what memory consists in."
That squares more or less with the notion of activity-dependent synaptic plasticity. So we’ll just say he broke even.
Phrenology, aka 'you are your skull shape'. This is a classic example of brain science gone bad from the early 19th century. It is based on three basic principles: 1. Cognitive functions are localized in the brain. 2. The size of the brain area devoted to a function is proportional to how strongly that function is expressed in a person. 3. The shape of the skull is an accurate way to measure the shape of the brain. When you put those together, you get doctors massaging your head and then telling you that you have abnormally high "love of home" but a deficit in "agreeableness." The problem with the science of phrenology is that those three principles have decreasing levels of accuracy. The first one is generally still agreed upon. Certain functions can be associated with specific brain areas and will disappear if there is a lesion there. However, phrenologists were more concerned with character traits than concrete cognitive functions, as the phrenology map shows. The notion that personality traits are so neatly localized to discrete patches of the brain is not supported. As for the second, the importance of size is only true in a gross sense. Patients with significantly degraded hippocampi, for example, will have memory problems. But a little extra mass in primary visual cortex probably won't mean much. Finally, the third principle is simply wrong. Skull shapes vary from person to person, but have little relation to brain shape.
Sleep is like nightly hydrocephalus. Hydrocephalus is bad. It's caused by excessive fluid pressure on the brain and has symptoms that can include low pulse, inability to process sensory stimuli, and low muscle tone. Hey, those happen during sleep too! And as this article from 1841 points out, that is pretty good evidence that they're caused by the same thing. The specific mechanism the author suggests is that there is an increase in blood flow to the brain at night, and this puts excess pressure on the brain, causing the symptoms of sleep. The mechanisms of sleep are tricky to figure out, but luckily the blood-induced hydrocephalus theory died out pretty quickly. Research nowadays credits the sleep cycle to the ability of hypothalamic cells to control the dynamics of the cortex.
The brain is one giant, connected cell. Microscope technology wasn't great in the late 19th century. And neurons are pretty densely packed in the brain. So looking at a slide full of tangled cell bodies, axons, and dendrites may not be terribly informative. Many people thought protoplasm was shared amongst cell bodies via micro-bridges. But cell theory suggested that neurons should be independent, membrane-bound cells, and the evidence was inconclusive. So the debate raged over the turn of the century. Camillo Golgi, a prominent scientist studying the nervous system, opposed the so-called neuron doctrine, favoring the old syncytium view instead. Another now-famous researcher, Santiago Ramon y Cajal, held the opposite view. He used, ironically enough, the stain that Golgi invented to stain neurons in a way that showed them as separate entities. We now accept that in the vast majority of cases, two neurons are separated by a synapse. And the world of neuroscience would be very different if that were not true. But don't feel too bad for Golgi; he got the Nobel prize in 1906 for his work. He just had to share it with Ramon y Cajal.
A single neuron can be both excitatory and inhibitory. Whether a neuron increases or decreases the activity of its post-synaptic target depends on two things: what neurotransmitter it releases and what receptors the post-synaptic cell has. And in the middle of the 20th century, there was much speculation about both of those. Work on how ascending neurons excite their target motor neuron while inhibiting the antagonist muscle's motor neuron was hot at the time, and it suggested that the firing of a single neuron was causing both the excitation and inhibition. At that time the notion of multi-transmitter production was only recently being investigated in most cell types, but acetylcholine (ACh) was well-established as the transmitter of motor neurons. So, it was posited, ACh must have a different effect at different synapses. As it turns out, the inhibition that was seen in the antagonist fibers was caused by small inhibitory interneurons called Renshaw cells that the motor neuron excited. These interneurons release a different neurotransmitter (glycine) that is responsible for the inhibition. No manic-depressive motor neurons after all.
There are always exceptions to the rule, but on the whole, we now believe that neurons produce one class of neurotransmitter and are either excitatory or inhibitory at all of their synapses. It's what we call Dale's Principle (not so much because Henry Dale came up with it, but more because John Eccles said so). Interestingly, there's no reason, biologically speaking, that this needs to be true. It's conceivable that cells could produce a variety of neurotransmitters (they do a lot more complicated things already), although perhaps segregating those into different synapses could be tough. But cells already have many different receptors, and so the control of inhibition/excitation could be determined on a synapse-by-synapse basis on the post-synaptic end. But evolution did not design it so. And it turns out that the reality of Dale's principle has a huge impact on the computational abilities of the brain. As this paper shows, if Dale's principle is violated (i.e., the absolute values of synaptic weights are randomized rather than the sign being constant for a cell), spiking correlations can decrease and firing rate fluctuations can increase. This can have big consequences for information processing. So let's all be glad for the consistency of our neurons.
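To make the distinction concrete, here's a minimal sketch (not the model from the paper, just an illustration I've made up) of what a Dale-compliant connectivity matrix looks like versus one that violates the principle. Each column represents one presynaptic cell's outgoing weights; under Dale's principle the whole column shares one sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # number of neurons; arbitrary, just for illustration

# Dale-compliant connectivity: each presynaptic neuron (a column of W)
# is either excitatory (+) or inhibitory (-) at ALL of its synapses.
magnitudes = rng.uniform(0.1, 1.0, size=(n, n))
cell_sign = rng.choice([+1, -1], size=n)       # one sign per cell
W_dale = magnitudes * cell_sign[np.newaxis, :]

# Violating Dale's principle: the same absolute weights, but the sign
# is drawn independently for every synapse, so one cell can excite
# some targets and inhibit others.
W_mixed = magnitudes * rng.choice([+1, -1], size=(n, n))

def obeys_dale(W):
    """True if every column (presynaptic cell) has a single sign."""
    signs = np.sign(W)
    return all(len(set(col[col != 0])) <= 1 for col in signs.T)

print(obeys_dale(W_dale))   # True by construction
print(obeys_dale(W_mixed))  # almost certainly False for n = 10
```

The two matrices carry identical connection strengths; only the sign structure differs, which is exactly the manipulation the paper's result turns on.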
No new neurons! Up until quite recently, neuroscientists would warn you to be very careful with your brain cells, because they're the only ones you're ever gonna get. Even our old friend Ramon y Cajal was a proponent of the fixed nature of the nervous system, and with his support the idea stuck for over 50 years. But unfortunately this was mostly the result of an absence of evidence being used as evidence of absence for decades. A smattering of studies throughout the 1960s and 70s hinted at the possibility of adult neurogenesis but were none too convincing. Then came the 80s, and along with the Rubik's cube and pocket calculators, scientists were gifted with BrdU. A synthetic nucleoside that can label new cells, BrdU was the perfect tool to investigate neurogenesis. And as this lovely review shows, they found it! But before you start bare-knuckle boxing or cracking walnuts with your skull, you should probably know that new neurons are still more the exception than the rule. They're found only in the olfactory bulb and hippocampus. The latter is involved with learning and memory, but the exact role of neurogenesis there is still unclear.
These are some of the major missteps of the field over the ages. And I'm sure there are millions of smaller ones scattered throughout the literature. I find it fun to study these, but it does make me wonder which currently entrenched neuro-theories will someday be proved utterly false. I'd like to know which of my conceptions about the brain will be mocked in whatever the future equivalent of a blog is (hoverblog?). But sadly, without foresight we can only progress by doing careful studies and asking critical questions. Luckily, studying the past so as to not repeat it is a great way to learn how to do just that.