May 10, 2013 / neurograce

Methodological Mixology: The harmful side of diversity in Neuroscience

The range of tools used to study the brain is vast. Neuroscientists toss together ideas from genetics, biochemistry, immunology, physics, computer science, medicine and countless other fields when choosing their techniques. We work on animals ranging from barely-visible worms and the common fruit fly to complicated creatures like mice, monkeys, and men. We record from any brain region we can reach, during all kinds of tasks, while the subject is awake or anesthetized, freely moving or fixed, a full animal or merely a slice of brain…and the list goes on. The result is a massive, complex cocktail of neuroscientific information.

Now, I’ve waxed romantic about the benefits of this diversity before. And I still do believe in the power of working in an interdisciplinary field; neuroscientists are creating an impressively vast collection of data points about the brain, and it is exciting to see that collection continuously grow in every direction. But in the interest of honesty, good journalism, and stirring up controversy, I think it’s time we look at the potential problems stemming from Neuroscience’s poly-methodological tendencies. And the heart of the issue, as I see it, is in how we are connecting all those points.

Figure 1. Combining data from the two populations and calculating the mean (dashed grey line) would show no difference between Variable A and Variable B. In actuality, the two variables are anti-correlated in each population.

When we collect data from different animals, in different forms, and under different conditions, what we have is a lot of different datasets. Yet what we seem to be looking for, implicitly or explicitly, are general theories of how neurons, networks, and brains as a whole work. So, for example, we get some results about the molecular properties needed for neurogenesis in the rat olfactory bulb, and we use those findings to support experiments done in the mouse, and vice versa. What we’re assuming is that neurons in these different animals are doing the same task, and using the same means to accomplish it. But there are many different ways to accomplish a task, and many different combinations of inputs that will give you the same output. Combining these datasets as though they were one can muddle the message each is sending about how its own system works. It’s like trying to learn about a population with bimodally distributed variables by studying their means (see Fig 1). To get accurate outcomes, we need self-consistent data. If you use the Moon’s gravity to calculate how much force you need to take off from the Earth, you’re not going to get off the ground.
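To put numbers on that picture, here is a minimal sketch in Python with NumPy. Everything in it is invented purely for illustration (the “budget” of 10 split between Variable A and Variable B is a made-up constraint, not data from any real study): within each population the two variables are strongly anti-correlated, yet pooling the populations and comparing means, as in Fig 1, suggests there is no difference between them at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Population 1: Variable A is high, Variable B is low, and the two trade off
# against one another across individuals (a fixed "budget" of 10 is split between them).
a1 = rng.normal(8.0, 1.0, n)
b1 = 10.0 - a1 + rng.normal(0.0, 0.3, n)

# Population 2: the mirror image -- B is high, A is low, with the same trade-off.
b2 = rng.normal(8.0, 1.0, n)
a2 = 10.0 - b2 + rng.normal(0.0, 0.3, n)

# Within each population, A and B are strongly anti-correlated...
print(np.corrcoef(a1, b1)[0, 1])   # roughly -0.96
print(np.corrcoef(a2, b2)[0, 1])   # roughly -0.96

# ...but pool the populations and compare the means (the dashed grey line in Fig 1)
# and the difference between A and B vanishes entirely.
a_all = np.concatenate([a1, a2])
b_all = np.concatenate([b1, b2])
print(a_all.mean(), b_all.mean())  # both land near 5.0
```

The pooled means say “A and B are the same”; the within-population structure says the opposite. That is exactly the kind of message-muddling that averaging over heterogeneous datasets invites.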

Not to malign my own kind, but theorists, with their abstract “neural network” models, can actually be some of the worst offenders when it comes to data-muddling. By using average values for cellular and network properties pulled from many corners of the literature, and building networks that aren’t meant to have any specific correlate in the real world, modelers can end up with a simulated Frankenstein: technically impressive, yes, but not a faithful recreation of any of the systems its parts came from. This quest for the Platonic neural network, the desire to explain neural function in the abstract, seems misguided to me. Even as theorists, we should not be attempting to explain how neurons in general do what they do, but rather how V1 cells in anesthetized adult cats show contrast-invariant tuning, or how GABAergic interneurons contribute to gamma oscillations in mouse hippocampal slices, and so on. Being precise about what our models are trying to be will better guide how we design and constrain them, and lead to more directly testable hypotheses. The search for what is common to all networks should be saved until we know more about what is specific to each.

Eve Marder at Brandeis University has been something of a crusader for the notion that models should be individualized. She’s taken to running simulations to show how the same behavior can be produced by a vast array of different parameter values. For example, in this PNAS paper, Marder shows that the same bursting firing patterns can be created by different sets of synaptic and membrane conductances (Fig 2). This shows that simply observing a similar phenomenon across different preparations is not enough to conclude that the mechanisms producing it are the same. That assumption can lead to problems if, in the pursuit of understanding bursting mechanisms, we measured sodium conductances from the system on the left and calcium conductances from the one on the right. Any model incorporating both of these values would be an inaccurate description of either system. It’s as though we’re combining the pieces from two different puzzles and trying to reassemble them as one.

Figure 2. The voltage traces show that the two systems have similar spiking behavior. But it is accomplished with different conductance values.

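The same point survives even in a deliberately oversimplified toy, nothing like the conductance-based bursters in Marder’s paper. In the Python sketch below, the conductance values and the passive-membrane setup are invented for illustration only: two “cells” with very different leak and synaptic conductances sit at exactly the same steady-state voltage, because that voltage depends only on the ratio of the two conductances, while a hybrid model that borrows one conductance from each cell describes neither of them.

```python
# Toy illustration of conductance degeneracy (all values are hypothetical).
def steady_state_v(g_leak, g_syn, e_leak=-70.0, e_syn=0.0):
    """Steady-state voltage (mV) of a passive membrane with a leak conductance
    and a tonic synaptic conductance (both in nS)."""
    return (g_leak * e_leak + g_syn * e_syn) / (g_leak + g_syn)

cell_left = {"g_leak": 10.0, "g_syn": 5.0}    # stand-in for "the system on the left"
cell_right = {"g_leak": 20.0, "g_syn": 10.0}  # stand-in for "the system on the right"

# Different parameter sets, identical observable behavior...
print(steady_state_v(**cell_left))    # about -46.7 mV
print(steady_state_v(**cell_right))   # about -46.7 mV

# ...but a Frankenstein model that takes g_leak from one cell and g_syn from the
# other describes neither of them.
print(steady_state_v(cell_left["g_leak"], cell_right["g_syn"]))  # -35.0 mV
```

A real bursting model is far richer than this, but the logic of the mismatch is the same: parameters that are individually “correct” can still combine into a model of nothing.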

Now of course most researchers are aware of the potential differences across preparations, and of the fact that one cannot assume that what’s true for the anesthetized rat is true for the behaving one. But these sorts of concerns are usually relegated to a line or two in the introduction or discussion sections. On the whole, there is still the notion that ideas can be borrowed from nearby lines of research and bent to fit the narrative of the hypothesis at hand. This view is not absurd, of course, and it comes partly from reason, but also from necessity: there are simply some types of data that we can only get from certain preparations. Furthermore, time and resource constraints mean that it is frequently not feasible to run the exact experiment you may want. And on top of the practical reasons for combining data, there is the fact that evolution supports the expectation that molecules and mechanisms are conserved across brain areas and species. This is, after all, why we feel justified in using animal models to investigate human function and disorder in the first place.

But, as with many things in Neuroscience, we simply can’t know until we know. It is not in our best interest, in the course of trying to understand how neural networks work, to assume that different networks are working in the same way. Certainly, frameworks found to be true in specific areas can and should be investigated in others. But we have to be aware of when we are carefully importing ideas, with evidence to support the mixing of data, and when we’re simply throwing together whatever is on hand. Luckily, there are tools for this. Large-scale projects like those at the Allen Institute for Brain Science are doing a fantastic job of creating consistent, complete, detailed, and organized datasets for specific animal models. And even for smaller projects, neuroinformatics can help us keep track of what data comes from where, and how similar it is across preparations. Overall, it needn’t be a huge struggle to keep our lines of research straight, but it is important, because a poorly mixed cocktail of data will just make the whole field dizzy.

Marder, E. (2011). Variability, compensation, and modulation in neurons and circuits. Proceedings of the National Academy of Sciences, 108(Supplement 3), 15542–15548. DOI: 10.1073/pnas.1010674108

3 Comments

  1. Jonathan Cannon / May 11 2013 4:05 am

    Right on. It’s so tempting to jump to abstractions, especially for those of us coming from math or physics backgrounds, but without a properly focused and observation-driven approach, it all ends up as a pile of meaningless bifurcation diagrams. Also, my favorite of Eve Marder’s talking points was that individual lobsters in the same population each had neuronal parameters that could be used to create functional gastric simulations, but taking the average value of their parameters made a dysfunctional lobster. The average lobster doesn’t exist!

  2. Cyrus Omar (@neurocy) / May 13 2013 4:58 am

    preach, Grace

Trackbacks

  1. Weekly Magapsine – S20 | elDronte
