October 30, 2012 / neurograce

The “Big Science” approach to Neuroscience and why it succeeds even if it fails

“Big Science” is a way of going about research and discovery that, being generally logical, systematic, unified, and well-funded, seems to stand in contradiction to how science is normally done. It comes about when some large, unifying entity (the government, a coalition of nations, a very wealthy investor) decides that a problem is important and recruits an army of scientists to get it solved. Prominent examples include the Manhattan Project, the Human Genome Project, and the Large Hadron Collider. The field of Neuroscience has been experiencing its own “Big Science” boom as of late and, similarly to the above examples, both critics and supporters have been vocal.
The biggest target of scorn/praise is easily Henry Markram and his Blue Brain project. His stated goal of reverse engineering and replicating the human brain in a supercomputer within 10 years has been politely described as ambitious. What his group is currently drudging through is the creation of minutely detailed models of cortical columns, covering everything from specific connectivity properties down to dendritic spine structures. It’s the type of data-heavy project that could only be achieved through Big Science. But despite its monopoly on press coverage, Blue Brain isn’t the only project of such caliber. The Allen Institute for Brain Science is pursuing similar goals, one of which is the atlasing of the entire brain, including not just physical structures but gene expression as well. Their “high-throughput pipeline” for data harvesting is strict, standardized, and state-of-the-art in order to produce results as efficiently as possible. There are also the Connectome Projects, for mouse and human brains. These NIH-backed endeavors span several universities and aim to create a detailed connectivity map of the entire brain (down to the cellular level for the mouse). Again, this is achieved through systematic methodologies and procedures, focused on anatomical imaging.

So what is it that critics of Big Neuroscience object to? Some disagree with the science itself. They claim that Blue Brain’s level of detail is unnecessary, or that knowing the mouse connectome won’t explain the brain. Another complaint is that the goals, despite appearances, are actually ill-defined and untestable: How do we know that the cortical column model is accurate and complete? What does it mean to obtain a cellular-level connectome when there is animal-to-animal variance at that level? A further refrain in the critical choir is that the goals are simply not achievable. We don’t yet have the experimental techniques to accurately model every neuron’s morphology or map every synapse, and throwing a bunch of scientists at the problem to systematically repeat the same experiment won’t fix that. We also don’t even know what we need to know. As Bradley Voytek said in his post about Blue Brain, “the neurosciences right now are where physics was in the early 1900s. A bunch of people thought Newtonian mechanics could explain everything. Turns out, the physical universe is much more complicated than that.”

There is another theme to the opposition, and it stems from a concern over the public perception of science. These kinds of projects tend to draw mainstream media attention, and are therefore subject to misinterpretation, simplification, and sensationalizing. It doesn’t help when the lead scientists themselves are party to the over-hyping: Blue Brain’s own self-description contains a smattering of “exceptional”, “perfected”, and “complete virtual brain”. The fear is that the difficulty of these challenges will be under-reported and the payoffs over-reported. This can create unrealistic expectations that, when not fulfilled, will reflect poorly on the field. This is especially problematic given the amount of money involved: The NIH gave a total of $38.5 million for the Human Connectome Project. The Allen Institute started its work with $100 million from Paul Allen. Blue Brain is currently seeking a one billion Euro award from the EU’s Future and Emerging Technologies (FET) Flagship initiative. With the field’s reputation and that kind of cash on the line, success is more important than ever. But some say that putting the emphasis on a public measure of success, rather than on good science (which routinely ends in failure), is a dangerous practice.

But even with all these arguments of varying validity, I stand in support of Big Science. If all these projects live up to their hype, the leaps in knowledge will be incredible. But even if they don’t, there will be a wealth of collateral benefits from the process itself. Attempting projects that require such detail has, for example, revealed a systemic problem in our data reporting policies. Both Sean Hill of Blue Brain and Christof Koch of the Allen Institute have cited the insufficient detail found in the current literature as a reason for doing experiments in-house. In this way, a by-product of these large-scale data collection and modeling projects is pinpointing the weaknesses in normal small-science practices. It is especially helpful in advancing neuroinformatics which, as we all know, is essential for advancing the field as a whole. Because of the nature of their experiments, these projects create a huge database of neuronal properties that are both uniformly sampled and thoroughly recorded. And with the (worrisome) exception of Blue Brain, these projects make much of their data available to the public. This not only provides the research community with the actual data, but also with a standardized form for reporting such data and, in some cases, a model to plug it into. So even if the critics are right, and the goals of the project are too ambitious for our time, the infrastructure will be there waiting when the experimental results are achievable. These projects also spur advances in the automation of experimental procedures, since such automation is necessary to achieve results at the required rate. So, merely attempting Big Neuroscience on a specific problem can create positive ripples across the field.

Finally, despite the different approach, Big Science projects ultimately have the same objectives as any normal laboratory. Blue Brain’s detailed cortical model, for example, is just a means to find out which aspects of neural networks are essential for computation. But rather than the usual approach of starting from the simplest model and adding complexity, they take the opposite route. Many labs across the world investigate gene expression in their brain region of interest and correlate it with their behavior or disease of study; the Allen Institute is simply gathering all that information in one fell swoop. Even the over-hyping isn’t unique to Big Science: professors talk up their work to the media (and to grant reviewers) all the time. And the effect is probably a net positive, since the start of a new project is likely to bring attention and favor to the field, while any failure is less likely to gain coverage. And even if these projects do fail by whatever measure they are held to, that will provide an opportunity, just as in any scientific failure, to discover why, and to then apply those lessons to the next project, big or small. The benefits of these behemoths, in any case, outweigh the risks. So let’s put away our pitchforks, and let the giants live.
