Centralization and Scientific Progress
The more social, cultural, and financial capital flows into a scientific institution’s coffers, the better off its stakeholders will be. Centralization can help a scientific institution here: for example, when it has community solidarity, a single unifying brand accessible to all its members, and a legally fixed organization anchoring it. All these features let stakeholders get more out of their institution’s success, and all of them can accelerate scientific progress. But it’s not so one-sided: solidarity can turn into chauvinism, a unified brand can give cover to free riders, and a legal foundation can give a tight chain of command coercive control over which ideas are allowed.
Beyond a certain point, centralization starts getting in the way of scientific progress. But at what point does it start getting in the way of a stakeholder’s access to capital? That point may be much, much further down the line. Solidarity can slip into chauvinism and start hampering scientific progress long before it affects access to capital enough for stakeholders to care. So it’s important to know exactly where centralization turns sour, so we can stop ourselves from slipping past that point. Yet there are so many moving parts that the question is hard to answer with a thought experiment.
Having used agent-based simulations to study peer review and grant allocation systems, I was excited to see them pointed at this problem too. In Agent-Based Models of Dual-Use Research Restrictions, Elliott Wagner and Jonathan Herington argue that when a group of labs works on the same problem, greater connectivity between them can actually be an obstacle to scientific progress. They model a community of research labs as a network of Bayesian bandits exploring two competing scientific theories and simulate their behavior. They measure the effect of varying the connectivity of the network, starting with one where all labs communicate with each other, all the way down to a network of isolated labs studying the same question simultaneously. They find that decentralizing the community by cutting off communication actually improves the chances that the truth gets discovered, without affecting how fast it happens. That is, it’s good for scientific progress.
They argue that this effect (which they call the “Zollman effect” after Kevin Zollman) is caused by social proof. When a scientist confidently endorses a position that others haven’t had the time to evaluate yet, other scientists connected to them will end up with greater credence in that position. Those scientists will then proceed as if the position were more likely to be true, and the whole endeavor will be biased by that assumption. If the position is false, the community as a whole becomes more likely to reach a false conclusion. Isolating some labs from each other prevents this from happening, increasing the chances that more of them independently discover the truth without being biased by earlier mistakes made by others.
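To make the mechanism concrete, here’s a minimal Python sketch of a Zollman-style model in the spirit of what Wagner and Herington simulate. The Beta–Bernoulli learning rule is the textbook setup for Bayesian bandits, but everything else is my own illustration: the two network layouts, the parameter values, the hypothetical `simulate` helper, and the success measure (at least one lab still endorsing the true theory at the end) are assumptions, not their actual code.

```python
# Illustrative Zollman-style network bandit model; parameters are made up,
# not taken from Wagner and Herington's paper.
import numpy as np

rng = np.random.default_rng(0)

def simulate(neighbors, n_labs=10, p_old=0.5, p_new=0.55,
             trials=10, rounds=300):
    """Return True if at least one lab still endorses the (truly better) new theory."""
    # Each lab holds a Beta(alpha, beta) credence in the new theory's success rate.
    alpha = rng.uniform(1, 4, n_labs)
    beta = rng.uniform(1, 4, n_labs)
    for _ in range(rounds):
        # A lab experiments on the new theory only while it believes the new
        # theory outperforms the old one's known success rate p_old.
        experimenting = alpha / (alpha + beta) > p_old
        if not experimenting.any():
            return False  # every lab has abandoned the true theory for good
        successes = rng.binomial(trials, p_new, n_labs) * experimenting
        failures = trials * experimenting - successes
        # Each lab pools its own evidence with its network neighbors' evidence.
        for i in range(n_labs):
            alpha[i] += successes[neighbors[i]].sum()
            beta[i] += failures[neighbors[i]].sum()
    return bool((alpha / (alpha + beta) > p_old).any())

n = 10
complete = [np.arange(n)] * n                 # every lab sees every lab's data
isolated = [np.array([i]) for i in range(n)]  # each lab sees only its own data

for name, net in [("complete", complete), ("isolated", isolated)]:
    runs = 500
    wins = sum(simulate(net) for _ in range(runs))
    print(f"{name}: truth endorsed at the end in {wins / runs:.0%} of runs")
```

With small samples per round, a streak of bad luck early on propagates through the complete network and can sink every lab’s credence at once, which is exactly the social-proof failure described above; an isolated lab can only sink itself.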
Their results are interesting, but real scientific practice is a lot more complex, and I wouldn’t be surprised if their model is blind to variables that undermine their conclusion. Thankfully, we don’t have to depend on it. In Meta-Research: Centralized Scientific Communities are Less Likely to Generate Replicable Results, Valentin Danchev, Andrey Rzhetsky, and James A. Evans produced a massive empirical study on the relationship between the structure of scientific networks and the accuracy of their outputs in biomedical science. These empirical results make the conclusions of Zollman, Wagner, and Herington seem a lot more credible.
Biomedical science is an attractive place to study the sociology and social epistemology of science not just because of its impact on human health, but because it has a robust tradition of annotating its publications. For many years, tens of thousands of papers published in this field have been digitally annotated with the specific chemical and biological interactions they observe, opening them up to large-scale analysis.
One of my favorite applications of these annotations has been in Tradition and Innovation in Scientists’ Research Strategies by Jacob G. Foster, Andrey Rzhetsky, and James A. Evans. They use the annotations to study trends in scientists’ research strategies: whether scientists choose to study biochemical interactions that have already been thoroughly studied before (the traditional strategy) or to study completely novel interactions and relationships (the innovation strategy).
Danchev et al. also use another resource available in the biomedical sciences: massive, automated experiments in which machines record the interactions of thousands of chemicals across different measuring instruments in parallel. They combine the annotations with the results of these experiments to study the relationship between the network of scientists behind a body of research and the replicability of the claims they endorse, a proxy for the accuracy of the scientific community in question.
They found that papers published by overlapping groups of authors were significantly more likely to agree on the existence and direction of a given drug–gene interaction (DGI). They also found that the more replicable DGIs discovered by these massive experiments are significantly more likely to have been endorsed by the scientific literature than less replicable DGIs. But most importantly for the question of progress and centralization, they found that centralization has a significant negative relationship with the replicability of published DGI claims. Given the amount of data at their disposal, this result lends significant empirical credibility to the theoretical conclusions of Zollman and of Wagner and Herington.
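To illustrate the kind of measurement involved, here’s a toy sketch in the spirit of their analysis: group claims by the DGI they report, build a co-authorship network for each DGI, and score how centralized that network is. The miniature dataset and the `degree_centralization` helper are hypothetical stand-ins of my own; the actual study operates on tens of thousands of annotated papers and uses richer network statistics.

```python
# Toy version of the measurement: co-authorship structure vs. claim agreement.
import itertools
import networkx as nx

# Hypothetical claims: (DGI id, reported direction of effect, paper's authors).
claims = [
    ("drugA-gene1", "+", {"Kim", "Patel", "Okafor"}),
    ("drugA-gene1", "+", {"Kim", "Patel", "Silva"}),   # overlapping team agrees
    ("drugB-gene2", "+", {"Ueda", "Novak"}),
    ("drugB-gene2", "-", {"Brown", "Garcia"}),         # independent team dissents
]

def degree_centralization(g):
    """Freeman degree centralization: 1.0 for a star graph, 0.0 when degrees are even."""
    n = g.number_of_nodes()
    if n < 3:
        return 0.0
    degrees = [d for _, d in g.degree()]
    return sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))

for dgi, group in itertools.groupby(sorted(claims, key=lambda c: c[0]),
                                    key=lambda c: c[0]):
    group = list(group)
    g = nx.Graph()
    for _, _, authors in group:
        # Connect every pair of co-authors on the same paper.
        g.add_edges_from(itertools.combinations(sorted(authors), 2))
    agree = len({direction for _, direction, _ in group}) == 1
    print(f"{dgi}: centralization={degree_centralization(g):.2f}, agree={agree}")
```

The toy contrast shows why the two findings fit together: agreement inside one tightly overlapping team is weaker evidence than agreement across disconnected teams, so a literature dominated by centralized communities can look unanimous while being less replicable.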
Yet another example is described in Does Science Advance One Funeral at a Time? by Pierre Azoulay, Christian Fons-Rosen, and Joshua S. Graff Zivin. They empirically study the effects of the death of star scientists on progress in the life sciences. By studying a dataset of citations from PubMed, they found that when elite scientists pass away, outsiders are significantly more likely to enter their field. Their conclusion is that these outsiders bring novel approaches to the field’s problems and make significant contributions, at least as measured by downstream citations. That is, the methods and members underlying a scientific community and its institutions become more diverse. As the models of Zollman and of Wagner and Herington and the empirical results of Danchev et al. suggest, decentralization appears to benefit scientific progress. I personally want these star scientists to keep on living, and I’m sure there’s a nonlethal way to overcome this centralizing effect—if it needs to be overcome at all, as Azoulay et al. speculate that this exclusiveness may be beneficial for the growth and stability of a new field.
Overall, these results suggest that the optimal degree of centralization might look less like a walled city with a few big castles in the center or a clubhouse where everyone is chummy, and more like a diverse, sparsely connected tapestry of cliques, and that it might be a good idea to add in some random noise past the filter of the elite members of a field.
Maybe some new evidence will undermine the entire impression I got from these results. All I can do is wring my hands at these massive machines of meat and money and hope they move science forward fast enough to flatten the risks that loom ahead.