The Personal Blog of Stephen Sekula

Thoughts on “The Economist” article on reforming science

I have a subscription to “The Economist,” but I’ve been so busy lately that I’ve neglected the last few issues. So it was with great interest that I learned from an acquaintance of mine that they recently printed an article entitled “How Science Goes Wrong: Scientific Research Has Changed the World. Now it needs to change itself.” I finally had a chance to sit and read the article this morning. Here are my thoughts.

First – what is “science”?

Before a discussion of the article can commence, one needs to first define “science” – failure to do this clearly, as is the case in “The Economist” article, means that arguments made from the undefined term are, at best, guilty of being based on equivocation. “Science” is a process of making observations, proposing testable and falsifiable explanations, testing the explanations, assessing the tests, and disseminating the outcomes for continued testing, verification, and (if useful) application. In essence, “science” is an error-correcting framework for establishing reliable explanations of natural phenomena. It’s an ideal to which anyone seeking a reliable body of knowledge will strive.

It’s also important to explain what science is not. Seeing as today is “Carl Sagan Day,” I’ll begin with an insightful quote from this communicator of science: “Science is more than a body of knowledge; it’s a way of thinking.” Science is not a collection of facts. Science is not something done only by “scientists,” though scientists practice science and strive to achieve its ideal framework and goals. Science is not defined only by what is printed in journals, presented at conferences, or repeated in sexy headlines by the media. It is, rather, a framework that includes those things – facts, observations, publications, presentations – but is bigger than that, ever seeking to correct for mistakes in any of its parts. That may take days, or decades, or centuries.

The Economist Article – a summary

If I were to summarize the article, I would do it thus. They rightly point out that, at the heart of science, we have the concept of “trust, but verify.” The author then argues that science’s success has made it complacent, admitting more bad research than good research. They argue that so much bad science squanders resources. They argue that competition for limited funding has forced scientists to trump up any result in order to grasp a piece of the funding, pushing results that go unverified. They note flaws in review of results. The author suggests remedies, such as improving mastery of statistics, publishing protocols ahead of conducting the research itself, and encouraging replication of results.

My thoughts

I don’t disagree with the suggestions that the author makes for improving aspects of the practice of science; we already know that each piece is flawed, and only the entire framework, applied consistently and continually, can identify the errors in the parts. Particle physics has taken the mastery of statistics more seriously in the last few decades, and this has led to tremendous revolutions not only in sifting tenuous from reliable results, but also in the level of discourse in the field about statistics and its intricacies. I find, comparatively, a lot of medical literature laughable in its naivete when it comes to statistical analysis and confounding factors in an experiment. That isn’t to say there is only weak medical science – in fact, the best medical science is marked by its deep appreciation for and application of statistics (just as the best science in any field is marked by a complete respect for the best data analysis practices). I agree that protocols should be made more public, and I especially agree that the publication and funding system should encourage more replication.
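To make the particle physics example concrete: the field’s convention is to grade evidence in “sigma,” converting a p-value into an equivalent number of Gaussian standard deviations. A result at the common p < 0.05 threshold used in much of the medical literature corresponds to only about 1.6 sigma, while a discovery claim in particle physics demands roughly 5 sigma. This is my own illustrative sketch, not anything from the article:

```python
from statistics import NormalDist

def p_to_sigma(p: float) -> float:
    """Convert a one-sided p-value to a Gaussian significance ('n sigma')."""
    return NormalDist().inv_cdf(1.0 - p)

def sigma_to_p(n_sigma: float) -> float:
    """Convert a Gaussian significance back to a one-sided p-value."""
    return 1.0 - NormalDist().cdf(n_sigma)

# p = 0.05 is only about 1.64 sigma ...
print(round(p_to_sigma(0.05), 2))
# ... while a 5-sigma discovery threshold demands p ~ 3e-7.
print(sigma_to_p(5.0))
```

The gulf between those two thresholds is one way to quantify the difference in statistical rigor between fields that the paragraph above alludes to.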

However, The Economist is truly guilty of many fallacies in constructing the argument that science is in trouble. Primarily, they mistake science for its parts. Science is greater than the sum of its parts, because it is a long-term, error-correcting framework that seeks to correct the flaws in its steps. Good ideas survive because they turn out to be useful, not because they are merely published in a journal. It’s nearly impossible to determine usefulness ahead of funding, publication and dissemination, and this is where The Economist truly misses the point: you must ALWAYS admit a high rate of publishing weaker or ineffective ideas, so that the community as a whole can go through the process of assessing and then trying to apply them. If you already know your idea is good, why bother with science? Science is a process to establish reliability, and to determine that you have to see it through.

The author also cherry-picks his way through the entire landscape of science, noting that “prominent” members of a couple of fields decry the level of poor results in journals in their fields. Every scientist in every generation in every field has, at some point, been able to complain about this; the pace of publication is accelerated by our modern toolkit, but that is no excuse to whine about the number of bad results in journals. Bad results have ALWAYS been published in journals or presented at conferences; what matters is that the process of science is still able to sort out, after the fact, the truly useless from the truly useful. Progress results. Progress in science has not stopped; in fact, if anything, the pace of progress is exponential and positive in many areas. That, more than anything, tells me that science is healthy. It still works. Yes, bad ideas get out there; but what matters is that the few good ideas also get out there, they get used, and they lead to new knowledge. You fundamentally cannot have the good ideas without the bad.

This process has a price. Since it’s impossible to stop every bad idea at the point of publication, one must allow the entire process of science to proceed. Journals can do a better job, but they cannot do a perfect job – if journals were the sole arbiter of truth, one would not need the scientists in the first place. I’d rather that LOTS of ideas get published so that we can catch the few good, useful ones, rather than risking the best ideas solely in the name of reducing the publication rate. Good ideas might be killed prematurely as a result, and that is unacceptable.

Regarding the incentives to cheat to get funding, all systems encourage cheating because no system is perfect. Personally, I admire the academic review and faculty tenure system; it’s brutal and frustrating and grueling and irritating . . . just as it should be. Universities and colleges invest tremendous resources in their faculty, in order to gamble for a few major breakthroughs and a long commitment to excellent teaching. They have every right to expect that this investment is returned, through grants and publications. A physicist, depending on their field, might expect to get a faculty start-up package anywhere between $100k and $5M – yes, that’s right . . . millions of dollars. That’s a serious investment in equipment (labs, machines, supplies, etc.) and people (students, post-docs, staff, etc.). It’s incumbent on the faculty member to utilize those resources to launch a successful research program, and that is stressful. The University expects to have the investment returned when the faculty member, using the start-up as seed money, draws in research grants. It’s ironic that The Economist criticizes this system, since it’s not terribly different from a business investment model that begins with investor seed money and ends with taking the company public. Faculty have to sell their results, that is true; but companies have to sell their product, and in both cases hype is inevitable. What matters is not the hype – what matters is the usefulness of the results; scientists are BRUTAL when they detect all hype and no substance. Sure, that process can take years, or decades. Find a better system for investing and generating return on investment, with error-correction built in, and we can talk.

The job of the scientist

The job of the scientist is exhausting. You need to advance your own ideas while critically appraising the results of others. It’s a bloodbath. But it’s rewarding, because in 1 year, or 5 years, or 20 years, you’ll have a more reliable body of knowledge than you have now. Maybe you’ll even get to answer one of those pressing questions that drew you into your field in the first place.

That is the goal of science – reliable and useful knowledge. A single result could be crap, or it could be genius – only the meat grinder of the scientific method will sort that out. Scientists are people, and people make mistakes. But science gives us an ideal framework for sorting fact from crap, and we strive to that ideal.

It would be suicide for human knowledge to slow the trickle of bad ideas, because there is no perfect solution for identifying only the good ideas while ignoring the bad ones. Oh wait . . . yes there is such a solution . . . and it’s called “science.” It is practiced by imperfect people, but it’s the best system ever established by our species for generating reliable information. Can it be improved? Sure. Does it need to change? Only if you like your facts to be more useless.

[1] http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong