The humanities, the sciences, and numbers

A few days ago I ran across a Scientific American blog post that struck me as interesting but somewhat disappointing: Humanities aren’t a science. Stop treating them like one. The writer, Maria Konnikova, begins by noting, quite reasonably, that precise, mathematical approaches to knowledge are not always appropriate. This idea that quantitative approaches aren’t universally applicable is repeated several times throughout the piece, but overall it sounds more like a “barbarians at the gate” polemic, only in this case the barbarians are the number-crunchers who are taking over the humanities. I was disappointed by this because I think there are a lot of interesting things to be said about when and where mathematical approaches should be used or avoided.

For starters, I think the point about excessive reliance on numbers and statistics and “hard science” techniques is valid in some cases, but it’s not limited to the humanities. Certain areas of the biological sciences in particular are just about as squishy and hard to pin down as anything in the humanities. I would bet that there are plenty of ecologists or animal ethologists, for example, who are quite interested in things that are not easily quantified. Mathematical models of ecosystems don’t tell the whole story (although that doesn’t make them worthless). I’d also guess that old-fashioned field work and qualitative observations are being downplayed to one degree or another in favor of more high-tech pursuits.

At the Consilience Conference in St. Louis this April, I heard several talks by humanists who use mathematics or statistics in their work. Granted, it’s a small sample, but all of them seemed sensitive to the lurking pitfalls and careful about defining the role of statistical work. One of the presenters, Jonathan Gottschall, has also conducted a statistical analysis of fairy tales, which I read about just before the conference. His goal in that study was to examine a particular claim about fairy tales: that European ones more or less uniquely reinforce a particular set of patriarchal gender roles, and more broadly that gender roles are much more influenced by nurture than by nature. He and his colleagues did this by a careful statistical analysis of tales from around the world. (Short answer: Overall, the portrayal of gender roles appears to be broadly similar worldwide. The study appears in the book The Literary Animal.)

In the introduction to that paper, he traced the use of statistics in the study of human populations and behavior back to the 1660s and pointed out that ever since they first began to be used, statistics have revealed unexpected or counterintuitive facts and relationships that help a field of study to grow and mature. He predicts that their “limited and judicious use” can do the same for literary studies and even improve the “power and precision” of more qualitative work. (The impression I get from reading his work and that of Joseph Carroll, perhaps the original literary Darwinist, is that, at least among their established colleagues, they still need to argue for wider acceptance of the idea that evolutionary science and statistical studies are appropriate to the study of literature. I’m no literary scholar myself, but it doesn’t look to me like they’re riding a wave of overwhelming approval or that their approach is steamrollering the humanities.)
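Out of curiosity, here is a rough sketch of the kind of cross-cultural comparison such a study involves: code each tale for whether it emphasizes a trait, then test whether two regional samples differ. All the numbers below are invented for illustration; this is not Gottschall's actual data or method, just the statistical shape of the question.

```python
# Hypothetical sketch: counts of tales coded as emphasizing a trait,
# by region. All numbers are invented for illustration.
from scipy.stats import chi2_contingency

# rows: regions; columns: [tales coded "patriarchal", tales coded otherwise]
counts = [
    [40, 60],   # hypothetical European sample
    [38, 62],   # hypothetical non-European sample
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
# A large p-value here would fail to reject the hypothesis that the
# two regional samples portray the trait at similar rates.
```

The point is only that "are gender roles portrayed similarly worldwide?" can be turned into a testable comparison once the tales have been coded, which is where the careful (and contestable) humanistic judgment actually lives.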

Konnikova seems skeptical that mathematical studies of literary works can enrich the field. Carroll, Gottschall, John A. Johnson, and Daniel J. Kruger recently wrote Graphing Jane Austen, a book that analyzes the characters in approximately two hundred 19th-century British novels. They asked literary scholars to fill out questionnaires about the characters, analyzed the results, and came up with insights into how these works reflect views on personality and gender roles and how they fit into an evolutionary understanding of human nature. It’s clearly not the only way to study literature, but it certainly seems worthwhile and even fascinating. Judging from the number of young people at the conference who seemed enthusiastic about the work of Carroll, Gottschall, and others, it looked to me like the humanities are maybe not so much being “relegated … to a bunch of trends and statistics and frequencies,” as Konnikova claims, as rejuvenated.
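To give a feel for the questionnaire approach, here is a toy sketch of the aggregation step: several raters score characters on a trait, and the scores are averaged by character type. The character labels, ratings, and scale are all made up; the authors' actual analysis is far more elaborate.

```python
# Toy sketch of aggregating questionnaire ratings of characters.
# All ratings and character labels are invented for illustration.
from statistics import mean

# rating scale 1-7 on a single trait; several raters per character
ratings = {
    "protagonist_a": [6, 7, 6],
    "protagonist_b": [5, 6, 6],
    "antagonist_a":  [2, 1, 2],
    "antagonist_b":  [3, 2, 2],
}

def group_mean(prefix):
    """Average all ratings for characters whose label starts with prefix."""
    scores = [r for name, rs in ratings.items()
              if name.startswith(prefix) for r in rs]
    return mean(scores)

protag = group_mean("protagonist")
antag = group_mean("antagonist")
print(f"protagonists: {protag:.2f}, antagonists: {antag:.2f}")
```

Even this trivial version shows the idea: once many scholars' impressions are pooled into numbers, you can ask whether patterns (say, protagonists scoring consistently higher on a trait than antagonists) hold across a whole corpus rather than in a handful of cherry-picked examples.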

In addition, the fact that numerical data are difficult to obtain or that they ignore parts of the picture doesn’t make them useless. Konnikova quotes Carol Tavris on how psychological researchers often ignore the ways their findings might be affected by factors such as social class, culture, or personal history. This doesn’t mean that those things can’t be taken into account in future studies; the results may be valuable even if they’re still imperfect. Human societies and human individuals are “fuzzy” at all levels of analysis, and there is a danger in pretending that things are more hard-edged than they are. (For example, any psychological study on the behavior of five hundred 20-year-olds at a midwestern university is not giving you anything like the whole picture on human nature, and there’s even some suggestion that people in western industrialized nations are not necessarily all that representative of humankind in general.) But then, human bodies are by no means uniform and their operation is far from clear-cut, and it’s still worth doing medical research to try to arrive at general truths, even if those must then be refined.

Along the same lines, Peter Turchin spoke at the conference about Cliodynamics (history as science; the name comes from the muse of history, Clio). He began by describing part of his motivation for using a data-driven, analytical approach. Historians have pretty much stopped proposing general laws of history, but when they construct a historical narrative, they also propose explanations for why things happened as they did. They may not articulate general laws for, say, how an empire falls, but Turchin feels that the laws are lurking there implicitly in these explanations. The very richness and complexity of historical information make it difficult to reason by historical analogy when seeking causes (too many plausible stories can be constructed with no way to test them).

He thinks the only way forward is to build general models and test them using historical data. He noted that you have to be careful when searching for appropriate proxies for whatever it is you want to study (e.g., he used the number of filibusters as one way to get at whether America’s social capital is declining), and how hard it can be to find usable long-term data; Konnikova also mentions this point, but Turchin described some ways to work within these limitations (relying on multiple proxies and checking them against each other, for example). The difficulties don’t invalidate the approach.

I suspect that my main problem with the SciAm blog post was the confusion over what the sciences are and do compared to the humanities. It seemed to identify numerical analysis solely with the hard sciences (even though psychology and sociology have relied heavily on statistical studies for decades) and the hard sciences more or less entirely with precise mathematical methods (even though the biological and geological sciences, for example, have relied on qualitative and descriptive methods). (It also wasn’t clear what counts as a social science and what counts as the humanities.) There was also some confusion about whether math makes things harder or easier. Math supposedly makes things seem tidy, linear, and easily graspable, even if they’re not, but then political science and psychology rely too much on “fancy statistics.” My limited exposure to statistics has been enough to tell me that sophisticated statistics are not necessarily simple or clean-cut, either in their application or in their interpretation (although I will admit that careless news stories sometimes make them seem that way). There’s also quite a lot to be said about the role of imagination, intuition, and similar intangibles in the scientific process. The humanities indeed are not the sciences, but there’s more common ground (and a far more intricate and complex set of similarities and differences) than came through in that post. I’d love to hear from humanists and scientists both about their perceptions of their fields.


  1. As a humanist who is starting to do more and more with numbers, I think your take here is fairly on-point.

    I do understand the reaction in the SciAm blog post. A lot of other fields (mathematics, psychology) are starting to take literary history as an object of study, and when they do, the results are often dismayingly shallow. Literary scholars are right to push back against that.

    But there’s also a real potential here for real discovery, when quantitative methods are combined with an appropriate sort of humanistic caution. And I think you’re absolutely right to stress that the boundaries between science/social science/humanities are inherently blurry. For that matter the qualitative/quantitative boundary itself is more blurry than we usually acknowledge.

  2. Thanks for the feedback! I’m glad to hear from someone inside the humanities who’s familiar with this question, and your work looks interesting. I know there are various efforts in the digital humanities going on at Indiana University too, and it’s good to get an inside look at some of the questions and challenges involved in that type of approach.

  3. Focusing on history, which I know best, I believe that a lot of historians take offense at remarks like “the only way forward” by Turchin (I know it is not a direct quote, but I mean the idea). It seems to imply that current historical practice is not yet up to its full potential and needs cliodynamics to get there. This, I believe, is indeed not an accurate description of the field of history: the field is quite successful in giving explanations. Historical narratives, qualitative analysis if you want, can and do explain perfectly well. On the other hand, I think it is a bit of a straw-man argument to say that all cliodynamicists think that historians don’t give complete explanations. Most, though not all, cliodynamic historians I have met – I am a trained philosopher of history from the Netherlands – have a pluralistic view about these matters. They see their sub-field as one that is just as important as others. As such, I believe that history as a field resembles biology in that both approaches, qualitative and quantitative, can lead to new and important insights. The only addition I would like to make to this nuanced view is that giving qualitative explanations of historical phenomena is precisely the strong point of traditional historians. Cliodynamics just doesn’t have the same track record that conventional narrative accounts do.

  4. Interesting, thanks! That’s a great point about the need for a variety of approaches. I hope I haven’t inadvertently misrepresented Peter Turchin’s views; they came to you filtered through my best understanding as a curious outsider. One of his points about explanations, as I recall, was that they tend to multiply, but the poorer ones are not necessarily eliminated, which is where he seemed to think that Cliodynamics might contribute. The question of what progress means in different fields (in terms of the balance between accumulation and elimination) came up in a couple of the other talks as well; it seems like a really interesting question for any kind of cross-disciplinary work.

  5. I get the impression that Konnikova is mostly just feeling threatened… It’s hard to guess to what extent that is due to valid concerns about whether quantitative techniques are appropriate, and to what extent it’s due to the threat quantification poses to what we might charitably call “intellectual freedom” or less charitably call “making things up”.

    There are a couple of more direct problems here. First, I’m not sure if Konnikova understands how science works; she’s conflating “science”, or at least “hard science”, with a strictly quantitative view of the world. Instead, I would say that science is simply concerned with empirical understanding of the world. To the extent that the humanities are concerned with comprehension of texts, art, societies, etc., they either already are a science, or are non-empirical (i.e., “making stuff up”). There are also, of course, more creative aspects to the humanities (writing fiction, creating art, etc.) that are clearly not science and in which making things up is perfectly appropriate. Empirical understanding of the world should always, at least in theory, be amenable to quantitative analysis; however, that doesn’t mean it should consist solely of quantitative analysis, or that quantification is always the right approach. For instance, in evolutionary biology the question of whether certain traits (generally social traits) are better understood through the concepts of kin selection or of group selection has gotten some attention recently. At this point, there are quantitative models that can treat these traits equally well (at least, so far as I can tell) in either conceptualization. So the question isn’t really a quantitative one, but a question of which conceptualization is more coherent, broadly applicable, and likely to yield new insights. Modern physics also deals with similar issues; in areas of quantum mechanics where the math is pretty well settled, for instance, there are disagreements about what the math *means* and what kind of conceptualization of the world is best drawn from our quantitative understanding.

    The other worrying thing here is something you’ve pointed out above–yes, the humanities are often sufficiently complicated and “fuzzy” that quantification is not easy or straightforward. That’s neither unique to the humanities nor a good reason not to try. Ecology in particular is a real pain to try to understand quantitatively because you’re dealing with very complicated interactions between many different individuals of different species, all of their interactions with the environment, etc. Yet, there’s a lot of good quantitative research going on in ecology. It just isn’t easy and has limits that need to be understood. You can’t model an entire ecosystem, for instance, so you adopt various simplifications and incomplete models of what’s going on, and you have to keep in mind that these simplifications may be close enough to yield a pretty good understanding of what’s going on, or may be completely unrealistic and unhelpful. It’s still worth the effort because quantification doesn’t create these problems, it just makes them more explicit and easier to evaluate! Complicated phenomena don’t magically become simple and comprehensible if we refrain from using math. Instead, without quantification we’re likely to find ourselves with multiple, competing qualitative models that cannot be meaningfully compared or evaluated because they are not tied to real observations in any straightforward and objective fashion. Quantification isn’t going to completely fix that (using math isn’t magic, either), but it can at least reduce the scope of disagreeing interpretations to those that are consistent with what we know about reality.
