Peer Gibberish

Well, this is an embarrassing thing to see: according to Nature, the publisher Springer and the Institute of Electrical and Electronics Engineers (IEEE) have had to withdraw at least 120 papers from their subscription services, because the papers were gibberish produced by a program, SCIgen, that strings together words and phrases into computer-science-ish texts. SCIgen and this sort of thing are meant for fun (Nature also linked to arXiv vs snarXiv, which lets you try to figure out whether titles are actual preprints on the arXiv server or gibberish), but such nonsense papers have been accepted for conferences or published in, typically, poorly-reviewed forums, to general amusement and embarrassment when it’s noticed.
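SCIgen works by expanding a context-free grammar: a symbol like SENTENCE expands into templates whose slots expand in turn, until only plain words remain. Here is a minimal sketch of that idea in Python; the tiny grammar below is my own invention for illustration, not SCIgen’s actual rule set, which is far larger.

```python
import random

# A toy grammar-driven gibberish generator in the spirit of SCIgen.
# These phrases are made up for illustration; SCIgen's real grammar
# is much larger and lives in its own rule files.
GRAMMAR = {
    "SENTENCE": [
        "We present NOUN, a ADJ framework for NOUN.",
        "Our evaluation of NOUN proves that NOUN is ADJ.",
    ],
    "NOUN": ["lambda calculus", "write-back caches", "the Turing machine",
             "Boolean logic"],
    "ADJ": ["scalable", "stochastic", "metamorphic", "amphibious"],
}

def expand(token, rng):
    """Recursively replace grammar symbols until only plain words remain."""
    core = token.strip(".,")          # peel punctuation off the symbol
    if core not in GRAMMAR:
        return token                  # ordinary word: leave it alone
    production = rng.choice(GRAMMAR[core])
    expanded = " ".join(expand(w, rng) for w in production.split())
    return expanded + token[len(core):]   # restore the trailing punctuation

rng = random.Random(17)
print(expand("SENTENCE", rng))
```

Each run produces a grammatical-looking but meaningless sentence, which is exactly why skimming reviewers can be fooled: the local structure is fine even though nothing is being said.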

I’m sympathetic to the people who were supposed to review these papers. It’s hard reading any kind of academic paper, for one. They tend to be written with the goal of presenting novel findings efficiently; whether they’re pleasant to read isn’t a factor. (I wouldn’t be surprised if authors had no idea how to write so as to be enjoyable to read, either. I didn’t get any training in writing-to-be-read and I don’t remember seeing courses in that.) It’s also very hard to read something outside your specialty: the terminology and vocabulary and writing styles can be ferociously localized. Just today I was reading a WordPress post which started from the equations Euler used to describe the flow of viscosity-free fluids, which was at the heart of my thesis, and before eight paragraphs it had got into symbols I barely recognized and into points I’ll need to re-read and re-think before I can grasp them. And reviewing papers is really unappreciated; the best you can really hope for is to dig deep into the paper and understand it so thoroughly you can write a better version of it than the authors did, and so be thanked for making perceptive criticisms when the revised version of the paper comes out. The system makes it too easy to conclude something like “well, I don’t really have the time to understand all of this, but on skimming it I don’t see anything plainly offensive to all persons, so, it probably makes sense to people who are looking for this kind of paper” and go on to a more pressing deadline, and I admit I don’t have a better system in mind.

I’m also reminded of a bit of folklore from my grad school days, in a class on dynamical systems. That’s the study of physics-type problems, with the attention being not so much on actually saying what something will do from this starting point — for example, if you push this swing this hard, how long will it take to stop swinging — and more on what the different kinds of behavior are — can you make the swing just rock around a little bit, or loop around once and then rock to a stop, or loop around twice, or loop around four hundred times, or so on — and what it takes to change that behavior mode. The instructor referred us to a paper that was an important result but warned us not to bother trying to read it, because nobody had ever understood it from the paper. Instead, ever since the paper first appeared, people have understood it by having the salient points explained by others who’d had them explained in turn, all the way back to the first understanders, who got it from the original authors, possibly while talking mathematics over at the bar. I’m embarrassed to say I don’t remember which paper it was (it was a while ago and there are a lot of key results in the field), so I haven’t even been able to figure out how to search for the paper or the lore around it.
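The swing example can be made concrete with a crude numerical experiment: take a damped pendulum (in scaled units, theta'' = -sin(theta) - gamma * theta'), give it kicks of different strengths, and count how many full loops over the top it makes before rocking to a stop. The parameter values below are arbitrary choices of mine, just to expose the different behavior modes, not anything from the lecture in question.

```python
import math

def count_loops(v0, gamma=0.2, dt=0.001, steps=200_000):
    """Integrate a damped pendulum theta'' = -sin(theta) - gamma*theta'
    (semi-implicit Euler), starting at the bottom with angular velocity v0,
    and count how many full revolutions it completes before settling."""
    theta, v = 0.0, v0
    for _ in range(steps):
        a = -math.sin(theta) - gamma * v
        v += a * dt
        theta += v * dt
    # The pendulum settles at the bottom equilibrium, theta = 2*pi*n,
    # so the final angle records how many net loops it made.
    return int(abs(theta) // (2 * math.pi))

for v0 in (1.0, 2.5, 4.0, 8.0):
    print(f"kick {v0}: {count_loops(v0)} loop(s)")
```

A weak kick just rocks; past a threshold the pendulum starts going over the top, and harder kicks add more loops before damping wins. Mapping out where those thresholds sit, rather than predicting any single trajectory, is the dynamical-systems point of view the class was after.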


Author: Joseph Nebus

I was born 198 years to the day after Johnny Appleseed. The differences between us do not end there.

2 thoughts on “Peer Gibberish”

  1. I have enjoyed this article as I have always been a fan of the Sokal hoax.

    When I had to review articles at the university these were only from my own narrow niche of specialization. I probably did what the reviewers of my papers also did: Zoom in on my pet topics and check for things that I knew from experience were difficult to measure or difficult to reproduce (it was experimental physics).

    It would be interesting to know how much the focus of papers to be reviewed deviates from the specialization of reviewers – and whether there is a measure for that, a measure of specialization / deviation. I guess interdisciplinary research is then the most difficult to evaluate – as the Sokal hoax also proved.


    1. I’m of mixed minds about the Sokal hoax, and of related stunts like the SCIgen papers that’ve been slipped in. Part of me really appreciates pranks and hoaxes: putting aside the comic value of slipping nonsense into the sensible, they also provide a way of testing that a person, or group, or organization is not just reading things but thinking critically about them. If too much nonsense is passing through one’s filters, that suggests a problem with the filtering.

      On the other hand there’s something mean-spirited about pranks meant to deceive without enlightening. And the way (some) “hard” science people get snobby about the “soft” fields is truly obnoxious, since, as the SCIgen thing shows, the “hardness” of a field hasn’t got much to do with whether peer review is functioning the way it’s supposed to.

      I think you’re right that it’d be interesting to measure how much the specialization of reviewers correlates to poorly-conducted peer review. I’m not sure just how to quantify a reviewer’s specialization, much less a paper’s, but I think I can sort of imagine ways to estimate both.

