Publicity is justly commended as a remedy for social and industrial diseases.
Sunlight is said to be the best of disinfectants;
electric light the most efficient policeman.

Louis D. Brandeis,
United States Supreme Court Associate Justice from 1916 to 1939, in
“Other People’s Money and How the Bankers Use It” (1914), Chapter 5

Two years ago I started a personal experiment in transparency; I began attaching my name to every peer review I wrote for scientific journals and conferences. A strong believer in the tenet that power is best tempered by transparency, I had become uncomfortable with exercising the power to evaluate others’ research while masking my identity from them. I believed that transparency would help me to review unto others as I would have them review unto me.

Indeed, knowing that authors could identify with certainty which reviews I had written, and which I had not, changed how I evaluated papers and how I communicated my evaluation. Transparency made me more mindful of the subjective biases I bring to the review process. I became more aware of the tone and content of my reviews. I also found myself spending more time thinking about the fundamental goals of peer review, the norms for the practice of peer review within our community, and whether the norms that evolved in our community—many dating back to a time when sharing written documents and engaging in written discussion were both slow and expensive—are still in the best interest of scientists and science as a whole.

I’ve written this essay to share my experiences reviewing transparently and my evolving view of the state of peer review, and to explain why I cannot return to reviewing research papers anonymously.

Where did peer review go wrong?

Peer review helps keep the scientific community honest.

While reviewing cannot prevent the maliciously dishonest from forging results and intentionally misleading the public, the process provides an opportunity to catch inadvertent errors and helps to identify situations in which we researchers may mislead ourselves and, as a result, unintentionally mislead others. Peer reviewers help to prevent researchers from making claims that exceed what can be safely concluded from an experiment. If an experiment isn’t documented with sufficient clarity to allow its claims to be reviewed, reviewers can require that authors document and clarify the research methods used. If authors claim to have proven a hypothesis while failing to disprove a viable alternate hypothesis, reviewers can require their fellow scientists to explain this limitation before the work is published.

This process of verifying scientific claims does have some level of subjectivity; reasonable scientists may differ as to what details of an experiment are essential to document or as to the plausibility of hypotheses not tested by an experiment. Fortunately, few authors find it too onerous to add methodological details to a research paper or to document additional hypotheses that may explain an experimental result. They’re usually just happy when their papers are accepted for publication.

But peers review for more than scientific accuracy.

We in computer science also use peer review to evaluate much more subjective criteria. Are the ideas being tested in the experiment novel? Is the outcome surprising or expected? Are the hypotheses interesting? Are the results interesting? Are the ideas mature enough to have an impact on practice, or do they require further refinement? Program committees may reject work using any of these subjective measures.

Review is used to keep score, not just check facts.

The temptation to reject papers on subjective measures is high, as the top publication venues in computer science tend to be conferences with an inflexible (if not entirely fixed) number of speaking slots and highly variable numbers of submissions. This makes acceptance to conferences a zero-sum game with competitiveness similar to that of college admissions, graduate admissions, and job searches. However, unlike in admissions and job searches, the referees are often evaluating the work of the very individuals they, or their students, are competing against.

http://academicnegativity.tumblr.com/image/38227917628

To make matters worse, the conference submission game differs markedly from admissions and job searches in the tone and level of courtesy afforded to those who lose a given round. Reviewing the decidedly large sample of rejection letters from my experience applying to graduate school and to research jobs, I have observed that they are remarkably consistent in their level of courtesy. For scientific paper submissions, courtesy usually ends at the bottom of the message from the editors or program chairs. Reviews may be hostile or angry. All too often, reviewers jump to conclusions and summarize the failings of the authors rather than focusing on the paper. All too often, reviews lack constructive ideas for improving the work so that it can be accepted elsewhere.

Impersonal reviewing can lead to discourteous reviews.

When we do not know who will receive our peer reviews and authors will not know which reviews we wrote, we have a harder time being diligent about courtesy than we do in face-to-face interactions or other more personal forms of interaction. The impunity of anonymity and the lack of feedback from prior errors make it easier for us to mistake our subjective opinions for objective facts that should have been obvious to authors. I’ve seen colleagues who are extraordinarily fair and kind in person, who devote a great deal of energy to reviewing papers and write reviews with the best intentions, yet will critique papers as ‘tedious’ and ‘uninteresting.’ In a more personal interaction, reviewers would be more likely to recognize the subjectivity of concluding that work is ‘uninteresting’; they would presumably recognize that the people they were speaking to had clearly been interested enough in the topic to research it and write up their findings. I suspect my well-meaning colleagues would re-evaluate their choice of words if the interaction were more personal—such as if they knew they would receive a video of the authors reading their peer review when it was delivered.

While some program chairs make a concerted effort to weed out the most egregiously subjective, incompetent, or hostile reviews, this alone is not sufficient to address the problems that result from the social distance between reviewers and authors. Since this distance affects the great majority of reviews, our baseline levels of courtesy and constructiveness will not shift until we, as a research community, come to terms with our biases and work collectively to police ourselves.

http://academicnegativity.tumblr.com/image/34714410768

My experience reviewing transparently

I’ve found that reviewing transparently encourages me to think of authors less as adversaries and more as collaborators—though, importantly, as informal collaborators in whose work and individual success I have no personal stake. Rather than viewing myself as the guard that prevents work below some threshold from being accepted, I see my role as providing authors with the information they need to get their work above that threshold.

I evaluate authors’ work with the knowledge that the authors will inevitably be evaluating me, and their evaluations will depend on how accurate and constructive I can make my feedback. Taking a more collaborative view of peer review has not diminished my ability to identify hidden flaws and undocumented limitations of the research I review, but I now feel obligated to invest more thought into providing suggestions that could help resolve these flaws and overcome these limitations.

I try to be kinder when words can be attributed to me.

To improve the tone and civility of reviews, I’ve found it helpful to focus on specific behaviors. This past year my goal was to ensure that my concerns addressed the submitted document and not those who wrote it. For example, if I thought the document should have cited a particular prior work, I tried to explain my concern as an error of omission of information that would benefit the reader, and not as evidence that the author must have been ignorant of it. Training myself to identify reviews that critique authors where they should be critiquing the work, or that make assumptions about the authors based on the work, has made me aware of how common such author-focused critiques are.

Writing kinder words doesn’t mean I have to advocate accepting more papers than I did before, but if authors receive a rejection from me they should learn what I liked about the paper and receive suggestions for making it better.

Fear of being wrong forces me to double-check facts.

Since I began reviewing transparently, I have noticed myself double-checking (and triple-checking) the facts behind statements in my reviews in situations where I previously might not have done so. I have also found that I police myself more diligently against other forms of laziness.

Laziness has become a particular concern in security, my research area, as the number of submissions to some conferences has grown by many tens of percent in the past year. On this topic, one fellow program committee member who reached out to express concern about my transparent reviewing conceded that “lazy reviewing is fairly common” but countered that “the reviewing workload is also very high and lazy authoring is also fairly common!” Such rationalizations perpetuate laziness by both authors and reviewers. Being on a program committee can indeed require a great commitment of time, but it is also an honor that is accepted voluntarily. Reviewers should put as much time into reviewing each paper as they would want others to put into reviewing their own. If they think they may be unable to commit the time, they should either negotiate a load they can handle or decline the position.

I avoid using subjective measures against papers.

Being transparent has made me much less comfortable with the subjective side of the peer review process. I’ve always been suspicious of evaluating the novelty of ideas; well-designed and well-executed experiments are far rarer than unpublished ideas. Science often proceeds in small increments that go mostly unnoticed until crossing some previously undiscovered threshold at which an obscure line of work suddenly becomes critically important to the whole community. Consider, for example, research on alternate payment systems at conferences like Financial Cryptography. I had assumed much of this research to be irrelevant a few years ago, but Bitcoin proved me (and many others) wrong. Similarly, had I been a reviewer at SIGCHI in 2005 and been presented with a paper about a blogging service that restricts posts to 140 characters as a ‘#feature,’ I suspect I would have found the prospect laughable.

Because I recognize the limitations of my own subjective judgments, I am very unlikely to take the position that a paper should be rejected on subjective measures alone. In practice, this rarely leaves me needing to champion a paper that I find irrelevant, boring, or otherwise subjectively objectionable. I suspect this is because subjective shortcomings appear to correlate with objective ones, though I can’t discard the hypothesis that I’m simply biased toward finding objective shortcomings in papers I already find lacking. This is why I often remind myself of my track record of predicting which blogging systems and alternate payment systems would prove practical and relevant.

Others’ concerns with transparency

Those who review transparently will inevitably have colleagues who are accustomed to the social norm of reviewing ‘anonymously’ confront them with concerns. Those who come to me with an open mind and a willingness to question long-held beliefs usually come to appreciate (though not necessarily adopt) my point of view. I’ve tried my best to list and address common concerns below.

Transparent reviewers decrease others’ anonymity.

The primary concern I’ve received regarding transparent reviewing is that, by revealing my identity, I am making it harder for others to remain anonymous. Those concerned often assert that my actions thus harm other reviewers.

However, volunteering to join a program committee as a transparent reviewer does not make others less anonymous, as the set of anonymous reviewers (the ‘anonymity set’) remains unchanged when a transparent reviewer is added. Rather, transparent reviewers can only be faulted for failing to help grow the anonymity set. As such, conferences and journals that ban transparent reviewing do not provide greater anonymity for their reviewers. All such policies accomplish is to prevent program chairs, other reviewers, and authors from hearing the opinions of those who review transparently. In short, forbidding transparent reviews silences those who disagree with the social norms yet provides no benefit.

That said, I’m sympathetic with reviewers who are not comfortable reviewing transparently, especially if another reviewer and I are drawn from a small number of experts on the topic of a paper. When authors receive two reviews from two reviewers who claim to be experts, and my identity is not hidden, the authors may become quite confident that they know who the other reviewer is. (Again, the same would be true if I were not a reviewer, and the other reviewer was the lone expert.) To assist other reviewers in such cases, a transparent reviewer can not only write his or her own review, but can work with other reviewers to write up a consensus review.

Because situations in which only a small number of reviewers have expertise or interest in a topic are common, reviewers are often less anonymous than they may assume. Anonymity presents an enticing puzzle that invites speculation among authors — researchers who are likely to be of a curious nature and good at deductive reasoning. Senior researchers learn to recognize the style of others’ reviews by serving on program committees with them—they may do so unintentionally or even subconsciously. Reviewers may not realize that they have unique behaviors that identify their reviews to others (sometimes referred to using the poker term ‘tell’). They may be unaware that many senior authors can recognize the reviews of those they’ve served on program committees with in the past. They may be unaware of the ability of authors to use stylometry to identify review authorship. The false sense of security that junior researchers feel when reviewing ‘anonymously’ can lead them to make bad decisions.

One under-appreciated risk of reviewing ‘anonymously’ is that authors may hold a conscientious reviewer in suspicion for, or even blame the reviewer for, the review of a less conscientious reviewer. A reviewer can find themselves being used as a human shield, with other reviewers shooting down the work of rivals while trying to make their review look like it came from someone else.

Thus, no matter how conscientiously reviewers approach their role, those who review ‘anonymously’ risk being suspected of actions they neither committed nor condone. Suspicions can cause as much damage as the truth. In some ways, suspicion can be more pernicious than knowing the truth, because uncertainty prevents authors and reviewers from discussing their opinions and finding common ground, causing misunderstanding and resentment to build over time.

One protection afforded to those of us who make a commitment to review transparently is that we remove opportunities for suspicion and speculation. First-time authors and longtime insiders are on equal ground when it comes to establishing who wrote my reviews. When I hear members of the community griping about the quality of anonymous reviews, I don’t have to worry that they suspect I might have been responsible for them. With my reputation for transparent reviewing established, I only fear disdain or retaliation for the offenses I’ve actually committed.

Transparent reviewing burdens program chairs.

Allowing some reviewers to eschew anonymity does make tracking anonymity sets more challenging for program chairs and can make it harder to find enough ‘anonymous’ reviewers. If a reviewer asked for an extra review on a paper because mine failed to grow the anonymity set, I would volunteer to make up for the additional required review by reviewing an additional paper. I’d suggest other transparent reviewers do the same.

Transparent reviewers are more easily biased or coerced.

Some are concerned that those who review transparently will write more favorable reviews of works written by those with power over them—those likely to evaluate the reviewer’s future paper submissions or those who may evaluate the reviewer’s candidacy for a job, promotion, or award.

One way to reduce bias is to blind the identities of authors whenever feasible. Like reviewer blinding, author blinding does not guarantee anonymity, and sometimes authors’ identities are all but impossible to hide. However, as it is reviewers who hold power over authors for a given submission, blinding authors’ identities by default helps to reduce, rather than magnify, that difference in power. Program committees may have good reasons for revealing the identity of authors to reviewers, such as ensuring that authors aren’t being punished for not citing their own work, detecting authors who submit the same material to two publications at once, and discouraging submissions that are not yet ready for review. To facilitate acceptable uses of author identification, program chairs can unblind authorship after reviewers have submitted their evaluations and after the program committee has made its preliminary decisions. Any changes to reviews and outcomes can then be carefully monitored to make sure authorship information is not abused.

Transparency makes other reviewers uncomfortable.

Some argue that transparent reviewers may make other reviewers less comfortable being anonymous. Since transparent reviewing is rare, transparent reviewers often feel obligated to include not only their name in their reviews but also a brief explanation of why they are reviewing transparently, so as to reduce the discomfort of authors unfamiliar with the practice. Those accustomed to reviewing ‘anonymously’ may, in turn, become less comfortable having to explain their conformance to the norm.

Indeed, my experiment with transparent reviewing began with the discomfort I felt reviewing ‘anonymously’ once I learned that respected colleagues were attaching their names to their reviews. I had a hard time justifying the choice to review ‘anonymously’ to myself. Even now that I review transparently, the power of peer review still gives me a certain level of discomfort. However, if a small amount of discomfort leads to more individual introspection on our roles in peer review, and more open discussion of how to improve peer review, we all stand to benefit. While I support others in their choice to review ‘anonymously’, I believe they should make this choice consciously and with forethought about the power of anonymity and its temptations.

Transparent reviewing destroys a comfortable illusion.

Some fear that revealing themselves to those whose work they criticize, and learning who has criticized their own work, will reduce the ‘comity’ of a conference or community. This seems a reasonable concern. Some of us will indeed be much more comfortable receiving the fiction that a colleague was supportive of work they found problematic, presenting the fiction that we liked a colleague’s work more than we did, or accepting the more general fiction that we have never reviewed each other’s work. Yet those of us who view the scientific endeavor as a search for truth may be justifiably troubled by constructing such fictions, especially when doing so is presented to us as a requirement. These fictions are themselves a source of discomfort.

Authors don’t want to know who reviewed their papers.

Some have justified banning transparent reviewing because they fear it may ‘creep out’ or offend authors. I find this argument particularly concerning, because it justifies censoring a form of speech that some support in order to ‘protect’ against offense. It’s a false trade-off. Authors need not be forced to receive information they don’t want. We have the technology to easily accommodate authors who choose to opt out of learning reviewers’ identities.

Wait! I have a better way to fix peer review.

The most common refrain among critics of transparent review is that there are other, better ways to improve the quality and tone of peer reviews than transparency: accept more papers; have authors or other reviewers rate reviews and their reviewers (anonymously, of course); give best-reviewer awards; task program chairs with auditing and improving every review; give authors a rebuttal process; add a revise-and-resubmit option to conferences; shift our field to journals; publish all submitted papers and their reviews; or open the whole process up for public comment and scrutiny from start to finish. I support, and have offered to assist with, many of these approaches. However, these ideas have been around for a long time and, at the current rate of progress, I cannot fool myself into believing that they will be adopted during my career. For now, I am doing what I can to address the part of the problem I have control over.

http://academicnegativity.tumblr.com/image/34766689445

Considering transparent reviewing?

If you are thinking about trying out transparent reviewing for yourself, I recommend that you only accept reviewing responsibilities if the program chairs or editors are aware that you are considering reviewing transparently and are encouraging of it. You may also want to ask the chairs to inform other reviewers of your plans in advance, so that your decision doesn’t come as a surprise to reviewers who may expect to be able to hide behind your reviews when they write theirs. I made the mistake of failing to give sufficient notice when I first started my experiment, for a reason that I expect is common — when I accepted the invitation to join the program committee that year, and even when I was writing my reviews and participating in discussions, I was not yet sure I’d be ready to go through with adding my name to my reviews.

The risks of transparency are not where you think.

Most reviewers’ first concern about reviewing transparently is how authors will react. Having reviewed over a hundred papers transparently, I’ve never received anything but respectful and thoughtful communications from authors. I have received one request for clarification that resulted in a friendly conversation and an apology (on my part) for having provided a review that was not sufficiently clear in its guidance.

Anecdotally, I’ve found students are particularly supportive of transparent reviewers, perhaps because they are the least powerful participants in the peer review process, and the most common victims of academic negativity.

The risk from those who purport to protect others.

Rather, if you are weighing the benefits and risks of transparent reviewing, I urge you to focus on how fellow reviewers will react to seeing a social norm threatened. Should you start to review transparently, you may find that other reviewers will question your motives, harass you, try to intimidate you, and work to put policies in place to forbid transparent reviewing. They will take these actions under the guise of protecting others: reviewers who will be harmed if you are no longer part of the crowd within which they remain anonymous, and authors who will be harmed if their faceless critics are given a face. The real risks of reviewing transparently lie here. And that is why…

I cannot return to ‘anonymous’ reviewing

Transparency is an ideal that Science should support.

No reviewer who genuinely believes in the ideal that transparency is the best protection against abuse of power should have to violate their ideals to participate in peer review. The scientific endeavor cannot survive in an environment in which the exploration of uncomfortable or unconventional ideas is forbidden.

Transparent reviewing can protect the vulnerable.

Well-informed scientists, including those facing job searches and promotion cases, may feel that their interests are best served by opting out of reviewing ‘anonymously.’ Being transparent about which papers they have reviewed protects them from the pernicious effects of suspicion and speculation.

I made my own choice to review transparently with the knowledge that my employer seeks outside feedback when making promotion decisions, and that forces beyond my control could leave me searching for a new job—knowledge that was reinforced when my employer recently closed a research lab.

Just as ‘anonymous’ reviewers seek safety in numbers, transparent reviewers benefit from being part of a community; it is easier to single out and attack the character of a ‘heretic’ than it is to target a community. If I believe others should have the option to review transparently—and I do—I cannot in good conscience abandon the practice of transparent review. To do so would be to abandon those who currently engage in transparent review and those who might benefit from doing so in the future.

Regardless of whether this choice proves optimal for my career advancement, it’s the only choice that I can feel good about given the values I believe in.

[At the time of this writing] Stuart Schechter [was] a Researcher at Microsoft Research.

Stuart would like to thank John Douceur, Serge Egelman, Simson Garfinkel, Jon Howell, Jeffrey Naughton, Bryan Parno, Lowell Schechter, and Jean Yang for providing constructive feedback on earlier drafts of this essay.