This probably won’t make much sense to people outside academia. But anyways.
As you are probably aware, peer-reviewed publications go through an elaborate reviewing process. In Computer Science (and perhaps in other fields as well), conferences typically apply some degree of anonymization to the review process. These are called blind reviews. Depending on the degree of anonymity, one has the following three kinds of review processes:
* Zero blind: the reviewers know who the authors are; the authors know who wrote their reviews.
* Single blind: the reviewers know who the authors are; the authors don’t know the reviewers.
* Double blind: neither the authors nor the reviewers know each other’s identities.
I don’t really know of any (good) conference that is zero blind, but there are several under the single and double blind categories. Now time and again, people get into debates on which system is the best. The debate, naturally, is about the anonymity of the authors — there seems to be consensus on the anonymity of the reviewers.
Advocates of the single blind process argue that sometimes a weak paper (such as one with a good idea but backed by a not-so-good implementation/evaluation) might get accepted if the reviewers knew the authors and were convinced (from their reputation/past record/whatever) that they would do a good job by the camera-ready deadline. On the flip side, of course, there is the danger that “well known” authors may get an unfair advantage, while the potential of the underdogs and small fish will be undermined.
Meanwhile, proponents of the double-blind process claim that not knowing the authors’ identities makes the reviewing process fairer. Critics, however, argue (sometimes correctly) that these research communities are usually so tightly knit that practically everyone knows the authors anyways. So the whole double blind thing doesn’t really work; besides, it unnecessarily inconveniences the authors, who have to put in extra effort to anonymize their submissions.
For some conferences (such as SIGCOMM), double blind seems to work fairly well — every year there’s at least one “surprise” paper. For SOSP, on the other hand, it just seems to be a pain — there are far fewer submissions than SIGCOMM, and pretty much everyone knows who has written which papers. I guess in the end it’s up to the community to figure out what works best for them. But what really pisses me off is people’s carelessness — if you are submitting to a double-blind conference, you *must* honor the guidelines. Some of the papers I’ve reviewed have been just unbelievably callous.
So how is it with other fields?