Sunday, December 27, 2009

Scientific Controversies and Hot-button Issues

Posted by Danny Tarlow
I haven't done much work over the holiday, but I've had a chance to do quite a bit of reading. The book next to my bed is Shrijver's three volume series on Combinatorial Optimization: Polyhedra and Efficiency (Algorithms and Combinatorics), which lives up to the hype.

On the (arguably) lighter side, I've discovered the not-so-secret underworld of what I'll call "math 2.0" websites and blogs. Maybe I'm just out of the loop, but from my perspective, it seems that math and theoretical computer science have a more active internet community than more applied fields like machine learning, applied statistics, and algorithms (there are of course many notable exceptions).

Anyhow, what I really wanted to write about comes from reading the blogs: how difficult it is for computer scientists and mathematicians to get their points across to the general public. Two examples particularly illustrate the point. In both cases, the overarching story is one of a mathematician against the press. As I understand it, in the first case, it's an information theorist arguing against an intelligent design argument based on information; in the second, it's a computational complexity theorist trying to dispel some of the possibly exaggerated claims made about a company's supposed quantum computer. The comment threads under both posts are quite interesting (and sometimes sadly comical) reads.

It's not just the stereotypical story of a mathematician being unable to communicate with normal people. Instead, the common theme in both cases is that the mathematician has spent a lot of time carefully setting down their position in a public and/or peer-reviewed form. The problem is that their opinion--though they claim it is still valid--is several years old. In the interim, their adversaries have come up with more to say and claim to have refuted the criticisms in more recent citations. The mathematicians disagree: they cite the original papers, say their old arguments still hold, and say they can't be bothered to restate those arguments every time a new, unrelated argument appears that doesn't address the original concern.

Now, this would all be well and good in an academic setting: get a few well-respected members of the field and ask their opinion. In academia, though, we have two advantages:
  • We are used to trusting the opinions of our peers expressed via blind peer review schemes, and
  • It is difficult enough to write a reasonably good conference paper that we aren't completely (some may disagree) overwhelmed with crackpot ideas to evaluate.
Unfortunately, in blogs and the popular press, neither holds. Everybody assumes everybody else has a hidden agenda, and there is absolutely no way to get a qualified person to review every possibly crazy idea out there, much less fight a prolonged battle over it. As we often see in politics, the side with the most money, the cleverest spin, and the loudest voice tends to be heard most clearly.

So then the question becomes where to go from here. The mathematician making the original argument can't be expected to spend time fighting every little battle, but very few people have the time, inclination, and ability to credibly pick up the battle in their stead (in the two example cases, the issue is over different definitions of information, and subtleties about how "quantum" a quantum computer is. See here and comments 23, 24, and 26 here for examples). Even in ridiculous cases where an argument really has no merit, there is enough jargon and a long enough comment thread to make a casual observer think that the issue is "complicated," and therefore either side could be right with equal plausibility.

Yet, for lack of a better idea, we as academics by and large still use the same strategy for making an argument: write the paper, then move on. If we need to make a point, refer the reader to the paper. By putting the ideas in the record and possibly presenting the ideas to our peers, we've done our part.

I don't think this is good enough, but there's not an easy answer. Going back to the politics setting, there are difficult questions about when to put up a fight and when not to even dignify some crazy assertion with a response. Minor wording issues can be blown out of proportion. It is often considered harmful in a political debate to give a long answer. And yet to fight these battles on the public stage, we as academics are woefully inept, which is no surprise, since public relations is a tricky game and most of us have zero training.

So here's one idea: academic conference and journal bodies are quite good at deciding whether an idea deserves their stamp of approval. Peer review isn't perfect, but it's pretty good. Unfortunately, once a paper is accepted or rejected, the responsibility of the reviewing body ends. They will passively make the material accessible (either for free or behind a pay wall) and not give it further thought.

I'd like to see these bodies take a more active position in the public eye. The content published in the conference or journal becomes the agenda of the reviewing body. If somebody puts out an idea in the public eye that contradicts something published in the proceedings, the review body's public relations (PR) wing decides how to address it. In some cases, it may be proper to ignore it. In others, a real, prolonged fight may be needed.

I don't know all the exact details of how it would work, but there are several benefits to having a conference-specific PR body fight the battle:
  • It's harder to undermine the motivations of an entire organization than an individual scientist. It's also harder for one scientist to co-opt the opinion of the full organization.
  • By having the same organization's name come up repeatedly, it will begin to build a reputation for the body in the public eye.
  • Professional public relations people would be in the loop to help scientists make their point effectively.
There are other tangential benefits as well: increasing the field's exposure to the public and making the practical implications of its research clearer.

The downside is obviously that it would cost money and require additional organization to maintain a permanent PR body for each major conference. I can't help but think that a concerted PR effort would be a good investment for many of these hard-to-understand fields, though.

Edit: I didn't find a place to put it, but this story came up in one of the comment threads.

Edit 2: Another related example. This time it's an academic arguing against possibly faulty sex ratio statistics (how often parents have boys vs. girls) that got picked up by the popular press. The academic was then "refuted" by Wikipedia.

[Aside] I'd love it if machine learning people started using Math Overflow (MO). It seems like a nice way to get a bit of crossover between the fields, which in many cases aren't as different as they appear on the surface. There are example posts with a fair number of upvotes (i.e., they are good questions according to the MO community) that I find interesting as a machine learning researcher, and plenty more in topics like combinatorics, statistics, probability, graph theory, and optimization. [/Aside]