Reflections on making research more practical

During the I3D (aka the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games) dinner, Chris Wyman from NVIDIA Research gave a speech entitled “Bridging the gap: Making research more practical”.

Chris voiced two primary complaints about academia: (1) academics produce too much research that is of no practical use, and (2) research that would be relevant to industry is too difficult to publish (*).

First, I really respect that Chris spoke about his own experiences and frustrations so openly (because of the highly political nature of publishing and promotions, academics are often afraid to voice their opinions publicly), though he did say he felt free to speak only because he had left academia to work in industry.

Now, regarding Chris’s first point, it is not the responsibility of academia to do free research for companies. When it happens, sure, companies can really benefit: new product innovations without the risk and cost of doing the research themselves. But industry really shouldn’t expect it. Like everyone else, researchers need to get paid and to perform duties that help them advance their careers. However, if industry is interested in solving a specific problem, it should be willing to pay for it (if federal funding diminishes, this may be the trend of the future). Possible ethical questions aside, such collaborations can work really well in computer science.

I think the second point is more problematic. Chris criticized reviewers as being too harsh and too trigger-happy with rejections, with the consequence that publishing is too time-consuming and difficult for industry folks. Reviews are intended to catch mistakes, omissions, methodological flaws, and the like. I don’t think we should give this up, nor do I think industry folks should be held to a lower standard.

But the undercurrent of Chris’s complaints was a feeling of disenfranchisement. What do you do when the research community isn’t interested in your research? When papers are rejected on non-technical grounds, often with the feedback that “your work is too incremental”? In Chris’s specific case, he voiced frustration that his papers on high-performance rendering were deemed too incremental: for example, they improved the performance of algorithms that already run in real time. To be honest, such research does not interest me; however, I can believe there are others who are interested, particularly in industry.

And I can relate to Chris’s complaints on an abstract level. Some of my projects, which in my view are really creative, have encountered their fair share of indifference (“not useful enough”). Cynically, it can be easier to work in a crowded/known area where there is already an established group of people who think the problem is interesting!

Another factor, not discussed at the banquet, is that bias can affect the peer-review process. For example, in a 1982 experiment (with a nice summary here), researchers resubmitted papers that had already been accepted and published in competitive journals, with new author names and institutions substituted. The following is copied from the abstract:

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

So yeah, we all like to think of ourselves as objective, but…

And for competitive conferences like SIGGRAPH, where only 0-2 papers are accepted per graphics topic area, the subjectivity and capriciousness of the review process affects careers, and so it is an emotional topic for academics. Although the SIGGRAPH publication process strives to be as objective as possible (double-blind review), it can never be completely blind, since some people, such as editors and conference organizers, need to know who the authors are. When you are an outsider to the top-tier conference system, it is hard to believe that bias doesn’t creep into the system.

My response to the uncertainty in the process is simply to remember that non-top-tier conferences also publish good papers. Which brings me back to why small conferences can be so great: in addition to being focused on your subject area, they help build communities and connections between people with common interests, which frankly is impossible at large conferences like SIGGRAPH. And back to Chris’s comments: I3D seems to be a good conference for real-time rendering work. Maybe not animation, though. But that’s ok. Let’s keep our small conferences focused.

(*) Paraphrased in my own words