By Karen Bliss
How did 244 judges in the 2016 Moody's Mega Math (M3) Challenge narrow a field of 1,084 submitted papers to the 90 worthy of recognition while keeping the judging as consistent and valid as possible?
More than ten years ago, the now-national contest was offered only in the NYC metro area. This expansion is both exciting and somewhat daunting: we continue to spread the idea of mathematical modeling to high school students while ensuring a rigorous and highly consistent judging process.
There will always be variation among judges (some are more lenient, for example), and our goal is to ensure that, despite these variations, the best papers rise to the top, so that the scholarship awards and the recognition and prestige they entail go to the very best submissions. During the online scoring round of the judging process, called triage judging, a normalization algorithm adjusts for judge-to-judge differences; calibration is possible because all judges read the same sample of papers, which establishes a common basis for adjusting scores.
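The article does not describe the normalization algorithm itself, but the idea of using a shared calibration sample can be sketched as a simple z-score adjustment: each judge's scores on the common papers reveal that judge's personal leniency and spread, which are then factored out of their other scores. The function name, the target scale, and the standardization approach below are all illustrative assumptions, not the actual M3 procedure.

```python
# Hypothetical sketch: normalize one judge's scores using the shared
# calibration sample (z-score standardization onto a common scale).
# This is NOT the actual M3 Challenge algorithm, which is unpublished here.

from statistics import mean, stdev

def normalize_scores(calibration_scores, raw_scores,
                     target_mean=50.0, target_std=10.0):
    """Map a judge's raw scores onto a common scale.

    calibration_scores: this judge's scores on the papers every judge read,
        which expose the judge's personal mean (leniency) and spread.
    raw_scores: the judge's scores on their own assigned papers.
    """
    mu = mean(calibration_scores)
    sigma = stdev(calibration_scores)
    if sigma == 0:
        # Judge gave identical calibration scores; no spread to rescale.
        return [target_mean for _ in raw_scores]
    return [target_mean + target_std * (s - mu) / sigma for s in raw_scores]

# A lenient judge and a strict judge who rank papers the same way
# end up on the same scale after normalization:
lenient = normalize_scores([80, 85, 90], [88])  # high personal baseline
strict = normalize_scores([40, 45, 50], [48])   # low personal baseline
```

With this adjustment, the lenient judge's 88 and the strict judge's 48 map to the same normalized value (56.0), since each sits 0.6 standard deviations above that judge's own calibration mean.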
Following triage judging, an in-person round, called contention judging, takes place in Philadelphia, where twelve experienced judges immerse themselves in reading papers over a long weekend at the SIAM offices. Prior to 2016, contention judging involved only informal conversations, with focused discussion among all twelve judges occurring only at the very end of the process, when the rank order of the top papers was determined.
It was clear during those final conversations in past years that the judges had much to say, and much to absorb as others spoke, yet collaboration time was limited and came mostly at the very end of the process. The strong, impassioned opinions about what individual judges valued among the top papers had evolved over the weekend, and did not always match up. Some judges had decided that a paper with a particular flaw was not worthy of being among the top papers; others were willing to overlook flaws if they found other aspects of a paper outstanding. These discrepancies were more difficult to untangle at the end than they would have been had they emerged earlier in the weekend for discussion and thought. Resolution always happened, but could be painful!
This year, for the 2016 Challenge, we decided to incorporate planned, intentional discussions throughout the entire process. From the moment the judges arrived at the SIAM offices, they shared what they had seen and liked in papers during the triage round. Focused, moderated conversations continued between reading sessions and over lunches and dinners, covering topics like ways to differentiate papers, the creativity displayed, "must-haves," and fatal flaws. The judges, whose opinions and experience we valued enough to invite them to this pivotal contention round, all had the opportunity to be heard. There was plenty of heated discussion (!), but in the end the judges were able to understand one another's perspectives; they did a phenomenal job identifying the top papers and coming to consensus on the finalists who would present their work live at Moody's in New York, NY.
As members of the applied mathematics community, we go to conferences to showcase our work and receive feedback, and we engage with research collaborators, all because the final product, whatever that may be, is better when we share ideas. Thanks to the M3 Challenge, we learned this lesson again in a new setting: discussions with our colleagues help our thought processes and enable us to see all sides of an issue or solution, and yield a better end product.