A paper titled “Why Are Big Data Matrices Approximately Low Rank?,” by Madeleine Udell and Alex Townsend (both of Cornell University), appears in the inaugural issue of the SIAM Journal on Mathematics of Data Science (SIMODS). Rachel Ward (University of Texas at Austin), the review editor for this paper, was struck by the work’s originality and its potential for large impact in the field of data science. “Madeleine and Alex were motivated by the observation that low-rank matrices in applications are everywhere,” she notes. “However, instead of going down the ‘usual’ route of improving or generalizing one of the many existing methods for low-rank matrix analysis, they took a different path and asked the following question: Why are all of these matrices low rank? What commonalities could the processes generating these datasets share?”
Rachel had the opportunity to chat with Madeleine to learn about how the paper came to be, what inspires and motivates her research more broadly, and what lies ahead in her future career.
Rachel: What is your scientific background and the general focus of your research?
Madeleine: My undergraduate training was in mathematics and physics, followed by a Ph.D. in computational and mathematical engineering. My general aim is to find structure in high-dimensional data and use that structure to design more efficient algorithms and answer questions about the data. Recently I’ve been focusing on low-rank structure. We’ve used it to design low-memory optimization methods, automate hyperparameter search in machine learning, control for latent variables in causal inference, understand medical records and survey data, and more.
Rachel: What inspired the research in your paper and how did your collaborators come together?
Madeleine: Low-rank matrices are all around us! In my own research, I’ve encountered low-rank data everywhere from traditional scientific computing applications (combustion simulations and weather data) to finance (environmental, health, and governance indicators), social science (survey data), and medicine (hospitalization records). At first it seemed lucky, but eventually it began to look suspicious. Why are all of these matrices low rank? I was inspired by a talk that Christina Lee Yu presented at Cornell. She demonstrated how to perform collaborative filtering when matrix entries are given by differentiable functions of latent parameters. I suspected that a similar assumption would in fact be enough to show that the matrix was low rank. Alex had explored comparable phenomena in mathematics, so together we set out to understand the origin of low rank in data science.
Rachel: What is the future direction of this work?
Madeleine: We’re now looking at how to exploit low-rank structure to enable fast, memory-efficient optimization.
Rachel: How would you explain the main findings of your paper to non-science-minded family and friends?
Madeleine: People are very complicated. Questions we can ask may be very complicated too. But suppose a function exists that takes everything there is to know about a person, and everything there is to know about a question, and returns that person’s answer to that question. If that function is not too crazy, then it turns out that knowing just a few pieces of information about the person and the question would suffice to predict their answers. In fact, the amount of information we need to know grows as the log of the number of people and number of questions.
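This latent-variable picture is easy to check numerically. The sketch below (my own illustration, not the authors' code) builds matrices whose entries are a smooth function of hypothetical latent features for each row and column, then measures the numerical rank: the rank stays small even as the matrix grows, consistent with the logarithmic growth Madeleine describes. The Gaussian-kernel choice of `f` is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_rank(A, eps=1e-6):
    """Smallest k such that a rank-k truncation of A achieves
    relative spectral-norm accuracy eps."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > eps * s[0]))

def latent_matrix(n):
    # Hypothetical setup: each "person" i and "question" j carries a
    # latent feature in [0, 1]; entry (i, j) is a smooth function of both.
    x = rng.uniform(size=n)   # latent features of the rows
    y = rng.uniform(size=n)   # latent features of the columns
    return np.exp(-np.subtract.outer(x, y) ** 2)  # smooth f(x_i, y_j)

# The matrix dimension grows 16x, but the numerical rank barely moves.
for n in [100, 400, 1600]:
    print(n, eps_rank(latent_matrix(n)))
```

Even at n = 1600, the numerical rank to six digits of accuracy remains around ten, a tiny fraction of the ambient dimension.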
Rachel: Why is SIMODS a good home for your work?
Madeleine: Our paper sits squarely within the mathematics of data science. We use fundamental (and simple!) mathematical ideas to explain a commonality across a very wide variety of datasets arising in “data science” settings.
Rachel: Who are your role models in the field? What qualities do you hope to emulate?
Madeleine: I’d say my biggest role model is my Ph.D. advisor, Stephen Boyd (Stanford University). I admire his vision in pushing forward the full stack of innovations to enable the success of convex optimization, from new algorithms and software packages to modeling tools and an abundance of surprising applications. As a result, scientists in a wide variety of fields can now understand and use these tools, which drives future work in more areas than one person can possibly touch. This kind of research agenda has three pillars: identification of applications that matter, improvement of efficiency and reliability, and prioritization of clarity (in writing) or ease of use (in software).
Madeleine Udell is an assistant professor of operations research and information engineering and a Richard and Sybil Smith Sesquicentennial Fellow at Cornell University. She studies optimization and machine learning for large-scale data analysis and control. Madeleine completed her Ph.D. in computational and mathematical engineering at Stanford University in 2015 under the supervision of Stephen Boyd, and fulfilled a one-year postdoctoral fellowship—hosted by Joel Tropp—at the California Institute of Technology’s Center for the Mathematics of Information. She received a B.S. in mathematics and physics from Yale University.