SIAM News Blog

Reflections on Big Data and Sensitivity of Results

Mathematicians are excited by the significant success of "big data" methods, and the recent SIAM News article about Julia is no exception. But mathematicians have a responsibility to prove their results or define the limits of their validity, no matter how exciting those results may be. Many investigations of big data solve inverse problems, using the outputs of systems to infer the inputs and the equations that define the system.

Inverse problems have been analyzed in some detail, and the reliability of results is a central subject in that analysis. The sensitivity of results to uncertainties is often large because of the inherent ill-posedness of most inverse problems. Quantifying this sensitivity is crucial for determining the reliability, and thus the utility, of results. The fact that Julia can help determine this sensitivity is of great importance.
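To make the point concrete, here is a minimal sketch in Julia (an illustrative example of my own, not taken from the article): it recovers parameters x from measurements y = A*x for an ill-conditioned matrix A (a Hilbert matrix, chosen here purely for illustration) and shows how a tiny perturbation of the data can be amplified in the recovered parameters by a factor on the order of the condition number of A.

using LinearAlgebra

# Forward model y = A*x with a Hilbert matrix A, a standard example of an
# ill-conditioned (practically ill-posed) inverse problem.
n = 10
A = [1 / (i + j - 1) for i in 1:n, j in 1:n]
x_true = ones(n)
y = A * x_true                      # exact "measurements"

# Perturb the data by a relative error of roughly 1e-10.
y_noisy = y .* (1 .+ 1e-10 .* randn(n))

x_exact = A \ y                     # recovery from exact data
x_noisy = A \ y_noisy               # recovery from perturbed data

println("cond(A)                    = ", cond(A))
println("relative data perturbation = ", norm(y_noisy - y) / norm(y))
println("relative error in recovery = ", norm(x_noisy - x_true) / norm(x_true))
# The recovery error can exceed the data perturbation by a factor as large as
# cond(A), which for the 10-by-10 Hilbert matrix is on the order of 1e13.

Reporting such a sensitivity measure alongside the recovered parameters is exactly the kind of explicit discussion of reliability that this letter argues for.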

It is also important that researchers who work with big data actually discuss issues of sensitivity and ill-posedness as they assess the reliability of their results. It is a sad fact that many papers on big data do not include the words "ill-posed" or "sensitivity," let alone confront the issues those words describe. The classical results of the theory of inverse problems cannot help solve the problems of big data unless they are used. Explicit discussion of the sensitivity of results and the ill-posedness characteristic of inverse problems is likely to lead to more reliable and useful results.

Bob Eisenberg


Read “Scientific Machine Learning: How Julia Employs Differentiable Programming to Do it Best,” by Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B. Shah, in the October 2019 issue of SIAM News.
