# Recovering Lost Information in the Digital World

We live in an increasingly digital world where computers and microprocessors perform data processing and storage. Digital devices are programmed to quickly and efficiently process sequences of bits. A computer operating on these bits then executes mathematical algorithms drawn from signal processing. An analog-to-digital converter converts the continuous-time signal into samples; the transition from the physical world to a sequence of bits causes information loss in both time (the sampling step) and amplitude (the quantization step). Is it possible to restore information that is lost in the transition to the digital domain?

The answer depends on what we know about the signal. One way to ensure a signal’s recovery from its samples is to limit its speed of change. This idea forms the basis of the famous Nyquist theorem, developed in parallel by mathematicians Edmund Taylor Whittaker and Vladimir Kotelnikov [5]. The theorem states that we can recover a signal from its samples as long as the sampling rate (the number of samples per unit time) is at least twice the highest frequency in the signal. This result is the cornerstone of all current digital applications, which sample at the Nyquist rate or higher.
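A small numerical sketch of why the Nyquist criterion matters (the sampling rate and frequencies are illustrative): two tones whose frequencies differ by the sampling rate produce exactly the same samples, so content above half the sampling rate is irrecoverably folded onto lower frequencies.

```python
import numpy as np

fs = 100.0           # sampling rate in Hz (illustrative)
f_low = 30.0         # below the Nyquist frequency fs/2
f_high = fs - f_low  # 70 Hz: above Nyquist, aliases onto 30 Hz
n = np.arange(64)    # sample indices

# Sample both cosines at rate fs.
x_low = np.cos(2 * np.pi * f_low * n / fs)
x_high = np.cos(2 * np.pi * f_high * n / fs)

# The two sequences are identical: once sampled at 100 Hz, a 70 Hz tone
# cannot be told apart from a 30 Hz tone.
print(np.allclose(x_low, x_high))  # True
```

This is precisely the ambiguity that sampling at or above twice the highest frequency rules out.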

Despite the theorem’s tremendous influence on the digital revolution, satisfying the Nyquist requirement in modern applications often necessitates complicated and expensive hardware that consumes considerable power, time, and space. Many applications use signals with sizable bandwidth to deliver a high rate of information and to obtain good resolution in imaging applications such as radar and medical imaging. Large bandwidth translates into high sampling rates that are challenging to achieve in practice. Thus, an important question arises: Do we really have to sample at the Nyquist rate, or can we restore information when sampling at a lower rate?

A related concern is the problem of super-resolution. Any physical device is limited in bandwidth or resolution, meaning that it cannot obtain infinite precision in time, frequency, or space. For example, the resolution of an optical microscope is limited by the Abbe diffraction limit, which is roughly half the wavelength used for illumination. We can thus view large objects like bacteria in the optical regime, but proteins and small molecules are not visible with sufficient resolution. Is it possible to use sampling-related ideas to recover information lost due to physical principles?

We consider two methods to recover lost information. The first utilizes structure that often exists in signals, and the second accounts for the ultimate processing task. Together they form the basis for the Xampling framework, which proposes practical, sub-Nyquist sampling and processing techniques that result in faster and more efficient scanning, processing of wideband signals, use of smaller devices, improved resolution, and lower radiation doses [5].

The union-of-subspaces model is a popular choice for describing signal structure [7, 9]. As a special case, it comprises sparse vectors, i.e., vectors with a small number of nonzero values in an appropriate representation; this is the model underlying compressed sensing [6]. It also includes some popular examples of finite-rate-of-innovation signals, such as streams of pulses [11]. Such a signal arises naturally in a radar system, where a pulse travels towards the targets, which reflect it back to the receiver. The received signal hence consists of a stream of pulses, where each pulse’s time of arrival is proportional to the distance to the target and its amplitude conveys information about the target’s velocity through the Doppler effect. Several samplers based on union-of-subspaces modeling appear in Figure 1.

**Figure 1.** Sub-Nyquist prototypes for different applications developed in the Signal Acquisition Modeling and Processing Lab at Technion – Israel Institute of Technology. Image courtesy of Yonina Eldar Lab.
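To make the sparse-vector case concrete, here is a minimal sketch of compressed-sensing recovery via orthogonal matching pursuit, one standard greedy solver. The Gaussian measurement matrix, dimensions, and choice of solver are illustrative assumptions, not the specific samplers of Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 60, 30, 3                  # signal length, measurements, sparsity
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)       # normalize columns

# A k-sparse signal and its compressed measurements y = A x.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
```

With 30 random measurements of a 60-dimensional, 3-sparse vector, the sparse signal is recovered exactly with overwhelming probability, even though the system y = Ax is underdetermined.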

Researchers have also recently explored the actual processing task. We consider three such examples: (i) scenarios in which the relevant information is embedded in the signal’s second-order statistics [3], (ii) cases where the signal is quantized to a low number of bits [8], and (iii) settings in which multiple antennas form an image [1, 2].

An interesting sampling question is as follows: What is the rate at which we must sample a stationary ergodic signal to recover its power spectrum? The rate can be arbitrarily low with appropriate nonuniform sampling methods. If we restrict ourselves to practical sampling approaches (such as periodic nonuniform sampling with \(N\) samplers, each operating at an \(N\)th of the Nyquist rate), then only on the order of \(\sqrt{N}\) samplers are needed to recover the signal’s second-order statistics, which yields a sampling rate reduction on the order of \(\sqrt{N}\).

Next, suppose that we quantize our signal after sampling with a finite-resolution quantizer. Researchers traditionally consider sampling and quantization separately. However, the quantizer distorts the signal, which raises the following question: Must we still sample at the Nyquist rate, i.e., the rate required for perfect recovery in the absence of distortion? It turns out that we can achieve the minimal possible distortion by sampling below the signal’s Nyquist rate without assuming any particular structure of the input analog signal. We attain this result by extending Claude Shannon’s rate-distortion function to describe digital encoding of continuous-time signals under constraints on both the sampling rate and the system’s bit rate [8].
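The \(\sqrt{N}\) saving for second-order statistics rests on a combinatorial fact: the autocorrelation at lag \(d\) is estimable whenever two retained samplers are spaced \(d\) apart, so the retained offsets need only form a "sparse ruler" whose pairwise differences cover every lag. A minimal sketch, with illustrative offsets:

```python
import numpy as np
from itertools import combinations

# In periodic nonuniform sampling, only samplers at offsets S, a subset of
# {0, ..., N-1}, are kept. The autocorrelation at lag d can be estimated
# whenever some pair in S differs by d, so S must be a sparse ruler: its
# pairwise differences must cover every lag 0, ..., N-1.
def covered_lags(offsets, N):
    lags = {0}
    for a, b in combinations(sorted(offsets), 2):
        lags.add(b - a)
    return lags == set(range(N))

N = 10
full = list(range(N))         # N samplers: trivially covers all lags
sparse = [0, 1, 2, 5, 8, 9]   # only 6 samplers, on the order of sqrt(N)

print(covered_lags(full, N), covered_lags(sparse, N))  # True True
```

Six well-placed samplers thus supply every lag a bank of ten uniform samplers would, which is what enables power spectrum recovery at the reduced rate.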

As a final example of task-based sampling, consider a radar or ultrasound image created by beamforming. An antenna array receives multiple signals reflected off the target; these signals are delayed and summed to form a beamformed output that can often be modeled as a stream of pulses. The individual signals typically lack significant structure and are often buried in noise. Nonetheless, by exploiting the beamforming process, we can form the final beamformed output from samples of the individual signals taken at very low rates, despite this lack of structure. In addition, we can preserve the beampattern of a uniform linear array while using far fewer elements (a sparse array) by modifying the beamforming process. With convolutional beamforming, we can achieve the beampattern associated with a uniform linear array of \(N\) elements using only on the order of \(\sqrt{N}\) elements (see Figure 2).

**Figure 2.** The same cardiac image obtained with delay-and-sum beamforming using a uniform linear array of 63 elements (left) and convolutional beamforming using a sparse array of 16 elements (right). Image courtesy of [4].
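The aperture-convolution idea behind this saving can be checked numerically. In this minimal sketch (element positions and grid are illustrative, not those of [4]), two 4-element subarrays whose position sums tile the grid 0–15 jointly behave like a 16-element uniform linear array: convolving their apertures gives the uniform aperture, and the product of their beampatterns equals the full array’s beampattern.

```python
import numpy as np

# Aperture (indicator) functions of two 4-element subarrays on a
# half-wavelength grid; the positions are illustrative.
p1 = np.zeros(16); p1[[0, 1, 2, 3]] = 1    # dense block
p2 = np.zeros(16); p2[[0, 4, 8, 12]] = 1   # coarse grid

# Convolving the apertures yields the 16-element uniform aperture:
# every position 0..15 arises from exactly one pair (u1 + u2).
combined = np.convolve(p1, p2)[:16]
print(np.array_equal(combined, np.ones(16)))  # True

# Equivalently, in beamspace the pattern of the convolved array is the
# product of the two subarray patterns: 8 elements act like 16.
theta = np.linspace(-np.pi / 2, np.pi / 2, 7)
u = np.pi * np.sin(theta)                  # half-wavelength spacing
bp = lambda pos: np.exp(1j * np.outer(u, pos)).sum(axis=1)
ula16 = bp(np.arange(16))
prod = bp([0, 1, 2, 3]) * bp([0, 4, 8, 12])
print(np.allclose(prod, ula16))  # True
```

Eight elements reproducing a 16-element pattern is the \(\sqrt{N}\)-type scaling described above, here obtained by multiplying beampatterns rather than physically populating the full aperture.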

Combining the aforementioned ideas allows us to create images in a variety of contexts at higher resolution with far fewer samples. For example, we can recover an ultrasound image from only three percent of the Nyquist rate without degrading image quality (see Figure 3). This ability enables multiple technology developments with broad clinical significance, such as fast cardiac and three-dimensional imaging, which are currently limited by high data rates. Moreover, the low sampling rate enables the replacement of large standard ultrasound devices and their cumbersome cables with wireless transducers and simple processing devices, such as tablets or phones. The sampled data’s low rate facilitates its transmission over a standard WiFi channel, allowing a physician to recover the image on a handheld device. In parallel, the data may be transmitted to the cloud for remote health applications and further, more elaborate processing.

**Figure 3.** Ultrasound imaging at three percent of the Nyquist rate (right), as compared to a standard image (left). Image courtesy of [1].

Our approaches can also help increase resolution in fluorescence microscopy [10]. In 2014, William Moerner, Eric Betzig, and Stefan Hell received the Nobel Prize in Chemistry for breaking the diffraction limit with fluorescence imaging. They obtained a high-resolution image by using thousands of images, each containing only a small number of fluorescing molecules. This method, referred to as photo-activated localization microscopy (PALM), allows researchers to localize and average the molecules in each frame to obtain one high-resolution image. The result is high spatial resolution but low temporal resolution. Since a brightness image can be formed by estimating each pixel’s temporal variance, we can exploit our ability to recover the power spectrum from fewer samples to dramatically reduce the number of frames needed to form a super-resolved image. This approach is called sparsity-based super-resolution correlation microscopy (SPARCOM). Because it requires only a small number of frames, SPARCOM paves the way for live-cell imaging. Figure 4 compares SPARCOM with 60 frames and PALM with 12,000 frames. Both approaches achieve similar spatial resolution, but SPARCOM requires two orders of magnitude fewer frames.

**Figure 4.** Super-resolution in optical microscopy.

**4a.** The image obtained with a standard microscope.

**4b.** The original image at high resolution.

**4c.** The image obtained using 12,000 frames via photo-activated localization microscopy (PALM).

**4d.** The image obtained using only 60 frames via sparsity-based super-resolution correlation microscopy (SPARCOM). Figure courtesy of [10].
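The statistical idea behind correlation-based super-resolution can be sketched in one dimension. For emitters that blink independently across frames, the per-pixel temporal variance is the emitter map blurred by the *squared* point spread function, which for a Gaussian PSF is narrower by a factor of \(\sqrt{2}\); SPARCOM then adds sparse recovery on top of such correlation information. In this illustrative sketch (all parameters hypothetical), two sources merge in the mean image but are resolved in the variance image.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
sigma = 2.0                             # PSF width (illustrative)
psf = lambda c: np.exp(-(x - c) ** 2 / (2 * sigma ** 2))

# Two emitters, each "on" independently with probability p per frame:
#   mean image:     E[I]   = p * (psf(c1) + psf(c2))
#   variance image: Var[I] = p * (1 - p) * (psf(c1)**2 + psf(c2)**2)
# and psf**2 is a Gaussian of width sigma / sqrt(2).
c1, c2, p = -1.75, 1.75, 0.5
mean_img = p * (psf(c1) + psf(c2))
var_img = p * (1 - p) * (psf(c1) ** 2 + psf(c2) ** 2)

mid = len(x) // 2                 # x = 0, midpoint between the sources
peak = np.argmin(np.abs(x - c1))  # x = c1, atop the first source

merged = mean_img[mid] >= mean_img[peak]  # no dip: sources merged
resolved = var_img[mid] < var_img[peak]   # dip: sources resolved
print(merged, resolved)  # True True
```

The separation of 3.5 units sits between the two resolution limits (\(2\sigma\sqrt{2} \approx 2.83\) and \(2\sigma = 4\)), so the second-order statistic resolves what the mean image cannot.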

The same idea is applicable to contrast-enhanced ultrasound imaging. We may treat the contrast agents flowing through blood similarly to the blinking fluorescent molecules; in this way, we can perform ultrasound imaging with high spatial and temporal resolution. This makes it possible to distinguish between closely spaced blood vessels and facilitates the observation of capillary blood flow.

In summary, to recover information with higher precision and minimal data we must exploit all of the information we have; here we focused on exploiting structure and the processing task. This yields new mathematical theories that provide bounds on sampling and resolution, and new engineering developments that produce novel technologies to overcome current barriers. In the future, the combination of mathematics and engineering—seeing information with a precision that is presently unavailable and tracking effects faster than is currently possible—can pave the way for innovative scientific breakthroughs.

**References**

[1] Chernyakova, T., & Eldar, Y.C. (2014). Fourier Domain Beamforming: The Path to Compressed Ultrasound Imaging. *IEEE Trans. Ultrason. Ferr. Freq. Con., 61*(8), 1252-1267.

[2] Cohen, D., Cohen, D., Eldar, Y.C., & Haimovich, A.M. (2018). SUMMeR: Sub-Nyquist MIMO Radar. *IEEE Trans. Sig. Process., 66*(16).

[3] Cohen, D., & Eldar, Y.C. (2014). Sub-Nyquist Sampling for Power Spectrum Sensing in Cognitive Radios: A Unified Approach. *IEEE Trans. Sig. Process., 62*(15), 3897-3910.

[4] Cohen, R., & Eldar, Y.C. (2018). Sparse Convolutional Beamforming for Ultrasound Imaging. *IEEE Trans. Ultrason. Ferr. Freq. Con.*, in press.

[5] Eldar, Y.C. (2015). *Sampling Theory: Beyond Bandlimited Systems*. New York, NY: Cambridge University Press.

[6] Eldar, Y.C., & Kutyniok, G. (2012). *Compressed Sensing: Theory and Applications*. New York, NY: Cambridge University Press.

[7] Eldar, Y.C., & Mishali, M. (2009). Robust Recovery of Signals from a Structured Union of Subspaces. *IEEE Trans. Inform. Theor., 55*(11), 5302-5316.

[8] Kipnis, A., Goldsmith, A., & Eldar, Y.C. (2018). Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits. *IEEE Sig. Process. Mag., 35*(3), 16-39.

[9] Lu, Y.M., & Do, M.N. (2008). A Theory for Sampling Signals from a Union of Subspaces. *IEEE Trans. Sig. Process., 56*(6), 2334-2345.

[10] Solomon, O., Mutzafi, M., Segev, M., & Eldar, Y.C. (2018). Sparsity-based Super-resolution Microscopy from Correlation Information. *Optic. Exp., 26*(14), 18238-18269.

[11] Vetterli, M., Marziliano, P., & Blu, T. (2002). Sampling Signals with Finite Rate of Innovation. *IEEE Trans. Sig. Process., 50*(6), 1417-1428.