Acoustic imaging systems such as biomedical devices and sonar play an important role in situations where optical systems cannot be used due to limited visibility, e.g. in living tissue or in turbid ocean water. Despite the wide variety of these systems, a major challenge for state-of-the-art devices is increasing image resolution. There are two ways to address this challenge. First, the resolution can be enhanced by increasing the number of transducer elements in the phased sensor array. However, most imaging systems have a centralized architecture in which synchronizing the transducers complicates the hardware and increases costs significantly, limiting the number of transducer elements that can be recorded simultaneously. Second, the resolution can be increased by judiciously processing the measured acoustic field. Comparisons between artificial and biological systems (e.g. human-made sonar versus biosonar) reveal that the former could be improved significantly, but how to do so remains an open question. In this work, we propose novel strategies to improve acoustic imaging systems using both metamaterial-based and bio-inspired approaches.
First, we propose the concept of out-of-band phase conjugation to enable imaging and communication systems with excellent performance inside complex media. In this method, a decentralized metasurface composed of arrays of non-synchronized sensing and source transducers responds locally to the received field and generates higher-harmonic waves that propagate back to the original source. We demonstrate the concept in both electromagnetics and acoustics, where second-harmonic waves are generated from the incident wave. Owing to the high-frequency nature of the outgoing waves, excellent self-focusing with enhanced spatial resolution can be achieved in dynamically varying inhomogeneous media.
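The self-focusing mechanism can be illustrated numerically. In the toy sketch below (a minimal free-space model, not the experimental system: the geometry, frequencies, and element count are illustrative assumptions), each array element independently conjugates the phase it receives from a point source and re-emits at the second harmonic; the back-propagated field then peaks at the original source position.

```python
import numpy as np

c = 1500.0            # assumed sound speed in water, m/s
f = 100e3             # assumed incident frequency, Hz
k = 2 * np.pi * f / c # incident wavenumber
k2 = 2 * k            # second-harmonic wavenumber

# Linear array of non-synchronized elements along x at y = 0 (illustrative)
elems = np.stack([np.linspace(-0.5, 0.5, 64), np.zeros(64)], axis=1)

src = np.array([0.07, 0.8])  # hypothetical point-source position, m

# Phase each element receives depends only on its distance to the source,
# so each element can respond locally with no synchronization.
d = np.linalg.norm(elems - src, axis=1)

# Out-of-band phase conjugation: re-emit the conjugate of the received
# phase at the second harmonic, exp(-i * k2 * d_i).
emission = np.exp(-1j * k2 * d)

# Back-propagate the second-harmonic field along a line through the source.
xs = np.linspace(-0.3, 0.3, 601)
pts = np.stack([xs, np.full_like(xs, src[1])], axis=1)
dist = np.linalg.norm(pts[:, None, :] - elems[None, :, :], axis=2)
field = np.abs((emission[None, :] * np.exp(1j * k2 * dist) / dist).sum(axis=1))

peak_x = xs[np.argmax(field)]
print(f"field peaks at x = {peak_x:.3f} m (source at x = {src[0]:.2f} m)")
```

At the source, the conjugated phase exactly cancels the return-path phase for every element, so all contributions add coherently there and decohere elsewhere; because the focal spot scales with the second-harmonic wavelength, it is narrower than a conventional in-band conjugate would produce.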
We further investigate this benefit using a metasurface that reduces the wavelength by five orders of magnitude via sound-to-light transduction with an acousto-optical metasurface (AOM). In this work, we study the physics that drives AOM imaging and demonstrate that the method achieves promising image quality. Unlike most conventional acoustic imaging devices, the proposed AOM operates in a fully decentralized manner, so its complexity scales linearly with the number of transducer elements, in contrast with conventional synchronized systems, whose complexity scales polynomially. This advantage will enable devices with more sensors at low complexity, resulting in images with significantly improved spatial resolution.
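The scaling argument can be made concrete with a hypothetical cost model (an illustration of the counting, not a measurement of any real hardware): a centralized array must maintain synchronization between element pairs, whereas each element of a decentralized array needs only its own local circuitry.

```python
def centralized_cost(n):
    """Hypothetical cost model: pairwise synchronization between
    elements grows as n*(n-1)/2, i.e. O(n^2)."""
    return n * (n - 1) // 2

def decentralized_cost(n):
    """Each element responds locally and independently: O(n)."""
    return n

for n in (16, 64, 256):
    print(f"n = {n:3d}: centralized {centralized_cost(n):5d}, "
          f"decentralized {decentralized_cost(n):3d}")
```

Quadrupling the element count multiplies the centralized cost roughly sixteen-fold but the decentralized cost only four-fold, which is why the decentralized architecture admits much larger apertures at fixed complexity.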
Systems that exploit the non-synchronized down-conversion method described above rely on large sensor arrays. However, in some applications an image must be formed from measurements by a limited number of sensors. To gain insight into how to use information from a limited number of sensors effectively, we examined a biological system, the bottlenose dolphin. These animals use acoustic data for echolocation, localization, and communication in the marine environment. In this work, we studied echolocating bottlenose dolphins in an experimental setting. A physics-based model of the acoustic environment, experimental data collected from the animal, and a machine-learning model were used to investigate how dolphins discriminate targets using echolocation. These models were validated with both experiments and numerical simulations in a case study spanning three scenarios. The proposed framework provides insight into the key acoustic features in the echoes that may be used for target identification and indicates that the animals use adaptive positioning to modulate the acoustic information needed for target discrimination.
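The general idea of discriminating targets from acoustic features of their echoes can be sketched in a toy example. Everything below is an illustrative assumption, not the models or data from the study: two hypothetical targets are represented by decaying sinusoids with different resonant responses, a single spectral-centroid feature is extracted, and a noisy echo is classified by the nearest feature value.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500e3                       # assumed sampling rate, Hz
t = np.arange(0, 1e-3, 1 / fs)   # 1 ms echo window

def echo(f0, decay):
    """Toy target echo: a decaying sinusoid standing in for a
    measured backscattered waveform (hypothetical)."""
    return np.exp(-decay * t) * np.sin(2 * np.pi * f0 * t)

def spectral_centroid(x):
    """One candidate acoustic feature: magnitude-weighted mean frequency."""
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return (f * X).sum() / X.sum()

# Two hypothetical targets with different resonant responses (f0, decay)
targets = {"A": (80e3, 4e3), "B": (120e3, 6e3)}

# "Training": feature value of one clean echo per target
centroids = {name: spectral_centroid(echo(*p)) for name, p in targets.items()}

def classify(x):
    """Assign a noisy echo to the target with the nearest feature value."""
    c = spectral_centroid(x)
    return min(centroids, key=lambda name: abs(centroids[name] - c))

noisy = echo(*targets["B"]) + 0.05 * rng.standard_normal(len(t))
print("classified as:", classify(noisy))
```

A real discrimination model would use many such features and a learned classifier, but the sketch conveys the framework's logic: identify which echo features separate the targets, then test whether those features survive noise and changing measurement geometry.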