The U-M Mechanical Engineering Department is very pleased to announce that Bogdan Popa and Alex Shorter, assistant professors of mechanical engineering, and their multidisciplinary research team have been awarded a Multidisciplinary University Research Initiative (MURI) award from the U.S. Department of Defense. Their project, titled “Neurobehavioral, Physiological, and Computational Processes of Auditory Object Learning in Mammals,” seeks to understand how dolphins use biological sonar (biosonar) to identify underwater objects and create a map of their environment.
This research will support the development of acoustic imaging technology with a wide range of applications, such as ultrasound diagnostics in health care and sensing for autonomous vehicles. Human-made underwater sonar systems have been built on the same principles dolphins use with their biosonar, but they are not as effective. These sonar systems also pollute the ocean with so much sound that they threaten marine species that rely on sound to communicate and navigate underwater.
“This project aims to understand how dolphins’ biosonar is so much more effective than its human-made counterpart while using low-power sound that doesn’t pose a threat to marine life,” said Popa.
Through direct experimentation, the team plans to explore the “acoustic images” dolphins form using biosonar. Biosonar is the result of echolocation, in which the animals emit a series of clicks, or sound pulses, to sense their environment. Human-made sonar systems also use sound pulses, but they are often unable to use the information in the returning echoes as effectively as dolphins do.
“How the animals use the information in the returning echoes to form an ‘acoustic image’ of the environment, and what features in the echoes the animals use to identify objects, are both open research questions,” said Shorter.
To investigate these questions, the Michigan team is working as part of a larger collaboration based at Carnegie Mellon University to develop a framework that combines experimental sound and movement data, collected during echolocation tasks, with physics-based and data-driven acoustic models to create an estimated acoustic scene.