Draft:PACPOD: Planar Acoustic Camera

This user has publicly declared that they have a conflict of interest regarding the Wikipedia article PACPOD: Planar Acoustic Camera.

PACPOD: Planar Acoustic Camera

Problem Statement:

Planar acoustic cameras offer a promising solution for real-world sound localization, employing beamforming techniques for accurate spatial analysis. However, integrating and synchronizing multiple lines of audio data pose significant challenges. The multiplexing process, crucial for efficient operation, must handle diverse data sources across the camera's surface while preserving spatial and temporal fidelity.

Traditional multiplexing techniques may not directly apply to planar acoustic cameras due to their unique geometry and distributed microphone arrays. Achieving high-resolution sound localization demands sophisticated strategies to manage large data volumes with low latency and high precision. Moreover, multiplexing must be robust against environmental factors like ambient noise and reverberation.

This research aims to unravel the complexities of multiplexing in planar acoustic cameras, investigating innovative strategies to overcome data integration, synchronization, and fidelity preservation challenges. By addressing these issues, we can enhance the capabilities of planar acoustic cameras, enabling their widespread adoption in applications ranging from industrial monitoring to immersive audio experiences.

Goals and Requirements:

Goals and requirements for this planar acoustic camera were developed by our group through research on existing acoustic cameras on the market. To meet market standards, we needed to accomplish the following:

Requirements:

  • GUI with spectrograms and high pass, low pass, band pass, and band stop filters
  • Visual and acoustic image overlay
  • Audio input from the microphones
  • Video input from the camera
  • Create beamformed matrices using Acoular, a Python library
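The filtering requirement can be sketched in Python. The following is a minimal illustration, not the project's actual GUI code; it assumes SciPy is available and uses the 16 kHz sample rate stated for the UMA-16. Zero-phase filtering is chosen here so the filtered audio stays time-aligned with the video.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16_000  # sample rate of the UMA-16 capture (Hz)

def design_filter(kind, f_lo=None, f_hi=None, order=4, fs=FS):
    """Return second-order sections for one of the GUI's four filter types."""
    if kind == "lowpass":
        return butter(order, f_hi, btype="lowpass", fs=fs, output="sos")
    if kind == "highpass":
        return butter(order, f_lo, btype="highpass", fs=fs, output="sos")
    if kind == "bandpass":
        return butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    if kind == "bandstop":
        return butter(order, [f_lo, f_hi], btype="bandstop", fs=fs, output="sos")
    raise ValueError(f"unknown filter kind: {kind!r}")

def apply_filter(x, sos):
    """Zero-phase filtering: no group delay, so overlays stay time-aligned."""
    return sosfiltfilt(sos, x)
```

For example, `apply_filter(x, design_filter("bandpass", 300, 3000))` keeps a 1 kHz tone essentially unchanged while strongly attenuating a 100 Hz hum.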

Goals:

  • Pinpoint frequency emissions across the x, y, and z axes
  • Filter the proper frequencies based on the user’s input
  • Real-time analysis
  • Create a microphone array with a high signal-to-noise ratio and suppressed side lobes
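The side-lobe goal can be quantified numerically. The sketch below, a simplification rather than the project's actual array analysis, computes the beam pattern of a single row of 16 microphones at half-wavelength spacing with uniform weighting; the well-known result is a peak side-lobe level of roughly -13 dB, which motivates tapering or non-uniform layouts.

```python
import numpy as np

N_MICS = 16                                       # one row of the array
u = np.linspace(-1.0, 1.0, 4001)                  # u = sin(theta), scan angles
# Phase of mic n at angle u for half-wavelength spacing: pi * n * u
phase = np.pi * np.outer(np.arange(N_MICS), u)
af = np.abs(np.exp(1j * phase).sum(axis=0))       # uniform-weight array factor

main_peak = af.max()                              # equals N_MICS at broadside
side_region = np.abs(u) > 2.0 / N_MICS            # outside the first nulls
sll_db = 20 * np.log10(af[side_region].max() / main_peak)
print(f"peak side-lobe level: {sll_db:.1f} dB")   # about -13 dB for uniform weights
```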

Approach:

To use our time efficiently, we split the tasks among the group members: the graphical user interface, PCB design and manufacturing, localization of sounds from the microphone array and USB camera, and operation of the USB camera and visualization. Once complete, the tasks fit together to form the overall system. The GUI takes in user input that starts the beamforming process. Wave signals and a picture are then received from the microphones and camera, and this data is displayed on the GUI as an image overlaid with a heatmap of the filtered signal.

Design:

Our acoustic camera design uses a 16-microphone array shield, the miniDSP UMA-16, sampling at 16 kHz, in combination with a USB camera to localize sound sources with beamforming techniques from the Acoular library. The 16 microphones are positioned to capture sound from various directions, providing comprehensive coverage and improving the accuracy of the system. The array captures a wide range of frequencies, enabling effective detection and localization of sound sources.

The process begins with the Python code, which controls the microphone array, records the audio, and generates the beamformed matrices. The data is then transferred to a MATLAB graphical user interface (GUI), where it is merged with the video feed from the USB camera so that sound sources can be visualized on the video. The GUI lets the user view the time series and spectrogram of the WAV file and select the frequencies to filter, focusing the analysis on particular sound sources. Once the user selects frequencies, the beamforming process is rerun to produce a new set of matrices restricted to those frequencies. Overlaying the beamformed matrices on the video feed gives users a clear and precise visualization of the sound sources.
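Acoular's beamformers are considerably more sophisticated, but the core idea behind the heatmap can be sketched as single-frequency delay-and-sum beamforming. Everything below is illustrative: the 4x4 geometry, spacing, source position, and scan grid are invented numbers, not the project's actual configuration.

```python
import numpy as np

C, F0 = 343.0, 2000.0                 # speed of sound (m/s), analysis frequency (Hz)
k = 2 * np.pi * F0 / C                # wavenumber

# 4x4 planar microphone array in the z = 0 plane, 5 cm pitch (illustrative)
mx, my = np.meshgrid(np.arange(4) * 0.05, np.arange(4) * 0.05)
mics = np.column_stack([mx.ravel() - 0.075, my.ravel() - 0.075, np.zeros(16)])

# Simulated monochromatic point source 0.5 m in front of the array
src = np.array([0.10, -0.06, 0.5])
r = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * k * r) / r           # spherical-wave pressure at each mic

# Steer the array to each point of a 21x21 grid in the source plane
xs = np.linspace(-0.2, 0.2, 21)
power = np.zeros((21, 21))
for j, gy in enumerate(xs):
    for i, gx in enumerate(xs):
        d = np.linalg.norm(mics - np.array([gx, gy, 0.5]), axis=1)
        steer = np.exp(-1j * k * d)   # expected phase from this grid point
        power[j, i] = np.abs(np.vdot(steer, p)) ** 2   # delay-and-sum output

iy, ix = np.unravel_index(power.argmax(), power.shape)
print(f"peak at x={xs[ix]:.2f} m, y={xs[iy]:.2f} m")   # should match the source
```

The `power` array is exactly the kind of matrix that gets color-mapped and overlaid on the camera frame.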

The combination of the 16-microphone array, the USB camera, and the MATLAB GUI provides an accessible, user-friendly solution for sound localization that simplifies the process while maintaining accuracy.

Component and System Testing:

We have created a custom four-layer PCB designed to interface with the miniDSP UMA-16 processing system. The board features 16 digital MEMS microphones that record simultaneously; their output is stored in a .wav file and transmitted via USB. An asymmetrical layout was chosen to increase accuracy.
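One common way to generate an asymmetrical planar layout (an assumption for illustration, not necessarily the pattern used on this PCB) is a Vogel "sunflower" spiral: successive points rotated by the golden angle avoid the repeated inter-element spacings of a regular grid that produce strong grating lobes.

```python
import numpy as np

def sunflower_layout(n_mics=16, radius=0.075):
    """Vogel spiral: n points with no repeated inter-element spacing.

    radius is the array radius in metres (illustrative value here).
    """
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # ~137.5 degrees
    i = np.arange(n_mics)
    r = radius * np.sqrt((i + 0.5) / n_mics)      # equal-area radial spread
    theta = i * golden_angle
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

pts = sunflower_layout()
```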

While we await our custom PCB, we have been testing using the miniDSP UMA-16 microphone array. Any heat maps shown here were generated using output from this device. This has allowed us to ensure the software portion of our project functions before we use our own array.

Project Status:

At the moment, the user is able to load an audio file into our MATLAB code, apply filters to focus on different frequencies within the file, and analyze its separate channels. The data is then sent to our Python code, where the beamforming calculations are performed.

A heat map corresponding to 10 seconds of audio is generated and overlaid onto an image, indicating where the various noises originate. The camera footage is recorded in tandem with the audio.
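The overlay step can be sketched with simple alpha blending. This is an illustrative NumPy version (the actual overlay is done in the MATLAB GUI); a real implementation would apply a proper colormap rather than a single red channel.

```python
import numpy as np

def overlay_heatmap(frame, heat, alpha=0.5):
    """Blend a beamforming power map onto an RGB camera frame.

    frame: (H, W, 3) float array in [0, 1]; heat: (h, w) power map.
    The map is normalized, nearest-neighbor upscaled to the frame size,
    and blended into the red channel (a stand-in for a real colormap).
    """
    heat = heat - heat.min()
    if heat.max() > 0:
        heat = heat / heat.max()
    H, W = frame.shape[:2]
    ys = (np.arange(H) * heat.shape[0] // H).clip(0, heat.shape[0] - 1)
    xs = (np.arange(W) * heat.shape[1] // W).clip(0, heat.shape[1] - 1)
    big = heat[np.ix_(ys, xs)]                 # upscaled heatmap, shape (H, W)
    colored = np.zeros_like(frame)
    colored[..., 0] = big                      # red channel carries the heatmap
    return (1 - alpha) * frame + alpha * colored
```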
