My First Electronics Project

I want to document my introduction to electronics and hardware, and since I've yet to put anything useful on this domain, this seems like an appropriate place to record that journey. Because the primary goal of the coming posts is documentation rather than education, they will likely contain errors and amateur design decisions. At the same time, I do hope someone might find interesting or useful information in my attempts to learn the subject.

The first project I've decided to undertake is the development of a sonar video camera. Using sound to generate graphical representations of objects seems cool. The basic idea is to emit a single-frequency sound wave and use a microphone to listen for its echoes. Since sound travels at a roughly constant speed, the time it takes an echo to return tells us the distance to objects in the environment. A single microphone only gives us distance information, so there needs to be at least one more microphone to get the orientation of the object with respect to the sound source: the echo will reach one of the microphones slightly before the other, and that time difference gives us the angle of the echo.
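To make this more concrete, here is a minimal Python sketch of the two calculations described above. The function names, the 340.29 m/s speed of sound, and the microphone spacing are placeholder assumptions of mine, not measured values.

```python
import math

SPEED_OF_SOUND = 340.29  # m/s, assumed constant for now

def distance_from_echo(delay_s: float) -> float:
    """Distance to the reflecting object, given the time between emitting
    the pulse and hearing its echo. The sound travels out and back,
    hence the division by two."""
    return SPEED_OF_SOUND * delay_s / 2

def bearing_from_mic_pair(delay_left_s: float, delay_right_s: float,
                          mic_spacing_m: float = 0.1) -> float:
    """Rough angle of the echo (radians, 0 = straight ahead) from the
    difference in arrival time at two microphones mic_spacing_m apart.
    Uses the far-field approximation: path difference ~ spacing * sin(angle)."""
    path_difference = SPEED_OF_SOUND * (delay_right_s - delay_left_s)
    # Clamp to the valid asin range in case of noisy measurements.
    ratio = max(-1.0, min(1.0, path_difference / mic_spacing_m))
    return math.asin(ratio)
```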

Diagram showing the way the system should work

To generate images from the echoes, we'll convert each distance/angle measurement into a three-dimensional position vector. We can then take those position vectors and map them onto a 2D image using a projection matrix.
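As a sketch of that step, and under the assumption that each measurement carries a distance plus an azimuth and elevation angle, the snippet below builds the 3D position and projects it onto pixel coordinates with a simple pinhole-style projection matrix. The focal length and image size are made-up values for illustration.

```python
import numpy as np

def position_from_measurement(distance: float, azimuth: float,
                              elevation: float) -> np.ndarray:
    """Spherical (distance, azimuth, elevation) -> Cartesian [x, y, z],
    with z pointing away from the camera."""
    x = distance * np.cos(elevation) * np.sin(azimuth)
    y = distance * np.sin(elevation)
    z = distance * np.cos(elevation) * np.cos(azimuth)
    return np.array([x, y, z])

def project_to_image(point: np.ndarray, focal_length: float = 500.0,
                     image_size: tuple = (640, 480)) -> tuple:
    """Project a 3D point onto 2D pixel coordinates with a pinhole
    projection matrix (principal point at the image centre)."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    projection = np.array([[focal_length, 0.0, cx],
                           [0.0, focal_length, cy],
                           [0.0, 0.0, 1.0]])
    u, v, w = projection @ point
    return u / w, v / w
```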

As I come from a software background, that last part is where I'll begin; it will also make things easier to debug. In some sense, the hardware part is just putting two microphones, a single-frequency sound generator, and a board that can run those calculations fast enough into a nice-looking small box.

Of course, that impression might be, and probably is, naive. One issue I did run into regarding the single-frequency sound is that if this sonar camera is ever to be useful, it shouldn't create an annoying sound while it's used. The human hearing range is roughly 31 Hz to 19 kHz, so a simple solution is to use a frequency above or below that range. The problem is that common animals, like house cats and the bats that live in urban environments, are sensitive to sounds up to around 200 kHz. I've looked into sub-10 Hz frequencies, as they are below the lower limit for all animals, but they require bigger equipment that probably won't fit inside a small case. For now I've decided to concentrate on higher frequencies, probably above 80 kHz, and hope there are microphones sensitive to that range. One helpful aspect of this dilemma is that, at least in the first stages of development, the exact frequency doesn't matter much: most of the other parts of the system are agnostic to the frequency used.

Animal Hearing Range Table
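Just to play with those numbers, here is a tiny sketch that checks candidate emission frequencies against the limits quoted above. The values come from the text, not from a datasheet, and the animal limits are rough.

```python
# Rough hearing limits in hertz, taken from the text above (not a datasheet).
HUMAN_RANGE_HZ = (31, 19_000)     # human hearing range
ANIMAL_LOWER_LIMIT_HZ = 10        # sub-10 Hz is below the limit for all animals
ANIMAL_UPPER_LIMIT_HZ = 200_000   # some animals hear up to ~200 kHz

def audible_to_humans(freq_hz: float) -> bool:
    low, high = HUMAN_RANGE_HZ
    return low <= freq_hz <= high

def audible_to_some_animals(freq_hz: float) -> bool:
    return ANIMAL_LOWER_LIMIT_HZ <= freq_hz <= ANIMAL_UPPER_LIMIT_HZ

for candidate_hz in (5, 40_000, 80_000, 250_000):
    print(candidate_hz, audible_to_humans(candidate_hz), audible_to_some_animals(candidate_hz))
```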

In the beginning I want to implement the software in Python. It will be composed of three systems (a rough sketch of how they might fit together follows the list):

1. A debug framework that takes a 3D image and converts it into distance/angle data, the opposite of what the entire system is supposed to do. This will allow for effective testing of the system.

2. A system that simulates hardware events coming from the microphones, converts them into distance/angle data, and then into position vectors.

3. An image generator that is fed the position vectors and generates the final image.
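Below is a rough, hypothetical sketch of how those three systems might look as Python modules. All class and method names are placeholders I made up; the real interfaces will no doubt change once I start implementing.

```python
from dataclasses import dataclass

@dataclass
class EchoEvent:
    """A (simulated) hardware event: arrival time of one echo at each microphone."""
    delay_left_s: float
    delay_right_s: float

class SceneSimulator:
    """System 1: the debug framework. Takes a known 3D scene and produces the
    echo events the real hardware would report, so the pipeline can be tested."""
    def events_for_scene(self, scene):
        raise NotImplementedError

class EchoProcessor:
    """System 2: converts echo events into distance/angle data and then
    into 3D position vectors."""
    def positions_from_events(self, events):
        raise NotImplementedError

class ImageGenerator:
    """System 3: projects the position vectors onto the final 2D image."""
    def render(self, positions):
        raise NotImplementedError
```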

One aspect I wanted to leave for a later date, but which I think might be useful now, is the "loop". The system works by generating a single sound pulse, waiting for the multiple echoes coming from the environment, and converting them to position vectors. Since different objects are at different distances, their echoes take different amounts of time to return. One issue is that we might detect echoes from a previous pulse, so we need to dedicate a window of time to a single "run" of the signal→detection loop. The length of that loop limits the frames-per-second of the video we want to create: if we wait half a second per loop, we can only create two video frames per second. To get a realistic wait time for the loop, we choose the maximum distance the camera should be able to detect; since the sound has to travel out to the object and back, the formula is:

$$loop\ time = {\frac{2 \cdot distance}{speed\ of\ sound}}$$

For 10 meters, for instance, it comes out as:

$$loop\ time = {\frac{2 \cdot 10m}{340.29{\frac{m}{s}}}} \approx 0.059s$$

If we divide 1 by the loop time we get the FPS, which in this case comes out at about 17.
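A quick sketch of that calculation in Python, using the 340.29 m/s value from above:

```python
SPEED_OF_SOUND = 340.29  # m/s

def loop_time_s(max_distance_m: float) -> float:
    """Time to wait for echoes from objects up to max_distance_m away.
    The sound travels to the object and back, hence the factor of two."""
    return 2 * max_distance_m / SPEED_OF_SOUND

def max_fps(max_distance_m: float) -> float:
    """Upper bound on video frames per second for a given detection range."""
    return 1 / loop_time_s(max_distance_m)

print(loop_time_s(10), max_fps(10))  # ~0.059 s and ~17 frames per second
```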

Sonar Camera Distance vs. FPS Graph

This was a general overview of the project. In the interest of keeping this series somewhat self-contained, I'll delve deeper into the different aspects of the project in more detail. The first order of business is explaining some general properties of sound waves and computer sound, to be better positioned to understand how they are useful for this task.