This week I worked on a program that aligns audio recordings so that the sounds share the same starting time. This improves the accuracy of the spectrograms plotted from the data. I tested the program on audio data that Nhung collected for an earlier practice exercise, and this afternoon we're going to integrate it into the actual project to see how much of a difference it makes in both the inputs and the outputs.
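One common way to align two recordings is to estimate the lag between them with cross-correlation and then trim or pad accordingly. The sketch below is a minimal illustration of that idea, not necessarily the method the actual program uses:

```python
import numpy as np

def estimate_offset(ref, sig):
    """Estimate how many samples `sig` lags behind `ref` (positive = sig starts later)."""
    corr = np.correlate(sig, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def align(ref, sig):
    """Shift `sig` so its content lines up with `ref`, then crop both to equal length."""
    lag = estimate_offset(ref, sig)
    if lag > 0:
        sig = sig[lag:]           # sig starts late: drop its leading samples
    elif lag < 0:
        sig = np.concatenate([np.zeros(-lag), sig])  # sig starts early: pad the front
    n = min(len(ref), len(sig))
    return ref[:n], sig[:n]
```

For example, a copy of a signal delayed by three samples comes back with an estimated offset of 3, and after `align` the two arrays match sample for sample.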
I am also practicing building a simple CNN that takes spectrograms of audio data as input and predicts depth. The inputs are the same data I used to test the alignment program. Once I can train the network successfully, I think I will be ready to start working more directly with Nhung on the project. For now I'm focused on getting more practice building and training neural networks that work with audio data.
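A simple spectrogram-to-depth network can be set up as a small convolutional regressor. This is only a toy sketch in PyTorch; the layer sizes and architecture are illustrative assumptions, not the project's actual network:

```python
import torch
import torch.nn as nn

class DepthCNN(nn.Module):
    """Toy CNN: a 1-channel spectrogram (freq x time) -> one scalar depth estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to 16 x 1 x 1 regardless of input size
        )
        self.head = nn.Linear(16, 1)   # regression head: predicted depth

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DepthCNN()
spec = torch.randn(4, 1, 128, 64)  # batch of 4 fake spectrograms
pred = model(spec)
print(pred.shape)  # torch.Size([4, 1])
```

Because the pooling is adaptive, the same model accepts spectrograms of different sizes, which is convenient when recordings vary in length; training would pair it with a regression loss such as `nn.MSELoss` against measured depths.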
Over the next week, we will be collecting hundreds of thousands of data points in different locations and from different perspectives on objects. Visiting the bat lab at Virginia Tech last Friday gave me great insight into how bats and echolocation work, and we collected some great data by pointing the bat at a leaf wall (we're also waiting for more data to be sent over).