Diana joins me this week to take on the task of training Bruce, our moody artificial echolocator. One of our biggest goals was to create a large dataset: 100,000 audio-depth samples from various rooms in the ISAT/CS building. Below are the problems we ran into and the solutions we worked out:
- The current configuration of the microphones produced weak audio signals (i.e. the majority of each waveform fell below the audio alignment threshold). Possible approaches:
- Change the angles and rotations of the mics and the bat ears attached to them.
- Remove the bat ears entirely so the microphone pop shields are unobstructed.
- Increase the gain. This was the option we tried first, and it worked! We got much stronger audio signals.
- The data we collected takes up a lot of storage space, so we moved it to a flash drive. Then we weren’t sure how to get the data onto Zemenar, which we only access remotely. Possible approaches:
- Simplify the architecture of our neural network and run it on a GPU-equipped Linux lab machine, which we could physically connect the flash drive to. This did not work: the network had too many nodes for the available GPU memory.
- Use the scp command to copy all of the data to Zemenar overnight? Each file takes about two hours to transfer, so we wanted this to be our last resort.
- Our solution: Attach the flash drive to any computer we had physical access to, preprocess the data there, and then scp the resulting file (which contains the xtrain, ytrain, xtest, and ytest sets) to Zemenar. That one file is all that’s required to run the neural network scripts, and it doesn’t take too long to move over.
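As a rough sketch of that hand-off, the four preprocessed sets can be bundled into a single compressed archive before the scp step. This is a minimal illustration assuming the sets are NumPy arrays; the function names and file name are ours, not the actual scripts'.

```python
import numpy as np

# Hypothetical sketch: bundle the four preprocessed arrays into one
# compressed file so only a single archive has to be scp'd to Zemenar.
def bundle_datasets(xtrain, ytrain, xtest, ytest, out_path="preprocessed.npz"):
    np.savez_compressed(out_path, xtrain=xtrain, ytrain=ytrain,
                        xtest=xtest, ytest=ytest)
    return out_path

# On the training machine, the same four arrays come back out by key.
def load_datasets(path="preprocessed.npz"):
    with np.load(path) as data:
        return (data["xtrain"], data["ytrain"],
                data["xtest"], data["ytest"])
```

With something like this, scp only has to move one archive overnight instead of thousands of raw sample files.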
- The data-collecting script, written by Dr. Sprague, was producing faulty h5 files… or so we thought? We had a field day with this problem. We could not get our first batch of samples to preprocess; the program kept crashing when it tried to open the files. While debugging, we got error messages like “can’t find ‘audio’ or ‘depth’”, “dataset does not exist”, and “file does not exist.” The files were not being saved under names ending in ‘.h5’, which is why we initially couldn’t open them. Our attempts at debugging:
- Sanity check: collect a very small set of data using the last known-good data-collecting script, which saves npz files instead of h5 files. These files preprocessed smoothly, so the preprocessing program itself was not the obvious culprit.
- Comb through the data-collecting script for places where the output data file could have been corrupted. Nothing stood out: the code that writes the h5 files was mostly straightforward.
- Replace the h5 code in the most recent data-collecting script with npz code. That was harder than expected because we didn’t understand the reshaping/resizing logic applied to samples before they were saved as datasets in the output file.
- Our solution: after many hours of unsuccessful debugging, we undid most of the changes we had made to the scripts. Just for the sake of it, we tried running the preprocessing program on that first batch of samples again, and lo and behold, it worked. We honestly have no idea what fixed the morning’s problem, but we finally have a working system for gathering data. Goodness, we haven’t even gotten to the neural network training part of this project yet…
- It’s almost comical how many little problems we ran into during the week that Dr. Sprague is away. Every other time we transport Bruce to a new room to collect data, something happens to the laptop. Most of the classrooms in the CS portion of ISAT have been occupied this week (conferences? classes? orientation?), so the laptop has usually fallen asleep by the time we’ve found an empty room to collect data in. Problems we’ve faced in waking it up: a frozen screen, no text (literally: the text disappears from everything on the screen), and unresponsive sound settings. Sadly, we’ve had to throw away big sets of data because the collected data is wacky (e.g. the Audacity waveforms are just blocks of color). Sometimes, Bruce gets fed up and doesn’t produce any sound at all.
In summary, the little problems added up in a sort of humorous way. Sort of. We’ve been running neural networks on the samples collected so far (while we gather more data) just to see the impact of sample size, but we have no conclusive observations yet. Looking forward to working with Dr. Sprague next week.