At the end of last week, I finished creating the Python script that generates 180 different 1D grayscale banded images, simulating the scan of a soap bubble cluster at every angle from 0 to 180 degrees. As a memory refresher:
Fig. 1: 1D Image representing a scan from a given angle
This week, I worked on creating a new program that takes these 180 horizontally banded images, rotates each by the appropriate angle, and overlays them all to create a master image. This required quite a bit more time and brainpower than I had initially imagined, and involved a lot of learning about how to store, manipulate, and save images in both Java and Kotlin. Before I could add a specific image to the master overlay, I first had to load it back into the Kotlin program (since it was generated by the Python script), then pad it with a sufficient border so that all of the images would still share the same dimensions after rotation, and finally carry out the necessary rotation for that image. Here is a sample of a few of the images after this padding and rotation was carried out:
Fig. 2: Rotated, padded versions of a random sample of five scans
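The pad-then-rotate step described above can be sketched in Kotlin roughly as follows. Padding the canvas out to the image's diagonal guarantees that a rotation by any angle stays within bounds, so every rotated scan comes out with identical dimensions. The grayscale image type and the rotate-while-drawing approach are my assumptions for illustration, not necessarily how the actual project code does it.

```kotlin
import java.awt.image.BufferedImage

// Pad a banded scan image so that a rotation by any angle fits inside
// the canvas, then rotate it about the canvas center by the given angle
// (in degrees). Every scan padded this way ends up the same size.
fun padAndRotate(src: BufferedImage, degrees: Double): BufferedImage {
    // The diagonal of the original image bounds any rotated copy of it.
    val size = Math.ceil(Math.hypot(src.width.toDouble(), src.height.toDouble())).toInt()
    val out = BufferedImage(size, size, BufferedImage.TYPE_BYTE_GRAY)
    val g = out.createGraphics()
    // Rotate the drawing surface about the center of the padded canvas,
    // then draw the original image centered on it.
    g.rotate(Math.toRadians(degrees), size / 2.0, size / 2.0)
    g.drawImage(src, (size - src.width) / 2, (size - src.height) / 2, null)
    g.dispose()
    return out
}
```

Loading a generated scan back into the Kotlin side is then just `ImageIO.read(File("scan_042.png"))`, where the file name here is hypothetical.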
Then, after the padding and rotation of a given image was complete, I had to convert the image, which was stored as a BufferedImage object in Kotlin, back into a 2D pixel array so that I could manipulate the pixel values. Once converted, I was then able to add that image's pixel values to the master image pixel array. This was done for each of the 180 images, resulting in a master array with values potentially ranging from 0 to 255 * 180. Consequently, this array had to be rescaled to the valid range of 0 to 255. Lastly, the master 2D array had to be converted back into a BufferedImage object so that it could be saved as a valid image file. Here are the ground truth image of a randomly generated cluster, the corresponding overlay image, and a combined image of the ground truth cluster and the overlay:
Fig. 3: Ground truth image for randomly generated soap bubble cluster
Fig. 4: Overlay image of bubble cluster pictured in Fig. 3, computed using the naive method
Fig. 6: Combined overlay and ground truth image using the naive algorithm
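The accumulate-and-rescale step above can be sketched as follows: sum the grayscale values of all the rotated scans into one wide-range array, then scale that array back down into 0..255 and rebuild a BufferedImage from it. Scaling by the observed maximum (rather than by a fixed 255 * 180) is an assumption on my part about how the normalization works.

```kotlin
import java.awt.image.BufferedImage

// Overlay many equally sized grayscale images: sum per-pixel values
// (which may exceed 255), rescale to 0..255, and rebuild an image.
fun overlay(images: List<BufferedImage>): BufferedImage {
    val w = images[0].width
    val h = images[0].height
    val sums = Array(h) { LongArray(w) }

    // Accumulate grayscale values (raster band 0) across all images.
    for (img in images) {
        val raster = img.raster
        for (y in 0 until h) {
            for (x in 0 until w) {
                sums[y][x] += raster.getSample(x, y, 0).toLong()
            }
        }
    }

    // Rescale the sums into valid pixel values, 0..255.
    var max = 1L
    for (row in sums) for (v in row) if (v > max) max = v
    val out = BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY)
    val outRaster = out.raster
    for (y in 0 until h) {
        for (x in 0 until w) {
            outRaster.setSample(x, y, 0, (sums[y][x] * 255 / max).toInt())
        }
    }
    return out
}
```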
As you can see, the overlay image on its own does not resemble the ground truth image very well. So, to try to make more sense of the overlay picture, I then wrote code to combine the ground truth image of the soap bubble cluster with the overlay image, to see whether the edges lined up at all. Upon combining the two images, it became clear that the reconstruction was fairly inaccurate. Part of the reason, we conjectured, was that we used a fairly naive algorithm for computing the original 1D banded images. You may recall that we originally imitated the scans at given angles by computing the number of intersection points between lines shot at that angle and the arcs of the Voronoi diagram. We then simply stored the number of intersection points in a 1D array, converted the 1D array into an n x n 2D array scaled to valid pixel values between 0 and 255, and generated the grayscale banded image in this way.
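The banded-image step of that naive method can be sketched as below. The sketch takes the per-ray intersection counts as given (the ray-versus-arc counting is assumed to happen elsewhere), scales them to 0..255, and replicates each value across a row to produce the n x n horizontally banded image; note the original script was in Python, so this Kotlin version is only illustrative.

```kotlin
import java.awt.image.BufferedImage

// Naive banded image: given the intersection count for each of n
// parallel rays at one scan angle, scale the counts into 0..255 and
// replicate each value across its row of an n x n grayscale image.
fun bandedImage(counts: IntArray): BufferedImage {
    val n = counts.size
    var max = 1
    for (c in counts) if (c > max) max = c
    val img = BufferedImage(n, n, BufferedImage.TYPE_BYTE_GRAY)
    val raster = img.raster
    for (y in 0 until n) {
        val value = counts[y] * 255 / max   // scale count to a pixel value
        for (x in 0 until n) raster.setSample(x, y, 0, value)
    }
    return img
}
```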
However, this does not take into account the reflection or refraction of light, such as light glancing off a soap bubble surface when a ray hits it nearly tangentially. Therefore, to improve upon our method, we wanted to consider the intensity of a light ray reaching a sensor, based on how much light was lost to reflection or refraction on its way through the cluster. So, today I began working on an algorithm that takes this into account: instead of tracking the number of intersections between a given light ray and the arcs of the diagram, it tracks an intensity level that starts at 255 and diminishes based not only on the intersections but also on the angles of those intersections. So far, the algorithm generates an image; however, as you can see from the one below, it is still far off from the ground truth image. In the next few days I will be trying to improve the method for generating the overlay image, so that hopefully we will be able to integrate the machine learning aspect into the project during the final week of the formal REU program.
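The intensity-tracking idea can be sketched as below. Since the post says the attenuation model is still being worked out, the specific choice here, attenuating the remaining intensity by the cosine of the angle between the ray and the surface normal at each crossing (so a near-tangential hit removes almost all remaining light), is purely an illustrative assumption.

```kotlin
import kotlin.math.abs
import kotlin.math.cos

// Follow one ray through the cluster: start at full intensity (255) and
// attenuate at each surface crossing based on the incidence angle,
// measured in degrees from the surface normal. The cosine attenuation
// model is an assumed placeholder, not the project's final choice.
fun rayIntensity(normalAnglesDeg: List<Double>): Double {
    var intensity = 255.0
    for (angle in normalAnglesDeg) {
        // cos(angle) is near 1 for a head-on hit (little loss) and near 0
        // for a grazing hit (nearly all remaining light reflected away).
        intensity *= abs(cos(Math.toRadians(angle)))
    }
    return intensity
}
```

The per-ray intensities for one scan angle could then be fed into the same banding and overlay steps in place of the raw intersection counts.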