Located in Bisei, part of the city of Ibara, this observatory is renowned for its modern facilities for observing stars and other celestial objects. Equipped with a large 101 cm telescope, it offers educational programmes and events for astronomy enthusiasts of all ages. Visitors can take part in night-time observation sessions, accompanied by researchers, to explore the wonders of the starry sky.
This time, I went to meet the various astrophysicists who work there. In the later stages of my internship, I'll have the opportunity to help them with the computing problems they face.
I learned about the importance of the site's observing conditions and how each piece of equipment enhances the observation of the stars. The telescope alone isn't enough, and using it only to take photos isn't very interesting. Gathering information on the conditions of each photo enables us to build up more detailed analyses and predict the next good observation windows, for example for viewing the Milky Way. In addition, a number of tools were developed last year by the two interns who preceded me, including the telescope's web interface and a tool for recording light pollution over the Moon's cycle.
During my week's work at the observatory, I was able to choose various projects to help them. From the list of needs I was given, I chose to develop a programme to detect the presence of shooting stars in images. The camera taking these shots runs around the clock on the roof of the observatory, its lens pointed at the sky, capturing wide-angle views. I found this subject quite interesting and looked into the various possible methods for tackling the problem.
One effective approach is to use the Canny edge detector to find edges in the image. This method identifies edges by analysing changes in luminosity. So, from a colour image, we obtain a black-and-white representation in which white pixels highlight the detected contours.
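To give an idea of what this step looks like in code, here is a minimal sketch using OpenCV; the file name, blur size and thresholds are illustrative values, not necessarily those of the final script.

```python
import cv2

# Load an all-sky frame and convert it to greyscale, since
# Canny works on single-channel luminosity data.
image = cv2.imread("allsky_frame.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# A light blur suppresses sensor noise that would otherwise
# produce spurious edges; the two thresholds bound the gradient
# strengths accepted as edges.
blurred = cv2.GaussianBlur(grey, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("edges.png", edges)
```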
Next, I used the Hough transform to detect shooting stars, which generally appear as lines in images. This method looks for alignments of points forming line segments, while taking into account parameters such as the minimum line length and the maximum distance between segments to connect them. To obtain accurate results, it was necessary to fine-tune these parameters.
Once run, the script singles out the images containing at least one detected line. A remaining weakness of this program is the differentiation between shooting stars and other objects such as planes, which could be further improved. However, this was beyond my capabilities in the limited time I had available.
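Putting the two steps together, a minimal sketch of the detection-and-filtering logic might look like this (again, the thresholds and default parameters are illustrative, not the tuned values):

```python
import cv2
import numpy as np

def contains_line(path, min_length=80, max_gap=5):
    """Return True if the image at `path` contains at least
    one detected line segment (a shooting-star candidate)."""
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(cv2.GaussianBlur(grey, (5, 5), 0), 50, 150)

    # Probabilistic Hough transform: searches for aligned edge
    # points, keeping only segments longer than `min_length`
    # and bridging gaps smaller than `max_gap` pixels.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=60, minLineLength=min_length,
                            maxLineGap=max_gap)
    return lines is not None
```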
The next project was to create an automatic pipeline to process the long-exposure images taken with the main telescope. Usually, image processing is done by hand using proprietary software. Although it doesn't take very long to do, they would like automatic processing so that they can show the results to the public more quickly, in less than two minutes. The raw images are in FITS format, a format used in astronomy to store images and scientific data. This format has a header containing information such as the date and time of observation or the position of the celestial object, followed by the data block that stores the image itself.
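For illustration, here is how a FITS file can be opened with the astropy library; the header keywords shown are common ones, but which are present depends on the capture software.

```python
from astropy.io import fits

with fits.open("object_frame.fits") as hdul:
    header = hdul[0].header  # metadata: date, exposure, target...
    data = hdul[0].data      # the image itself, as a NumPy array

    # DATE-OBS and EXPTIME are common FITS keywords, but which
    # ones are present depends on the capture software.
    print(header.get("DATE-OBS"), header.get("EXPTIME"))
    print(data.shape, data.dtype)
```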
I had to understand and sequence the different steps they were doing in order to reproduce them in Python and create a chain of operations. To process a shot properly, you need at least three types of photos.
The first type is 'darks': photos taken without light, which capture the thermal noise of the sensor. By combining several of these images and calculating their average, we obtain an accurate representation of the background noise to be subtracted from the other images.
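A minimal sketch of this step, assuming the dark frames sit in a hypothetical darks/ folder:

```python
import numpy as np
from astropy.io import fits
from glob import glob

# Stack all dark frames and average them pixel by pixel: the
# random part of the noise cancels out, leaving the sensor's
# systematic thermal signal as a 'master dark'.
dark_stack = np.array([fits.getdata(path).astype(np.float64)
                       for path in glob("darks/*.fits")])
master_dark = dark_stack.mean(axis=0)
```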
The second type is 'flats': images used to correct lighting variations due to optical imperfections and shadows. The master 'dark' is first subtracted from each 'flat' to correct for the noise, then the corrected images are combined to create an average 'flat' image, which is used to even out the brightness of the object image.
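And the corresponding sketch for the flats, reusing master_dark from the previous step; the median normalisation is a common convention, and the observatory's exact recipe may differ:

```python
import numpy as np
from astropy.io import fits
from glob import glob

# Remove the thermal signal from each flat, average the corrected
# frames, then normalise by the median so the master flat holds
# relative illumination values centred around 1.0.
flat_stack = np.array([fits.getdata(path).astype(np.float64) - master_dark
                       for path in glob("flats/*.fits")])
master_flat = flat_stack.mean(axis=0)
master_flat /= np.median(master_flat)
```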
And finally, the last type is 'objects': the shots to be processed. Their exposure time can vary enormously; in my case, they were shots with a one-minute exposure.
Original image
The 'flat' and 'dark' frames can be reused, which is why these steps are kept separate. As they do not need to be regenerated for every shot, it was decided to move them into another script.
The average of the 'dark' images is subtracted from the object image to eliminate background noise, which clarifies the image and makes astronomical details more visible.
The image of the object is then divided by the 'flat' image to correct for variations in illumination, ensuring that the final image has uniform brightness, which makes it easier to analyse the details.
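Both corrections fit in a couple of lines; this sketch reuses master_dark and master_flat from the earlier steps, and the small epsilon is my own guard against division by zero:

```python
import numpy as np
from astropy.io import fits

# Dark subtraction removes the thermal signal; dividing by the
# normalised master flat evens out the illumination. The small
# epsilon guards against division by zero on dead pixels.
raw = fits.getdata("object_frame.fits").astype(np.float64)
calibrated = (raw - master_dark) / np.maximum(master_flat, 1e-6)
```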
Digital camera sensors use a Bayer filter to capture colours, and debayerisation (also known as demosaicing) is the process that converts this raw data into an RGB colour image.
The colours in the most common digital images are represented by three main components: Red, Green and Blue (RGB). Each pixel is a combination of these three basic colours, expressed as numbers between 0 and 255. For example, a pixel with high values for red and low values for green and blue will appear bright red. In an image file, the colours are stored sequentially for each pixel, allowing millions of different colours to be represented by combining these intensities. In the FITS format, each colour component can be stored in a separate layer, requiring this debayerisation process to transform the raw data into a complete colour image.
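As a sketch, OpenCV can perform the debayerisation, assuming the calibrated frame still holds the raw Bayer mosaic; the pattern constant below is an assumption and must match the actual sensor:

```python
import cv2
import numpy as np

# OpenCV's Bayer conversions expect integer input, so the
# calibrated frame is clipped and scaled back to 16-bit first.
mosaic = np.clip(calibrated, 0, 65535).astype(np.uint16)

# COLOR_BayerRG2RGB assumes an RGGB mosaic; the constant must
# match the sensor's actual pattern (RGGB, BGGR, GBRG, GRBG).
rgb = cv2.cvtColor(mosaic, cv2.COLOR_BayerRG2RGB)
```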
To adjust the colour balance, the histogram of the image is modified using the median of the pixel values, then stretched with the standard deviation of the pixels to correct imbalances and improve colour fidelity.
Next, the brightness of the image is adjusted to make full use of the available dynamic range, giving the image more life and revealing details hidden in dark or light areas.
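Here is a minimal sketch of these two adjustments applied per channel; the three-standard-deviation clipping width is an illustrative choice, not necessarily the one used at the observatory:

```python
import numpy as np

def stretch_channel(channel, n_sigma=3.0):
    """Centre the histogram on the median, then stretch it over
    +/- n_sigma standard deviations to fill the 0..1 range."""
    median = np.median(channel)
    sigma = channel.std()
    lo, hi = median - n_sigma * sigma, median + n_sigma * sigma
    return np.clip((channel - lo) / (hi - lo), 0.0, 1.0)

# Stretching each colour plane around its own median also
# rebalances the three channels relative to one another.
stretched = np.dstack([stretch_channel(rgb[..., c].astype(np.float64))
                       for c in range(3)])
```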
Noise reduction eliminates speckles and grain in the image, improving sharpness and allowing clearer observation of astronomical objects.
Finally, an adaptive contrast enhancement method increases local contrast to bring out fine detail, helping to accentuate important features of the image without over-saturating bright areas.
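For these last two steps, here is a sketch using OpenCV's non-local-means denoising and CLAHE (contrast-limited adaptive histogram equalisation) as a typical adaptive contrast method; the actual pipeline may use different algorithms and parameters:

```python
import cv2
import numpy as np

# Back to 8-bit, the range expected by OpenCV's denoising
# and histogram-equalisation routines.
img8 = (stretched * 255).astype(np.uint8)

# Non-local-means denoising removes grain while preserving
# the sharp edges of stars.
denoised = cv2.fastNlMeansDenoisingColored(img8, None, 5, 5, 7, 21)

# CLAHE boosts local contrast on the luminance channel only,
# so colours are not distorted and bright areas don't saturate.
lab = cv2.cvtColor(denoised, cv2.COLOR_RGB2Lab)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l = clahe.apply(l)
final = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_Lab2RGB)
```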
Final result
Sequencing these different steps wasn't easy. Not knowing what result to expect, I often went in the wrong direction. Fortunately, I was able to see the expected end result, which helped me a lot in orienting my work.
Many thanks to Dr. Itoh, who welcomed and guided me during my week at the observatory. It was really interesting to learn more about capture techniques in astronomy.