1 Introduction
The popularity of hyperspectral imaging has increased in recent years, as it is useful in many scientific fields, such as remote sensing [20], agriculture [13], military applications [18], medicine [5], and food [14]. Hyperspectral images map the reflectance [3] of an object or scene at different wavelengths, captured consecutively within a specified range of the electromagnetic spectrum. Hyperspectral imaging generates a hyperspectral data cube in which the spectral information represents a third dimension (λ) added to a two-dimensional spatial image (x, y) [17].
The main characteristic of hyperspectral images is that they record unique spectral signatures, which can be used by a classifier to recognize an object's physical and chemical properties [16].
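To make the cube structure concrete, the following MATLAB sketch treats a hyperspectral cube as a three-dimensional array; the array sizes and indices are illustrative only and are not taken from the system described here.

```matlab
% A hyperspectral data cube as a 3-D array: two spatial dimensions (x, y)
% and one spectral dimension (lambda). All sizes here are illustrative.
cube = zeros(256, 256, 100);         % 256x256 pixels, 100 spectral bands
band = cube(:, :, 50);               % one 2-D image at a single wavelength
sig  = squeeze(cube(128, 128, :));   % spectral signature of one pixel
```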
Hyperspectral imaging systems typically work with monochrome image sensors, resulting in grayscale images that record the number of photons coming from the object [8]. Image colorization consists of adding color attributes to grayscale images; it is by now a classic problem because the original color information has been lost [21].
2 Related Work
Various techniques for adding color to grayscale images are reported in the literature. One of the most widely used techniques is based on user-drawn scribbles indicating the colors desired in the image [12, 7, 23, 10, 9, 6, 19].
Another method of colorizing grayscale images is segmentation; this technique separates certain regions of the image so that several colors can be added [4, 22, 15].
There are many other methods for adding color to grayscale images: some use example images from which the colors are picked up [2, 1], while others perform histogram regression to achieve a color mapping [11]. In recent years, work has been done on simpler and more optimized algorithms using embedded systems such as FPGAs for hardware image processing and colorization [24].
3 Methodology
3.1 Hyperspectral Imaging System
The hyperspectral imaging system (HIS) used (Fig. 1) consists of a halogen lamp as a point source of light, a light-collimating optical element, an optical lens array, two diaphragms, a diffraction grating, an image sensor, an aluminum plate, and a NEMA17 stepper motor.
The halogen lamp is encapsulated in a cube made of shell paper, lined with aluminum foil on the inside and sealed on the outside, to concentrate the light from the lamp as much as possible.
The aluminum plate has a thickness of 3 mm, a width of 10 cm, and a total length of 40 cm. The plate has holes that allow laboratory elements to be fastened to it. On the upper part, the image sensor, a diaphragm, and a lens are mounted by means of rods and laboratory bases; these elements are necessary to form the image coming from the diffraction grating. At the bottom, at one of the ends, the stepper motor is coupled, giving the plate the angular displacement needed to capture images from the diffraction grating.
3.2 HIS Working
When the halogen lamp is turned on, light escapes from the cube and enters the light-collimating element. Once the light has been collimated, it passes through a diaphragm that controls the amount of light that will reach the object.
The light then strikes the object of study; part of the light is absorbed by the object, while another part is transmitted through it.
The light coming from the object passes through an arrangement of optical lenses, which we call the 'telescope'; this arrangement scales the image coming from the object.
After passing through the array of optical lenses, the image reaches the diffraction grating, which diffracts each of the light rays forming the image of the object into its different frequencies of the visible electromagnetic spectrum. The diffracted light can be observed both on the right side of the zero order, known as order 1, and on the left side of the zero order, known as order -1.
Each of the three orders (-1, 0, 1) of the HIS can be detected by the image sensor because the aluminum plate is driven by a NEMA17 stepper motor coupled to a wheel, which allows an angular scanning motion around the diffraction grating.
The stepper motor is configured to work at half steps (∆S = 0.9°) and is controlled by a PIC16F877A microcontroller programmed in C, with an A4988 integrated circuit as the power driver for the motor.
Finally, each wavelength diffracted by the grating can be found with the grating equation:

$$ d \sin\theta = m\,\lambda, \quad (1) $$

where λ is the diffracted wavelength, θ is the angle of maximum intensity of the diffracted wavelength, d is the grating period, and m is the diffraction order.
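As an illustration, the angle-to-wavelength mapping of equation 1 can be evaluated over the motor's scan range. In the MATLAB sketch below, the grating pitch (600 lines/mm) and the angular scan range are assumptions chosen for illustration; only the half-step size ∆S = 0.9° comes from the text.

```matlab
% Angle-to-wavelength mapping via the grating equation d*sin(theta) = m*lambda.
% The grating pitch (600 lines/mm) and scan range are assumed values.
d      = 1e-3 / 600;              % grating period in meters (assumed pitch)
m      = 1;                       % first diffraction order
dS     = 0.9;                     % motor half-step in degrees (from the text)
theta  = 14:dS:26;                % hypothetical angular scan range, degrees
lambda = d * sind(theta) / m;     % diffracted wavelength, meters
fprintf('%5.1f deg -> %3.0f nm\n', [theta; lambda * 1e9]);
```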
3.3 Image Capture
Once the HIS was ready to work, the images of each of the wavelengths coming from the diffraction grating were captured. Image capture was performed twice. The first capture was made with a commercial RGB image sensor (CANON camera), in order to have a color reference for each of the wavelengths that make up the hyperspectral image cube.
The second image capture was performed using a laboratory monochrome image sensor (ThorLabs camera). These images are the ones that will be used for hyperspectral analysis and visualization.
3.4 HIS Characterization and Resolution
As can be seen in Fig. 2, the HIS with the RGB sensor can detect wavelengths between λ = 400 nm and λ = 670 nm, while with the monochrome sensor it can detect wavelengths between λ = 400 nm and λ = 730 nm.
The total number of images captured in the range described above was 400 with the RGB sensor and 500 with the monochrome sensor. This difference is because the monochrome sensor is more sensitive than the RGB sensor.
Knowing the range of detected wavelengths and the total number of captured images, we can determine the resolving power of the HIS as follows:

$$ \Delta\lambda = \frac{\lambda_{max} - \lambda_{min}}{N}, \quad (2) $$

where N is the total number of images captured in that range. Substituting values into equation 2 we get:

$$ \Delta\lambda_{RGB} = \frac{670\ \mathrm{nm} - 400\ \mathrm{nm}}{400} = 0.675\ \mathrm{nm}, \qquad \Delta\lambda_{mono} = \frac{730\ \mathrm{nm} - 400\ \mathrm{nm}}{500} = 0.66\ \mathrm{nm}. $$
An OCEAN OPTICS brand spectrophotometer was used for the characterization of the HIS and to obtain the spectral signature of the halogen lamp.
3.5 Image Processing
The set of captured images will be called the hyperspectral data cube from now on.
Once the information from the hyperspectral data cubes (the RGB cube and the monochrome cube) had been obtained, the analysis of the information was carried out with the help of MATLAB software.
Because the original spatial dimensions of the RGB images are too large, the images were first cropped to reduce the digital processing workload. For this, a region of interest (ROI) was selected that contains only the information useful for the analysis.
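A minimal MATLAB sketch of this cropping step is shown below; the file name and rectangle coordinates are hypothetical placeholders, since the paper does not report the ROI coordinates.

```matlab
% Crop each RGB frame to a region of interest (ROI) to reduce processing work.
rgb = imread('rgb_band_001.png');   % hypothetical file name for one band
roi = [350 220 400 400];            % [xmin ymin width height], assumed values
rgbROI = imcrop(rgb, roi);          % keep only the useful region
```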
Once the ROI of the RGB images had been selected, the maximum intensity value of the pixels that make up the ROI was obtained. As can be seen in Fig. 4, the distribution of RGB colors at each wavelength corresponds to what is expected from the electromagnetic spectrum in the visible range.
This analysis also tells us the RGB intensity value that must be represented in the visualization of the monochrome hyperspectral cube.
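The per-band color reference could be gathered as in the sketch below; the loop bound comes from the text (400 RGB bands), but the file names and the use of the per-channel maximum over the whole ROI are assumptions about the exact procedure.

```matlab
% Build a per-band RGB color reference from the cropped RGB images.
nBands = 400;                          % number of RGB bands (from the text)
refColors = zeros(nBands, 3);          % one reference color per wavelength
for b = 1:nBands
    rgb = im2double(imread(sprintf('rgb_band_%03d.png', b)));  % hypothetical names
    for k = 1:3
        ch = rgb(:, :, k);
        refColors(b, k) = max(ch(:));  % maximum intensity of each channel
    end
end
```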
The monochrome hyperspectral data cube was also analyzed, since it is necessary to know the spatial information contained in each image.
The monochrome analysis revealed that the information is not confined to the areas with the highest intensity level: the black areas also have an intensity value, albeit a very low one. This is important because these images are the ones that will be used for colorization and visualization. The maximum intensity value of each image was obtained, as well as the average of these values.
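These statistics can be computed per band as in the following sketch; the file names are hypothetical, and the 500-band count comes from the text.

```matlab
% Per-band statistics of the monochrome cube; even "black" regions carry
% a low but nonzero intensity.
nBands = 500;                           % monochrome bands (from the text)
maxI  = zeros(nBands, 1);
meanI = zeros(nBands, 1);
for b = 1:nBands
    I = im2double(imread(sprintf('mono_band_%03d.png', b)));  % hypothetical names
    maxI(b)  = max(I(:));               % maximum intensity of the band
    meanI(b) = mean(I(:));              % average intensity of the band
end
```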
The spatial dimensions of the monochrome images are adequate for digital processing, so it was decided not to modify their original size.
3.6 Detection and Masking
As previously mentioned, the monochrome information is not limited to the area with the most light intensity; information exists in all the pixels that make up the image. For this reason, a binary mask (Fig. 5j) was constructed to limit the color-mapping processing to only the region containing the object information.
The process to obtain the binary mask is as follows (a MATLAB sketch of the full pipeline is given after these steps):
The information of the original image is obtained (Fig. 5a).
A search is made for the high frequencies that the image contains; these frequencies are found at the edges of the image, so the result is an edge detection (Fig. 5b).
The data in Fig. 5b is complemented, that is, white becomes black and black becomes white, thus obtaining Fig. 5c.
To obtain Fig. 5d, we apply image edge detection again, but with the difference that the threshold value for detection is less than the one applied in step 2.
Fig. 5e is the result of multiplying, pixel by pixel, the data of Fig. 5c with the data of Fig. 5d.
A “filling” of the missing information is applied to the edges obtained in Fig. 5e, thus obtaining Fig. 5f.
A complement is applied to the image data of Fig. 5f, resulting in Fig. 5h.
Fig. 5i is the result of applying a “filling” of the missing information in the data of Fig. 5b.
Fig. 5h is multiplied with Fig. 5i, thus obtaining the binary mask (Fig. 5j).
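Interpreted in MATLAB, the pipeline might look like the sketch below. The edge detector ('Sobel') and the two threshold values are assumptions, since the paper does not specify them, and the input file name is hypothetical.

```matlab
% Binary-mask pipeline following Fig. 5; thresholds are illustrative only.
I  = im2double(imread('mono_band_125.png'));  % original image (Fig. 5a)
e1 = edge(I, 'Sobel', 0.08);                  % edge detection, higher threshold (Fig. 5b)
c1 = imcomplement(e1);                        % complement of the edges (Fig. 5c)
e2 = edge(I, 'Sobel', 0.03);                  % second detection, lower threshold (Fig. 5d)
p1 = c1 & e2;                                 % pixel-wise product of 5c and 5d (Fig. 5e)
f1 = imfill(p1, 'holes');                     % fill the missing information (Fig. 5f)
c2 = imcomplement(f1);                        % complement of the filled edges (Fig. 5h)
f2 = imfill(e1, 'holes');                     % fill the edge map of Fig. 5b (Fig. 5i)
mask = c2 & f2;                               % final binary mask (Fig. 5j)
imshow(mask);
```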
3.7 Adding Color Attributes to Monochrome Images
Once the binary mask has been obtained, we use it as a map of the pixels that will be assigned RGB values, which allows us to visualize the approximate color (also known as false color) of the monochrome hyperspectral images.
The colorization process of monochrome hyperspectral images is explained in the flowchart of Fig. 6.
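A minimal sketch of the color assignment for one band follows, reusing the per-band reference colors of Section 3.5 and the binary mask of Section 3.6; the exact mapping is defined by the paper's flowchart (Fig. 6), so this is only an approximation of it.

```matlab
% False-color assignment for one monochrome band: scale the band's assumed
% RGB reference color by the monochrome intensity, inside the mask only.
mono   = im2double(imread('mono_band_125.png'));  % hypothetical band (ThorLabs)
rgbRef = refColors(125, :);                       % reference color for this band
color  = zeros([size(mono) 3]);
for k = 1:3
    color(:, :, k) = mono .* mask .* rgbRef(k);
end
imshow(color);
```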
4 Results
The results obtained by applying the monochrome hyperspectral image colorization algorithm are shown in Fig. 7.
Column 1) corresponds to the image resulting from the algorithm. The original information of the image captured with the monochrome sensor (ThorLabs camera) is in column 2).
In column 3) is the image obtained with the RGB sensor (CANON camera) which is used for color comparison of the images.
Finally, each of the different sections is used as a reference for the wavelengths captured with the HIS; these references were obtained with an OCEAN OPTICS brand spectrophotometer.
5 Conclusion
Based on the results obtained, it can be seen that the algorithm in charge of determining the area to be colored is not perfect and has errors that must be corrected. Even so, this first algorithm helps us understand the process to follow for colorizing and displaying hyperspectral images.
One of the errors in the detection of the area can be seen in Fig. 7b, where the algorithm was not able to detect all the edges of the object in the image.
A recurring error occurs from panel d) to panel i), where the algorithm was able to detect the circumference of the object but failed to determine the area corresponding to the number 3.
It should be mentioned that the errors present in Fig. 7 are not strictly caused by the programming of the algorithm: it works directly with the data obtained from the monochrome images, so if the image information is inaccurate, the algorithm will not work correctly.
Finally, it should be noted that the images taken with the ThorLabs camera (Fig. 7, column 2) continue up to λ = 730 nm, but it is not possible to assign RGB attributes to these bands since there is no RGB reference information to which they can be related.