Computación y Sistemas

Online version ISSN 2007-9737; Print version ISSN 1405-5546

Comp. y Sist. vol. 27 no. 4, Ciudad de México, Oct./Dec. 2023; Epub May 17, 2024

https://doi.org/10.13053/cys-27-4-4608 

Articles

Colorization of Monochrome Hyperspectral Images

Martín A. Vázquez-Castrejón1 

Omar Palillero-Sandoval1  * 

J Jesús Escobedo-Alatorre1 

Pedro A. Márquez-Aguilar1 

José A. Marbán-Salgado1 

Jonny P. Zavala-De Paz2 

Álvaro Zamudio-Lara1 

E. Eduardo Antúnez-Cerón1 

Francisco A. Castillo-Velásquez2 

Carlos Rodriguez-Donate3 

1 Center for Research in Engineering and Applied Science (CIICAp), Mexico. martin.vazquezcs@uaem.edu.mx

2 Universidad Politécnica de Querétaro, Mexico. jonny.zavala@upq.edu.mx, francisco.castillo@upq.edu.mx

3 Universidad de Guanajuato, División de Ingenierías, Campus Irapuato-Salamanca, DEM, Mexico.


Abstract:

Hyperspectral images have been used for several years, since the information they provide is valuable in many areas of science. The present work focuses on the visualization of hyperspectral images of the visible range in the RGB color space. The images were obtained with a hyperspectral imaging system (HIS) built in the laboratory, using a monochrome image sensor for capture. Visualization was achieved by means of an algorithm programmed in MATLAB that colorizes the monochrome images, and the colorized images were compared with images captured at the same wavelengths with an RGB sensor.

Keywords: MATLAB; RGB; monochrome imaging; imaging system; hyperspectral imaging

1 Introduction

The popularity of hyperspectral imaging has increased in recent years, as it is very useful in many scientific fields, for example, remote sensing [20], agriculture [13], military applications [18], medicine [5], food [14], etc. A hyperspectral image is a reflectance map [3] of an object or scene at different wavelengths, captured consecutively within a specified range of the electromagnetic spectrum. Hyperspectral imaging generates a hyperspectral data cube, in which the spectral information represents a third dimension (λ) added to a two-dimensional spatial image (x, y) [17].

The main characteristic of hyperspectral images is that they allow the recording of unique spectral signatures, so they can be used by a classifier capable of recognizing an object's physical and chemical properties [16].

Hyperspectral imaging systems typically work with monochrome image sensors, resulting in grayscale images that record the number of photons coming from the object [8]. Image colorization consists of adding color attributes to grayscale images; it is a classic problem, since the color information has been lost [21].

2 Related Work

Various techniques for adding color to grayscale images are reported in the literature. One of the most widely used is based on user-drawn scribbles indicating the colors to appear in the image [12, 7, 23, 10, 9, 6, 19].

Another method of colorizing grayscale images is segmentation; this technique separates regions of the image so that several colors can be applied [4, 22, 15].

There are many other methods for adding color to grayscale images: some use example images from which the colors are picked up [2, 1]; others perform histogram regression to achieve color mapping [11]. In recent years, work has been done on simpler, more optimized algorithms that use embedded systems such as FPGAs for hardware image processing and colorization [24].

3 Methodology

3.1 Hyperspectral Imaging System

The hyperspectral imaging system (HIS) used (Fig. 1) consists of a halogen lamp as a point source of light, a light-collimating optical element, an optical lens array, two diaphragms, a diffraction grating, an image sensor, an aluminum plate, and a NEMA17 stepper motor.

Fig. 1 Hyperspectral imaging system 

The halogen lamp is enclosed in a cube made of foam board, lined with aluminum foil on the inside and sealed on the outside, to concentrate the light from the lamp as much as possible.

The aluminum plate is 3 mm thick, 10 cm wide, and 40 cm long. The plate has holes that allow laboratory elements to be fastened to it. On the upper part, the image sensor, a diaphragm, and a lens are mounted by means of rods and laboratory bases; the latter are necessary to form the image from the diffraction grating. At the bottom, at one of the ends, the stepper motor is coupled, giving the plate the angular displacement necessary to capture images from the diffraction grating.

3.2 HIS Operation

When the halogen lamp is turned on, light exits the cube and enters the collimating element. Once the light has been collimated, it passes through a diaphragm that controls the amount of light that will reach the object.

Once the light level is adequate, the light strikes the object of study; part of it is absorbed by the object, while another part is transmitted through it.

The light coming from the object then passes through an arrangement of optical lenses, which we have called the ‘telescope’, that scales the image coming from the object.

After passing through the lens array, the image reaches the diffraction grating, which diffracts each of the light rays forming the image of the object into its different frequencies of the visible electromagnetic spectrum. The diffracted light can be observed both on the right side of the zero order, known as order 1, and on the left side, known as order -1.

Each of the three orders (-1, 0, 1) of the HIS can be detected by the image sensor because the aluminum plate is driven by a NEMA17 stepper motor coupled to a wheel, which provides an angular scanning motion around the diffraction grating.

The stepper motor is configured to work at half steps (∆S = 0.9°) and is controlled with a PIC16F877A microcontroller programmed in C, with an A4988 integrated circuit as the power driver for the motor.

Finally, each wavelength diffracted by the grating can be found with the following equation:

λ = (1 × 10⁶) sin θ, (1)

where λ is the diffracted wavelength and θ is the angle of maximum intensity of the diffracted wavelength.
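
As a minimal illustration of Eq. (1), the following MATLAB sketch converts measured angles of maximum intensity into wavelengths; the angle values are hypothetical placeholders, not measurements from the HIS:

% Minimal sketch of Eq. (1): wavelength from the measured angle of
% maximum intensity. The angles below are hypothetical placeholders.
lambdaOf = @(thetaDeg) 1e6 * sind(thetaDeg);  % Eq. (1); sind takes degrees
theta = [0.023 0.032 0.040];                  % placeholder angles (degrees)
lambda = lambdaOf(theta)                      % diffracted wavelengths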

3.3 Image Capture

Once the HIS was ready, the images of each of the wavelengths coming from the diffraction grating were captured. Image capture was performed twice. The first capture was made with a commercial RGB image sensor (CANON camera), in order to have a color reference for each of the wavelengths that make up the hyperspectral image cube.

The second image capture was performed using a laboratory monochrome image sensor (ThorLabs camera). These images are the ones that will be used for hyperspectral analysis and visualization.

3.4 HIS Characterization and Resolution

As can be seen in Fig. 2, the HIS with the RGB sensor can detect wavelengths between λ = 400 nm and λ = 670 nm, while with the monochrome sensor it can detect wavelengths between λ = 400 nm and λ = 730 nm.

Fig. 2 HIS resolution 

The total number of images captured in the range described above was 400 images with the RGB sensor and 500 images with the monochrome sensor. This difference in the total number of images captured is because the monochrome sensor is more sensitive than the RGB sensor.

Knowing the value of the range of detected wavelengths and the total number of captured images, we can determine the resolving power of the HIS as follows:

Resolution = wavelength range / captured images. (2)

Substituting values into Eq. (2) we get:

Resolution = 270 nm / 400 images = 0.675 nm/image. (3)

An OCEAN OPTICS spectrophotometer was used to characterize the HIS and to obtain the spectral signature of the halogen lamp.

3.5 Image Processing

From now on, the set of captured images will be called the hyperspectral data cube.

Once the information from the hyperspectral data cubes (RGB cube and monochrome cube) had been obtained, the analysis was carried out with MATLAB.

Because the original spatial dimensions of the RGB images are too large, they were first cropped to reduce the digital processing load. For this, a region of interest (ROI) was selected that contains only the information useful for the analysis (Fig. 3).

Once the ROI of the RGB images was selected, the maximum intensity value of the pixels that make up the ROI was obtained. As can be seen in Fig. 4, the distribution of RGB colors at each wavelength corresponds to what is expected from the visible range of the electromagnetic spectrum.

Fig. 3 ROI selection 

Fig. 4 Maximum intensity per RGB channel 

This analysis also tells us the RGB intensity value that must be represented when visualizing the monochrome hyperspectral cube.
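
A minimal MATLAB sketch of this step is shown below; the file name and ROI rectangle are placeholders, not the values used in the experiment:

% Minimal sketch: crop the ROI of one RGB capture and take the
% per-channel maxima. File name and rectangle are placeholders.
rgb = imread('rgb_band.png');           % one RGB capture of a single band
roi = imcrop(rgb, [100 100 400 400]);   % ROI as [xmin ymin width height]
maxRGB = squeeze(max(roi, [], [1 2]))   % maximum intensity per R, G, B channel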

The monochrome hyperspectral data cube was also analyzed, since it is necessary to know the spatial information contained in each image.

The monochrome analysis revealed that the information is not confined to the areas of highest intensity: the black areas also carry an intensity value, albeit a very low one. This is important, as these images are the ones used for colorization and visualization. The maximum intensity value of each image was obtained, as well as the average of these values.
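
A minimal sketch of these per-band statistics, assuming the monochrome cube is stored as a rows × columns × bands array (the dimensions below are placeholders):

% Minimal sketch: per-band maximum and mean of the monochrome cube.
% Random placeholder data stands in for the 500 captured bands.
cube = rand(480, 640, 500);                   % rows x cols x bands (placeholder)
maxPerBand  = squeeze(max(cube, [], [1 2]));  % brightest pixel of each band
meanPerBand = squeeze(mean(cube, [1 2]));     % average intensity of each band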

The spatial dimensions of the monochrome images are adequate for digital processing, so it was decided not to modify their original size.

3.6 Detection and Masking

As previously mentioned, the monochrome information is not limited to the area with the most light intensity; information exists in all the pixels that make up the image. For this reason, a binary mask (Fig. 5j) was built to limit the color-mapping processing to the region containing the object information.

Fig. 5 Masking 

The process to obtain the binary mask is as follows (a MATLAB sketch of these steps follows the list):

  1. The information of the original image is obtained (Fig. 5a).

  2. A search is made for the high frequencies contained in the image; these frequencies are found at the edges, resulting in edge detection (Fig. 5b).

  3. The data in Fig. 5b is complemented, that is, white becomes black and black becomes white, thus obtaining Fig. 5c.

  4. To obtain Fig. 5d, we apply image edge detection again, but with the difference that the threshold value for detection is less than the one applied in step 2.

  5. Fig. 5e is the result of multiplying the data of Fig. 5c by the data of Fig. 5d.

  6. A “filling” of the missing information is applied to the edges obtained in Fig. 5e, thus obtaining Fig. 5f.

  7. A complement is applied to the image information of Fig. 5f, resulting in Fig. 5h.

  8. Fig. 5i is the result of applying a “filling” of the missing information in the data of Fig. 5b.

  9. Fig. 5h is multiplied with Fig. 5i, thus obtaining the binary mask (Fig. 5j).
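
The following MATLAB sketch mirrors these nine steps; the choice of edge detector ('sobel') and the two threshold values are assumptions for illustration, not the exact settings used in this work:

% Minimal sketch of the nine-step binary-mask pipeline; variable names
% follow the panels of Fig. 5. Detector and thresholds are assumptions.
img   = im2double(imread('mono_band.png'));  % step 1: original image (Fig. 5a)
fig5b = edge(img, 'sobel', 0.10);            % step 2: high-threshold edge detection
fig5c = imcomplement(fig5b);                 % step 3: complement of the edges
fig5d = edge(img, 'sobel', 0.05);            % step 4: lower-threshold edge detection
fig5e = fig5c & fig5d;                       % step 5: pixel-wise multiplication
fig5f = imfill(fig5e, 'holes');              % step 6: fill the missing information
fig5h = imcomplement(fig5f);                 % step 7: complement of the filled image
fig5i = imfill(fig5b, 'holes');              % step 8: fill the first edge map
mask  = fig5h & fig5i;                       % step 9: binary mask (Fig. 5j)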

3.7 Adding Color Attributes to Monochrome Images

Once the binary mask has been obtained, it is used as a map of the pixels to which RGB values are assigned, allowing us to visualize the approximate color (also known as false color) of the monochrome hyperspectral images.

The colorization process of monochrome hyperspectral images is explained in the flowchart of Fig. 6.

Fig. 6 Colorization algorithm 
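
As a minimal sketch of this assignment, the code below colors one monochrome band with a per-band reference color; the file name, the reference triplet, and the identity mask are placeholders (in the actual pipeline, the mask of Fig. 5j and the per-wavelength RGB maxima of Section 3.5 would be used):

% Minimal sketch of the false-color assignment for one band.
% File name, refRGB, and the mask are placeholders for illustration.
gray   = im2double(imread('mono_band.png'));  % monochrome band image
mask   = true(size(gray));                    % placeholder for the Fig. 5j mask
refRGB = [0.10 0.85 0.20];                    % per-band reference color (placeholder)
colorized = zeros([size(gray), 3]);           % RGB output image
for ch = 1:3
    colorized(:,:,ch) = (gray .* mask) * refRGB(ch);  % color only the masked pixels
end
imshow(colorized)                             % display the false-color result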

4 Results

The results obtained by applying the monochrome hyperspectral image colorization algorithm are shown in Fig. 7.

Fig. 7 Hyperspectral imaging display 

Column 1) corresponds to the image resulting from the algorithm. The original information of the image captured with the monochrome sensor (ThorLabs camera) is in column 2).

In column 3) is the image obtained with the RGB sensor (CANON camera), which is used for color comparison.

Finally, each of the different sections serves as a reference for the wavelengths captured with the HIS; these references were obtained with an OCEAN OPTICS spectrophotometer.

5 Conclusion

Based on the results obtained, it can be seen that the algorithm in charge of determining the area to be colored is not perfect and has errors that must be corrected. Even so, this first algorithm helps us understand the process to follow for colorizing and displaying hyperspectral images.

One of the area-detection errors can be seen in Fig. 7b, where the algorithm was not able to detect all of the edges of the imaged object.

A recurring error occurs from subsection d) to subsection i), where the algorithm was able to detect the circumference of the object but failed to determine the area corresponding to the number 3.

It should be mentioned that the errors present in Fig. 7 are not strictly caused by the programming of the algorithm: it works directly with the data obtained from the monochrome images, so if the image information is inaccurate, the algorithm will not work correctly.

Finally, it should be noted that the images taken with the ThorLabs camera (Fig. 7, column 2) continue up to λ = 730 nm, but it is not possible to assign them RGB attributes, as there is no RGB information to which they can be related.

Acknowledgment

This work has been supported by the National Council of Science and Technology (CONACYT), Mexico.

References

1. Bugeau, A., Ta, V. T., Papadakis, N. (2014). Variational exemplar-based image colorization. In IEEE Transactions on Image Processing, Vol. 23, No. 1, pp. 298–307. DOI: 10.1109/TIP.2013.2288929. [ Links ]

2. Li, B., Zhao, F., Su, Z., Liang, X., Lai, Y. K., Rosin, P. L. (2017). Example-based image colorization using locality consistent sparse representation. IEEE Transactions on Image Processing, Vol. 26, No. 11, pp. 5188–5202. DOI: 10.1109/TIP.2017.2732239. [ Links ]

3. Clancy, N., Jones, G., Maier-Hein, L., Elson, D. S., Stoyanov, D. (2020). Surgical spectral imaging. Medical Image Analysis, Vol. 63, p. 101699. DOI: 10.1016/j.media.2020.101699. [ Links ]

4. Sýkora, D., Buriánek, J., Zára, J. (2005). Colorization of black and white cartoon. Image and Vision Computing, Vol. 23, No. 9, pp. 767–782. DOI: 10.1016/j.imavis.2005.05.010. [ Links ]

5. Hussein, A. A., Yang, X. (2012). Colorization using edge preserving smoothing filter. Springer-Verlag, pp. 1681–1689. DOI: 10.1007/s11760-012-0402-5. [ Links ]

6. Kawulok, M., Smolka, B. (2011). Texture-adaptive image colorization framework. EURASIP Journal on Advances in Signal Processing, Article No. 99. DOI: 10.1186/1687-6180-2011-99. [ Links ]

7. Khan, M. J., Khan, H. S., Yousaf, A., Khurshid, K., Abbas, A. (2018). Modern trends in hyperspectral image analysis: A review. IEEE Access, Vol. 6, pp. 14118–14129. DOI: 10.1109/ACCESS.2018.2812999. [ Links ]

8. Lagodzinski, P., Smolka, B. (2014). Application of the extended distance transformation in digital image colorization. Multimedia Tools and Applications, Vol. 69, pp. 111–137. DOI: 10.1007/s11042-012-1246-2. [ Links ]

9. Leifman, G., Tal, A. (2012). Mesh colorization. EUROGRAPHICS, Vol. 31. No. 2, pp. 421–430. DOI: 10.1111/j.1467-8659.2012.03021.x. [ Links ]

10. Liu, S., Zhang, X. (2012). Automatic grayscale image colorization using histogram regression. Pattern Recognition Letters, Vol. 33, No. 13, pp. 1673–1681. DOI: 10.1016/j.patrec.2012.06.001. [ Links ]

11. Fang, L., Wang, J., Lu, G., Zhang, D., Fu, J. (2019). Hand-drawn grayscale image colorful colorization based on natural image. The Visual Computer, Vol. 35, pp. 1667–1681. DOI: 10.1007/s00371-018-1613-8. [ Links ]

12. Lu, B., Dao, P. D., Liu, J., He, J., Shang, J. (2020). Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sensing, Vol. 12, No. 16, pp. 2659. DOI: 10.3390/rs12162659. [ Links ]

13. Ma, J., Sun, D. W., Pu, H., Cheng, J. H., Wei, Q. (2019). Advanced techniques for hyperspectral imaging in the food industry: Principles and recent applications. Annual Review of Food Science and Technology, Vol. 10, pp. 197–220. [ Links ]

14. Martinez-Escobar, M., Leng-Foo, J., Winer, E. (2012). Colorization of CT images to improve tissue contrast for tumor segmentation. Computers in Biology and Medicine, Vol. 42, No. 12, pp. 1170–1178. DOI: 10.1016/j.compbiomed.2012.09.008. [ Links ]

15. Shaikh, M. S., Jaferzadeh, K., Thörnberg, B., Casselgren, J. (2021). Calibration of a hyper-spectral imaging system using a low-cost reference. Sensors, Vol. 21, No. 11, pp. 3738. DOI: 10.3390/s21113738. [ Links ]

16. Ravikanth, L., Jayas, D. S., White, N. D. G., Fields, P. G., Sun, D. W. (2017). Extraction of spectral information from hyperspectral data and application of hyperspectral imaging for food and agricultural products. Food Bioprocess Technol, Vol. 10, pp. 1–33. DOI: 10.1007/s11947-016-1817-8. [ Links ]

17. Shimoni, M., Haelterman, R., Perneel, C. (2019). Hyperspectral imaging for military and security applications: Combining myriad processing and sensing techniques. IEEE Geoscience and Remote Sensing Magazine, Vol. 7, No. 2, pp. 101–117. DOI: 10.1109/MGRS.2019.2902525. [ Links ]

18. Jin, S. Y., Choi, H. J., Tai, Y. W. (2014). A randomized algorithm for natural object colorization. Computer Graphics Forum, Vol. 33, No. 2, pp. 205–214. DOI: 10.1111/cgf.12294. [ Links ]

19. Veraverbeke, S., Dennison, P., Gitas, L., Hulley, G., Kalashnikova, O., Katagis, T., Kuai, L., Meng, R., Roberts, D., Stavros, N. (2018). Hyperspectral remote sensing of fire: State-of-the-art and future perspectives. Remote Sensing of Environment, Vol. 216, pp. 105–121. DOI: 10.1016/j.rse.2018.06.020. [ Links ]

20. Xia, M., Liu, X., Wong, T. T. (2018). Invertible grayscale. ACM Transactions on Graphics, Vol. 37, No. 6, pp. 1–10. DOI: 10.1145/3272127.3275080. [ Links ]

21. Wang, X. H., Jia, J., Liao, H. Y., Cai, L. H. (2012). Affective image colorization. Journal of Computer Science and Technology, Vol. 27, pp. 1119–1128. DOI: 10.1007/s11390-012-1290-4. [ Links ]

22. Yatziv, L., Sapiro, G. (2006). Fast image and video colorization using chrominance blending. IEEE Transactions on Image Processing, Vol. 15, No. 5, pp. 1120–1129. DOI: 10.1109/TIP.2005.864231. [ Links ]

23. Zhi, C., Cui, J., Deng, J., Du, W. (2020). An FPGA-based simple RGB-HSI space conversion algorithm for hardware image processing. IEEE Access, Vol. 8, pp. 173838–173853. DOI: 10.1109/ACCESS.2020.3026189. [ Links ]

Received: May 09, 2023; Accepted: September 01, 2023

* Corresponding author: Omar Palillero-Sandoval, e-mail: omar.palillero@uaem.mx

Creative Commons License This is an open-access article distributed under the terms of the Creative Commons Attribution License