Vol. 23 No. 4 (2024): Revista UIS Ingenierías
Articles

Spectral classification using a dual optical configuration and deep neural networks

Andrés Jerez
Universidad Industrial de Santander
Geison Blanco
Universidad Industrial de Santander
Sergio Urrea
Universidad Industrial de Santander
Hans García
Universidad Industrial de Santander
Henry Arguello
Universidad Industrial de Santander

Published 2024-11-20

Keywords

  • spectral classification,
  • single-pixel camera,
  • diffractive optical camera,
  • multilevel phase mask,
  • end-to-end optimization,
  • deep neural networks

How to cite

Jerez, A., Blanco, G., Urrea, S., García, H., & Arguello, H. (2024). Clasificación espectral mediante una configuración óptica dual y redes neuronales profundas. Revista UIS Ingenierías, 23(4), 17–30. https://doi.org/10.18273/revuin.v23n4-2024002

Abstract

Spectral classification labels materials based on their spectral information. Single-pixel cameras (SPC) are used as a low-cost solution for acquiring spectral images, providing high spectral resolution but low spatial resolution. In addition, diffractive optical cameras (DOC) based on multilevel phase masks (MPM) can acquire spectral features for classification tasks. Traditional spectral classification approaches have not incorporated SPC and DOC into a single optical architecture. This work proposes a dual optical system based on SPC and DOC for spectral classification. Specifically, the MPM height map and the parameters of the deep neural network are learned jointly through end-to-end (E2E) optimization. The proposed method comprises an optical layer that models the dual system, a fusion layer that estimates the spectral image, and a classification network that labels the materials in spectral datasets. Simulation results show an improvement of up to 3% in classification metrics compared with other optical architectures.
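To make the E2E joint-learning idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the optical physics of both cameras is replaced by trainable surrogates (a sensing matrix standing in for the SPC coded apertures and a per-pixel modulation standing in for the MPM height map), followed by a fusion layer and a classification network. The class name, layer sizes, and toy dimensions (8 bands, 16×16 pixels, 64 shots, 5 classes) are illustrative assumptions only.

```python
import torch
import torch.nn as nn


class DualOpticalClassifier(nn.Module):
    # Sketch of the dual SPC + DOC pipeline: a trainable "optical layer"
    # (linear surrogates for the coded apertures and the MPM height map),
    # a fusion layer that estimates the spectral image, and a classifier.
    def __init__(self, bands=8, height=16, width=16, spc_shots=64, n_classes=5):
        super().__init__()
        n_pixels = height * width
        # SPC branch: learnable sensing matrix (surrogate for the coded apertures)
        self.spc_matrix = nn.Parameter(0.01 * torch.randn(spc_shots, n_pixels))
        # DOC branch: learnable surrogate for the multilevel phase-mask height map
        self.height_map = nn.Parameter(torch.rand(height, width))
        # Fusion layer: estimates the spectral image from both sets of measurements
        self.fusion = nn.Sequential(
            nn.Linear(spc_shots * bands + n_pixels, n_pixels * bands),
            nn.ReLU(),
        )
        # Classification network: labels the material from the fused estimate
        self.classifier = nn.Sequential(
            nn.Linear(n_pixels * bands, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )
        self.bands = bands

    def forward(self, x):
        # x: (batch, bands, height, width) spectral image
        b = x.shape[0]
        flat = x.reshape(b, self.bands, -1)  # (batch, bands, pixels)
        # SPC measurements: one set of single-pixel shots per spectral band
        spc = torch.einsum('mp,blp->blm', self.spc_matrix, flat)
        # DOC measurement: height-map modulation, then integration over bands
        doc = (flat * torch.sigmoid(self.height_map).reshape(1, 1, -1)).sum(dim=1)
        fused = self.fusion(torch.cat([spc.reshape(b, -1), doc], dim=1))
        return self.classifier(fused)


# Joint (E2E) training step: the loss gradient updates the optical parameters
# (spc_matrix, height_map) together with the fusion and classification weights.
model = DualOpticalClassifier()
x = torch.rand(4, 8, 16, 16)              # toy batch of spectral images
labels = torch.randint(0, 5, (4,))        # toy material labels
loss = nn.CrossEntropyLoss()(model(x), labels)
loss.backward()
```

In the paper, the optical layer would instead encode the physical forward models of the SPC and the diffractive camera, so that the learned height map and sensing codes remain physically realizable; the surrogate above only illustrates how a single loss can drive both the optics and the network.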

