Computación y Sistemas

On-line version ISSN 2007-9737; Print version ISSN 1405-5546

Comp. y Sist. vol. 27, n. 4, Ciudad de México, Oct./Dec. 2023. Epub May 17, 2024.

https://doi.org/10.13053/cys-27-4-4771 

Articles

Weighted U-NET++ and 2D-HMM Ensemble for Gastrointestinal Image Segmentation

Jairo Enrique Ramírez-Sánchez1 

Pedro A. Martínez-Barrón2 

Hannia Medina-Aguilar2 

Romeo Sánchez-Nigenda2  * 

1 Tecnológico de Monterrey, Monterrey, Mexico. A01750443@tec.mx.

2 Universidad Autónoma de Nuevo León, Facultad de Ingeniería Mecánica y Eléctrica, San Nicolás de los Garza, Mexico. alfonso.martinezbrrn@uanl.edu.mx, hannia.medinaglr@uanl.edu.mx.


Abstract:

One of the most widely used treatments for cancer of the gastrointestinal (GI) tract is radiotherapy, which requires manual segmentation of the affected organs to deliver radiation without affecting healthy cells. Deep learning techniques have been used, especially variants of U-Net, to automate the organ segmentation process, increasing the efficiency of medical treatment. However, the effective segmentation of the GI tract organs remains an open research problem due to their high capacity to deform because of body movement and respiratory function. This work proposes a methodology that develops a weighted ensemble integrating U-Net++ models and Hidden Markov Models (2D-HMM) for semantic segmentation of the stomach and bowels. Our empirical evaluation reports a score of 0.811 for the Dice coefficient using Leave-One-Out Cross-Validation, which provides robustness to the results.

Keywords: Image segmentation; U-NET architecture; machine learning; hidden Markov models

1 Introduction

In 2018, an estimated 4.8 million people were diagnosed with cancer in the gastrointestinal tract worldwide, representing 26% of the global cancer incidence.

Projections based on current trends predict an increase of 58% to 7.5 million by 2040 [1]. Half of these patients are eligible for radiotherapy [13].

During this process, a medical linear accelerator (LINAC) delivers high doses of radiation to kill cancer cells, which can also damage nearby healthy cells.

The damage to healthy cells causes side effects such as hearing loss, vomiting, and extreme tiredness [18]. To reduce this collateral damage, oncologists try to direct X-rays at tumors while avoiding the organs at risk.

Magnetic Resonance Imaging Guided Linear Accelerator (MR-Linac) systems allow observation of tumors and organs in real time to adjust the radiation direction; however, oncologists must manually segment organs, extending treatment sessions up to an hour, during which the patient must remain immobile.

In recent years, Artificial Intelligence techniques such as convolutional neural networks have been able to perform auto-segmentation in cases of brain tumors [6], head and neck cancer [11], and prostate cancer [9, 8], halving the time of treatment sessions [3]. However, there are few advances in the segmentation of gastrointestinal (GI) tract organs, mainly because soft tissue surrounds abdominal organs, and such organs can vary in shape and location throughout the day due to digestive and respiratory movements [10].

In this work, we propose a methodology, based on deep learning, for the pre-processing and segmentation of magnetic resonance images of the digestive tract. The architecture of our approach is a weighted ensemble of U-Net++ models and two-dimensional Hidden Markov Models (2D-HMM) that performs semantic segmentation of the stomach and the small and large bowels.

The proposed methodology has the potential to help implement more effective and efficient treatments for patients by speeding up the segmentation process.

We evaluated the proposed methodology using a dataset of images from the UW-Madison Carbone Cancer Center, provided publicly on the Kaggle platform as part of the UW-Madison GI Tract Image Segmentation competition, without compromising the run-time and memory requirements of the segmentation process. This work is organized as follows. In the next section, we present a review of the literature.

Section 3 describes the proposed methodology illustrating the different stages of the process. Then, Section 4 discusses the results obtained from the generated models. In the last section, we present our conclusions.

2 Related Work

Recent studies in Biomedical Engineering use deep learning techniques to assist in the segmentation of medical images for diagnostics and treatment processes [16]; in particular, variants of the U-Net architecture.

Deep learning models have good performance in medical image segmentation because they have the ability to simultaneously combine high and low-level information to extract complex image features.

However, segmentation of the GI tract organs remains a challenging task [7], since these organs have a high capacity to deform by body movement and respiratory functions of individuals.

Due to the above, there are few studies on the successful and extensive use of MR-Linac for cases of stomach cancer [21], or on the application of U-Net architectures to this type of imaging; most studies rely on complex models such as 3D U-Net.

In [12], authors proposed a U-Net to segment the liver, stomach, duodenum, and kidney on 3D patch-based computed tomography (CT) images. Their results were promising for the stomach, reaching a score of 0.813 for the Dice Similarity Coefficient (DSC), but less significant for the duodenum where they obtained 0.595.

In 2022, other work proposed a similar approach to segment the organs of the GI tract [19]. In a preliminary report, it compares the performance of different encoders for a classical U-Net architecture, with the Resnet34 encoder reporting the best results.

Additional work presents a U-Net and Region-based Convolutional Neural Networks (Mask R-CNNs) to perform segmentation of GI tract organs [5], on the same UW-Madison dataset we used in this study.

The authors report that their Mask R-CNN model achieved a DSC score of 0.73 on their validation data. Other work uses Vision Transformers to segment the same UW-Madison images [15].

The proposed model is hybrid. It uses a LeViT architecture as the encoder and a U-Net++ as the decoder. The resulting model obtains a score of 0.79 for DSC and 0.72 for IoU.

In [7], an automatic contour refinement (ACR) method based on probability maps for correcting self-segmented contours in magnetic resonance-guided radiation therapy is described.

Self-segmentation was generated by a 3D deep CNN architecture (a modified 3D-ResUNet); with refinement, the DSC improved from 0.44 to 0.56, from 0.33 to 0.55, and from 0.34 to 0.54 for the stomach, small bowel, and large bowel, respectively.

Furthermore, there are works that explore the use of Hidden Markov Models (HMM) for multi-class image segmentation [17], in which the hidden states of a Markov model represent the true segmentation of the image.

In addition, in [2], authors used two-dimensional Markov models (2D-HMM) for effective segmentation of radiographs, multispectral and synthetic images. Despite the potential of HMMs, there are no comprehensive studies of their application in the segmentation of magnetic resonance images.

There are recent works in the literature that combine the use of convolutional operators with adaptive HMMs to segment brain images [14, 20].

However, to the best of our knowledge, no method incorporates HMMs in the segmentation of GI tract images as we propose in this work. In summary, deep learning approaches, especially U-Net variants, are the most explored methods in the literature to analyze biomedical images [16].

The application of these methods to segment images of the gastrointestinal tract remains a challenge and an open area of research.

3 Methodology

In this section, we present a methodology that consists of three phases.

The first phase includes pre-processing of the images of the dataset (3.1), the second is the design and construction of the segmentation models (3.2), and the third is the validation of the models through experimentation and analysis of results (4). Fig. 1 shows the general stages of the proposed methodology.

Fig. 1 Methodology for the Design and Validation of Segmentation Models of the GI Tract 

3.1 Data Pre-Processing

As we can see in Fig. 1, the first phase of the methodology consists of preparing the data. The dataset used in this research is public and was provided by the UW-Madison Carbone Cancer Center.

The data repository consists of 272 MRI sets in 16-bit grayscale PNG format from 85 cancer patients during radiation treatment. Each scan has 144 slices, which gives a total of 39,168 images.

The training annotations are RLE (Run-Length Encoding) encoded masks for the segmentation of three organs of the GI tract: stomach, large bowel, and small bowel.

The images are of different dimensions; therefore, it was necessary to standardize them. Consequently, we resized each image and its RLE-encoded masks to 128 × 128 px, as sketched below.
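For illustration, the following is a minimal sketch of this pre-processing step, assuming Kaggle-style RLE strings of the form "start length start length ..."; the helper names and the row-major flattening are our assumptions, not the authors' code:

```python
import numpy as np
import cv2

def rle_decode(rle: str, height: int, width: int) -> np.ndarray:
    """Decode a run-length-encoded mask string into a binary (H, W) array."""
    mask = np.zeros(height * width, dtype=np.uint8)
    tokens = np.asarray(rle.split(), dtype=int)
    starts, lengths = tokens[0::2] - 1, tokens[1::2]  # RLE starts are 1-indexed
    for s, length in zip(starts, lengths):
        mask[s:s + length] = 1
    # Assumes row-major flattening; use reshape(..., order="F") if the
    # dataset encodes runs column-major instead.
    return mask.reshape(height, width)

def standardize(image: np.ndarray, mask: np.ndarray, size: int = 128):
    """Resize a slice and its mask to size x size px (Section 3.1)."""
    img = cv2.resize(image, (size, size), interpolation=cv2.INTER_AREA)
    # Nearest-neighbor interpolation keeps the mask labels discrete.
    msk = cv2.resize(mask, (size, size), interpolation=cv2.INTER_NEAREST)
    return img, msk
```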

Furthermore, to visualize the pattern in the distribution of the organs in the sample, we plotted the heatmaps of each organ (see Fig. 2).

Fig. 2 Heatmaps for each organ 

3.2 Design and Construction of Segmentation Models

As previously mentioned, our proposed methodology considers creating two models for organ segmentation and an ensemble that integrates both models. The first model considers a U-Net++ type architecture, while the second one is a two-dimensional HMM (2D-HMM).

The individual processes for the construction and training of both models are described below, as well as the process of their integration for the ensemble.

3.2.1 U-Net++ Model

The U-Net++ architecture was designed to overcome limitations of the base U-Net model in the segmentation of medical images [22]. It adds a series of connections to the original U-Net for the effective recovery of fine-grained object detail, including deep supervision, which allows different configurations of its parameters.

The additional connections of the U-Net++ follow a pyramid rule, where the U shape is filled with convolutional blocks, each consisting of a number of layers that varies according to the network nodes. The original U-Net++ diagram from [22] is shown in Fig. 3.

Fig. 3 Original architecture of U-Net++ from [22] 

In this work, the network was implemented in Python 3.8 following the version proposed in [22].

The hyper-parameters of the model were tuned via grid search using the Keras API, selecting ReLU as the activation function in the hidden layers, a dropout rate of 0.1, a learning rate of $5 \times 10^{-4}$ for 50 epochs, and Adam as the optimizer.

Finally, sigmoid was used as the activation function in the last layer instead of softmax, so that each class receives an independent probability rather than a share of a single distribution.

For hyper-parameter tuning, 80% of the total images were used for training and 20% for validation.

We used the Dice coefficient, computed per organ, as the loss function, integrated as a weighted sum to address class imbalance.

Let $y \in \mathbb{R}^{128 \times 128 \times 4}$ be the ground-truth segmentation matrix, $\hat{y} \in \mathbb{R}^{128 \times 128 \times 4}$ the segmentation predicted by the network, and let $S = \{\text{stomach}, \text{small bowel}, \text{large bowel}, \text{background}\}$ be the set of classification states.

Therefore, $\hat{y}_l \in \mathbb{R}^{128 \times 128}$ refers to the segmentation corresponding to the organ $l \in S$.

Finally, let $\alpha_l$ be the inverse frequency of the organ class $l$. Equation 1 shows the process mathematically:

$$\mathcal{L}(y, \hat{y}) = \sum_{l \in S} \alpha_l \frac{2\,|y_l \cap \hat{y}_l|}{|y_l| + |\hat{y}_l|}. \quad (1)$$
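As a concrete reading of Eq. 1, here is a minimal Keras sketch, assuming one-hot masks of shape (batch, 128, 128, 4); the $\alpha_l$ values below are placeholders, not the paper's fitted inverse class frequencies:

```python
import tensorflow as tf

# Inverse class frequencies, ordered as (stomach, small bowel,
# large bowel, background) -- hypothetical placeholder values.
ALPHA = tf.constant([0.35, 0.30, 0.30, 0.05])

def weighted_dice(y_true, y_pred, eps=1e-6):
    """Weighted sum of per-class Dice scores over (B, 128, 128, 4) tensors."""
    inter = tf.reduce_sum(y_true * y_pred, axis=(1, 2))   # |y_l ∩ ŷ_l|
    sizes = tf.reduce_sum(y_true + y_pred, axis=(1, 2))   # |y_l| + |ŷ_l|
    dice_per_class = (2.0 * inter + eps) / (sizes + eps)  # shape (B, 4)
    return tf.reduce_sum(ALPHA * dice_per_class, axis=-1)

def weighted_dice_loss(y_true, y_pred):
    # Keras minimizes, so we optimize 1 minus the weighted Dice.
    return 1.0 - weighted_dice(y_true, y_pred)
```

Under the reported settings, this could be wired in with `model.compile(optimizer=tf.keras.optimizers.Adam(5e-4), loss=weighted_dice_loss)`.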

3.2.2 Two-Dimensional Markov Model (2D-HMM)

Hidden Markov Models are a statistical technique for building probabilistic models in which both observed and hidden events act as causal factors.

An HMM consists of two stochastic processes, a hidden state process, and an observable symbol process, where the hidden states form a Markov chain, and the probability distribution of the observed symbol depends on the underlying states. In the case of image segmentation, the intuition is that pixels in an image depend on those surrounding them; that is, they share common characteristics such as color and spatial location.

Therefore, it is possible to treat this pixel dependency as a Markov Random Field, which relates two main probabilities: transition ($P_T$) and observation ($P_O$).

Intuition indicates that the pixels $(i,j)$ of an image are related to their neighbors. The transition probability $P_T$ indicates that the state $s$ to which a pixel belongs, expressed as $s_{i,j}$, is related to the state of the left-side pixel $s_{i-1,j}$ and the upper one $s_{i,j-1}$.

That is, $P_T = P(s_{i,j} = l \mid s_{i-1,j} = n, s_{i,j-1} = m)$. Let $S$ be the set of states in which a pixel can be classified, and let $\Omega$ be the total set of images. The calculation of $P_T$ is expressed in equation 2, where $I(\cdot)$ is the indicator function that returns 1 if its condition is fulfilled, and 0 otherwise:

$$P_T(l \mid n, m) = \frac{1}{|\Omega|} \sum_{\omega \in \Omega} \frac{\sum_{i,j} I(s_{i,j} = l,\; s_{i-1,j} = n,\; s_{i,j-1} = m)}{\sum_{i,j} I(s_{i,j} = l)}, \quad (2)$$

for all $l, m, n \in S$. The computed transition probabilities are shown in Fig. 4. For example, we can observe in Fig. 4a that if the state of the current pixel corresponded to the stomach, there would be a probability of 4.5% that the upper pixel was the stomach and the left one the background of the image, while there is an 87% probability that both correspond to the same organ.

Fig. 4 Transition matrices 

On the other hand, the likelihood that the neighboring pixels correspond to the other organs is practically null.
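As a sketch of how Eq. 2 can be estimated, assuming the training masks are available as (H, W) integer label maps with states 0–3 (a data layout we assume, not the authors' code):

```python
import numpy as np

N_STATES = 4  # stomach, small bowel, large bowel, background

def transition_probs(masks):
    """Estimate P_T(l | n, m) of Eq. 2 by counting neighbor-state triples."""
    counts = np.zeros((N_STATES, N_STATES, N_STATES), dtype=np.float64)
    for m in masks:
        cur = m[1:, 1:]      # s_{i,j} in numpy (row, col) indexing
        left = m[1:, :-1]    # left neighbor
        upper = m[:-1, 1:]   # upper neighbor
        np.add.at(counts, (cur, left, upper), 1.0)  # count (l, n, m) triples
    # Normalize triple counts by the occurrences of each current state l,
    # aggregated over all images rather than averaged per image as in Eq. 2.
    totals = counts.sum(axis=(1, 2), keepdims=True)
    return counts / np.maximum(totals, 1.0)
```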

The observation probability is calculated based on the color of each pixel ($O_{i,j}$), measured between 0 and 255. A skew-normal (skewnorm) probability distribution function was fitted to the colors of each of the organs.

Let $P_O(s_{i,j} = l \mid O_{i,j})$ be the function that takes as input the color of pixel $(i,j)$ and returns the probability that it belongs to state $l \in S$. These functions are shown graphically in Fig. 5.

Fig. 5 Probability distributions as a function of pixel color 

As can be seen, making a maximum likelihood estimate would be imprecise because the observation probabilities for the stomach and small bowel are similar. More information needs to be integrated.
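A minimal sketch of this observation model using SciPy's skew-normal distribution; the data-access pattern and function names are illustrative assumptions:

```python
import numpy as np
from scipy.stats import skewnorm

def fit_observation_models(images, masks, n_states=4):
    """Fit one skewnorm (a, loc, scale) per state to its pixel colors."""
    params = []
    for l in range(n_states):
        pixels = np.concatenate(
            [img[msk == l].ravel() for img, msk in zip(images, masks)])
        params.append(skewnorm.fit(pixels))  # maximum-likelihood fit
    return params

def observation_probs(image, params):
    """P_O(l | O_ij) for every pixel, as an (H, W, n_states) array."""
    dens = np.stack([skewnorm.pdf(image, *p) for p in params], axis=-1)
    return dens / dens.sum(axis=-1, keepdims=True)  # normalize over states
```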

In the current work, the state of a pixel, in addition to being conditioned by the previously described probabilities, is also influenced by the spatial position in the image; that is, there are high-probability zones in which an organ can appear, as shown in the heatmaps (see Fig. 2).

Thus, the probability $P_e$ that a pixel $(i,j)$ belongs to a state $s_{i,j} = l$ is estimated empirically from the frequency of that state at that position across the training images, as shown in equation 3:

$$P_e(l \mid i, j) = \frac{1}{|\Omega|} \sum_{\omega \in \Omega} I(s_{i,j} = l). \quad (3)$$

Therefore, the final calculation of the probability that the state of pixel $(i,j)$ is $l$ is shown in equation 4. Naturally, $P_e$ and $P_T$ are computed in advance during the training phase and stored for reference. In the case of $P_O$, the parameters of the fitted distributions are saved and its value is calculated on demand:

$$P(s_{i,j} = l) = P_T(l \mid n, m)\, P_O(l \mid O_{i,j})\, P_e(l \mid i, j). \quad (4)$$

The proposed calculation takes into account the spatial, observational, and transition factors of the pixels. Algorithm 1 shows how these calculations are incorporated to segment a new image.

Algorithm 1 2D-HMM Segmentation Algorithm 

In the present work, the multiplication of probabilities was replaced by the sum of log-probabilities to avoid numerical underflow.

It is important to note that the purpose of the 2D-HMM is not the segmentation itself, but the efficient calculation of probabilities to improve the performance of U-Net++. For this reason, we omit a 2D adaptation of the Viterbi algorithm.
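The following sketches the scoring pass of Algorithm 1 under stated assumptions: log-space sums per Eq. 4, a greedy left-to-right, top-to-bottom sweep instead of 2D Viterbi (per the above), and borders padded with the background state; the boundary handling and argument layout are our choices, not the authors' code:

```python
import numpy as np

def hmm_segment(obs_probs, logPT, logPe, background=3):
    """Greedy 2D-HMM pass: per-pixel states plus (H, W, L) probabilities."""
    H, W, L = obs_probs.shape
    logPO = np.log(obs_probs + 1e-12)
    states = np.full((H, W), background, dtype=int)
    scores = np.zeros((H, W, L))
    for r in range(H):
        for c in range(W):
            left = states[r, c - 1] if c > 0 else background
            upper = states[r - 1, c] if r > 0 else background
            # log P_T(l|n,m) + log P_O(l|O_ij) + log P_e(l|i,j), as in Eq. 4
            s = logPT[:, left, upper] + logPO[r, c] + logPe[r, c]
            scores[r, c] = s
            states[r, c] = int(np.argmax(s))
    # Convert log scores back to normalized probabilities for the ensemble.
    p = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return states, p / p.sum(axis=-1, keepdims=True)
```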

3.2.3 Ensemble

To integrate the information from the U-Net++ and 2D-HMM models, a weighted ensemble layer is proposed, which uses the probabilities given by both models to enhance the classification.

Let $H, U \in \mathbb{R}^{128 \times 128 \times 4}$ be the probability matrices calculated by the 2D-HMM and U-Net++, respectively. The weighted integration is shown in equation 5:

$$E(H, U) = \alpha U + (1 - \alpha) H. \quad (5)$$

The value of α is selected from the search space {0.05, 0.10, 0.20, 0.30, 0.40}, which was determined empirically. The metric results for each value of α are shown in the next section. For the final segmentation, the state $l$ with the highest probability in the ensemble is selected for each pixel.
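A minimal sketch of Eq. 5 followed by the per-pixel maximum, with α = 0.05 as the best value found below; the variable names are ours:

```python
import numpy as np

def ensemble_segment(Hm: np.ndarray, U: np.ndarray, alpha: float = 0.05):
    """Weighted ensemble of 2D-HMM (Hm) and U-Net++ (U) probability maps."""
    E = alpha * U + (1.0 - alpha) * Hm    # Eq. 5
    return np.argmax(E, axis=-1)          # highest-probability state per pixel
```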

4 Results and Analysis

The experimentation was carried out on the Google Colab platform using a Colab notebook with an Intel(R) Xeon(R) 6-core CPU @ 2.20GHz, NVIDIA A100-SXM GPU, and 12 GB of RAM.

For the training and validation phase of the models, the Leave-One-Out Cross-Validation method was followed, which is one of the recommended methods in biomedical sciences to improve the predictive rate of models for clinical studies [4].

The method consists of testing the models on one image set $\omega$ and training both the 2D-HMM and U-Net++ parameters on $\Omega \setminus \{\omega\}$. With this, each training fold consists of 271 sets (39,024 images in total) and the test fold of a single set of 144 images.
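Schematically, the protocol can be read as the following loop, where `scan_sets`, `build_and_train`, and `evaluate_dice` are hypothetical helpers standing in for the actual pipeline:

```python
# Leave-One-Out over the 272 scan sets: hold out one full set per fold.
fold_scores = []
for k in range(len(scan_sets)):                       # 272 folds
    held_out = scan_sets[k]                           # one set of 144 slices
    train_sets = scan_sets[:k] + scan_sets[k + 1:]    # Ω \ {ω}, 39,024 images
    models = build_and_train(train_sets)              # fit U-Net++ and 2D-HMM
    fold_scores.append(evaluate_dice(models, held_out))
mean_dice = sum(fold_scores) / len(fold_scores)       # reported: 0.811
```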

One of the first tasks of the evaluation phase was to adjust the weight parameter α of the proposed ensemble; the results are shown in Table 1. The case α = 0 would correspond to the U-Net++ alone and α = 1 to the 2D-HMM alone. We can observe that the best results for the ensemble in the evaluation metrics are obtained when α = 0.05.

Table 1 Weight values α evaluated for the 2D-HMM + U-Net++ ensemble

Metric           α = 0.05   α = 0.10   α = 0.20   α = 0.30   α = 0.40
Dice (General)   0.811      0.799      0.788      0.771      0.742
  Stomach        0.888      0.885      0.872      0.844      0.749
  Small Bowel    0.812      0.791      0.759      0.701      0.601
  Large Bowel    0.817      0.814      0.804      0.786      0.747
IoU              0.777      0.770      0.748      0.709      0.628

We can observe in Table 2 the results of all the proposed models. It is noteworthy that the U-Net models that incorporate information from the Markov process report better results in both evaluation metrics, confirming our intuition for their integration.

Table 2 Experimentation results with the proposed models 

Metric           2D-HMM + U-Net++   2D-HMM + U-Net   U-Net++   U-Net
Dice (General)   0.811 (32%)        0.723 (34%)      0.610     0.538
  Stomach        0.888 (10%)        0.803 (26%)      0.808     0.635
  Small Bowel    0.812 (38%)        0.711 (29%)      0.585     0.548
  Large Bowel    0.817 (5.6%)       0.774 (43%)      0.773     0.538
IoU              0.777 (18%)        0.696 (36%)      0.657     0.511

(Percentages in parentheses indicate the improvement of each ensemble over its corresponding individual model.)

For example, in the general Dice for the U-Net++ ensemble, there is an improvement percentage of 32% over U-Net++, while the U-Net ensemble obtains an improvement of 34% with respect to its individual model. In the case of the IoU metric, the improvement percentage is 18% for the U-Net++ ensemble and 36% for the one based on U-Net.

Finally, Table 3 compares the results of the proposed 2D-HMM U-Net++ model with recent works from the literature on the segmentation of biomedical images of the GI tract, discussed in Section 2.

Table 3 Comparison with recent segmentation models of the GI tract (*these models use different data for their evaluation)

Metric           2D-HMM U-Net++   U-Net   Mask R-CNN   Resnet34   LeViT384-UNet++   3D-ResUnet*   3D U-Net*
Dice (General)   0.81             0.51    0.72         —          0.79              0.79          —
  Stomach        0.88             —       —            —          —                 0.77          0.81
  Small Bowel    0.81             —       —            —          —                 0.75          —
  Large Bowel    0.81             —       —            —          —                 0.76          —
IoU              0.77             —       —            0.85       0.72              —             —

(— indicates a metric not reported by that work.)

Notice that not every approach reports results on the segmentation of individual organs as we do, considering that segmenting the bowels is a harder task due to their physiology.

In addition, a couple of the models use a different GI dataset for evaluation. However, we consider it important to include their results since they pursue the same top-level goal.

We can see that our approach surpasses most of the works in the evaluation metrics, except for the Resnet34 model, which only reports results for the IoU metric; moreover, that work followed a traditional 80–20 partition methodology for evaluation, which can make the result highly dependent on the partition used. Figures 6 and 7 show 2D examples of segmentation for specific slices of two resonance sets, and Figures 8 and 9 show the corresponding 3D views. In these examples, the ensemble enhanced the predictions of the U-Net++ by up to 19%. For example, the U-Net++ prediction, illustrated by the fourth image of Fig. 7, misses multiple organ details compared to the true segmentation.

Fig. 6 Example of segmentation in set 249 for a certain cut 

Fig. 7 Example of segmentation in set 143 for a certain cut 

Fig. 8 Example of 3D segmentation for set 249 

Fig. 9 Example of 3D segmentation for set 143 

However, the weighted ensemble is capable of restoring these details, as can be seen in the last column of the same figure set. In general, we can observe how the proposed ensemble significantly increases the quality of the segmentation. In summary, although the U-Net++ model has proven to be an effective architecture for organ segmentation, it struggles in sections of the GI tract where two or more organ classes have similarly high likelihood, a consequence of the high capacity of the GI organs to deform with body movement and respiratory function.

This work integrates the probabilities of the Hidden Markov Models to discern those cases where the base model fails to segment. Our work considers spatial and transition probabilities, constituting the main difference from related work.

5 Conclusions

Organ segmentation for the treatment of gastrointestinal tract cancer is an important task that requires precision and speed. It is vital to have algorithms that can help automate the process of segmentation, as support for medical specialists, to reduce collateral damage to healthy cells without increasing treatment times. However, segmenting GI tract organs remains complex due to the deformations they undergo from body movement and respiratory function.

This paper proposes a deep learning methodology that develops a weighted ensemble integrating U-Net++ and 2D-HMM models for semantic segmentation of the stomach and bowels. Although the 2D-HMM does not provide highly accurate segmentation by itself, it boosts U-Net++ predictions by up to 32% in the general Dice and up to 18% in IoU scores. The final Dice score of 0.811, obtained by the ensemble, is better than the results reported in the literature.

Furthermore, by using Leave-One-Out Cross-Validation, the reported metric has a high level of reliability over the dataset used. The proposed architecture has the potential to help implement more effective and efficient treatments for cancer patients by speeding up the segmentation process and minimizing risks.

Part of the future work will consider the integration of automatic contour refinement techniques or additional recurrent layers in the networks, which we believe could improve the quality given by the spatial and transition probabilities of the proposed ensemble. In addition, we plan to replicate the proposed methodology in other datasets to evaluate its generalization.

References

1. Arnold, M., Abnet, C. C., Neale, R. E., Vignat, J., Giovannucci, E. L., McGlynn, K. A., Bray, F. (2020). Global burden of 5 major types of gastrointestinal cancer. Gastroenterology, Vol. 159, No. 1, pp. 335–349. DOI: 10.1053/j.gastro.2020.02.068.

2. Baumgartner, J., Georgina-Flesia, A., Gimenez, J., Pucheta, J. (2013). A new approach to image segmentation with two-dimensional Hidden Markov models. BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence, pp. 213–222. DOI: 10.1109/brics-cci-cbic.2013.43.

3. Bertelsen, A. S., Schytte, T., Møller, P. K., Mahmood, F., Riis, H. L., Gottlieb, K. L., Agergaard, S. N., Dysager, L., Hansen, O., Gornitzka, J., Veldhuizen, E., O'Dwyer, D. B., Christiansen, R. L., Nielsen, M., Jensen, H. R., Brink, C., Bernchou, U. (2019). First clinical experiences with a high field 1.5 T MR linac. Acta Oncologica, Vol. 58, No. 10, pp. 1352–1357. DOI: 10.1080/0284186x.2019.1627417.

4. Chicco, D., Jurman, G. (2022). The ABC recommendations for validation of supervised machine learning results in biomedical sciences. Frontiers in Big Data, Vol. 5. DOI: 10.3389/fdata.2022.979465.

5. Chou, A., Li, W., Roman, E. (2022). GI tract image segmentation with U-Net and Mask R-CNN. CS231n: Deep Learning for Computer Vision 164, Stanford University.

6. Despotović, I., Goossens, B., Philips, W. (2015). MRI segmentation of the human brain: Challenges, methods, and applications. Computational and Mathematical Methods in Medicine, Vol. 2015, pp. 1–23. DOI: 10.1155/2015/450341.

7. Ding, J., Zhang, Y., Amjad, A., Xu, J., Thill, D., Li, X. A. (2022). Automatic contour refinement for deep learning auto-segmentation of complex organs in MRI-guided adaptive radiation therapy. Advances in Radiation Oncology, Vol. 7, No. 5, pp. 100968. DOI: 10.1016/j.adro.2022.100968.

8. Eppenhof, K. A. J., Maspero, M., Savenije, M. H. F., de Boer, J. C. J., van der Voort-van Zyp, J. R. N., Raaymakers, B. W., Raaijmakers, A. J. E., Veta, M., van den Berg, C. A. T., Pluim, J. P. W. (2020). Fast contour propagation for MR-guided prostate radiotherapy using convolutional neural networks. Medical Physics, Vol. 47, No. 3, pp. 1238–1248. DOI: 10.1002/mp.13994.

9. Fransson, S., Tilly, D., Strand, R. (2022). Patient specific deep learning based segmentation for magnetic resonance guided prostate radiotherapy. Physics and Imaging in Radiation Oncology, Vol. 23, pp. 38–42. DOI: 10.1016/j.phro.2022.06.001.

10. Johansson, A., Balter, J. M., Cao, Y. (2021). Gastrointestinal 4D MRI with respiratory motion correction. Medical Physics, Vol. 48, No. 5, pp. 2521–2527. DOI: 10.1002/mp.14786.

11. Kawahara, D., Tsuneda, M., Ozawa, S., Okamoto, H., Nakamura, M., Nishio, T., Nagata, Y. (2022). Deep learning-based auto segmentation using generative adversarial network on magnetic resonance images obtained for head and neck cancer patients. Journal of Applied Clinical Medical Physics, Vol. 23, No. 5. DOI: 10.1002/acm2.13579.

12. Kim, H., Jung, J., Kim, J., Cho, B., Kwak, J., Jang, J. Y., Lee, S. W., Lee, J. G., Yoon, S. M. (2020). Abdominal multi-organ auto-segmentation using 3D-patch-based deep convolutional neural network. Scientific Reports, Vol. 10, No. 1. DOI: 10.1038/s41598-020-63285-0.

13. Lee, S. L., Li, Y., Meudt, J. J., Strang, J., Hebel, D., Alfson, A., Olson, S. J., Kruser, T. R., Smilowitz, J. B., Borchert, K., Loritz, B., Bayouth, J., Bassetti, M. (2022). UW-Madison GI tract image segmentation.

14. Li, G., Sun, J., Song, Y. (2019). Segmentation of medical images with a combination of convolutional operators and adaptive Hidden Markov model. IEEE 5th International Conference on Computer and Communications (ICCC), pp. 1782–178. DOI: 10.1109/iccc47050.2019.9064034.

15. Nemani, P., Vollala, S. (2022). Medical image segmentation using LeViT-UNet++: A case study on GI tract data. 26th International Computer Science and Engineering Conference (ICSEC), pp. 7–13. DOI: 10.1109/ICSEC56337.2022.10049343.

16. Punn, N. S., Agarwal, S. (2022). Modality specific U-Net variants for biomedical image segmentation: A survey. Artificial Intelligence Review, Vol. 55, No. 7, pp. 5845–5889. DOI: 10.1007/s10462-022-10152-1.

17. Pyun, K. P., Lim, J., Won, C. S., Gray, R. M. (2007). Image segmentation using Hidden Markov Gauss mixture models. IEEE Transactions on Image Processing, Vol. 16, No. 7, pp. 1902–1911. DOI: 10.1109/tip.2007.899612.

18. Schaue, D., McBride, W. H. (2015). Opportunities and challenges of radiotherapy for treating cancer. Nature Reviews Clinical Oncology, Vol. 12, No. 9, pp. 527–540. DOI: 10.1038/nrclinonc.2015.120.

19. Sharma, M. (2022). Automated GI tract segmentation using deep learning. DOI: 10.48550/ARXIV.2206.11048.

20. Song, Y., Adobah, B., Qu, J., Liu, C. (2020). Segmentation of ordinary images and medical images with an adaptive Hidden Markov model and Viterbi algorithm. Current Signal Transduction Therapy, Vol. 15, No. 2, pp. 109–123. DOI: 10.2174/1574362413666181109113834.

21. Song, Y., Li, Z., Wang, H., Zhang, Y., Yue, J. (2022). MR-LINAC-guided adaptive radiotherapy for gastric MALT: Two case reports and a literature review. Radiation, Vol. 2, No. 3, pp. 259–267. DOI: 10.3390/radiation2030019.

22. Zhou, Z., Rahman Siddiquee, M. M., Tajbakhsh, N., Liang, J. (2018). UNet++: A nested U-Net architecture for medical image segmentation. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 3–11.

Received: June 13, 2023; Accepted: September 10, 2023

* Corresponding author: Romeo Sánchez-Nigenda, e-mail: romeo.sanchezng@uanl.edu.mx

This is an open-access article distributed under the terms of the Creative Commons Attribution License.