Article

Shearlet Transform Applied to a Prostate Cancer Radiomics Analysis on MR Images

1 Dipartimento di Matematica e Informatica, Università degli Studi di Palermo, 90123 Palermo, Italy
2 Ri.MED Foundation, 90133 Palermo, Italy
3 Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
4 Department of Biomedicine, Neuroscience and Advanced Diagnostics—Section of Radiology, University Hospital “Paolo Giaccone”, 90127 Palermo, Italy
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(9), 1296; https://doi.org/10.3390/math12091296
Submission received: 15 March 2024 / Revised: 12 April 2024 / Accepted: 23 April 2024 / Published: 25 April 2024

Abstract

For decades, wavelet theory has attracted interest in several fields dealing with signals. Nowadays, it is acknowledged that it is not well suited to handling certain aspects of multidimensional data, such as singularities, and this has led to the development of other mathematical tools. A recent application of wavelet theory is in radiomics, an emerging field aiming to improve diagnostic, prognostic and predictive analysis of various cancer types through the analysis of features extracted from medical images. In this paper, for a radiomics study of prostate cancer with magnetic resonance (MR) images, we apply a similar but more sophisticated tool, namely the shearlet transform, which, in contrast to the wavelet transform, allows us to examine variations along more orientations. In particular, we conduct a parallel radiomics analysis based on the two different transformations and highlight a better performance (evaluated in terms of statistical measures) when the shearlet transform (in absolute value) is used. The results achieved suggest taking the shearlet transform into consideration for radiomics studies in other contexts.
MSC:
42C40; 94A08; 62P10

1. Introduction

Wavelet theory, born in the 1980s to solve some problems arising in Fourier analysis, has received particular attention, not only in mathematics, but also in many applied sciences [1,2,3,4,5,6,7]. A key concept of wavelet theory is scale, which leads to a multi-resolution analysis of functions capturing both fine and coarse characteristics [8,9].
Before giving the definition of the continuous wavelet transform, which is at the base of wavelet theory, we recall some standard notation. By $L^2(\mathbb{R}^n)$ we denote the Hilbert space of square integrable (measurable) functions $f : \mathbb{R}^n \to \mathbb{R}$ from the Euclidean space $\mathbb{R}^n$ of dimension $n \geq 1$ to the set of real numbers $\mathbb{R}$. The space $L^2(\mathbb{R}^n)$ is equipped with the inner product $\langle \cdot , \cdot \rangle$ defined by
$$
\langle f, g \rangle = \int_{\mathbb{R}^n} f(x)\, \overline{g(x)} \, dx, \qquad f, g \in L^2(\mathbb{R}^n).
$$
Definition 1
([9]). Let $\psi \in L^2(\mathbb{R}^n)$, $a > 0$, $t \in \mathbb{R}^n$ and
$$
\psi_{a,t}(x) = a^{-\frac{n}{2}} \, \psi\!\left(\frac{x-t}{a}\right), \qquad x \in \mathbb{R}^n.
$$
The continuous wavelet transform of $f \in L^2(\mathbb{R}^n)$ is the function $W_\psi f : \mathbb{R}^+ \times \mathbb{R}^n \to \mathbb{R}$ given by
$$
W_\psi f(a,t) = \langle f, \psi_{a,t} \rangle, \qquad (a,t) \in \mathbb{R}^+ \times \mathbb{R}^n. \tag{1}
$$
The variables $a \in \mathbb{R}^+$ and $t \in \mathbb{R}^n$ are called scale and location parameters, respectively, and the continuous wavelet transform can be interpreted in the following way. If $\psi$ has compact support centered at the origin, then $W_\psi f(a,t)$ contains local information about $f$ near $t$. Moreover, for a small value of $a$, the support of $\psi_{a,t}$ (called a wavelet) is tight, so $W_\psi f$ gives finer local details near $t$. Besides the definition above, discretized versions (in particular, the stationary wavelet transform, SWT [10]) have been formulated for dealing with discrete data, like images, based on decompositions at different scales (levels). Wavelet theory is also related to a more general field, frame theory [9,11,12], concerning analysis, synthesis and reconstruction by sequences of elements.
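To make the localization property concrete, the following minimal sketch (an illustration of ours, not part of the original analysis) uses the PyWavelets package to compute a one-dimensional continuous wavelet transform of a signal with a single jump; the signal, wavelet and scales are arbitrary illustrative choices.

```python
import numpy as np
import pywt

# Illustrative 1D signal with a pointwise discontinuity (a jump) at x = 0.5.
x = np.linspace(0, 1, 512)
f = np.where(x < 0.5, 0.0, 1.0)

# Continuous wavelet transform with a Mexican-hat wavelet over several scales a.
scales = np.array([1, 2, 4, 8, 16])
coeffs, _ = pywt.cwt(f, scales, "mexh")

# The coefficients are concentrated around the jump, and the smaller the scale,
# the tighter this concentration: W_psi f(a, t) carries local information near t.
jump = np.argmin(np.abs(x - 0.5))
print(np.abs(coeffs[:, jump]))   # noticeable response at the jump
print(np.abs(coeffs[:, 50]))     # negligible response far from the jump
```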
The wavelet transform is very efficient at approximating a one-dimensional signal $f : \mathbb{R} \to \mathbb{R}$ containing pointwise discontinuities; however, the same does not hold, in general, for multidimensional data $f : \mathbb{R}^n \to \mathbb{R}$ ($n > 1$). This is due to the fact that multidimensional data present other types of discontinuities: for instance, an edge in a 2D image corresponds to a sharp change of gray levels and so it is a (non-pointwise) discontinuity. For this reason, other tools, such as those proposed in [13,14,15,16,17,18,19], have been developed in the last decades to process multidimensional data (in particular, 2D images). One of these new methods is the shearlet transform, introduced in [17,18], which is efficient at analyzing functions along directions. In order to state the definition (confining ourselves to the 2D case), we first set some additional notation. Let $\psi \in L^2(\mathbb{R}^2)$. For $a > 0$, $s \in \mathbb{R}$ and $t \in \mathbb{R}^2$, we put
$$
\psi_{a,s,t}(x) = a^{-\frac{3}{4}} \, \psi\!\left(A_a^{-1} S_s^{-1}(x-t)\right), \qquad x \in \mathbb{R}^2,
$$
which we call a shearlet, and where
$$
A_a = \begin{pmatrix} a & 0 \\ 0 & a^{\frac{1}{2}} \end{pmatrix}
\quad \text{and} \quad
S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}
$$
are the parabolic scaling and shearing matrices, respectively. The variables $a \in \mathbb{R}^+$, $s \in \mathbb{R}$ and $t \in \mathbb{R}^2$ are called scale, shearing (or orientation) and location parameters, respectively. Figure 1 illustrates how the shearing parameter affects the orientation of the support of $\psi$ (the parameter has a similar effect on the support of the Fourier transform $\hat{\psi}$).
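As a concrete illustration of the two matrices (our own, not part of the original analysis), the following NumPy sketch builds $A_a$ and $S_s$ for illustrative parameter values and applies $S_s A_a$ to the corners of the unit square, mimicking the deformations of the support of $\psi$ shown in Figure 1.

```python
import numpy as np

def parabolic_scaling(a: float) -> np.ndarray:
    """Parabolic scaling matrix A_a = diag(a, a**(1/2))."""
    return np.diag([a, np.sqrt(a)])

def shearing(s: float) -> np.ndarray:
    """Shearing matrix S_s = [[1, s], [0, 1]]."""
    return np.array([[1.0, s], [0.0, 1.0]])

# Corners of the unit square (as columns), standing in for the support of psi.
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])

a, s = 0.25, 1.0  # illustrative scale and shearing values
# The support of psi_{a,s,t} is t + S_s A_a (supp psi): scaling shrinks it
# anisotropically (more along the first axis), shearing tilts it.
transformed = shearing(s) @ parabolic_scaling(a) @ square
print(transformed.round(3))
```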
Definition 2
([18]). Let $\psi \in L^2(\mathbb{R}^2)$. The continuous shearlet transform of $f \in L^2(\mathbb{R}^2)$ is the function $S_\psi f : \mathbb{R}^+ \times \mathbb{R} \times \mathbb{R}^2 \to \mathbb{R}$ given by
$$
S_\psi f(a,s,t) = \langle f, \psi_{a,s,t} \rangle, \qquad (a,s,t) \in \mathbb{R}^+ \times \mathbb{R} \times \mathbb{R}^2. \tag{2}
$$
The shearlet transform, as one can see in the above definition, takes inspiration from the wavelet transform. However, it is more effective in the analysis of images along given orientations and, in fact, the shearlet transform has been employed in [20,21] to develop a method for the analysis and detection of edges in images. Moreover, it has found several other applications, e.g., denoising [22,23,24], image fusion [25,26] and segmentation [27,28]. Finally, in contrast to other methods, the shearlet transform is based on a group algebraic structure, similarly to the wavelet transform [18].
Returning to the beginning of our discussion, due to its success in image processing, the discretized version of the wavelet transform has become a common filter for radiomics, a rising interdisciplinary field that combines medicine and informatics. Radiomics focuses on the extraction of quantitative features from medical images, especially from tomographic images such as positron emission tomography (PET) [29], computed tomography (CT) [30] and magnetic resonance (MR) imaging [31]. This approach enables the conversion of qualitative information, based on medical doctors’ experience, into objective information. In other words, by analyzing quantitative features, radiomics aims to uncover valuable information that may not be discernible through traditional visual inspection alone [32]. The radiomics features can be extracted from the original images or after applying the wavelet filter, i.e., the images are first decomposed by the SWT and then the features are calculated on each part [33,34,35,36,37,38,39,40]. Thus, wavelet analysis plays a significant role in radiomics since it increases the range and the type of features usable to characterize tumor properties and responses to treatment.
Motivated by the use of the wavelet transform as a filter for radiomics and by the better properties of the shearlet transform, in this paper we carry out a radiomics analysis of prostate cancer based on MR imaging using the absolute shearlet transform as a filter and compare it to the classical wavelet transform. By absolute shearlet transform filtering, we mean that the images are decomposed into approximation and detail coefficients at different levels and orientations, applying the so-called non-subsampled shearlet transform (NSST) [17], which is analogous to the SWT, and then the absolute values of the coefficients are taken. MR imaging provides high-quality quantitative images that are suitable for prostate volume selection and segmentation, screening, detection, classification, risk stratification and treatment planning [41]. In particular, MR imaging demonstrates excellent sensitivity and specificity for detecting prostate cancer [42] and has been recommended as a diagnostic tool by the American College of Radiology and the European Society of Urogenital Radiology [43]. Given the subjective and inconsistent nature of interpreting prostate MR imaging, suspicion scores for prostate cancer have been developed, utilizing the one-to-five-point “Prostate Imaging Reporting and Data System” (PI-RADS) scale to enhance standardization in MR imaging interpretation and reporting. Furthermore, the integration of computerized diagnostic tools alongside radiologists’ assessments has shown promise in improving the sensitivity and specificity of prostate cancer detection [44]. Consequently, there is a notable interest in exploring the potential of radiomics for prostate cancer detection [45].
The specific radiomics analysis investigated in this study consists in the prediction of the nature of lesions (prostate cancer or non-neoplastic lesion) within the prostate, employing a dataset of MR images. The primary objective of this study is not to identify the absolute best predictive model for distinguishing between non-neoplastic lesions and prostate cancers, but to evaluate whether the incorporation of the absolute shearlet transform can enhance the outcomes of radiomics compared to utilizing the wavelet transform alone. We accomplish this by employing a conventional radiomics pipeline that incorporates three machine learning models: linear discriminant analysis, linear support vector machine and decision tree. For the purpose of comparison between the two transforms, we evaluate the performance of the classifiers in terms of standard statistical measures (area under the receiver operating characteristic curve, specificity, sensitivity and accuracy). Our method, involving the absolute shearlet transform as preprocessing of the images, gives a better performance both in terms of the statistical indices and in terms of the smaller number of features employed by the models. For our study, we make use of PyRadiomics [46] (version v3.0.1), an Image Biomarker Standardization Initiative (IBSI) compliant analysis software, for the feature extraction and MATLAB (version R2023b) for the training and validation of the models.
In the literature, other works have involved the shearlet transform and radiomics, but from different perspectives with respect to our study. In particular, the authors of [47,48] used the shearlet transform, not as a filter, but for multi-modal medical image fusion before conducting radiomics studies. In [49], the reproducibility of radiomics features (among which the shearlet features) from ultrasound images was studied. The shearlet transform was also used to extract additional features for the detection of Alzheimer’s disease in [50], for the classification of breast tumors in [51] and of brain cancer in [52]. Related works concerning histological rather than radiological images are [53,54,55,56].
This paper is organized as follows. In Section 2, we describe the image dataset used for the analysis and discuss radiomics, focusing on the PyRadiomics software and on the feature types. We also recall how the SWT, implemented in PyRadiomics, is defined, and we continue the discussion about the shearlet transform. Finally, we present our method for the radiomics study under consideration and the metrics for the evaluation. Section 3 reports the results obtained by the method and the comparison with the wavelet approach. Finally, we discuss the results in Section 4 and then make some concluding considerations in Section 5. This paper also contains Appendix A with a more technical result about the shearlet transform.

2. Materials and Methods

2.1. Medical and Imaging Dataset

2.1.1. Patient Selection

This retrospective study was conducted at a single facility, the AOUP “Paolo Giaccone” of the University of Palermo, on consecutive patients who underwent multiparametric prostate MR imaging between 1 June 2019 and 31 January 2023. We selected 73 lesions with a Prostate Imaging Reporting and Data System (PI-RADS) score greater than or equal to 3, subsequently divided into 2 groups based on the histological results obtained from fusion biopsy. Group 1 (n = 45; 61.6%) comprises all lesions with a histological result indicating prostate cancer with a Gleason score greater than or equal to 6, while Group 2 (n = 28; 38.4%) includes all other non-neoplastic lesions (prostatitis, atypical small acinar proliferation and prostatic intraepithelial neoplasia). Our case series does not include prostate tumors with a Gleason score less than 6 because, in our hospital, lesions with such a Gleason score are not usually reported in the histological findings. A large part of the dataset, namely 71 patients, was used in [57] for a different analysis.

2.1.2. MR Imaging Technique

MRI exams were performed using a 1.5T MR scanner (Achieva, Philips Healthcare, Best, The Netherlands) with a pelvic phased-array coil (16-channel HD Torso XL), using the same imaging protocol in all patients.
The standard clinical prostate MRI examination comprised axial, coronal and sagittal T2-weighted turbo spin-echo images of the prostate and seminal vesicles, along with axial T1-weighted turbo spin-echo images. Additionally, diffusion-weighted imaging (DWI) was performed at b values of 0, 700, and 1400 s/mm², and post-processed to generate apparent diffusion coefficient (ADC) maps. Following the unenhanced imaging, patients were administered 1 mmol/kg body weight of Gadoteric acid (Gd-DOTA, Dotarem, Guerbet) at a rate of 3 mL/s, followed by an infusion of 20–30 mL of saline solution at the same rate. Axial T1-weighted three-dimensional spoiled gradient-recalled echo volumetric interpolated images were acquired after contrast agent injection to capture the perfusion MRI of the prostate (dynamic contrast-enhanced (DCE)). Axial T2-weighted images and ADC maps were used for the purposes of this study. Detailed pulse sequence parameters are listed in Table 1.

2.2. Radiomics

The aim of radiomics is to extract a large number of quantitative features from medical images for building descriptive and predictive models; in our case, the aim is to predict whether a new case can be considered a member of Group 1 or Group 2. To extract features related to a specific anatomical district (such as the prostate, in our case), a medical image of the district is needed, as well as a mask containing the region of interest, in short, the ROI (Figure 2 shows a slice of an MR image and the related mask). The mask can be produced by a manual segmentation performed by a radiologist (as in our case) or by automatic/semi-automatic segmentation algorithms [58].
Based on a T2-weighted image and the corresponding mask, a total of 110 radiomics features can be extracted utilizing IBSI [59] compliant analysis software, i.e., PyRadiomics version v3.0.1 (https://pyradiomics.readthedocs.io/en/latest/index.html, accessed on 15 March 2024) [46], to increase the reproducibility of the extracted features. PyRadiomics is a Python-based open-source program designed for scientific computing, compatible with various platforms. The software extracts various types of features, including shape descriptors, first-order statistics, and texture matrices such as the gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level dependence matrix (GLDM), gray-level size-zone matrix (GLSZM) and neighboring gray tone difference matrix (NGTDM). Shape descriptors are concerned with the geometric characteristics of the objects in the image and are not influenced by the intensity distribution of gray levels. These descriptors encompass attributes such as volume, maximum diameter, surface area, compactness and sphericity. First-order statistical descriptors, also known as histogram-based features, analyze the frequency distribution of voxel intensities within an organ (a voxel, short for “volumetric picture element”, is a three-dimensional pixel representing a point of the regular 3D grid on which the image is defined) by examining the histogram of gray-level intensity values. Texture features, on the other hand, provide insights into the spatial arrangement of gray levels within the image. They evaluate the relative positions of voxels, offering information about the spatial organization of gray levels within the organ of interest.
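As an indication of how such an extraction is set up in practice, the following minimal sketch uses the public PyRadiomics API; the file paths are illustrative placeholders, and enabling the wavelet image type corresponds to the SWT filtering recalled in Section 2.3.

```python
from radiomics import featureextractor

# Illustrative paths: a T2-weighted volume and the corresponding ROI mask.
image_path = "patient_001_t2w.nii.gz"
mask_path = "patient_001_prostate_mask.nii.gz"

# With the default settings, all feature classes (shape, first-order, GLCM,
# GLRLM, GLSZM, GLDM, NGTDM) are computed on the original image.
extractor = featureextractor.RadiomicsFeatureExtractor()

# Optionally also extract features from the SWT coefficient images
# (approximation and detail images), as described in Section 2.3.
extractor.enableImageTypeByName("Wavelet")

features = extractor.execute(image_path, mask_path)
for name, value in features.items():
    if not name.startswith("diagnostics_"):
        print(name, value)
```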

2.3. Wavelet Transform

For many applications, the continuous wavelet transform defined by (1) is discretized. This is the case in image processing, since an image is considered as a discrete object. In particular, the PyRadiomics package for feature extraction processes the images with the so-called stationary wavelet transform (SWT) [10], which is determined (for a 3D image $f$) by the following level decomposition scheme.
  • In the first level decomposition, the image $f$ is decomposed into an image $f_{L}^{1}$ and an image $f_{H}^{1}$ by applying a low-pass filter and a high-pass filter to the first variable, respectively (low-pass refers to low frequencies, high-pass to high frequencies). Next, $f_{L}^{1}$ is decomposed into an image $f_{LL}^{1}$ and an image $f_{LH}^{1}$ by applying the two filters with respect to the second variable. The same is carried out for $f_{H}^{1}$, producing $f_{HL}^{1}$ and $f_{HH}^{1}$. Finally, starting from $f_{LL}^{1}$, $f_{LH}^{1}$, $f_{HL}^{1}$, $f_{HH}^{1}$, similar decompositions are performed with respect to the third variable and eight images are obtained: $f_{LLL}^{1}$, $f_{LLH}^{1}$, $f_{LHL}^{1}$, $f_{LHH}^{1}$, $f_{HLL}^{1}$, $f_{HLH}^{1}$, $f_{HHL}^{1}$ and $f_{HHH}^{1}$. The first of the eight images is called the approximation coefficients image at the first level, while the other ones are called detail coefficient images at the first level. This decomposition is shown in Figure 3.
  • The scheme is iterated $\ell$ times, where $\ell$ is the desired decomposition level, in the following way. If $k = 2, \dots, \ell$, then the $k$th level decomposition takes as initial image the approximation $f_{LLL}^{k-1}$ of the previous level and applies the decomposition described above, producing eight images, which are denoted by $f_{LLL}^{k}$, $f_{LLH}^{k}$, $f_{LHL}^{k}$, $f_{LHH}^{k}$, $f_{HLL}^{k}$, $f_{HLH}^{k}$, $f_{HHL}^{k}$ and $f_{HHH}^{k}$. To conclude, the image $f_{LLL}^{k}$ is called the approximation coefficients image at the $k$th level, while the other ones are called detail coefficient images at the $k$th level. Figure 4 shows a representation of the level decomposition with two levels.
The levels involved in the SWT correspond to different scales, while the point coordinates represent the locations. All the images produced by the SWT have the same size as the original image, in contrast to the other version, the classical discrete wavelet transform [9], which generates images with halved dimensions from one level to the next. We make this remark because the radiomics features are calculated in terms of the image voxels inside a ROI, and so the image must have the same size as the mask containing the ROI.
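For illustration, the SWT decomposition of a 3D image can be reproduced with the PyWavelets package as in the sketch below (the array, wavelet and number of levels are arbitrary illustrative choices; PyRadiomics performs an analogous decomposition internally).

```python
import numpy as np
import pywt

# Illustrative 3D image; in practice this would be the MR volume.
volume = np.random.rand(64, 64, 32)

# Two-level stationary wavelet transform with an illustrative wavelet.
coeffs = pywt.swtn(volume, wavelet="coif1", level=2)

# coeffs is a list with one dict per level; for 3D data each dict has the
# eight keys 'aaa', 'aad', ..., 'ddd' ('a' = low-pass, 'd' = high-pass,
# applied along the three axes in order), matching f_LLL, f_LLH, ..., f_HHH.
for level_coeffs in coeffs:
    for band, image in level_coeffs.items():
        # Every coefficient image keeps the size of the original volume,
        # which is what allows the same ROI mask to be overlapped on all of them.
        assert image.shape == volume.shape
        print(band, image.shape)
```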

2.4. Shearlet Transform

Despite the similarity with the wavelet transform, the shearlet transform of Equation (2) presents a directional bias problem for large values of $s$; see Section 4.3 of [18] (the problem with large values of $s$ can also be seen in Figure 1: the support of $\psi_{a,s,t}$ tends to stretch as $s$ increases). Thus, to limit the range of $s$, a variation was proposed. First of all, the Fourier domain is divided into four cones $C_1, C_2, C_3, C_4$ and a square $R$ centered around the origin, as shown in Figure 5. The square is the low-frequency region, and in each cone the orientation varies over a bounded range.
The new formulation, called the cone-adapted continuous shearlet transform, separates the horizontal cones from the vertical cones [18]. In particular, given three functions $\phi, \psi, \tilde{\psi} \in L^2(\mathbb{R}^2)$, the cone-adapted continuous shearlet transform takes into account the approximation coefficients $A(t) = \langle f, \phi_t \rangle$ (related to the square low-frequency region of the Fourier domain), where $\phi_t(x) = \phi(x-t)$, and the detail coefficients $S_\psi f(a,s,t) = \langle f, \psi_{a,s,t} \rangle$ and $S_{\tilde{\psi}} f(\tilde{a}, \tilde{s}, \tilde{t}) = \langle f, \tilde{\psi}_{\tilde{a}, \tilde{s}, \tilde{t}} \rangle$ (related to the regions $C_1 \cup C_3$ and $C_2 \cup C_4$, respectively), where $a, \tilde{a}, s, \tilde{s}$ vary within bounded ranges.
To apply the cone-adapted shearlet transform to images, a discretization is performed via the so-called non-subsampled shearlet transform (NSST) [17]. In particular, we used the MATLAB toolbox available at https://www.math.uh.edu/~dlabate/shearlet_toolbox.zip (accessed on 15 March 2024), written by the authors of [17]. The NSST is summarized by the following level decomposition scheme, represented in Figure 6.
  • In the first level decomposition, the image $f$ is decomposed into a low-pass image $f_a^1$ and a high-pass image $f_d^1$. Then the image $f_d^1$ is in turn decomposed, applying band-pass filters, into a number of images corresponding to the directional subbands.
  • In the second level decomposition, one starts from the previous step and decomposes $f_a^1$ to obtain a low-pass image $f_a^2$ and a high-pass image $f_d^2$. Next, the image $f_d^2$ is decomposed into a number of images according to the directional subbands.
  • The scheme is iterated until the desired decomposition level $\ell$ is reached. The final result is an approximation coefficient image $f_a^{\ell}$ and, for every $k = 1, \dots, \ell$, a set of detail coefficient images $\{f_{d,i}^k\}$ for the different orientations.
As we did for the SWT, we remark that all the images obtained by applying the NSST have the same size as the original image $f$. This is important because, in radiomics, a mask containing the ROI must be overlapped on an image of the same size. Figure 7 shows an NSST decomposition with three levels and four orientations of a slice of an MR image containing a prostate (in the center).
Finally, in our application we take the absolute value of the NSST results. This is motivated, as explained in Appendix A, by the fact that the absolute value of the shearlet transform (for brevity, we use in this paper the expression absolute shearlet transform) gives an indication of the edge orientation.

2.5. The Proposed Method

The workflow of our method (which, for brevity, we indicate with AST, referring to Absolute value Shearlet Transform) is illustrated in Figure 8 and described in the following. Firstly, the images were acquired and, in each of them, the prostate was manually segmented by a radiologist with 14 years of MR imaging experience, ensuring a high level of accuracy in delineating the volumes of interest [42]. An example of a slice of an image and the ROI segmented by the radiologist is shown in Figure 2. Moreover, biopsy tests were conducted to detect the presence of tumor inside the prostate.
In the second step, the gray levels of the images in the dataset were normalized to the interval [0, 1]; then the shearlet transform decomposition (via the NSST discretization) was computed on each slice of the images and the absolute value was applied. The number of levels of the NSST decomposition in other applications (such as [23,26,54]) is usually set to a small number, from 3 to 6. Thus, we carried out an analysis with a total number of decomposition levels varying from 1 to 6. For each chosen level, we collected the approximation image and all the detail images up to that level. The number of features extracted also depends on the number of orientations (which must be a power of 2). Since one of our goals was to compare the shearlet method with the standard wavelet method, in order to have similar numbers of features between the shearlet and wavelet approaches, the number of orientations was set to 8 for each level (we recall that the SWT decomposition produces exactly 8 coefficient images at each level).
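The following Python-style sketch summarizes this preprocessing step. The function nsst_decompose is a hypothetical stand-in for the NSST decomposition, which in our workflow is performed with the MATLAB shearlet toolbox cited in Section 2.4; only the output structure and the normalization and absolute-value steps are mimicked here.

```python
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Rescale the gray levels to the interval [0, 1]."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)

def nsst_decompose(slice_2d: np.ndarray, levels: int, orientations: int):
    """Hypothetical stand-in for the NSST decomposition of one 2D slice.

    The real computation is done with the MATLAB shearlet toolbox; here we
    only mimic its output: one approximation image and, for every level,
    one detail image per orientation, all with the size of the input slice.
    """
    approx = np.zeros_like(slice_2d)
    details = [[np.zeros_like(slice_2d) for _ in range(orientations)]
               for _ in range(levels)]
    return approx, details

levels, orientations = 5, 8               # values used in this study
volume = np.random.rand(24, 256, 256)     # illustrative MR volume (slices first)

decomposed = []
for slice_2d in normalize(volume):
    approx, details = nsst_decompose(slice_2d, levels, orientations)
    # Absolute Shearlet Transform (AST) filtering: keep |coefficients| only.
    decomposed.append((np.abs(approx),
                       [[np.abs(d) for d in level] for level in details]))
```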
Next, the radiomics features were extracted using PyRadiomics, starting from the manual segmentation superimposed on the original images, on the approximation images and on the detail images obtained from the absolute shearlet decomposition. All the possible types of features, as described in Section 2.2, were enabled (the shape-based features were extracted only once since they depend only on the ROIs). The number of features involved for each maximum decomposition level is listed in Table 2.
In the fourth step, the extracted features were ordered by decreasing relevance. This is a standard technique for training the models on the most relevant features [57]. In particular, we performed the ordering by using the MATLAB function fscchi2, which works as follows. For each feature, a chi-square test [60] is applied to the values extracted from all the images against the responses, i.e., the image types (Group 1 or Group 2). Once all the p-values are calculated, the features are ordered by increasing p-value. A small p-value of the test gives an indication that the feature is dependent on the response variable and is therefore more relevant.
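The ranking produced by fscchi2 can be approximated in Python as in the following hedged sketch: each feature is discretized into bins, a contingency table against the two groups is built and the features are sorted by the resulting chi-square p-values (the binning strategy, the column names and the random data are illustrative assumptions, not the exact fscchi2 algorithm).

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def rank_features_by_chi2(X: pd.DataFrame, y: np.ndarray, n_bins: int = 10):
    """Order the features by increasing chi-square p-value against the labels y."""
    p_values = {}
    for name in X.columns:
        # Discretize the continuous feature into quantile bins, then test the
        # independence between the binned feature and the group label.
        binned = pd.qcut(X[name], q=n_bins, duplicates="drop")
        _, p, _, _ = chi2_contingency(pd.crosstab(binned, y))
        p_values[name] = p
    # Smaller p-value -> stronger dependence on the response -> more relevant.
    return sorted(p_values, key=p_values.get)

# Illustrative usage with random data (73 cases, 5 features, 2 groups).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(73, 5)),
                 columns=[f"feature_{i}" for i in range(5)])
y = np.array([1] * 45 + [0] * 28)
print(rank_features_by_chi2(X, y))
```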
The final step of the method was the training and validation of binary classification models, for the prediction of the lesion type (prostate cancer or non-neoplastic lesion), and the resulting feature selection. Specifically, as classifiers we considered the linear discriminant analysis model, the linear Support Vector Machine (SVM) model and the decision tree model. Moreover, we performed a repeated (10 times) k-fold cross-validation, with k = 5, for each case. (In a k-fold cross-validation, the dataset is randomly partitioned into k equally sized subsamples (folds). A single fold is used for the validation of the model, and the remaining k − 1 folds are used as training data. This is repeated k times, with each of the k folds employed exactly once as the validation data, and the results are averaged. In our case, we repeated the k-fold cross-validation 10 times to obtain more stable estimates; each time, the partition was stratified, i.e., each fold contained about the same proportions of the two groups, prostate cancer and non-neoplastic lesion cases [61].)
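The repeated stratified 5-fold cross-validation can be set up as in the following sketch, which uses scikit-learn purely as an illustrative analogue of the MATLAB workflow actually employed (the feature matrix and labels are placeholders).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(73, 3))           # illustrative feature matrix
y = np.array([1] * 45 + [0] * 28)      # Group 1 (cancer) vs. Group 2 labels

# 5-fold cross-validation repeated 10 times; stratification keeps roughly the
# same proportion of the two groups in every fold.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
aucs = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                       scoring="roc_auc", cv=cv)
print(f"AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```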
Because of the very high number of features, to reduce the computational time of the training of each model and to perform the feature selection, we followed this process (a sketch of the selection loop is given after the list):
  • we started by training the model on the first feature and took the AUC value as the main performance index (see Section 2.6 for the definition of AUC);
  • we then trained the model on the first two features and calculated the new AUC value;
  • if the new AUC was less than the previous one, the process stopped and we kept only the first feature and the model of point 1; if instead the new AUC value was greater than the previous one, we trained the model on the first three features and calculated the corresponding AUC value;
  • the process iterated and stopped at the first M features when the model trained on the first M + 1 features gave an AUC less than that obtained with the first M features, which thus constituted the features selected at the end.
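A hedged sketch of this incremental selection loop follows; it assumes the columns of X_ranked are already ordered by relevance (as produced in the previous step) and reuses the cross-validation setting shown above, with the linear discriminant analysis classifier as an example.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def select_top_features_by_auc(X_ranked: np.ndarray, y: np.ndarray) -> int:
    """Return the number M of top-ranked features kept by the stopping rule."""
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
    best_auc, n_selected = -np.inf, 0
    for m in range(1, X_ranked.shape[1] + 1):
        # Train and validate the model on the first m features only.
        auc = cross_val_score(LinearDiscriminantAnalysis(),
                              X_ranked[:, :m], y,
                              scoring="roc_auc", cv=cv).mean()
        if auc < best_auc:
            break                     # adding one more feature lowered the AUC
        best_auc, n_selected = auc, m
    return n_selected

# Illustrative call with random data: 73 cases, 20 ranked features.
rng = np.random.default_rng(1)
X_ranked = rng.normal(size=(73, 20))
y = np.array([1] * 45 + [0] * 28)
print(select_top_features_by_auc(X_ranked, y))
```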
To validate our method, we made a comparison with the wavelet transform (which, for brevity, we indicate with the WT method), which is already implemented in PyRadiomics (the PyRadiomics settings for the wavelet filtering were kept at their defaults, apart from the bin width for each level, which depends on the range of coefficient values), performing parallel tests on the same normalized dataset and with the same partitions for the 5-fold cross-validation. The radiomics analysis workflow was analogous: the total number of levels was chosen from 1 to 6, and we took the final approximation coefficients and the detail coefficients up to the final level.
Finally, we also made a comparison with the radiomics analysis on the features extracted just from the original images, without applying any filter.

2.6. Performance Metrics

To compare the radiomics analyses carried out with the two different filters (shearlet and wavelet), we evaluated the classification models in terms of standard performance indices. The main index we look at is the
$$
\mathrm{AUC} = \text{area under the receiver operating characteristic (ROC) curve}.
$$
The ROC curve [62] is a plot showing the performance of a binary classification model at varying threshold values (an example is shown in Figure 9). The AUC of a ROC curve is a number between 0 and 1 and represents the probability that the classifier will rank a randomly chosen positive instance higher than a randomly chosen negative instance (thus a high value of AUC is desired).
The other indices we computed (with values between 0 and 1, too) are:
$$
\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad \mathrm{Specificity} = \frac{TN}{TN + FP},
$$
$$
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},
$$
where $TP$ is the number of true positives, $TN$ the number of true negatives, $FP$ the number of false positives and $FN$ the number of false negatives obtained from a test. Since we executed a repeated k-fold cross-validation on the models, we finally have several tests. Then, to summarize the values, we calculated the mean and the standard deviation of each index.
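For reference, these indices can be computed from validation predictions as in the sketch below (scikit-learn is used for illustration; in our study the analogous computations were carried out in MATLAB, and the labels and scores here are placeholders).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def binary_metrics(y_true, y_pred, y_score):
    """AUC, sensitivity, specificity and accuracy of a binary classifier."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "Sensitivity": tp / (tp + fn),
        "Specificity": tn / (tn + fp),
        "Accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Illustrative usage: true labels, continuous scores and thresholded predictions.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0])
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.7, 0.2, 0.1])
y_pred = (y_score >= 0.5).astype(int)
print(binary_metrics(y_true, y_pred, y_score))
```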

3. Results

Table 3 contains the results of the performance evaluation described above for the three methods in comparison (AST: features extracted from the original images and from the preprocessing with the Absolute Shearlet Transform; WT: features extracted from the original images and from the preprocessing with the Wavelet Transform; Original: features extracted only from the original images). Additional evaluations, concerning the TP, TN, FP and FN numbers, are in Table 4. For the AST and WT methods, the values refer to the best result over the total decomposition levels from 1 to 6. In particular, as one can see in the AUC graphs of Figure 10 and Figure 11, related to the most performant models (linear discriminant analysis and linear SVM), the best result for AST is obtained with a total of 5 levels, while the best result for WT is obtained with a total of 2 levels.
Table 3 also reports the numbers of features selected by the methods; in particular, we highlight that our shearlet approach selected fewer features (1 vs. 2). The features considered by the linear discriminant analysis and by the linear SVM can be read in Table 5 and will be discussed in the next section (for more details about the definitions, see https://pyradiomics.readthedocs.io/en/latest/features.html (accessed on 15 March 2024)). We specify that “shearlet5orientation7_glcm_Idn” refers to the GLCM texture feature Inverse Difference Normalized (IDN) of the shearlet detail coefficients at the 5th level and with the 7th range of orientation (which is pictured in Figure 12). Moreover, “wavelet1LHH_glcm_ClusterShade” corresponds to the GLCM texture feature Cluster Shade calculated on the wavelet detail coefficients LHH of the second level, and “wavelet1HHL_glszm_ZoneVariance” refers to the GLSZM feature Zone Variance on the wavelet detail coefficients HHL of the second level.
In Figure 13, we show some examples containing prostates and, in some cases, also the cancers, additionally discussing the predictions of the best models.
For completeness, we also took into consideration all the features together, from the original images and from the shearlet and wavelet decompositions. This analysis does not improve the results. The best case scenario remains that of the models trained exclusively on the “shearlet5orientation7_glcm_Idn” feature.

4. Discussion

As can be seen from Table 3, the proposed radiomics method, based on the extraction of features from the images processed by the absolute shearlet transform, gives better results for the prediction of prostate cancer in comparison with the classical wavelet transform.
The best classifiers are the linear discriminant analysis and the linear SVM. Thus, in the following we confine our considerations to these models. The means, not only of the AUC but also of the other performance indices, are higher for AST (with the exception of the sensitivity for the linear SVM classifier) and, at the same time, the standard deviations are lower, meaning a reduced variability among the test cases. With this observation, we can state that the absolute shearlet transform is a finer preprocessing tool. This is also supported by the fact that the analysis with all the features does not improve on the results of the models trained on the feature “shearlet5orientation7_glcm_Idn”.
The AST method significantly outperforms (with approximately a 10% difference in AUC) the procedure that solely involves feature extraction from the original images. While the specificity is higher in the latter case, its corresponding sensitivity is notably low, and the accuracy is lower compared to AST. Moreover, looking at Table 4, we can observe that AST has the lowest number of incorrect predictions (the sum of FP and FN) among all the methods.
In [57], a prediction model for prostate cancer was developed using a different statistical method and without the use of any filter. As previously said, the dataset from [57] overlaps with that of our study, comprising 71 cases. Since the datasets are almost identical, we report the performance indices of [57]: AUC of 68.46%, sensitivity of 76.25%, specificity of 73.15% and accuracy of 71.02%. Although our primary aim is not to identify the best predictive model, it is possible to see that the results obtained with the shearlet transform are better.
As already remarked, the proposed method leads to a single selected feature, while the WT method needs two features. From a theoretical point of view, this difference translates into simpler prediction models for the AST method. Remaining on the subject, we discuss the interpretation of the selected features. As stated at https://pyradiomics.readthedocs.io/en/latest/features.html (accessed on 15 March 2024), the inverse difference normalized is a measure of the local homogeneity of an image. The fact that the shearlet feature is relevant in distinguishing the prostate cancer and the non-neoplastic lesion cases means that there is a difference in local homogeneity between the two cases in the detail image at the 5th level and with the 7th orientation. We attribute the relevance of this feature to the fact that a prostate cancer is delimited by edges (even if faint in some cases) and, as stated by Theorem A1, the absolute shearlet transform efficiently recognizes the presence of edges. On the other hand, wavelet1LHH_glcm_ClusterShade is a measure of the asymmetry about the mean of the GLCM when the low-pass filter is applied to the first variable and the high-pass filter is applied to the second and third variables. The feature wavelet1HHL_glszm_ZoneVariance instead measures the variance in region volumes for the zones when the high-pass filter is applied to the first and second variables and the low-pass filter is applied to the third variable. Finally, original_shape_MinorAxisLength is the second-largest axis length of the ellipsoid which encloses the prostate. Although the simplicity of using one or two features for classification may be unexpected, it is essential to consider that radiomics features, especially those derived from advanced transforms such as shearlets or wavelets, can capture complex information from medical images. Obviously, it is crucial to conduct further analyses involving larger patient cohorts and comprehensive clinical data. Interpreting radiomics features in clinical settings is challenging due to the complex nature of medical imaging data.
As can be deduced from their definitions (https://pyradiomics.readthedocs.io/en/latest/features.html, accessed on 15 March 2024), the shape descriptors, the first-order statistics and the texture matrices are invariant if the image and the ROI are translated by the same quantity, or if they are both mirror-reflected or rotated by multiples of 90 degrees. Consequently, the Original method produces the same result if a translation, a mirror reflection or such a rotation is applied. Moreover, since the wavelet and shearlet transforms do not work with a preferred direction (indeed, for instance, the low-pass and high-pass filters in the SWT decomposition are the same for each spatial variable) and incorporate translations, the types of features (like Idn, ClusterShade and ZoneVariance) obtained by the AST and WT methods do not change if mirror reflections, rotations by multiples of 90 degrees or shifts are applied. Of course, the labels like orientation7 for AST and LHH for WT, which depend on the direction, change according to the reflection or the rotation.
Our study refrained from exploring numerous machine learning or feature selection methods, as we focused on evaluating the impact of integrating the absolute shearlet transform into the traditional radiomics workflow. We compared it with the wavelet transform, which is usually used in radiomics and is closely related to it. Therefore, we employed conventional techniques to assess the improvement achieved by the absolute shearlet transform. Future investigations should aim for a more comprehensive radiomics analysis, moving beyond the assessment of shearlet transform utility and striving to identify a radiomics pipeline that maximizes the discriminatory power of predictive models. To address this aim, we plan to incorporate the shearlet transform into matRadiomics, a tool developed by our research group [63], facilitating broader-spectrum analyses in the future.
Another limitation is that our study is restricted to T2-weighted (T2w) images, whereas multiparametric prostate MRI studies include diffusion-weighted (DWI) and dynamic contrast-enhanced (DCE) sequences. A clinically relevant radiomics analysis should be capable of accurately categorizing these sequences as well. However, this work represents the initial phase of applying the shearlet transform to prostate cancer radiomics analysis on MR images. Additional investigation is necessary to determine whether a similar approach can be applied to DWI and DCE images.

5. Conclusions

The shearlet transform represents an evolution of the wavelet transform, and applications like denoising, image fusion and segmentation [22,23,24,25,26,27,28] are a confirmation of this. However, currently, the shearlet transform has rarely been used to increase the number of radiomics features extracted [49,50,51,52] and, consequently, to enhance prediction models aimed at supporting clinical decisions.
In this study, we utilized the shearlet transform to decompose MR images of the prostate at various levels and orientations, conducting a radiomics analysis to differentiate between non-neoplastic lesion and prostate cancer cases. We confined the analysis up to the sixth level, following the approach adopted in some state-of-the-art studies that involve the shearlet transform. Additionally, we opted for a number of orientations equal to 8 for each level to maintain consistency with the number of detail images at the final level of the SWT decomposition. Furthermore, we conducted a comparative analysis of prostate cancer prediction performance using both the absolute shearlet transform and the classical wavelet transform as image preprocessing techniques. The results we obtained indicate that the absolute shearlet transform is more effective than the wavelet transform, which is a more popular filter in radiomics, in discovering crucial features.

Author Contributions

Conceptualization, R.C. and A.C.; methodology, R.C., A.S. and A.C.; software, R.C.; validation, R.C.; formal analysis, R.C.; investigation, R.C.; resources, G.S.; data curation, G.S.; writing—original draft preparation, R.C., A.S., G.S. and A.C.; writing—review and editing, A.S., G.S. and A.C.; visualization, R.C.; supervision, A.S. and A.C.; project administration, A.C.; funding acquisition, R.C. All authors have read and agreed to the published version of the manuscript.

Funding

R.C. acknowledges partial financial supports by the European Union through the Italian Ministry of University and Research (FSE-REACT EU, PON Ricerca e Innovazione 2014–2020, CUP B75F21002310001), by the Università degli Studi di Palermo (FFR2024) and by the “Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni” (CUP E53C23001670001).

Data Availability Statement

The datasets presented in this article are not readily available.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUC   Area Under the receiver operating characteristic Curve
AST   Absolute Shearlet Transform radiomics method
MR    Magnetic Resonance
NSST  Non-Subsampled Shearlet Transform
SVM   Support Vector Machine
SWT   Stationary Wavelet Transform
WT    Wavelet Transform radiomics method

Appendix A

In this appendix, we report more technical properties of the shearlet transform which are recalled in the paper. As said in the introduction, in contrast to the wavelet transform, the shearlet transform is able to precisely recognize the geometry of edges; in particular, this is described by the behavior of the shearlet transform for small values of $a$. More precisely, let us consider the following setting for the representation of a 2D image: $\Omega = [0,1]^2 = \bigcup_{k=1}^{n} \Omega_k \cup \Gamma$, where $\bigcup_{k=1}^{n} \Omega_k$ is the set of ‘object’ points, each $\Omega_k$ being a connected open subset for $k = 1, \dots, n$ (representing an object), and $\Gamma = \bigcup_{k=1}^{n} \partial\Omega_k$ is the set of ‘edge’ points, $\partial\Omega_k$ denoting the boundary of $\Omega_k$. We model an image as a function $f : \Omega \to \mathbb{R}$ satisfying
$$
f(x) = \sum_{k=1}^{n} u_k(x)\, \chi_{\Omega_k}(x), \qquad x \in \Omega \setminus \Gamma,
$$
for some functions $u_k : \Omega \to \mathbb{R}$, $k = 1, \dots, n$. Under this framework, the following result was proved in Theorem 4.1 of [20] (see also Theorem II of [21]).
Theorem A1.
Let $t \in \Omega$. If
1. $t \notin \Gamma$ (i.e., t is not a point of an edge), or
2. $t \in \Gamma$ (i.e., t is a point of an edge), the edge can be parametrized in a neighborhood of $t = (t_1, t_2)$ as a regular curve $(E(t_2), t_2)$ and $s \neq E'(t_2)$ (i.e., s does not correspond to the normal to the edge at the point t),
then
$$
\lim_{a \to 0} a^{-\frac{3}{4}} S_\psi f(a, s, t) = 0.
$$
Finally, if $t \in \Gamma$, the edge can be parametrized in a neighborhood of $t = (t_1, t_2)$ as a regular curve $(E(t_2), t_2)$ and $s = E'(t_2)$ (i.e., s corresponds to the normal to the edge at the point t), then there exist $c_1, c_2 > 0$ such that
$$
c_1 \, |[f]_t| \;\leq\; \lim_{a \to 0} a^{-\frac{3}{4}} \, |S_\psi f(a, s, t)| \;\leq\; c_2 \, |[f]_t|, \tag{A1}
$$
where $[f]_t$ denotes the jump of f at t in the direction normal to the edge.
By (A1), in the presence of an effective jump, i.e., when $[f]_t \neq 0$, we have $\lim_{a \to 0} a^{-\frac{3}{4}} S_\psi f(a,s,t) \neq 0$. In conclusion, by Theorem A1, the values of $a^{-\frac{3}{4}} |S_\psi f(a,s,t)|$, for $a$ near zero, allow us to distinguish the orientation of edges. For this reason, in our application to radiomics we actually take the absolute shearlet transform into consideration rather than the shearlet transform itself.

References

  1. Mallat, S. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar] [CrossRef]
  2. Mallat, S.; Hwang, W. Singularity detection and processing with wavelets. IEEE Trans. Inf. Theory 1992, 38, 617–643. [Google Scholar] [CrossRef]
  3. Antonini, M.; Barlaud, M.; Mathieu, P.; Daubechies, I. Image coding using wavelet transform. IEEE Trans. Image Process. 1992, 1, 205–220. [Google Scholar] [CrossRef]
  4. Chang, S.; Yu, B.; Vetterli, M. Adaptive wavelet thresholding for image denoising and compression. IEEE Trans. Image Process. 2000, 9, 1532–1546. [Google Scholar] [CrossRef] [PubMed]
  5. Grinsted, A.; Moore, J.C.; Jevrejeva, S. Application of the cross wavelet transform and wavelet coherence to geophysical time series. Nonlinear Process. Geophys. 2004, 11, 561–566. [Google Scholar] [CrossRef]
  6. Lee, T.S. Image representation using 2D Gabor wavelets. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 959–971. [Google Scholar] [CrossRef]
  7. Daubechies, I.; Lu, J.; Wu, H.T. Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool. Appl. Comput. Harmon. Anal. 2011, 30, 243–261. [Google Scholar] [CrossRef]
  8. Daubechies, I. Ten Lectures on Wavelets; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1992. [Google Scholar] [CrossRef]
  9. Mallat, S. A Wavelet Tour of Signal Processing; Elsevier: Amsterdam, The Netherlands, 2009. [Google Scholar] [CrossRef]
  10. Nason, G.P.; Silverman, B.W. The Stationary Wavelet Transform and Some Statistical Applications; Springer: Berlin/Heidelberg, Germany, 1995. [Google Scholar] [CrossRef]
  11. Christensen, O. An Introduction to Frames and Riesz Bases. Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
  12. Corso, R. Generalized frame operator, lower semiframes, and sequences of translates. Math. Nachrichten 2023, 296, 2715–2733. [Google Scholar] [CrossRef]
  13. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106. [Google Scholar] [CrossRef]
  14. Candès, E.J.; Donoho, D.L. Ridgelets: A key to higher-dimensional intermittency? Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 1999, 357, 2495–2509. [Google Scholar] [CrossRef]
  15. Candès, E.J.; Donoho, D.L. New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities. Commun. Pure Appl. Math. 2004, 57, 219–266. [Google Scholar] [CrossRef]
  16. Kingsbury, N. Complex Wavelets for Shift Invariant Analysis and Filtering of Signals. Appl. Comput. Harmon. Anal. 2001, 10, 234–253. [Google Scholar] [CrossRef]
  17. Easley, G.; Labate, D.; Lim, W.Q. Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 2008, 25, 25–46. [Google Scholar] [CrossRef]
  18. Kutyniok, G.; Labate, D. Introduction to Shearlets; Springer: Boston, MA, USA, 2012. [Google Scholar] [CrossRef]
  19. Mallat, S. Geometrical grouplets. Appl. Comput. Harmon. Anal. 2009, 26, 161–180. [Google Scholar] [CrossRef]
  20. Guo, K.; Labate, D.; Lim, W.Q. Edge analysis and identification using the continuous shearlet transform. Appl. Comput. Harmon. Anal. 2009, 27, 24–46. [Google Scholar] [CrossRef]
  21. Yi, S.; Labate, D.; Easley, G.R.; Krim, H. A shearlet approach to edge analysis and detection. IEEE Trans. Image Process. 2009, 18, 929–941. [Google Scholar] [CrossRef] [PubMed]
  22. Easley, G.R.; Labate, D.; Colonna, F. Shearlet-based total variation diffusion for denoising. IEEE Trans. Image Process. 2009, 18, 260–268. [Google Scholar] [CrossRef] [PubMed]
  23. Hou, B.; Zhang, X.; Bu, X.; Feng, H. SAR image despeckling based on nonsubsampled shearlet transform. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2012, 5, 809–823. [Google Scholar] [CrossRef]
  24. Yang, H.Y.; Wang, X.Y.; Niu, P.P.; Liu, Y.C. Image denoising using nonsubsampled shearlet transform and twin support vector machines. Neural Netw. 2014, 57, 152–165. [Google Scholar] [CrossRef]
  25. Miao, Q.G.; Shi, C.; Xu, P.F.; Yang, M.; Shi, Y.B. A novel algorithm of image fusion using shearlets. Opt. Commun. 2011, 284, 1540–1547. [Google Scholar] [CrossRef]
  26. Yin, M.; Liu, X.; Liu, Y.; Chen, X. Medical Image Fusion with Parameter-Adaptive Pulse Coupled Neural Network in Nonsubsampled Shearlet Transform Domain. IEEE Trans. Instrum. Meas. 2019, 68, 49–64. [Google Scholar] [CrossRef]
  27. Helmy, A.K.; El-Taweel, G.S. Image segmentation scheme based on SOM-PCNN in frequency domain. Appl. Soft Comput. J. 2016, 40, 405–415. [Google Scholar] [CrossRef]
  28. Wang, X.; Zhao, X.; Zhu, Y.; Su, X. NSST and vector-valued C-V model based image segmentation algorithm. IET Image Process. 2020, 14, 1614–1620. [Google Scholar] [CrossRef]
  29. Laudicella, R.; Comelli, A.; Liberini, V.; Vento, A.; Stefano, A.; Spataro, A.; Crocè, L.; Baldari, S.; Bambaci, M.; Deandreis, D.; et al. [68Ga]DOTATOC PET/CT Radiomics to Predict the Response in GEP-NETs Undergoing [177Lu]DOTATOC PRRT: The “Theragnomics” Concept. Cancers 2022, 14, 984. [Google Scholar] [CrossRef] [PubMed]
  30. Pasini, G.; Stefano, A.; Russo, G.; Comelli, A.; Marinozzi, F.; Bini, F. Phenotyping the Histopathological Subtypes of Non-Small-Cell Lung Carcinoma: How Beneficial Is Radiomics? Diagnostics 2023, 13, 1167. [Google Scholar] [CrossRef] [PubMed]
  31. Vernuccio, F.; Arnone, F.; Cannella, R.; Verro, B.; Comelli, A.; Agnello, F.; Stefano, A.; Gargano, R.; Rodolico, V.; Salvaggio, G.; et al. Diagnostic performance of qualitative and radiomics approach to parotid gland tumors: Which is the added benefit of texture analysis? Br. J. Radiol. 2021, 94, 20210340. [Google Scholar] [CrossRef] [PubMed]
  32. Logotheti, S.; Georgakilas, A.G. More than Meets the Eye: Integration of Radiomics with Transcriptomics for Reconstructing the Tumor Microenvironment and Predicting Response to Therapy. Cancers 2023, 15, 1634. [Google Scholar] [CrossRef] [PubMed]
  33. Kumar, R.; Gupta, A.; Arora, H.S.; Pandian, G.N.; Raman, B. CGHF: A Computational Decision Support System for Glioma Classification Using Hybrid Radiomics- and Stationary Wavelet-Based Features. IEEE Access 2020, 8, 79440–79458. [Google Scholar] [CrossRef]
  34. Jing, R.; Wang, J.; Li, J.; Wang, X.; Li, B.; Xue, F.; Shao, G.; Xue, H. A wavelet features derived radiomics nomogram for prediction of malignant and benign early-stage lung nodules. Sci. Rep. 2021, 11, 22330. [Google Scholar] [CrossRef]
  35. Zhou, J.; Lu, J.; Gao, C.; Zeng, J.; Zhou, C.; Lai, X.; Cai, W.; Xu, M. Predicting the response to neoadjuvant chemotherapy for breast cancer: Wavelet transforming radiomics in MRI. BMC Cancer 2020, 20, 100. [Google Scholar] [CrossRef]
  36. Choe, J.; Lee, S.M.; Do, K.H.; Lee, G.; Lee, J.G.; Lee, S.M.; Seo, J.B. Deep Learning–based Image Conversion of CT Reconstruction Kernels Improves Radiomics Reproducibility for Pulmonary Nodules or Masses. Radiology 2019, 292. [Google Scholar] [CrossRef]
  37. Ansari, G.; Mirza-Aghazadeh-Attari, M.; Afyouni, S.; Mohseni, A.; Shahbazian, H.; Kamel, I.R. Utilization of texture features of volumetric ADC maps in differentiating between serous cystadenoma and intraductal papillary neoplasms. Abdom. Radiol. 2024, 49, 1175–1184. [Google Scholar] [CrossRef] [PubMed]
  38. Qian, W.L.; Chen, Q.; Zhang, J.B.; Xu, J.M.; Hu, C.H. RESOLVE-based radiomics in cervical cancer: Improved image quality means better feature reproducibility? Clin. Radiol. 2023, 78, E469–E476. [Google Scholar] [CrossRef] [PubMed]
  39. Brown, K.H.; Ghita-Pettigrew, M.; Kerr, B.N.; Mohamed-Smith, L.; Walls, G.M.; McGarry, C.K.; Butterworth, K.T. Characterisation of quantitative imaging biomarkers for inflammatory and fibrotic radiation-induced lung injuries using preclinical radiomics. Radiother. Oncol. 2024, 192, 110106. [Google Scholar] [CrossRef] [PubMed]
  40. Tang, V.H.; Duong, S.T.M.; Nguyen, C.D.T.; Huynh, T.M.; Duc, V.T.; Phan, C.; Le, H.; Bui, T.; Truong, S.Q.H. Wavelet radiomics features from multiphase CT images for screening hepatocellular carcinoma: Analysis and comparison. Sci. Rep. 2023, 13, 19559. [Google Scholar] [CrossRef] [PubMed]
  41. Ferro, M.; de Cobelli, O.; Musi, G.; del Giudice, F.; Carrieri, G.; Busetto, G.M.; Falagario, U.G.; Sciarra, A.; Maggi, M.; Crocetto, F.; et al. Radiomics in prostate cancer: An up-to-date review. Ther. Adv. Urol. 2022, 14, 175628722211090. [Google Scholar] [CrossRef]
  42. Cutaia, G.; la Tona, G.; Comelli, A.; Vernuccio, F.; Agnello, F.; Gagliardo, C.; Salvaggio, L.; Quartuccio, N.; Sturiale, L.; Stefano, A.; et al. Radiomics and prostate MRI: Current role and future applications. J. Imaging 2021, 7, 34. [Google Scholar] [CrossRef] [PubMed]
  43. Lee, H.; Hwang, S.I.; Lee, H.J.; Byun, S.S.; Lee, S.E.; Hong, S.K. Diagnostic performance of diffusion-weighted imaging for prostate cancer: Peripheral zone versus transition zone. PLoS ONE 2018, 13, e0199636. [Google Scholar] [CrossRef] [PubMed]
  44. Hambrock, T.; Vos, P.C.; de Kaa, C.A.H.; Barentsz, J.O.; Huisman, H.J. Prostate Cancer: Computer-aided Diagnosis with Multiparametric 3-T MR Imaging—Effect on Observer Performance. Radiology 2013, 266, 521–530. [Google Scholar] [CrossRef]
  45. Sidhu, H.S.; Benigno, S.; Ganeshan, B.; Dikaios, N.; Johnston, E.W.; Allen, C.; Kirkham, A.; Groves, A.M.; Ahmed, H.U.; Emberton, M.; et al. Textural analysis of multiparametric MRI detects transition zone prostate cancer. Eur. Radiol. 2017, 27, 2348–2358. [Google Scholar] [CrossRef]
  46. Griethuysen, J.J.V.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.; Beets-Tan, R.G.; Fillion-Robin, J.C.; Pieper, S.; Aerts, H.J. Computational radiomics system to decode the radiographic phenotype. Cancer Res. 2017, 77, e104–e107. [Google Scholar] [CrossRef]
Figure 1. The effects of the parameters a and s on the support of a function ψ: R^2 → R. The support of ψ (coinciding with that of ψ_{1,0,(0,0)}) is shown in (a), and each panel reports the support of ψ_{a,s,t} for different values of a and s (the location parameter is always t = (0,0)). All panels use the same scale. (a) a = 1 and s = 0, (b) a = 1 and s = 1, (c) a = 1 and s = −1, (d) a = 1/4 and s = 0, (e) a = 1/4 and s = 1, (f) a = 1/4 and s = 3.
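For quick reference when reading Figure 1, the display below (given as LaTeX source) recalls a standard parameterization of two-dimensional shearlets through parabolic scaling and shearing; the normalization factor may differ from the definition adopted earlier in the paper.

\psi_{a,s,t}(x) = a^{-\frac{3}{4}}\,\psi\!\left(A_a^{-1} S_s^{-1}(x - t)\right),
\qquad
A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix},
\qquad
S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}.

Here a controls the anisotropic dilation (for small a the support becomes elongated, as in panels (d)-(f) of Figure 1), while the shear s slants the support and thus selects an orientation.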
Figure 2. In (a), a slice of an MR image containing a prostate (in the center). In (b), the corresponding mask containing the ROI (the prostate manually segmented by the radiologist).
Figure 3. Scheme of the first-level SWT decomposition of a 3D image f. The result is the bottom row, i.e., the approximation image f_LLL^1 and the detail images f_LLH^1, f_LHL^1, f_LHH^1, f_HLL^1, f_HLH^1, f_HHL^1 and f_HHH^1.
Figure 4. Scheme of the SWT decomposition of a 3D image with two levels. The result includes the detail images of the first and second levels (f_LLH^k, f_LHL^k, f_LHH^k, f_HLL^k, f_HLH^k, f_HHL^k, f_HHH^k, k = 1, 2) and the approximation image f_LLL^2 of the second level.
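To make the scheme of Figures 3 and 4 concrete, here is a minimal Python sketch of a two-level stationary wavelet transform of a 3D volume using PyWavelets. It is an illustration only: the wavelet name, the number of levels and the array sizes are assumptions, not the settings of the study.

import numpy as np
import pywt

# Stand-in for a 3D MR volume; with the SWT each transformed axis length
# must be divisible by 2**level.
volume = np.random.rand(64, 64, 32)

# Two-level stationary (undecimated) wavelet transform; the wavelet is an
# arbitrary choice for this sketch.
coeffs = pywt.swtn(volume, wavelet="coif1", level=2)

# coeffs contains one dictionary per level; its keys combine 'a' (low-pass, L)
# and 'd' (high-pass, H) along the three axes, so 'aaa' plays the role of LLL
# and 'ddd' the role of HHH in Figures 3 and 4. The scheme of Figure 4 keeps
# the detail subbands of every level and only the last approximation subband.
for level_coeffs in coeffs:
    for key, subband in level_coeffs.items():
        print(key, subband.shape)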
Figure 5. The subdivision of the Fourier domain for the cone-adapted continuous shearlet transform.
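As a reminder of what Figure 5 depicts, one standard partition of the frequency plane (the exact thresholds may differ from the ones used earlier in the paper) consists of a low-frequency square and two cones, written here in LaTeX:

\mathcal{R} = \left\{ (\xi_1,\xi_2) : |\xi_1| \le 1,\ |\xi_2| \le 1 \right\}, \qquad
\mathcal{C}^{h} = \left\{ (\xi_1,\xi_2) : |\xi_1| > 1,\ \left|\tfrac{\xi_2}{\xi_1}\right| \le 1 \right\}, \qquad
\mathcal{C}^{v} = \left\{ (\xi_1,\xi_2) : |\xi_2| > 1,\ \left|\tfrac{\xi_1}{\xi_2}\right| \le 1 \right\}.

Shearlets associated with the horizontal cone C^h cover orientations with shear |s| ≤ 1, those associated with the vertical cone C^v cover the remaining orientations, and a separate scaling function handles the low-frequency region R.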
Figure 6. Scheme of the NSST decomposition of a 2D image with two levels. The result includes the detail images of the first and second levels (f_{d,1}^k, …, f_{d,m_k}^k, k = 1, 2) and the approximation image f_a^2 of the second level.
Figure 7. Some images resulting from an NSST decomposition (with three levels and four orientations per level) of a slice of an MR image containing a prostate. First row, left: the original image. First row, right: the NSST approximation image at the third level. The remaining rows show the detail images for different levels and orientations (first level in the second row, second level in the third row and third level in the fourth row).
Figure 8. The steps of the proposed method. Starting from the MR images and the corresponding prostate segmentations, the NSST decomposition (followed by taking the absolute value) was applied to the images and radiomics features were extracted with PyRadiomics. These features were then ranked and selected for training and validating the machine learning models that predict the lesion type (taking into account the histological results obtained from the biopsy).
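The following Python sketch mirrors the pipeline of Figure 8 up to feature extraction. It is illustrative only: nsst_decompose is a hypothetical placeholder (no specific NSST implementation is assumed here), the file names and the numbers of levels and orientations are made up, and only the PyRadiomics and SimpleITK calls correspond to real APIs.

import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor

def nsst_decompose(volume, levels, orientations):
    """Hypothetical placeholder for an NSST implementation: it should return a
    dictionary mapping (level, orientation) to a detail image with the same
    shape as `volume`."""
    raise NotImplementedError("plug in an NSST implementation here")

# Assumed file names, for illustration only.
image = sitk.ReadImage("patient_T2w.nii.gz")
mask = sitk.ReadImage("patient_prostate_mask.nii.gz")

extractor = featureextractor.RadiomicsFeatureExtractor()  # default PyRadiomics settings

features = {}
volume = sitk.GetArrayFromImage(image)
for (lev, ori), detail in nsst_decompose(volume, levels=5, orientations=8).items():
    detail_img = sitk.GetImageFromArray(np.abs(detail))  # absolute value, as in the AST method
    detail_img.CopyInformation(image)                    # preserve spacing/origin/direction
    for name, value in extractor.execute(detail_img, mask).items():
        # Illustrative naming, loosely following the labels of Table 5.
        features[f"shearlet{lev}orientation{ori}_{name}"] = value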
Figure 9. The ROC curve obtained for a validation test of the linear discriminant analysis classifier for the AST method. The area under the ROC curve is the AUC.
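Curves like the one in Figure 9 can be computed from the scores of any binary classifier; a minimal scikit-learn sketch with synthetic scores (not data from the study) is:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic labels (1 = prostate cancer, 0 = non-neoplastic lesion) and scores.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
y_score = np.array([0.90, 0.80, 0.35, 0.60, 0.40, 0.20, 0.70, 0.55, 0.65, 0.30])

# False positive rate = 1 - specificity, true positive rate = sensitivity.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", roc_auc_score(y_true, y_score))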
Figure 10. Plots of the means of the performance indices of the best linear discriminant analysis model for different levels. In red: model trained with the absolute shearlet transform. In blue: model trained with the wavelet transform.
Figure 11. Plots of the means of the performance indices of the best linear SVM model for different levels. In red: model trained with the absolute shearlet transform. In blue: model trained with the wavelet transform.
Figure 12. The range of orientations associated with the feature selected by the AST method.
Figure 13. Examples of prostates delineated in green. Panels (a,b) are cases from Group 1 (with the prostate cancer in red), while panel (c) belongs to Group 2. In the following we refer to some predictions of the models in Table 5. The first case is a TP for AST and for Original (both classifiers), while it is an FN for WT (both classifiers). The second case is a TP for AST-linear discriminant analysis, for WT and for Original (both classifiers), while it is an FN for AST-linear SVM. The third case is a TN for AST (both classifiers), while it is an FP for WT and for Original (both classifiers).
Table 1. MR imaging parameters for the population study.
Parameter | T2w TSE
Repetition time (ms) | 3091
Echo time (ms) | 100
Flip angle (degrees) | 90
Slice thickness (mm) | 3.3
Reconstruction interval (mm) | 0.3
Acquisition matrix | 320 × 320
Signal averages | 3
Signal-to-noise ratio | 1
Voxel size (mm) | 0.5625 × 0.5625 × 3.3
Table 2. The numbers of features extracted for the two methods (AST and WT): applying the absolute shearlet transform or the wavelet transform.
Total decomposition levels | Features extracted (AST method) | Features extracted (WT method)
1 | 792 | 704
2 | 1496 | 1320
3 | 2200 | 1932
4 | 2904 | 2552
5 | 3608 | 3168
6 | 4312 | 3784
Table 3. Performance indices (expressed in percentage) for the three radiomics analyses. For the AST and WT methods, the values refer to the best result over the total decomposition levels, namely 5 levels for AST (with every classifier), 2 levels for WT with linear discriminant analysis and linear SVM, and 3 levels for WT with the decision tree. In each entry, the first number is the mean of the corresponding index over all tests of the repeated k-fold cross-validation and the second number (after the symbol ±) is the standard deviation over all tests. The last column gives the number of features selected.
Classifier | Method | AUC | Sensitivity | Specificity | Accuracy | N. Features
Linear Discriminant Analysis | AST | 81.5 ± 11.6 | 91.6 ± 8.3 | 64.0 ± 16.4 | 80.9 ± 8.7 | 1
Linear Discriminant Analysis | WT | 77.8 ± 13.4 | 91.1 ± 9.8 | 56.5 ± 20.3 | 77.9 ± 10.0 | 2
Linear Discriminant Analysis | Original | 71.8 ± 12.7 | 94.2 ± 8.5 | 45.7 ± 20.1 | 75.6 ± 8.6 | 1
Linear SVM | AST | 81.2 ± 11.5 | 90.7 ± 9.4 | 64.0 ± 16.4 | 80.4 ± 9.9 | 1
Linear SVM | WT | 77.2 ± 13.4 | 91.8 ± 8.9 | 53.7 ± 23.1 | 77.2 ± 10.2 | 2
Linear SVM | Original | 71.3 ± 12.9 | 96.0 ± 6.3 | 32.8 ± 19.1 | 71.9 ± 8.5 | 1
Decision Tree | AST | 73.9 ± 12.3 | 76.4 ± 11.6 | 68.7 ± 18.4 | 73.5 ± 9.8 | 1
Decision Tree | WT | 77.0 ± 11.8 | 80.9 ± 13.1 | 60.2 ± 22.5 | 73.0 ± 10.7 | 3
Decision Tree | Original | 61.9 ± 15.5 | 74.7 ± 15.6 | 51.2 ± 23.2 | 65.8 ± 10.3 | 2
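The mean ± standard deviation entries of Table 3 come from a repeated k-fold cross-validation; a minimal scikit-learn sketch of this kind of evaluation is given below (synthetic data, and the numbers of folds and repetitions are assumptions, not the protocol of the study).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic, mildly imbalanced binary dataset standing in for the radiomics features.
X, y = make_classification(n_samples=100, n_features=5, weights=[0.6], random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
auc = cross_val_score(LinearDiscriminantAnalysis(), X, y, scoring="roc_auc", cv=cv)

# One value per validation test; Table 3 reports this kind of mean and standard deviation.
print(f"AUC = {100 * auc.mean():.1f} ± {100 * auc.std():.1f} (percent)")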
Table 4. Means and standard deviations of the numbers of True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN) for the three radiomics analyses. The total decomposition levels for the AST and WT methods are those indicated in the caption of Table 3. The table should be read taking into account that each cross-validation fold contains about the same proportion of the two groups, i.e., on average 9 non-neoplastic lesion cases and 5.6 prostate cancer cases.
Classifier | Method | TP | TN | FP | FN
Linear Discriminant Analysis | AST | 8.24 ± 0.74 | 3.56 ± 0.88 | 2.04 ± 0.99 | 0.76 ± 0.74
Linear Discriminant Analysis | WT | 8.20 ± 0.88 | 3.16 ± 1.15 | 2.44 ± 1.15 | 0.80 ± 0.88
Linear Discriminant Analysis | Original | 8.48 ± 0.76 | 2.56 ± 1.11 | 3.04 ± 1.12 | 0.52 ± 0.76
Linear SVM | AST | 8.16 ± 0.84 | 3.56 ± 0.88 | 2.04 ± 0.99 | 0.84 ± 0.84
Linear SVM | WT | 8.26 ± 0.80 | 3.00 ± 1.29 | 2.60 ± 1.32 | 0.74 ± 0.80
Linear SVM | Original | 8.64 ± 0.56 | 1.84 ± 1.09 | 3.76 ± 1.12 | 0.36 ± 0.56
Decision Tree | AST | 6.88 ± 1.04 | 3.84 ± 1.09 | 1.76 ± 1.08 | 2.12 ± 1.04
Decision Tree | WT | 7.28 ± 1.18 | 3.38 ± 1.29 | 2.22 ± 1.23 | 1.72 ± 1.18
Decision Tree | Original | 6.72 ± 1.40 | 2.90 ± 1.39 | 2.70 ± 1.25 | 2.28 ± 1.04
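The indices of Table 3 are related to the counts of Table 4 by the usual definitions; note that Table 4 reports mean counts over the folds, so the ratios of these means only approximately reproduce the mean indices. A short sketch of the definitions, checked against the AST row of the linear discriminant analysis classifier:

def indices(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Mean counts from the AST / linear discriminant analysis row of Table 4.
sens, spec, acc = indices(tp=8.24, tn=3.56, fp=2.04, fn=0.76)
print(f"{100 * sens:.1f} {100 * spec:.1f} {100 * acc:.1f}")
# Prints 91.6 63.6 80.8, close to the 91.6, 64.0 and 80.9 reported in Table 3.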
Table 5. Features selected by the methods (for both the linear discriminant analysis and linear SVM classifiers) with regard to the best result over the levels (AST: 5 levels, WT: 2 levels). Note that PyRadiomics labels the levels of the wavelet filter starting from 0, so these wavelet features belong to the second level of the decomposition.
Method | Features Selected
AST | shearlet5orientation7_glcm_Idn
WT | wavelet1LHH_glcm_ClusterShade
WT | wavelet1HHL_glszm_ZoneVariance
Original | original_shape_MinorAxisLength
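For completeness, the single feature retained by the AST method in Table 5 is the GLCM feature Idn (Inverse Difference Normalized); its usual definition (following, up to notation, the PyRadiomics documentation) is, in LaTeX,

\mathrm{Idn} = \sum_{k=0}^{N_g - 1} \frac{p_{x-y}(k)}{1 + \frac{k}{N_g}},

where N_g is the number of discretized gray levels and p_{x-y}(k) is the probability that two voxels in the spatial relationship defining the GLCM differ by k gray levels.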