Cascaded Hough Transform-Based Hair Mask Generation and Harmonic Inpainting for Automated Hair Removal from Dermoscopy Images


Diagnostics (Basel). 2022 Dec; 12(12): 3040.

Published online 2022 Dec 4. doi:10.3390/diagnostics12123040

PMCID: PMC9777124

PMID: 36553047

Amira S. Ashour,1,* Basant S. Abd El-Wahab,1 Maram A. Wahba,1 Diaa-Eldin A. Mansour,2,* Abeer Abd Elhakam Hodeib,3 Rasha Abd El-Ghany Khedr,4 and Ghada F. R. Hassan3

Cecilia Di Ruberto, Academic Editor, Andrea Loddo, Academic Editor, Lorenzo Putzu, Academic Editor, Alessandro Stefano, Academic Editor, and Albert Comelli, Academic Editor


Abstract

Restoring information obstructed by hair is one of the main issues for the accurate analysis and segmentation of skin images. For retrieving pixels obstructed by hair, the proposed system converts dermoscopy images into the L*a*b* color space, then principal component analysis (PCA) is applied to produce grayscale images. Afterward, contrast-limited adaptive histogram equalization (CLAHE) and an average filter are applied to enhance the grayscale image. Subsequently, the binary image is generated using the iterative thresholding method. After that, the Hough transform (HT) is applied to each image block to generate the hair mask. Finally, the hair pixels are removed by harmonic inpainting. The performance of the proposed automated hair removal was evaluated by applying the proposed system to the International Skin Imaging Collaboration (ISIC) dermoscopy dataset as well as to clinical images. Six performance evaluation metrics were measured, namely the mean squared error (MSE), the peak signal-to-noise ratio (PSNR), the signal-to-noise ratio (SNR), the structural similarity index (SSIM), the universal quality image index (UQI), and the correlation (C). Using the clinical dataset, the system achieved MSE, PSNR, SNR, SSIM, UQI, and C values of 34.7957, 66.98, 42.39, 0.9813, 0.9801, and 0.9985, respectively. The results demonstrated that the proposed system could satisfy medical diagnostic requirements and achieve the best performance compared with the state of the art.

Keywords: skin cancer, dermoscopy, hair mask generation, Hough transform, harmonic image inpainting, adaptive Wiener filter

1. Introduction

Dermoscopy, also known as dermatoscopy, is a non-invasive skin imaging technique that enhances the visibility of subsurface structures compared to typical clinical images. Close examination of pigmented skin lesions in this manner improves the accuracy of the clinical diagnosis by introducing new morphological indicators for distinguishing malignant lesions from other melanocytic and non-melanocytic benign skin lesions. However, the use of dermoscopy by inexperienced physicians can reduce diagnostic accuracy. Therefore, the development of computer-based automated diagnostic systems is of significant importance in the early diagnosis of malignant lesions.

The existence of hair in dermoscopic images is a major challenge in computer-aided diagnostic systems. Hair pixels usually occlude the lesion area, affecting its morphological characteristics, such as texture and boundary. Hence, hair removal is an essential pre-processing stage in such systems. Inefficient hair removal from dermoscopy images causes poor segmentation and inaccurate pattern recognition. The conventional process of hair removal includes two main steps: (i) detecting and removing the hair pixels, then (ii) estimating the texture and color of the lesion and/or skin behind the identified hairs and reconstructing the removed hair pixels from the estimated skin pixels. The main challenges of hair detection are the presence of both thick and thin hairs, along with hairs whose colors are close to the skin lesion colors. Moreover, the reconstruction techniques may disrupt the tumors' texture and patterns by causing blurring or color bleeding. Furthermore, most hair removal systems require complex computations and high processing time.

Several techniques have been proposed for hair removal in dermoscopy images. One of the most widely implemented techniques is DullRazor [1], which applies thresholding in the three color bands after applying morphological closing operations in different directions. The verified hair pixels are then replaced by bilinear interpolation, followed by a median filter. Zagrouba et al. [2] applied DullRazor with a 5 × 5 median filter to exclude thick and thin hair on 200 RGB images and achieved 79.1% classification accuracy for malignant and benign lesions. Fiorese et al. [3] manually traced the hair on 20 RGB images and applied top-hat filtering to obtain the hair mask. After that, thresholding was applied to verify whether the output of the hair mask represented true or false hair based on geometric shape, followed by partial differential equation (PDE)-based inpainting. This method resulted in 15.6% misclassification, compared to 47.1% for DullRazor. Kiani et al. [4] applied edge detection using the Prewitt filter and the Radon transform to create the correct hair mask. Then, the non-hair pixels were removed, and the hair pixels were replaced by averaging the gray levels of the neighboring background.

Moreover, Xie et al. [5] designed a system based on morphological closing-based top hat filtering and thresholding for hair detection. Additionally, PDE-based inpainting anisotropic diffusion was utilized to replace the hair pixels. This procedure was employed on 40 images in which the hair masks were drawn manually as ground truth resulting in 18% hair extraction error compared to 30.7% using DullRazor. For hair detection, Abbas et al. [6] applied two-dimensional derivatives of Gaussian (DOG) and proposed an exemplar-based inpainting method for hair replacement. A region-based active contour model was used for the segmentation process. This method was applied on 320 images and improved the true detection rate by 4.31%. Abbas et al. [7] used the derivative of Gaussian (DOG) and morphological techniques for hair detection in the CIE L*a*b* color space, followed by coherence transport inpainting. This method was applied on 100 images and achieved 2.98% hair detection error (HDE) and 4.21% tumor-disturb patterns (TDP). Huang et al. [8] applied a multiscale matched filter with hysteresis thresholding for hair detection. Additionally, a region growing procedure with linear discriminant analysis (LDA) was employed for recovering complicated hair intersection patterns. This procedure was applied on 20 images and resulted in a 58% hair detection rate. Toossi et al. [9] utilized an adaptive canny edge detector with a morphological operator for hair detection, in addition to coherence transport inpainting with multiple resolutions on 50 images. The results achieved 88.3% diagnostic accuracy, a 93.2% true detection rate (TDR), and a 4% false positive rate (FPR). Joanna et al. [10] applied a Laplacian filter and top-hat transform on 50 dermoscopy images leading to 88.7% diagnostic accuracy and 90.8% sensitivity.

Francisco et al. [11] applied histogram-based contrast enhancement and a Canny edge detector for hair detection. George et al. [12] exploited grayscale morphological closing with structuring elements in various directions on the red channel only, followed by Otsu's thresholding and then 2D interpolation to restore the hair pixels. Koehoorn et al. [13] utilized gap detection using multiscale skeletons and a fast marching process for hair pixel inpainting. Salido et al. [14] applied a median filter and a bottom-hat filter on each of the RGB color channels for hair detection, followed by morphological operations, and replaced the hair pixels using harmonic inpainting. This procedure achieved a peak signal-to-noise ratio (PSNR) of 33.41, whereas DullRazor achieved 32.44. Hamet et al. [15] introduced a curvilinear hair detector based on a color morphological process over the CIE L*a*b* space, followed by a morphological inpainting procedure. Zaqout et al. [16] utilized a top-hat operator for finding the hair structures in the YIQ space, then the histogram and a morphological closing operation were utilized on each block for inpainting. Bibiloni et al. [17] introduced soft color closing top-hat operators for hair detection, while morphological transformations were used for the inpainting process with kernels of size 9 × 9 and 11 × 11. Finally, Talavera-Martínez et al. [18] designed convolutional neural networks for the hair removal process. The results showed superior performance compared to other state-of-the-art methods in terms of the average values of the different metrics, namely 27.847 MSE, 0.926 SSIM, 35.137 PSNR, and 4.790 RMSE using 185 dermoscopy images.

A common limitation of the previous related work is the difficulty of extracting and removing the hair pixels without affecting the region of interest (ROI). Accordingly, the proposed technique applies a block-wise Hough transform to overcome this limitation. As hair does not usually follow a specific shape, such as a single curve or line, the image is broken into blocks so that the hair within each block can be approximated by a line or a small curve. Subsequently, the Hough transform (HT) is applied on each block to produce a sub-hair mask, and the sub-hair masks are combined to produce the overall hair mask of the original image.

2. Materials and Methods

A total of 900 dermoscopy images from the International Skin Imaging Collaboration (ISIC) 2016 challenge dataset [19] were used to evaluate the proposed hair removal technique. The proposed hair removal system depends on the iterative-based thresholding process as follows: (i) the input image is converted into a grayscale image using principal component analysis (PCA); (ii) the binary image is generated by thresholding the resultant gray image; (iii) the hair mask is created by splitting the binary image into blocks and applying Hough transform (HT) on each block; finally, (iv) harmonic inpainting is applied on the product of the original image and the hair mask.

2.1. Image Preprocessing

The input RGB dermoscopy image was transformed to the L*a*b* color space. Subsequently, the L*a*b* color space was converted into a grayscale image by applying PCA. The contrast of the grayscale image was adjusted by contrast-limited adaptive histogram equalization (CLAHE). The average filter was then applied for background exclusion. After that, the thresholding method was employed to convert the grayscale image into a binary image.
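The PCA grayscale-conversion step can be sketched as follows. This is a minimal illustration, assuming the RGB-to-L*a*b* conversion has already been performed (e.g., with skimage.color.rgb2lab); the helper name pca_grayscale and the min-max rescaling are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def pca_grayscale(img3):
    """Project the three colour channels of an (h, w, 3) image onto their
    first principal component (the direction of maximum channel variance),
    then rescale the projection to [0, 1] to obtain a grayscale image."""
    h, w, _ = img3.shape
    X = img3.reshape(-1, 3).astype(np.float64)
    Xc = X - X.mean(axis=0)                      # centre each channel
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)          # 3 x 3 channel covariance
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]               # first principal component
    g = (Xc @ pc1).reshape(h, w)
    return (g - g.min()) / (g.max() - g.min() + 1e-12)

# Example: variance only in the first channel, so the grayscale output
# follows that channel (up to sign and rescaling)
img3 = np.zeros((8, 8, 3))
img3[..., 0] = np.tile(np.arange(8.0), (8, 1))
gray = pca_grayscale(img3)
```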

2.2. Iterative Thresholding Method

The thresholding process using the iterative procedure in [20] was applied to the grayscale images to obtain their corresponding binary images. Initially, the histogram was split into two parts using an initial threshold value. Two mean values were computed from the foreground and background gray pixel values. Then, the new threshold was obtained as the average of the two means:

Th_i = (Th_a + Th_b)/2

(1)

where Th_a and Th_b are the mean values of the parts above and below the threshold, respectively. This iterative procedure was repeated until the threshold value became fixed. These steps are presented in Algorithm 1.

Algorithm 1. Thresholding procedure
Start
   i = 0
Input gray image
     If i = 0
        Th_(i-1) = initial value
     End
Fragment the histogram into two parts using Th_(i-1)
Compute the mean value of the part above (Th_a) and of the part below (Th_b)
Compute the new threshold value Th_i as in Equation (1)
    If Th_i ≠ Th_(i-1)
       i = i + 1
       Th_(i-1) = Th_i
       Repeat from Start
    Else
       Th = Th_i
       Normalize Th to the range [0, 1]
    End if
Output Th
End

The estimated threshold was applied to the gray image to obtain the binary image. The binary image was then divided into blocks, and the HT was applied on each block to extract the line or curve representing the hair.
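Algorithm 1 can be sketched in Python as follows; the choice of the image mean as the initial value and the convergence tolerance are assumptions, since the paper only specifies an unnamed "initial value":

```python
import numpy as np

def iterative_threshold(gray, tol=0.5):
    """Iterative threshold estimation following Algorithm 1.
    gray: 2-D array of intensities in [0, 255].
    Returns the converged threshold, normalized to [0, 1]."""
    g = np.asarray(gray, dtype=np.float64)
    th = g.mean()                                   # initial value (assumed)
    while True:
        above, below = g[g > th], g[g <= th]
        th_a = above.mean() if above.size else th   # mean of the upper part
        th_b = below.mean() if below.size else th   # mean of the lower part
        new_th = (th_a + th_b) / 2.0                # Equation (1)
        if abs(new_th - th) < tol:                  # threshold has become fixed
            return new_th / 255.0                   # normalize to [0, 1]
        th = new_th

# Example: two well-separated intensity populations (40 and 200)
img = np.where(np.arange(200).reshape(10, 20) % 2 == 0, 40.0, 200.0)
t = iterative_threshold(img)
binary = (img / 255.0) > t                          # binarize with the threshold
```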

2.3. Hough Transform

The Hough transform [21,22] is a technique for detecting lines and curves in images. The HT is based on mapping a point in the image space into a line or a curve in the Hough space. Using the properties of the Hough space, clusters of pixels sharing the same parameters can then be identified as belonging to the same line or group of lines, which provides the information required to draw the detected lines. Mathematically, a line or an edge can be represented in the image space by the following formula [23]:

y=ax+b

(2)

where a and b are the slope of the line and its intercept with the y-axis, respectively. The HT represents a line or edge in polar coordinates, so each point in an image can be transformed into a sinusoidal curve in the Hough space using the following transformation [24]:

x = r cos θ

(3)

y = r sin θ

(4)

ρ = x cos θ + y sin θ

(5)

where ρ represents the distance from the origin to the closest point on the straight line, and θ is the angle between the x-axis and the line connecting the origin with that closest point, as given by the following equations:

r = √(x² + y²)

(6)

θ = tan⁻¹(y/x)

(7)

Accordingly, the HT maps each (x, y) position in the image into a sinusoidal curve in the Hough space (ρ, θ). The binary image b (of size m × n) resulting from the thresholding process was divided into blocks b_i (of size q × q) to approximate the hair within each block by a line or a small curve. The HT was then applied on each block to track the hairs and wipe out the pixels that do not belong to hair, which preserves the region of interest without distortion. This process can be expressed as follows:

b_i = b/N

(8)

b = b_1 ∪ … ∪ b_i ∪ … ∪ b_N

(9)

B_i = HT(b_i)

(10)

B = B_1 ∪ … ∪ B_i ∪ … ∪ B_N

(11)

where b, bi, and N are the binary image, binary block, and the number of blocks, respectively. HT was applied on each block bi to obtain the corresponding hair sub-mask Bi. The overall mask B was obtained by combining the resultant hair mask blocks. Next, the overall hair mask was multiplied by the original RGB image to remove the hair pixels. The value at the removed hair pixel position was restored using a harmonic inpainting procedure.
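The block-wise Hough masking of Equations (8)-(11) can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the vote threshold min_votes and the use of a plain accumulator-array HT are assumptions, and the block size q = 50 follows the value reported later in Section 3.1:

```python
import numpy as np

def block_hough_mask(binary, q=50, min_votes=5):
    """Split a binary image b into q x q blocks b_i, map each block into
    (rho, theta) Hough space, and keep only pixels supported by a line with
    at least `min_votes` collinear pixels (sub-mask B_i). The sub-masks are
    reassembled at their original positions into the overall mask B."""
    thetas = np.deg2rad(np.arange(180))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    h, w = binary.shape
    mask = np.zeros_like(binary, dtype=bool)
    diag = int(np.ceil(np.hypot(q, q)))          # largest possible |rho|
    for r0 in range(0, h, q):
        for c0 in range(0, w, q):
            blk = binary[r0:r0 + q, c0:c0 + q]
            ys, xs = np.nonzero(blk)
            if xs.size == 0:
                continue
            # rho = x cos(theta) + y sin(theta), quantized to integer bins
            rho = np.rint(xs[:, None] * cos_t + ys[:, None] * sin_t).astype(int)
            acc = np.zeros((2 * diag + 1, thetas.size), dtype=int)
            np.add.at(acc, (rho + diag, np.arange(thetas.size)[None, :]), 1)
            # A pixel survives if some line through it gathered enough votes
            votes = acc[rho + diag, np.arange(thetas.size)[None, :]]
            keep = (votes >= min_votes).any(axis=1)
            mask[r0 + ys[keep], c0 + xs[keep]] = True
    return mask

# Example: a 20-pixel hair-like line survives, isolated specks do not
b = np.zeros((40, 40), dtype=bool)
b[10, 5:25] = True          # hair-like horizontal segment
b[3, 3] = b[35, 30] = True  # isolated noise pixels
m = block_hough_mask(b, q=50, min_votes=5)
```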

2.4. Harmonic Inpainting

Image inpainting [25,26,27] restores a missing region in an image using the known information provided by the surrounding regions. In the present case, the product of the resultant hair mask and the dermoscopy image has missing segments, and the information covered by the hair pixels is retrieved by applying a harmonic inpainting procedure on each of the RGB color channels. The missing segment in the inpainting domain D can be filled using the regularizing term ψ(u) as follows [28]:

min_u ∫_{Ω\D} (u − u_0)² dx + ψ(u)

(12)

where u_0, u, and ψ(u) are the known information, the reconstructed image, and the regularizing term, respectively. The fidelity term minimized in Equation (12) is evaluated in a suitable Banach space S, which depends on the choice of the regularizing function. In the case of harmonic inpainting, the regularizing term ψ(u) and the space S are:

ψ(u) = α ∫_Ω |∇u|² dx

(13)

S = W^{1,2}(Ω)

(14)

2.5. The Proposed Hair Removal Method

The proposed hair removal system is based on the iterative method outlined in Figure 1. The input RGB dermoscopy image is transformed into the CIE L*a*b* color space. PCA is then applied to convert the L*a*b* image into a grayscale image, followed by CLAHE to improve the contrast of the grayscale image. Then, the average filter is employed for background exclusion.


Figure 1

Proposed hair removal technique.

The resultant gray image is transformed into a binary image by computing a global threshold, and the connected components (objects) smaller than 25 pixels are then removed from the binary image. Successively, the hair mask is generated by splitting the binary image into blocks, approximating the hair in each block by a line or curve, and detecting it using the HT. The output hair mask is multiplied by the original RGB image, and harmonic inpainting is then applied to remove the hair pixels. To ensure that all the hair pixels in the image are removed, the threshold is computed again. If the threshold is greater than 0.02 (a value chosen by trial and error), all the aforementioned steps are repeated until the threshold falls below 0.02. Finally, the adaptive median filter, a 3 × 3 median filter, and a 5 × 5 median filter are applied separately to evaluate their impact on smoothing the output hairless image.

3. Results and Discussion

3.1. Implementation of the Proposed Technique

According to the phases of the proposed hair removal system, at every iteration, PCA was applied to obtain the grayscale image from the L*a*b* color space dermoscopy image, as revealed in Figure 2a,b. The resultant gray image was enhanced utilizing CLAHE, as shown in Figure 2c. The background was excluded by applying a 7 × 7 average filter (size chosen by trial and error). Afterward, the thresholding method was applied to obtain the binary image, as shown in Figure 2d. The resultant binary image was scanned from left to right and from top to bottom and broken into blocks of size 50 × 50 (chosen by trial and error). The block location was kept by saving the position of its first pixel, i.e., the minimum row and minimum column. For each data block in the image, the HT was applied to obtain the hair sub-mask, and the resultant block was returned to the saved location. Consequently, the overall hair mask for the iteration was obtained by combining the hair sub-masks and multiplying by the original image. Harmonic inpainting was then used to remove the hair pixels. After every iteration, depending on the threshold, the overall hair mask and the hairless image were obtained, as shown in Figure 2e,f. Finally, the hairless image was enhanced using an adaptive median filter, a 5 × 5 median filter, or a 3 × 3 median filter, as reported in Figure 2g-i.


Figure 2

Qualitative analysis of the proposed hair removal system, where (a) original image; (b) the output gray image from PCA; (c) enhanced gray image; (d) binary image after thresholding; (e) collected mask from all iterations; (f) hairless image; (g) adaptive median filter; (h) 5 × 5 median filter; (i) 3 × 3 median filter.

At the HT stage, the HT was applied to each binary block to obtain the curves and lines in the image space. This was achieved by converting the block from the image space into the Hough space, where the features of the line and curve can be extracted, as shown in Figure 3. The result of the HT is the hair sub-masks, as illustrated in Figure 4.


Figure 3

Line representation in Hough space.


Figure 4

Sub-masks using Hough transform at each block.

Figure 4 illustrates that the HT has tracked the curve or the line in each block. The HT was applied under two conditions determined by trial and error: (i) the gap between two lines was filled to recover the hair pixels when the distance was 5 pixels or less, and (ii) any line or curve shorter than 5 pixels was considered noise. The HT results from each block were collected to produce the overall hair mask.

3.2. Proposed System Evaluation

The performance of the hair removal system was assessed using six quality metrics: the mean squared error (MSE) [30], the structural similarity index (SSIM) [31], the peak signal-to-noise ratio (PSNR) [31], the signal-to-noise ratio (SNR) [31], the universal quality image index (UQI) [32], and the correlation (C) between the output hairless image and the reference image.
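The scalar metrics among these can be sketched directly; SSIM and UQI require windowed local statistics and are omitted here for brevity. The helper name quality_metrics and the peak value of 255 for 8-bit images are assumptions:

```python
import numpy as np

def quality_metrics(ref, out, peak=255.0):
    """Compute MSE, PSNR, SNR and correlation C between a reference image
    and a hairless output image (both treated as float arrays)."""
    ref = np.asarray(ref, dtype=np.float64)
    out = np.asarray(out, dtype=np.float64)
    mse = np.mean((ref - out) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float('inf')
    snr = 10.0 * np.log10(np.mean(ref ** 2) / mse) if mse > 0 else float('inf')
    c = np.corrcoef(ref.ravel(), out.ravel())[0, 1]  # Pearson correlation
    return {'MSE': mse, 'PSNR': psnr, 'SNR': snr, 'C': c}

# Example: a constant +1 offset gives MSE = 1 and perfect correlation
ref = np.arange(100.0).reshape(10, 10)
metrics = quality_metrics(ref, ref + 1.0)
```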

To evaluate the proposed hair removal method, the DullRazor program was applied to all the dataset images in order to obtain the hairless images, which were used as the reference image in the evaluation process, as the DullRazor method is considered the benchmark for hair removal methods in CAD systems. Table 1 compares the performance of the different filters and inpainting techniques in terms of the MSE, PSNR, SNR, SSIM, UQI, and C. The evaluated inpainting techniques include the harmonic, Mumford–Shah, AMLE, Cahn–Hilliard, and transport inpainting [33].

Table 1

Performance quality metrics of the proposed hair removal with different filters and inpainting techniques.

Filter                   Inpainting Technique   MSE        PSNR      SNR       SSIM     UQI      C
Adaptive Median Filter   Harmonic               112.59     58.6188   33.9071   0.9655   0.9993   0.9904
                         Mumford–Shah           123.19     57.6181   32.5244   0.9585   0.9743   0.9902
                         AMLE                   129.99     55.3174   31.2513   0.9566   0.9737   0.9897
                         Cahn–Hilliard          6824.3     31.9174   7.85134   0.7747   0.9275   0.7182
                         Transport              460.31     49.9265   25.8604   0.9233   0.9797   0.9669
Median Filter (5 × 5)    Harmonic               424.46     43.5902   19.5241   0.9406   0.9597   0.9872
                         Mumford–Shah           420.44     43.6119   19.5459   0.9409   0.9997   0.9867
                         AMLE                   424.40     43.6322   19.5661   0.9413   0.9998   0.9874
                         Cahn–Hilliard          25,449.2   24.0742   0.00812   0.6587   0.6069   0.7204
                         Transport              840.03     40.3715   16.3054   0.9116   0.9965   0.9629
Median Filter (3 × 3)    Harmonic               309.87     45.5699   21.5039   0.9538   0.9698   0.9888
                         Mumford–Shah           305.09     45.6182   21.5522   0.9642   0.9997   0.9882
                         AMLE                   311.93     45.5933   21.5272   0.9639   0.9997   0.9889
                         Cahn–Hilliard          25,451.2   24.0738   0.0078    0.6656   0.6065   0.7175
                         Transport              759.55     40.8909   16.8249   0.9287   0.9965   0.9637


Table 1 demonstrates the superiority of the proposed system with harmonic inpainting and the adaptive median filter, which achieved a PSNR of 58.6188, compared with 43.5902 and 45.5699 for the 5 × 5 and 3 × 3 median filters, respectively. Similarly, the SNR with the adaptive median filter was 33.9071, versus 19.5241 and 21.5039 for the other filters. The lowest MSE was also observed for the adaptive median filter, reaching 112.59, while the other filters gave MSE values of 424.46 and 309.87. Likewise, the highest SSIM, UQI, and C values were observed for the adaptive median filter, reaching 0.9655, 0.9993, and 0.9904, respectively. Harmonic inpainting required the least processing time, 22.41 s, compared with 181.33 s, 139.38 s, 336.40 s, and 422.26 s for the Mumford–Shah, AMLE, Cahn–Hilliard, and transport inpainting, respectively.

3.3. Comparison between the Proposed System and DullRazor Using HairSim

In this section, a study is carried out to compare the performance of the proposed system to the DullRazor procedure. To conduct this comparison, we initially selected the hairless images from the database, which were used as reference (clean) images for evaluating the systems. Hair simulation was then applied to those clean images using two different simulators. The first, by Attia et al. [34], produces realistic simulated hair (denoted RH). The other, by Mirzaalian et al. [35,36], is a publicly available program called "HairSim" (denoted HS). Table 2 displays the comparison between DullRazor and the proposed method using the two hair simulation programs.

Table 2

Comparison between the proposed algorithm and DullRazor using both RH and HS hair simulators.

Method            Simulator   MSE        PSNR      SNR       SSIM     UQI      C
Proposed Method   HS          32.9812    66.9707   35.8962   0.9800   0.9980   0.9934
                  RH          25.9761    63.3816   29.2967   0.9910   0.9990   0.9902
DullRazor         HS          194.26     45.6519   21.5865   0.9184   0.9996   0.9856
                  RH          316.0367   43.2751   19.1835   0.8972   0.9968   0.9844


Table 2 reveals that the results of the proposed method are superior to the DullRazor results for both hair simulators, achieving PSNR values of 63.3816 (RH) and 66.9707 (HS), while DullRazor achieved PSNR values of 43.2751 (RH) and 45.6519 (HS). Additionally, the SNR values of the proposed method were 29.2967 (RH) and 35.8962 (HS), whereas DullRazor achieved SNR values of 19.1835 (RH) and 21.5865 (HS). The lowest MSE values were achieved by the proposed method, namely 32.9812 (HS) and 25.9761 (RH), compared with DullRazor's MSE values of 194.26 (HS) and 316.0367 (RH). The highest SSIM, UQI, and C were also realized by the proposed method. Therefore, the proposed method is superior to DullRazor for hair removal.

3.4. Proposed Method Evaluation on Clinical Images

In this section, clinical images were studied to evaluate the effectiveness of the proposed system and its ability to handle real clinical images. The proposed method was applied on 284 clinical images. To evaluate the results, the DullRazor algorithm was initially applied to the clinical images, and the resultant images were considered as the reference images in the evaluation. Table 3 reports the average results of evaluating the proposed method on the clinical images.

Table 3

Evaluation of the proposed method on the clinical images.

Method            MSE       PSNR      SNR       SSIM     UQI      C
Proposed method   34.7957   66.9868   42.3960   0.9813   0.9801   0.9985


Table 3 shows that the proposed method achieved MSE, PSNR, SNR, SSIM, UQI, and C values of 34.7957, 66.98, 42.39, 0.9813, 0.9801, and 0.9985, respectively. Samples of the resultant clinical images after applying the proposed method are shown in Figure 5.


Figure 5

Samples of clinical images before and after applying the proposed hair removal method, where (a1–c1) images before hair removal and (a2–c2) images after hair removal.

3.5. Discussion

To further verify the superiority of the proposed method, Table 4 compares the proposed system and six traditional hair removal procedures in terms of their performance. The six methods apply various hair detection, extraction, and inpainting techniques and were chosen on the basis of their accessibility and scalability. These methods are those of Abbas et al. [7], Huang et al. [8], Bibiloni et al. [17] (using 9 × 9 and 11 × 11 kernel filters), Toossi et al. [9], and Xie et al. [5], as a comparison with the state of the art.

Table 4

Comparison of the proposed hair removal system against six conventional hair removal methods.

Metric        Proposed Method   Abbas      Huang      Bibiloni (9 × 9)   Bibiloni (11 × 11)   Toossi     Xie
MSE    HS     32.9812           257.0073   87.0417    123.8135           127.9177             263.8542   47.8494
       RH     25.9761           143.4654   106.9602   100.4868           98.2271              142.2038   36.3482
SSIM   HS     0.9800            0.8898     0.9348     0.8898             0.8900               0.8751     0.9599
       RH     0.9910            0.9018     0.8862     0.9245             0.9245               0.8934     0.9531
PSNR   HS     66.9707           25.3906    40.3325    34.6192            34.1082              24.6888    53.7967
       RH     63.3816           33.0639    38.0847    39.2326            39.5155              33.1484    48.7572
UQI    HS     0.998             0.993      0.997      0.996              0.996                0.993      0.997
       RH     0.999             0.994      0.998      0.996              0.996                0.994      0.999


From Table 4, the proposed method achieved the best results compared to the traditional methods, in addition to its ability to remove thin and thick hair. The lowest MSE values were achieved by the proposed method, namely 25.9761 (RH) and 32.9812 (HS). The highest PSNR, SSIM, and UQI values were also observed for the proposed method, with PSNR values of 63.3816 (RH) and 66.9707 (HS), SSIM values of 0.9800 (HS) and 0.9910 (RH), and UQI values of 0.998 (HS) and 0.999 (RH).

The proposed hair removal system achieved the best performance compared to the other methods. Different sizes of median and adaptive median filters were applied to choose the most suitable one for dermoscopy images. The results showed that the adaptive median filter was suitable for typical dermoscopy images because it classifies which pixels in the image are affected by noise and replaces only those with the median value of the neighboring pixels. In contrast, the adaptive homomorphic, anisotropic diffusion, and Frost filters are suitable for ultrasound images [37,38]. Therefore, in future work, other filters, such as the Wiener filter and the wavelet transform, will be studied and compared to enhance the overall system performance.

4. Conclusions

This paper has introduced a new system for automated hair removal using the Hough transform and harmonic inpainting. This process is of major significance in dermoscopy image pre-processing, as it aids in the accurate classification of skin lesions by removing the noise and obstruction caused by hairs. The proposed method established superior performance with respect to the traditional methods, as well as the ability to eliminate thin and thick hairs without deteriorating the ROI.

The proposed method was applied to two different datasets, namely the ISIC 2016 dermoscopy dataset and a clinical dataset. Firstly, to estimate the system performance, DullRazor was applied to all the images in the datasets to obtain the hairless (i.e., reference) images. The results showed that the proposed system with harmonic inpainting and the adaptive median filter achieved the highest results. The system achieved MSE, PSNR, SNR, SSIM, UQI, and C values of 112.59, 58.6188, 33.9071, 0.9655, 0.9993, and 0.9904, respectively, when applied to the ISIC 2016 dataset, and MSE, PSNR, SNR, SSIM, UQI, and C values of 34.7957, 66.98, 42.39, 0.9813, 0.9801, and 0.9985, respectively, when applied to the clinical dataset. Moreover, to verify the superiority of the proposed method, hairless images were selected from the dataset to compare the algorithms, and hair simulation was applied to these images. The proposed method achieved the best performance compared to the other algorithms, with MSE values of 25.9761 (RH) and 32.9812 (HS). The highest PSNR, SSIM, and UQI values were also observed, with PSNR values of 63.3816 (RH) and 66.9707 (HS), SSIM values of 0.9800 (HS) and 0.9910 (RH), and UQI values of 0.998 (HS) and 0.999 (RH).

Funding Statement

This research was funded by the Research Funding Center, Postgraduate Studies and Scientific Research Sector, Tanta University, Egypt. (Grant ID: TU: 19-03-01).

Author Contributions

Conceptualization, A.S.A. and B.S.A.E.-W.; methodology, B.S.A.E.-W., A.S.A., and M.A.W.; software, B.S.A.E.-W., A.S.A.; validation, M.A.W., B.S.A.E.-W., and G.F.R.H.; formal analysis, A.S.A.; investigation, R.A.E.-G.K., G.F.R.H., and A.A.E.H.; resources, All authors; data curation, M.A.W., G.F.R.H., A.A.E.H., and R.A.E.-G.K.; writing—original draft preparation, A.S.A., B.S.A.E.-W., and M.A.W.; writing—review and editing, All authors; visualization, M.A.W. and D.-E.A.M.; supervision, A.S.A. and D.-E.A.M.; project administration, A.S.A.; funding acquisition, A.S.A. and D.-E.A.M. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available at: https://challenge.isic-archive.com/data/ (accessed on 5 January 2021).

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Lee T., Ng V., Gallagher R., Coldman A., McLean D. Dullrazor®: A software approach to hair removal from images. Comput. Biol. Med. 1997;27:533–543. doi:10.1016/S0010-4825(97)00020-6. [PubMed] [CrossRef] [Google Scholar]

2. Zagrouba E., Barhoumi W. A prelimary approach for the automated recognition of malignant melanoma. Image Anal. Stereol. 2004;23:121–135. doi:10.5566/ias.v23.p121-135. [CrossRef] [Google Scholar]

3. Fiorese M., Peserico E., Silletti A. VirtualShave: Automated hair removal from digital dermatoscopic images; Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society; Boston, MA, USA. 30 August–3 September 2011. [PubMed] [Google Scholar]

4. Kiani K., Sharafat A.R. E-shaver: An improved DullRazor® for digitally removing dark and light-colored hairs in dermoscopic images. Comput. Biol. Med. 2011;41:139–145. doi:10.1016/j.compbiomed.2011.01.003. [PubMed] [CrossRef] [Google Scholar]

5. Xie F.Y., Qin S.Y., Jiang Z.G., Meng R.S. PDE-based unsupervised repair of hair-occluded information in dermoscopy images of melanoma. Comput. Med. Imaging Graph. 2009;33:275–282. doi:10.1016/j.compmedimag.2009.01.003. [PubMed] [CrossRef] [Google Scholar]

6. Abbas Q., Fondón I., Rashid M. Unsupervised skin lesions border detection via two-dimensional image analysis. Comput. Methods Programs Biomed. 2011;104:e1–e15. doi:10.1016/j.cmpb.2010.06.016. [PubMed] [CrossRef] [Google Scholar]

7. Abbas Q., Celebi M.E., García I.F. Hair removal methods: A comparative study for dermoscopy images. Biomed. Signal Process. Control. 2011;6:395–404. doi:10.1016/j.bspc.2011.01.003. [CrossRef] [Google Scholar]

8. Huang A., Kwan S.Y., Chang W.Y., Liu M.Y., Chi M.H., Chen G.S. A robust hair segmentation and removal approach for clinical images of skin lesions; Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Osaka, Japan. 3–7 July 2013. [PubMed] [Google Scholar]

9. Toossi M.T.B., Pourreza H.R., Zare H., Sigari M.H., Layegh P., Azimi A. An effective hair removal algorithm for dermoscopy images. Skin Res. Technol. 2013;19:230–235. doi:10.1111/srt.12015. [PubMed] [CrossRef] [Google Scholar]

10. Jaworek-Korjakowska J., Tadeusiewicz R. Hair removal from dermoscopic color images. Bio-Algorithms Med.-Syst. 2013;9:53–58. doi:10.1515/bams-2013-0013. [CrossRef] [Google Scholar]

11. Ortuño F., Rojas I. Bioinformatics and Biomedical Engineering; Proceedings of the 3rd International Conference, IWBBIO 2015; Granada, Spain. 15–17 April 2015; Cham, Switzerland: Springer; 2015. [Google Scholar]

12. George Y., Aldeen M., Garnavi R. Skin hair removal for 2D psoriasis images; Proceedings of the International Conference on Digital Image Computing, Techniques and Applications (DICTA); Adelaide, Australia. 23–25 November 2015. [Google Scholar]

13. Koehoorn J., Sobiecki A.C., Boda D., Diaconeasa A., Doshi S., Paisey S., Jalba A., Telea A. International Symposium on Mathematical Morphology and Its Applications to Signal and Image Processing. Springer; Cham, Switzerland: 2015. Automated digital hair removal by threshold decomposition and morphological analysis; pp. 15–26. [Google Scholar]

14. Salido J., Ruiz C. Using morphological operators and inpainting for hair removal in dermoscopic images; Proceedings of the Computer Graphics International Conference; New York, NY, USA. 27–30 June 2017. [Google Scholar]

15. Hamet P., Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69:S36–S40. doi:10.1016/j.metabol.2017.01.011. [PubMed] [CrossRef] [Google Scholar]

16. Zaqout I. An efficient block-based algorithm for hair removal in dermoscopic images. Comput. Opt. 2017;41:521–527. doi:10.18287/2412-6179-2017-41-4-521-527. [CrossRef] [Google Scholar]

17. Bibiloni P., González-Hidalgo M., Massanet S. Skin hair removal in dermoscopic images using soft color morphology; Proceedings of the Conference on Artificial Intelligence in Medicine in Europe; Vienna, Austria. 21–24 June 2017; Cham, Switzerland: Springer; 2017. [Google Scholar]

18. Talavera-Martínez L., Bibiloni P., González-Hidalgo M. An encoder-decoder CNN for hair removal in dermoscopic images. arXiv. 2020;2010.05013. [Google Scholar]

19. Gutman D., Codella N., Celebi M., Helba B., Marchetti M., Mishra N., Halpern A. Skin Lesion Analysis toward Melanoma Detection: A Challenge at the International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the International Skin Imaging Collaboration (ISIC). arXiv. 2016;1605.01397. [Google Scholar]

20. Ridler T., Calvard S. Picture thresholding using an iterative selection method. IEEE Trans. Syst. Man Cybern. 1978;8:630–632. [Google Scholar]

21. Gerig G. Linking image-space and accumulator-space: A new approach for object recognition; Proceedings of the 1st International Conference on Computer Vision; London, UK. 8 June 1987. [Google Scholar]

22. Ballard D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981;13:111–122. doi:10.1016/0031-3203(81)90009-1. [CrossRef] [Google Scholar]

23. Mukhopadhyay P., Chaudhuri B. A survey of Hough transform. Pattern Recognit. 2015;48:993–1010. doi:10.1016/j.patcog.2014.08.027. [CrossRef] [Google Scholar]

24. Dahyot R. Statistical Hough transform. IEEE Trans. Pattern Anal. Mach. Intell. 2008;31:1502–1509. doi:10.1109/TPAMI.2008.288. [PubMed] [CrossRef] [Google Scholar]

25. Bertalmio M., Sapiro G., Caselles V., Ballester C. Image inpainting; Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques; New Orleans, LA, USA. 23–28 July 2000. [Google Scholar]

26. Drori I., Cohen-Or D., Yeshurun H. Fragment-based image completion. ACM SIGGRAPH. 2003;22:303–312. doi:10.1145/882262.882267. [CrossRef] [Google Scholar]

27. Shen J., Jin X., Zhou C., Wang C. Gradient based image completion by solving the Poisson equation. Comput. Graph. 2007;31:119–126. doi:10.1016/j.cag.2006.10.004. [CrossRef] [Google Scholar]

28. Damelin S.B., Hoang N. On surface completion and image inpainting by biharmonic functions: Numerical aspects. Int. J. Math. Math. Sci. 2018;2018:3950312. doi:10.1155/2018/3950312. [CrossRef] [Google Scholar]

29. Chan T., Shen J. Nontexture inpainting by curvature-driven diffusions. J. Vis. Commun. Image Represent. 2001;12:436–449. doi:10.1006/jvci.2001.0487. [CrossRef] [Google Scholar]

30. Wang Z., Bovik A.C. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Process. Mag. 2009;26:98–117. doi:10.1109/MSP.2008.930649. [CrossRef] [Google Scholar]

31. Wang Z., Bovik A., Sheikh H., Simoncelli E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004;13:600–612. doi:10.1109/TIP.2003.819861. [PubMed] [CrossRef] [Google Scholar]

32. Wang Z., Bovik A.C. A universal image quality index. IEEE Signal Process. Lett. 2002;9:81–84. doi:10.1109/97.995823. [CrossRef] [Google Scholar]

33. Schönlieb C.-B. Partial Differential Equation Methods for Image Inpainting. Volume 29. Cambridge University Press; Cambridge, UK: 2015. [Google Scholar]

34. Attia M., Hossny M., Zhou H., Yazdabadi A., Asadi H., Nahavandi S. Realistic hair simulator for skin lesion images using conditional generative adversarial network. Preprints. 2018 doi:10.20944/preprints201810.0756.v1. [CrossRef] [Google Scholar]

35. Hair Sim Software. [(accessed on 21 March 2019)]. Available online: https://www2.cs.sfu.ca/~hamarneh/software/hairsim/welcome.html

36. Mirzaalian H., Lee T., Hamarneh G. Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and MRF-based multilabel optimization. IEEE Trans. Image Process. 2014;23:5486–5496. doi:10.1109/TIP.2014.2362054. [PubMed] [CrossRef] [Google Scholar]

37. Khan M.N., Altalbe A. Experimental evaluation of filters used for removing speckle noise and enhancing ultrasound image quality. Biomed. Signal Process. Control. 2022;73:103399. doi:10.1016/j.bspc.2021.103399. [CrossRef] [Google Scholar]

38. Khan M.N., Hasnain S.K., Jamil M., Ullah S. Electronic Signals and Systems Analysis. River Publishers; Gistrup, Denmark: 2020. [Google Scholar]

Articles from Diagnostics are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
