Multiscale Morphological Reconstruction for Hair Removal in Dermoscopy Images

The automatic diagnosis of melanoma is usually affected by noise introduced into the image during the acquisition stage or by superficial factors such as hair. In particular, hair on the surface of a lesion can cause enough distortion to result in an erroneous diagnosis of the region of interest. To address this issue, several techniques have been presented to detect hair on the surface of a dermoscopy image and substitute a surface approximation for these regions. Nonetheless, the existing methods are prone to false detections or non-uniform reconstructions, demand high computing resources and modify the textures of important characteristics. Therefore, we propose a method that detects hairs by convolving the image with a kernel derived from the first derivative of the Gaussian function and replaces them using a multiscale morphological reconstruction. In addition, we integrate a refining stage that helps maintain the quality of the patterns on the lesion. We used 36 dermoscopy images in the evaluation, which included a total of 586 hairs that were automatically detected with the proposed process and validated against their respective manual segmentations. Our results showed sensitivity and specificity measurements of 94.14% and 99.89%, respectively.


Introduction
Among the many types of cancer, skin cancers are the most common form in humans (Bhuiyan, et al. 2013, pp. 1-6). This cancer is the sixth most common among American men and women and is the main cause of cancer death in women aged 25-30 years. Melanoma is also the most common type of cancer in men aged 20-44 years in Australia and New Zealand (Ramezani, et al. 2014, pp. 281-290) (Kutlubay, et al. 2013, pp. 67-125). Because timely detection of this cancer can extend the patient's life considerably (Sadeghi, et al. 2011, pp. 137-143) (Abbas, 2011, pp. 91-100), specialists have made various contributions to the development of new methods that contribute to the early diagnosis of melanoma, specifically in visual examination procedures. Among these methods are the ABCD rule (Soltani, et al. 2015, pp. 412-500), the Menzies method (Menzies, 1996, pp. 1178-1182) and the 7-point checklist (Argenziano, et al. 1998, pp. 1563-1570). However, the diagnostic process is not sufficiently precise. Therefore, optical tools that enlarge the dimensions of the image are frequently used to broadly define the variety of malignant structures within the lesion. Dermoscopy is a magnifying diagnostic technique using polarized light that has proven useful in the categorization of melanocytic proliferations (Cymerman, et al. 2015, pp. e197-e208). Despite efforts to increase the accuracy of melanoma diagnoses, the estimated precision for expert dermatologists is within the range of 75-84%.
To increase performance in melanoma diagnosis, several automatic diagnosis systems have been developed (Li, et al. 2014). These diagnostic systems have been developed in recent years because of the contribution they can make to the specialist's decision-making process (Ali, et al. 2012) (Capdehourat, et al. 2011, pp. 2187-2196). These methods employ techniques that reduce the subjectivity associated with traditional diagnostic methods, and they can emphasize important details that are usually not observable by the human eye. Segmentation is one of the main processing tasks of this type of system. This step is perhaps the most affected by the quantity of noise, frequently observed as bubbles, pores, scale marks or hairs (Abbas, et al. 2010, pp. 1-15). All these types of noise usually make the results inefficient. Among these common factors, hairs are the most harmful since they are located randomly on the surface of the image and can be found in regions associated with the lesion, obstructing crucial characteristics such as its edges. To diminish the impact of this noise, a preprocessing stage is generally developed. This stage can include a color transformation to reduce the negative contribution of noise (Kutlubay, et al. 2013, pp. 67-125) (Xu, et al. 1999, pp. 65-74) (Rajab, et al. 2004, pp. 61-68) (Mendonza, et al. 2010, pp. 531-540); the smoothing of the image by means of spatial filters, specifically the average filter (Celebi, et al. 2008, pp. 347-353); the combination of the two previous methods (Sadeghi, et al. 2011, pp. 137-143) (Celebi, 2009, pp. 148-153); or the implementation of new specialized methods for noise removal (Ramezani, et al. 2014, pp. 281-290) (Abbas, 2011, pp. 91-100) (Lee, et al. 1997, pp. 533-543), particularly methods specific to hair elimination.
Hair removal is an issue of wide interest in automatic melanoma treatment (Somnathe, et al. 2015, pp. 73-76). In general, the automatic hair removal procedure has three stages: first, a model of the hairs is proposed; subsequently, the hairs are detected; and finally, the hairs are removed from the image by substitution. Reports indicate that hair modeling can be performed based on a series of profiles that define the hair (Abbas, 2011, pp. 91-100), by means of convolution with the kernel corresponding to the partial derivative of the Gaussian function (Jaworek, 2015), or by means of background elimination obtained through morphological operations with special structures, such as rectangular structures rotated at 0°, 45° and 90° (Lee, et al. 1997, pp. 533-543) (Saugeon, et al. 2003, pp. 65-78). After detecting the hairs by means of the proposed model, the challenge consists in obtaining a surface that works as an appropriate substitute for the regions where the hairs are found and in correcting the false detections generated in the previous stage. To solve this problem, the intensities adjacent to the hair regions are employed by means of linear interpolation techniques (Lee, et al. 1997, pp. 533-543), the restoration is made based on partial differential equations (Criminisi, et al. 2004, pp. 1-13), or a new surface is generated based on the morphological closing operation (Saugeon, et al. 2003, pp. 65-78). Several publications address hair removal in dermoscopic images. For instance, in (Abbas, et al. 2013, pp. e27-e36), a hair-restoration algorithm is presented that preserves skin lesion features such as color and texture and is able to segment both dark and light hairs. In that algorithm, rough hairs are segmented using matched filtering with the first derivative of Gaussian (MF-FDOG) and thresholding, which generates strong responses for both dark and light hairs, followed by a refinement of the hairs using morphological edge-based techniques.
Although these methods can detect hair, it is necessary to search for methods that do not detect other regions of interest, affect the texture or modify the discriminating patterns used in malignancy recognition. Additionally, it is difficult to assess the exact performance of these methods because the performance analysis is conducted in a qualitative way. Therefore, in this document, we propose a study designed for hair removal as well as effective substitution of the hair by applying an iterative morphological reconstruction approach without altering the characteristics of the lesion. In addition, we propose an assessment method to obtain the traditional performance measurements (sensitivity and specificity) so that the specialist's criteria are integrated successfully and in a timely manner. Thus, the proposed method is evaluated both qualitatively and quantitatively under the supervision of a team of two specialists in the detection of dermatological pathologies. This article is organized as follows: Section 2 introduces the proposed method, Section 3 offers a discussion of the experimental results, and Section 4 presents the conclusions.

Material and Methods
Figure 1(a) shows a dermoscopic image presenting a possible melanoma, including natural hairs; a magnified region exhibiting a vertically oriented hair is also included. The cross-section of the same area and its respective three-dimensional representation are presented in figure 1(b). As observed in this figure, the image includes the region associated with the depression caused by the presence of the hair, which can appear in multiple orientations. Since hairs are particularly distinguishable by their elongated, thin structures and, in many cases, dark tones, hair recognition is conducted based on the detection of the geometric structures most similar to dark lines (Abbas, et al. 2010, pp. 1-15) (Li, 2008). For this procedure, the algorithm presented in the diagram of figure 2 is proposed. Each procedure included in this algorithm is explained in detail below.
Generally, the detection of dark lines is conducted using the second derivative of the Gaussian function; nevertheless, it is well known that this method produces an undesirable response to objects with light tonalities. To solve this issue, a method using the convolution of the image with a window associated with the first derivative of the Gaussian function has been proposed (Mendonza, et al. 2010, pp. 531-540). This method detects the hair using its cross-section. In (Saugeon, et al. 2003, pp. 65-78), the researchers showed that the kernel associated with the derivative of the Gaussian function performs efficiently in the detection of dark lines, generating more exact responses. Because melanocytic lesions have relevant patterns with light tones (e.g., reticular pattern, blue-white veil and edges), in this work we employ the Derivative of Gaussian (DOG) function presented in equation (1), where X is the array referring to the kernel's spatial region in the x and y axes and Σ is the covariance matrix of X. If an ideal case of a depression is considered, as shown in figure 3(a), the convolution obtained using the DOG kernel is shown in figure 3(c). From this figure, it can be concluded that the center of the depression is located midway between two extreme points having opposite signs. In real cases, however, according to the cross-section of the image shown in figure 1(a) as a red line (in the color image), the cross-section behaves as indicated in figure 3(b), where several fluctuations are observed due to noise, pores or changes in lighting.
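The behaviour described above can be illustrated with a minimal NumPy sketch: convolving an ideal dark-line profile with a first-derivative-of-Gaussian kernel yields two opposite-sign extremes flanking the depression, whose midpoint recovers the line centre. The function name `dog_kernel_1d` and the values of σ and the kernel radius are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def dog_kernel_1d(sigma=2.0, radius=6):
    """First derivative of a Gaussian, sampled on [-radius, radius].
    (Illustrative parameters, not those of the original study.)"""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return -x / sigma**2 * g  # d/dx of the (unnormalised) Gaussian

# Ideal depression: a dark line of width 3 on a bright background
# (cf. figure 3(a)).
profile = np.full(41, 200.0)
profile[19:22] = 60.0

response = np.convolve(profile, dog_kernel_1d(), mode="same")

# The centre of the depression lies midway between the two
# opposite-sign extremes of the response (cf. figure 3(c)).
centre = (np.argmin(response) + np.argmax(response)) // 2
print(centre)  # 20
```

On noisy real profiles (figure 3(b)) this midpoint rule degrades, which is what motivates the 2D multidirectional formulation described next in the text.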
The result of calculating the profile convolution with the DOG kernel for this real image is shown in figure 3(d). In this figure, the center of the depression is not necessarily placed at the middle of the two extremes. To solve this issue, a simple solution using a 2D convolution is proposed, as explained below. Based on the concepts presented above, the procedure can be extrapolated to two dimensions. Here, the DOG kernel can be rotated to an angle θ, guaranteeing the detection of multidirectional hairs, taking into account that hairs are randomly oriented. Care must be taken when rotating the kernel around its principal axis. Since many orientations exist, and rotating the kernel through all of them implies a great computational cost, the experimental results suggest that the angles 0°, 45°, 90° and 135° are sufficient to identify the randomly oriented hairs. The kernel rotation is performed using equations (2), where x and y are the non-rotated coordinates and x' and y' are the rotated coordinates. With these equations, the kernel rotated to the proposed angles is generated, as shown in figure 4. At this point, the convolution guarantees that the kernel g'(x,y) will be applied over the entire image surface I(x,y), generating a filtered image F(x,y) according to equation (3). Finally, the identification of the center of each hair can be determined by means of equation (4). Each i-th array of the preceding expressions corresponds to a specific rotation of the employed kernel. For the integration of all the arrays C_ni, the maximum intensity associated with the pixel at position (x,y) is selected, according to equation (5).
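The rotation of equations (2) and the pixel-wise maximum of equation (5) can be sketched as follows, assuming SciPy is available. The helper names `rotated_dog_kernel` and `line_response`, and the choice of taking the absolute response before the maximum, are assumptions made for this sketch rather than details given in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def rotated_dog_kernel(theta_deg, sigma=2.0, radius=6):
    """2-D kernel: derivative-of-Gaussian taken across the line,
    in coordinates rotated by theta (cf. equations (2))."""
    t = np.deg2rad(theta_deg)
    coords = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    y, x = coords
    xr = x * np.cos(t) + y * np.sin(t)    # rotated coordinate x'
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return -xr / sigma**2 * g             # derivative across x'

def line_response(image, angles=(0, 45, 90, 135)):
    """Pixel-wise maximum over the rotated responses (cf. equation (5))."""
    stack = [np.abs(convolve(image, rotated_dog_kernel(a))) for a in angles]
    return np.max(stack, axis=0)

# Toy image: one dark vertical line of width 3 on a bright background.
img = np.full((31, 31), 200.0)
img[:, 14:17] = 50.0
R = line_response(img)
# The strongest responses flank the dark line, regardless of orientation.
```

Using only four angles keeps the cost at four convolutions per image, which is the trade-off the text describes between orientation coverage and computation.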
These undesired regions are eliminated with the application of a thresholding algorithm (the Otsu method). Before this algorithm is applied, a morphological operation is performed over the image R(x,y). The morphological operation implicitly balances the histogram of the image R(x,y), which allows a more discriminating threshold to be found in the classification of the tones associated with the background and the hair. A square geometric structure was selected as the morphological structure, with a windowing dimension ws = 5. A filter was applied to eliminate insignificant objects after the thresholding (Figure 5(c)), removing objects with areas smaller than 20 pixels. Consequently, a partial binary mask M_kp is automatically generated for the hairs present in the original image (Figure 5(d)). As the area of the morphological structure increases, the thresholding algorithm accepts regions with tones similar to the region of interest that do not belong to it, becoming less precise.
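The thresholding and area-filtering steps can be sketched in pure NumPy as below. The helper names `otsu_threshold` and `remove_small_objects` are hypothetical, and the morphological pre-balancing step is omitted; this is a minimal sketch of the classification and filtering logic only.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: maximise between-class variance over a 256-bin histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                    # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))   # cumulative mean intensity
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return int(np.nanargmax(sigma_b))

def remove_small_objects(mask, min_area=20):
    """Drop 4-connected components smaller than min_area (flood fill)."""
    mask = mask.copy()
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, comp = [(i, j)], []
                seen[i, j] = True
                while stack:
                    a, b = stack.pop()
                    comp.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and mask[na, nb] and not seen[na, nb]:
                            seen[na, nb] = True
                            stack.append((na, nb))
                if len(comp) < min_area:      # below the 20-pixel limit
                    for a, b in comp:
                        mask[a, b] = False
    return mask
```

Thresholding R(x,y) with `otsu_threshold` and then applying `remove_small_objects` with `min_area=20` reproduces the sequence that yields the partial mask M_kp.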

Refining Stage
Because we attempt to retain the relevant texture of the original image (figure 6(a)) and the patterns of the region of interest (ROI), this study proposes a refinement stage to eliminate the false detections of the previous stage. For this purpose, a stage that requires few computing resources while conducting this task efficiently is developed. The Otsu method is a popular technique known for its efficiency (Otsu, 1979, pp. 62-66).
In this case, this method is employed to obtain a preliminary detection of the lesion (Figure 6(b)).
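The refinement itself reduces to restricting the partial hair mask to regions outside the preliminary lesion segmentation, so the lesion texture is never altered. A minimal sketch, where `refine_hair_mask` and its arguments are hypothetical names, and where the lesion is assumed to be the pixels darker than a threshold (e.g. one obtained with the Otsu method):

```python
import numpy as np

def refine_hair_mask(m_kp, roi_gray, thresh):
    """Limit the partial hair mask M_kp to regions outside the lesion.

    roi_gray : grayscale ROI; the lesion is assumed darker than the
    surrounding skin, so pixels below `thresh` are taken as lesion.
    Returns the refined mask M_k.
    """
    lesion = roi_gray < thresh
    return m_kp & ~lesion
```

The resulting mask corresponds to M_k in Figure 6(c): hair detections over the lesion are discarded, preserving its discriminating patterns.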

Substitution
After obtaining the binary mask M_k (Figure 6(c)) that defines the areas occupied by the hairs on the surface of the original image I(x,y), morphological operations are used to find a uniform surface approximation S_a, belonging to the image texture, that suitably substitutes for the regions associated with the hair.
Accordingly, let us consider the original image without the regions associated with the hairs, presented in Figure 7(a), and suppose that this image is dilated n times. In each iteration, the morphological structure grows linearly; for this reason, the empty spaces (corresponding to the regions occupied by the hair) gradually disappear as new intensities (belonging to the skin texture) occupy the region. Considering the previous concept, the surface S_a is built with the new intensities that fill the holes after the image is dilated using a circular structure with a growing radius. To guarantee a uniform and exact surface, only the intensities that enter the region for the first time are kept, without taking the intensities found at the previous scale into account. Note that each scale is assigned a specific color and only eight iterations are required to obtain the new surface. In the first iteration, we obtain the pixels located at the edges of the object, which have a blue intensity.
As the scale grows, new pixels are accepted in this way until all regions defined in the mask M_k are eliminated; in this case, they are represented by shades of dark red. The final intensity of the discovered pixels is determined by the dilation in which they were obtained. The final substitution surface is the sum of the contributions of the n dilations, as in equation (6). Each image S_ai is calculated taking into account the pixels that have already occupied some region within the mask; this procedure avoids considering them again in a new search. As a result, each iteration depends on the previous one. The approximation image S_a is calculated by means of equation (7), where the binary image corresponds to all the values different from zero in the image S_aj; the dilation operator acts between the image and W_i, the circular kernel structure with increasing radius i; the complement of this binary image is taken; and the remaining operation corresponds to the point-to-point product of both matrices. Finally, the expression required to calculate the hair-free image H_L is indicated in equation (8), which essentially eliminates the regions covered by hairs (Figure 7(a)) and replaces them with the obtained surface S_a, as in Figure 8(a), using the complement of the mask M_k. In other words, the regions indicated by the complement of the regions tagged as hairs are left intact, and only the gaps are filled with the new estimated surface S_a. The mask is applied to all channels of the color image, and S_a can be found for each channel separately using the method described above.
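The multiscale fill of equations (6)-(8) can be sketched as follows. This is a simplified sketch, not the exact reconstruction rule: the paper dilates with circular structuring elements of growing radius, while this sketch grows the known region by one 8-neighbourhood ring per iteration and assigns each newly reached pixel the mean of its already-known neighbours (an assumption made here for brevity). The final line composes the hair-free image as in equation (8): untouched skin where the mask is off, the reconstructed surface S_a where it is on.

```python
import numpy as np

def multiscale_fill(image, hair_mask, max_iter=50):
    """Fill hair regions scale by scale (cf. equations (6)-(8)).

    At each iteration, hole pixels touching the already-known region
    receive the mean of their known 8-neighbours and are then marked
    known; this mimics keeping only the intensities that enter the
    region for the first time at each dilation scale.
    """
    filled = image.astype(float).copy()
    known = ~hair_mask
    h, w = image.shape
    for _ in range(max_iter):
        if known.all():
            break
        new_known = known.copy()
        for i in range(h):
            for j in range(w):
                if not known[i, j]:
                    vals = [filled[a, b]
                            for a in range(max(i - 1, 0), min(i + 2, h))
                            for b in range(max(j - 1, 0), min(j + 2, w))
                            if known[a, b]]
                    if vals:                  # pixel enters at this scale
                        filled[i, j] = np.mean(vals)
                        new_known[i, j] = True
        known = new_known
    # Equation (8): keep the skin intact, fill only the masked gaps.
    return np.where(hair_mask, filled, image.astype(float))
```

For a color image, the same routine would be run independently on each channel with the same mask, as the text indicates.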

Experimental Frame
A quantitative and qualitative comparative study was developed under the supervision of a team of two specialists in dermatology from the University of Caldas. A database comprising 36 dermoscopic images was acquired under a specific acquisition protocol for this study. Most of these images exhibited a strong presence of hairs on the surface, while the remainder showed only metric marks. All the images were captured with a DermLite II Pro dermatoscope connected to a Canon PowerShot A2200 camera. These images were digitized in RGB color format with an initial dimension of 1200×1600 pixels and were later reduced to a third of that size for processing without any loss of information. The implementation of the proposed methods was conducted in Matlab© R2008a and evaluated on a computer with a 2.4 GHz Intel Core i5 processor and 3 GB of RAM.

Results
To evaluate the performance, the hair mask was extracted manually from both the original and the processed images. Since there is a great presence of hairs in the images, the manual detection of these hairs is a laborious task. To solve this issue, hairs were modeled using a few points selected by a specialist, as shown in Figure 9(a). The points are used to generate a curve by means of splines and then to generate a binary image where the lines represent the original hair mask obtained with the expert selection, as in Figure 9(b). In this way, the task of manually detecting the hairs is drastically reduced. From the database, 36 images were selected, and in them, 586 dark lines were detected manually. These lines comprise hairs and metric marks. On average, the dispersion of hairs and metric marks covered 3.2% of the original surface per image. These hairs and marks have variable characteristics, specifically in terms of coloring, width, length and tortuosity. However, because the aim of this work is to determine the performance of the detection method, the image where the hairs were successfully detected is obtained; the calculation is performed by subtracting both masks, as shown in equation (9). Finally, the precision of the methods can be expressed quantitatively in terms of sensitivity and specificity. The formulas employed for obtaining the parameters used to determine sensitivity and specificity are shown in Table 1. According to the qualitative analysis conducted by the team of specialists, the proposed method does not affect the quality of the image, nor does it insert noise or damage the regions of study of the lesion. In contrast, the DullRazor method does not adequately rebuild the regions covered by the hairs, adding discontinuities, diffusing regions and mixing colors in the final image. We simultaneously studied the exemplar-based image inpainting algorithm (Abbas, et al. 2010, pp. 1-15) (Criminisi, et al. 2004, pp. 1-13) for the substitution of the hair-covered regions. However, the exaggerated computational cost required to obtain the surface covered by a single hair was evident, and although the approximation is also built from the surface surrounding the hairs, this method did not generate an adequate representation.
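The sensitivity and specificity figures above follow from pixel-wise counts of true/false positives and negatives between the detected mask and the expert's spline-based ground truth. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def sensitivity_specificity(pred_mask, true_mask):
    """Pixel-wise sensitivity and specificity of a detected hair mask
    against the expert (spline-based) ground-truth mask."""
    tp = np.sum(pred_mask & true_mask)      # hair correctly detected
    tn = np.sum(~pred_mask & ~true_mask)    # background correctly rejected
    fp = np.sum(pred_mask & ~true_mask)     # background flagged as hair
    fn = np.sum(~pred_mask & true_mask)     # hair missed
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec
```

Because hairs cover only a few percent of each image, specificity is dominated by the large background class, which is consistent with the very high specificity (99.89%) reported alongside a 94.14% sensitivity.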

Conclusions
An important discriminating characteristic for the identification of melanoma is the lesion edges, which generally exhibit a low contrast and the distinctive noise inherent in pigmented skin lesions.Specifically, the hair and the nonhomogeneous illumination are the most problematic patterns, making the automatic detection of the lesion a complex and inexact task.
For hair detection and removal, this study employed the convolution of the image with a window corresponding to the first derivative of the Gaussian function rotated to the principal angles. In addition, we proposed including a refining stage, whose main task is to filter out the false detections located on the lesion surface. Finally, as the main contribution of this study, we proposed a stage for replacing the hairs based on multiscale morphological reconstruction.
The result of the refining stage was the ability to preserve the texture and the different characteristics located on the region of interest.Regarding the presence of hairs placed outside the lesion, we found an adequate surface that uniformly corresponds to the areas associated with the skin.The experimental results suggest that the general proposed technique adequately removes the hairs located on the surface of the dermoscopy image, generating hair detection performances with sensitivity and specificity values equivalent to 94.14% and 99.89%, respectively.
In general, the method presented in this study detects and efficiently replaces the hairs present in the image, surpassing the general performance of the popularly employed DullRazor technique.

Figure 1 .
Figure 1. (a) Dermoscopic image of a possible melanoma including natural hairs; the magnified region contains a vertically oriented hair. (b) Cross-section of the image including the depression caused by the hair.

Figure 2 .
Figure 2. General diagram of the method proposed for hair removal

Figure 3 .
Figure 3. (a) Ideal profile of a hair, (b) profile of a hair under normal conditions, (c) convolution of the ideal hair with the DOG kernel and (d) convolution of a normal hair with the DOG kernel.

Figure 5
Figure 5(a) shows the original dermoscopy image. As a result of the combination of the C_ni channels determined after processing the original image, a new image containing structures corresponding to the dark lines (hairs) is obtained; however, undesired structures are also included, as shown in Figure 5(b). These undesired regions are eliminated by thresholding, as described above.

Figure 5 .
Figure 5. (a) Dermoscopy image including hairs, (b) image R(x,y) resulting from the detection procedure for dark lines, (c) detection of the relevant structures using thresholding and (d) M_kp mask associated with hair dispersion.

The entry image belongs to the rectangular area associated with the region of interest (ROI), as shown in Figure 6(a). The ROI surface is essentially obtained from the definition of the sudden changes in intensity within the original image when the scanning is carried out from top to bottom and then from left to right. The initial segmentation (Figure 6(b)) is used to limit the hair mask exclusively to regions outside the lesion. The result of this process is the mask M_k, presented in Figure 6(c).

Figure 6 .
Figure 6. (a) Extraction of the region of interest (ROI), (b) partial lesion segmentation and (c) final M_k mask associated with hair dispersal.

Figure 7 .
Figure 7. (a) Original dermoscopy image without hair-associated regions and (b) surface substitution growth process.
The resulting image H_L, devoid of hairs, is shown in Figure 8(b).

Figure 8. (a) Surface rebuilt by means of the proposed substitution process and (b) image resulting from the proposed hair-removal process.

Figure 9
Figure 9. (a) Manual hair identification in the original image using point selection and spline fitting, (b) binary image obtained from the manual identification procedure applied to the original image, (c) hair identification applied to the processed image and (d) binary image obtained after the manual hair detection for the processed image.

Table 1 .
Variables used to calculate performance; their values are presented in Table 2. The performance values of both hair-detection methods (the proposed method and DullRazor) are shown in Table 2. The results indicate the superiority of the proposed method in efficiently detecting hair.

Table 3 .
General performance of the methods used for hair removal.