Module: filters
skimage.filters.inverse(data[, …])
    Apply the filter in reverse to the given data.
skimage.filters.wiener(data[, …])
    Minimum Mean Square Error (Wiener) inverse filter.
skimage.filters.gaussian(image[, sigma, …])
    Multi-dimensional Gaussian filter.
skimage.filters.median(image[, selem, out, …])
    Return local median of an image.
skimage.filters.sobel(image[, mask])
    Find the edge magnitude using the Sobel transform.
skimage.filters.sobel_h(image[, mask])
    Find the horizontal edges of an image using the Sobel transform.
skimage.filters.sobel_v(image[, mask])
    Find the vertical edges of an image using the Sobel transform.
skimage.filters.scharr(image[, mask])
    Find the edge magnitude using the Scharr transform.
skimage.filters.scharr_h(image[, mask])
    Find the horizontal edges of an image using the Scharr transform.
skimage.filters.scharr_v(image[, mask])
    Find the vertical edges of an image using the Scharr transform.
skimage.filters.prewitt(image[, mask])
    Find the edge magnitude using the Prewitt transform.
skimage.filters.prewitt_h(image[, mask])
    Find the horizontal edges of an image using the Prewitt transform.
skimage.filters.prewitt_v(image[, mask])
    Find the vertical edges of an image using the Prewitt transform.
skimage.filters.roberts(image[, mask])
    Find the edge magnitude using Roberts’ cross operator.
skimage.filters.roberts_pos_diag(image[, mask])
    Find the cross edges of an image using Roberts’ cross operator.
skimage.filters.roberts_neg_diag(image[, mask])
    Find the cross edges of an image using Roberts’ cross operator.
skimage.filters.laplace(image[, ksize, mask])
    Find the edges of an image using the Laplace operator.
skimage.filters.rank_order(image)
    Return an image of the same shape where each pixel is the index of the pixel value in the ascending order of the unique values of image, aka the rank-order value.
skimage.filters.gabor_kernel(frequency[, …])
    Return complex 2D Gabor filter kernel.
skimage.filters.gabor(image, frequency[, …])
    Return real and imaginary responses to Gabor filter.
skimage.filters.try_all_threshold(image[, …])
    Returns a figure comparing the outputs of different thresholding methods.
skimage.filters.meijering(image[, sigmas, …])
    Filter an image with the Meijering neuriteness filter.
skimage.filters.sato(image[, sigmas, …])
    Filter an image with the Sato tubeness filter.
skimage.filters.frangi(image[, sigmas, …])
    Filter an image with the Frangi vesselness filter.
skimage.filters.hessian(image[, sigmas, …])
    Filter an image with the Hybrid Hessian filter.
skimage.filters.threshold_otsu(image[, nbins])
    Return threshold value based on Otsu’s method.
skimage.filters.threshold_yen(image[, nbins])
    Return threshold value based on Yen’s method.
skimage.filters.threshold_isodata(image[, …])
    Return threshold value(s) based on ISODATA method.
skimage.filters.threshold_li(image, *[, …])
    Compute threshold value by Li’s iterative Minimum Cross Entropy method.
skimage.filters.threshold_local(image, …)
    Compute a threshold mask image based on local pixel neighborhood.
skimage.filters.threshold_minimum(image[, …])
    Return threshold value based on minimum method.
skimage.filters.threshold_mean(image)
    Return threshold value based on the mean of grayscale values.
skimage.filters.threshold_niblack(image[, …])
    Applies Niblack local threshold to an array.
skimage.filters.threshold_sauvola(image[, …])
    Applies Sauvola local threshold to an array.
skimage.filters.threshold_triangle(image[, …])
    Return threshold value based on the triangle algorithm.
skimage.filters.threshold_multiotsu(image[, …])
    Generate classes-1 threshold values to divide gray levels in image.
skimage.filters.apply_hysteresis_threshold(…)
    Apply hysteresis thresholding to image.
skimage.filters.unsharp_mask(image[, …])
    Unsharp masking filter.
skimage.filters.LPIFilter2D(…)
    Linear Position-Invariant Filter (2-dimensional)
skimage.filters.rank
inverse

skimage.filters.inverse(data, impulse_response=None, filter_params={}, max_gain=2, predefined_filter=None)

Apply the filter in reverse to the given data.
Parameters: data : (M,N) ndarray
Input data.
impulse_response : callable f(r, c, **filter_params)
Impulse response of the filter. See LPIFilter2D.__init__.
filter_params : dict
Additional keyword parameters to the impulse_response function.
max_gain : float
Limit the filter gain. Often, the filter contains zeros, which would cause the inverse filter to have infinite gain. High gain causes amplification of artefacts, so a conservative limit is recommended.
Other Parameters: predefined_filter : LPIFilter2D
If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here.
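A minimal usage sketch (not part of the original reference): the Gaussian-shaped impulse response and the camera sample image are chosen purely for illustration.
>>> import numpy as np
>>> from skimage import data, filters
>>> def filt_func(r, c, sigma=1):
...     # illustrative impulse response, decaying with distance from the origin
...     return np.exp(-np.hypot(r, c) / sigma)
>>> image = data.camera()
>>> # blur with the forward LPI filter, then attempt to undo the blur
>>> blurred = filters.LPIFilter2D(filt_func)(image)
>>> restored = filters.inverse(blurred, impulse_response=filt_func, max_gain=2)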
wiener

skimage.filters.wiener(data, impulse_response=None, filter_params={}, K=0.25, predefined_filter=None)

Minimum Mean Square Error (Wiener) inverse filter.
Parameters: data : (M,N) ndarray
Input data.
K : float or (M,N) ndarray
Ratio between power spectrum of noise and undegraded image.
impulse_response : callable f(r, c, **filter_params)
Impulse response of the filter. See LPIFilter2D.__init__.
filter_params : dict
Additional keyword parameters to the impulse_response function.
Other Parameters: predefined_filter : LPIFilter2D
If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here.
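A minimal usage sketch (not part of the original reference); the impulse response, the value of K, and the sample image are illustrative only.
>>> import numpy as np
>>> from skimage import data, filters
>>> def filt_func(r, c, sigma=1):
...     # illustrative impulse response
...     return np.exp(-np.hypot(r, c) / sigma)
>>> image = data.camera()
>>> blurred = filters.LPIFilter2D(filt_func)(image)
>>> # K trades off noise suppression against deblurring strength
>>> restored = filters.wiener(blurred, impulse_response=filt_func, K=0.25)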
gaussian

skimage.filters.gaussian(image, sigma=1, output=None, mode='nearest', cval=0, multichannel=None, preserve_range=False, truncate=4.0)

Multi-dimensional Gaussian filter.
Parameters: image : array-like
Input image (grayscale or color) to filter.
sigma : scalar or sequence of scalars, optional
Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.
output : array, optional
The output parameter passes an array in which to store the filter output.
mode : {‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.
cval : scalar, optional
Value to fill past edges of input if mode is ‘constant’. Default is 0.0.
multichannel : bool, optional (default: None)
Whether the last axis of the image is to be interpreted as multiple channels. If True, each channel is filtered separately (channels are not mixed together). Only 3 channels are supported. If None, the function will attempt to guess this, and raise a warning if ambiguous, when the array has shape (M, N, 3).
preserve_range : bool, optional
Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html
truncate : float, optional
Truncate the filter at this many standard deviations.
Returns: filtered_image : ndarray
The filtered array.
Notes
This function is a wrapper around scipy.ndimage.gaussian_filter().
Integer arrays are converted to float.
The multi-dimensional filter is implemented as a sequence of one-dimensional convolution filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision.
Examples
>>> a = np.zeros((3, 3))
>>> a[1, 1] = 1
>>> a
array([[ 0.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  0.]])
>>> gaussian(a, sigma=0.4)  # mild smoothing
array([[ 0.00163116,  0.03712502,  0.00163116],
       [ 0.03712502,  0.84496158,  0.03712502],
       [ 0.00163116,  0.03712502,  0.00163116]])
>>> gaussian(a, sigma=1)  # more smoothing
array([[ 0.05855018,  0.09653293,  0.05855018],
       [ 0.09653293,  0.15915589,  0.09653293],
       [ 0.05855018,  0.09653293,  0.05855018]])
>>> # Several modes are possible for handling boundaries
>>> gaussian(a, sigma=1, mode='reflect')
array([[ 0.08767308,  0.12075024,  0.08767308],
       [ 0.12075024,  0.16630671,  0.12075024],
       [ 0.08767308,  0.12075024,  0.08767308]])
>>> # For RGB images, each is filtered separately
>>> from skimage.data import astronaut
>>> image = astronaut()
>>> filtered_img = gaussian(image, sigma=1, multichannel=True)
median

skimage.filters.median(image, selem=None, out=None, mask=None, shift_x=False, shift_y=False, mode='nearest', cval=0.0, behavior='ndimage')

Return local median of an image.
Parameters: image : array-like
Input image.
selem : ndarray, optional
If behavior=='rank', selem is a 2-D array of 1’s and 0’s. If behavior=='ndimage', selem is an N-D array of 1’s and 0’s with the same number of dimensions as image. If None, selem will be an N-D array with 3 elements for each dimension (e.g., vector, square, cube, etc.)
out : ndarray, (same dtype as image), optional
If None, a new array is allocated.
mask : ndarray, optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default). Only valid when behavior='rank'.
Deprecated since version 0.16: mask is deprecated in 0.16 and will be removed in 0.17.
shift_x, shift_y : int, optional
Offset added to the structuring element center point. Shift is bounded by the structuring element sizes (center must be inside the given structuring element). Only valid when behavior='rank'.
Deprecated since version 0.16: shift_x and shift_y are deprecated in 0.16 and will be removed in 0.17.
mode : {‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.
New in version 0.15: mode is used when behavior='ndimage'.
cval : scalar, optional
Value to fill past edges of input if mode is ‘constant’. Default is 0.0.
New in version 0.15: cval was added in 0.15 and is used when behavior='ndimage'.
behavior : {‘ndimage’, ‘rank’}, optional
Whether to use the old behavior (i.e., < 0.15) or the new behavior. The old behavior will call skimage.filters.rank.median(). The new behavior will call scipy.ndimage.median_filter(). Default is ‘ndimage’.
New in version 0.15: behavior is introduced in 0.15.
Changed in version 0.16: Default behavior has been changed from ‘rank’ to ‘ndimage’.
Returns: out : 2-D array (same dtype as input image)
Output image.
See also
skimage.filters.rank.median
- Rank-based implementation of the median filtering offering more flexibility with additional parameters but dedicated for unsigned integer images.
Examples
>>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters import median
>>> img = data.camera()
>>> med = median(img, disk(5))
sobel

skimage.filters.sobel(image, mask=None)

Find the edge magnitude using the Sobel transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Sobel edge map.
Notes
Take the square root of the sum of the squares of the horizontal and vertical Sobels to get a magnitude that’s somewhat insensitive to direction.
The 3x3 convolution kernel used in the horizontal and vertical Sobels is an approximation of the gradient of the image (with some slight blurring since 9 pixels are used to compute the gradient at a given pixel). As an approximation of the gradient, the Sobel operator is not completely rotation-invariant. The Scharr operator should be used for a better rotation invariance.
Note that scipy.ndimage.sobel returns a directional Sobel which has to be further processed to perform edge detection.
Examples
>>> from skimage import data
>>> camera = data.camera()
>>> from skimage import filters
>>> edges = filters.sobel(camera)
sobel_h

skimage.filters.sobel_h(image, mask=None)

Find the horizontal edges of an image using the Sobel transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Sobel edge map.
Notes
We use the following kernel:
 1   2   1
 0   0   0
-1  -2  -1
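A minimal usage sketch (not in the original reference; the camera sample image is illustrative only):
>>> from skimage import data, filters
>>> camera = data.camera()
>>> horizontal_edges = filters.sobel_h(camera)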
sobel_v

skimage.filters.sobel_v(image, mask=None)

Find the vertical edges of an image using the Sobel transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Sobel edge map.
Notes
We use the following kernel:
1   0  -1
2   0  -2
1   0  -1
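A minimal usage sketch (not in the original reference). Combining the vertical and horizontal responses with np.hypot gives an (unnormalized) edge magnitude; the camera sample image is illustrative only:
>>> import numpy as np
>>> from skimage import data, filters
>>> camera = data.camera()
>>> vertical_edges = filters.sobel_v(camera)
>>> magnitude = np.hypot(filters.sobel_h(camera), vertical_edges)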
scharr

skimage.filters.scharr(image, mask=None)

Find the edge magnitude using the Scharr transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Scharr edge map.
Notes
Take the square root of the sum of the squares of the horizontal and vertical Scharrs to get a magnitude that is somewhat insensitive to direction. The Scharr operator has a better rotation invariance than other edge filters such as the Sobel or the Prewitt operators.
References
[R257] D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
[R258] https://en.wikipedia.org/wiki/Sobel_operator#Alternative_operators
Examples
>>> from skimage import data
>>> camera = data.camera()
>>> from skimage import filters
>>> edges = filters.scharr(camera)
scharr_h

skimage.filters.scharr_h(image, mask=None)

Find the horizontal edges of an image using the Scharr transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Scharr edge map.
Notes
We use the following kernel:
 3   10   3
 0    0   0
-3  -10  -3
References
[R259] D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
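A minimal usage sketch (not in the original reference; the camera sample image is illustrative only):
>>> from skimage import data, filters
>>> camera = data.camera()
>>> horizontal_edges = filters.scharr_h(camera)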
scharr_v

skimage.filters.scharr_v(image, mask=None)

Find the vertical edges of an image using the Scharr transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Scharr edge map.
Notes
We use the following kernel:
 3   0   -3
10   0  -10
 3   0   -3
References
[R260] D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
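A minimal usage sketch (not in the original reference); np.hypot of the two directional responses yields an (unnormalized) edge magnitude, and the camera sample image is illustrative only:
>>> import numpy as np
>>> from skimage import data, filters
>>> camera = data.camera()
>>> vertical_edges = filters.scharr_v(camera)
>>> magnitude = np.hypot(filters.scharr_h(camera), vertical_edges)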
prewitt

skimage.filters.prewitt(image, mask=None)

Find the edge magnitude using the Prewitt transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Prewitt edge map.
Notes
Return the square root of the sum of squares of the horizontal and vertical Prewitt transforms. The edge magnitude depends slightly on edge directions, since the approximation of the gradient operator by the Prewitt operator is not completely rotation invariant. For a better rotation invariance, the Scharr operator should be used. The Sobel operator has a better rotation invariance than the Prewitt operator, but a worse rotation invariance than the Scharr operator.
Examples
>>> from skimage import data
>>> camera = data.camera()
>>> from skimage import filters
>>> edges = filters.prewitt(camera)
prewitt_h

skimage.filters.prewitt_h(image, mask=None)

Find the horizontal edges of an image using the Prewitt transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Prewitt edge map.
Notes
We use the following kernel:
 1   1   1
 0   0   0
-1  -1  -1
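A minimal usage sketch (not in the original reference; the camera sample image is illustrative only):
>>> from skimage import data, filters
>>> camera = data.camera()
>>> horizontal_edges = filters.prewitt_h(camera)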
prewitt_v

skimage.filters.prewitt_v(image, mask=None)

Find the vertical edges of an image using the Prewitt transform.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Prewitt edge map.
Notes
We use the following kernel:
1   0  -1
1   0  -1
1   0  -1
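A minimal usage sketch (not in the original reference); np.hypot of the two directional responses yields an (unnormalized) edge magnitude, with the camera sample image used purely for illustration:
>>> import numpy as np
>>> from skimage import data, filters
>>> camera = data.camera()
>>> vertical_edges = filters.prewitt_v(camera)
>>> magnitude = np.hypot(filters.prewitt_h(camera), vertical_edges)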
roberts

skimage.filters.roberts(image, mask=None)

Find the edge magnitude using Roberts’ cross operator.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Roberts’ Cross edge map.
Examples
>>> from skimage import data
>>> camera = data.camera()
>>> from skimage import filters
>>> edges = filters.roberts(camera)
roberts_pos_diag

skimage.filters.roberts_pos_diag(image, mask=None)

Find the cross edges of an image using Roberts’ cross operator.
The kernel is applied to the input image to produce separate measurements of the gradient component in one orientation.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Roberts’ edge map.
Notes
We use the following kernel:
1   0
0  -1
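A minimal usage sketch (not in the original reference; the camera sample image is illustrative only):
>>> from skimage import data, filters
>>> camera = data.camera()
>>> edges = filters.roberts_pos_diag(camera)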
roberts_neg_diag

skimage.filters.roberts_neg_diag(image, mask=None)

Find the cross edges of an image using Roberts’ cross operator.
The kernel is applied to the input image to produce separate measurements of the gradient component in one orientation.
Parameters: image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : 2-D array
The Roberts’ edge map.
Notes
We use the following kernel:
 0   1
-1   0
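A minimal usage sketch (not in the original reference); np.hypot of the two diagonal responses yields an (unnormalized) Roberts edge magnitude, with the camera sample image used purely for illustration:
>>> import numpy as np
>>> from skimage import data, filters
>>> camera = data.camera()
>>> neg = filters.roberts_neg_diag(camera)
>>> magnitude = np.hypot(filters.roberts_pos_diag(camera), neg)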
laplace

skimage.filters.laplace(image, ksize=3, mask=None)

Find the edges of an image using the Laplace operator.
Parameters: image : ndarray
Image to process.
ksize : int, optional
Define the size of the discrete Laplacian operator such that it will have a size of (ksize,) * image.ndim.
mask : ndarray, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.
Returns: output : ndarray
The Laplace edge map.
Notes
The Laplacian operator is generated using the function skimage.restoration.uft.laplacian().
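A minimal usage sketch (not in the original reference; the camera sample image and the default kernel size are illustrative only):
>>> from skimage import data, filters
>>> camera = data.camera()
>>> edges = filters.laplace(camera, ksize=3)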
rank_order

skimage.filters.rank_order(image)

Return an image of the same shape where each pixel is the index of the pixel value in the ascending order of the unique values of image, aka the rank-order value.
Parameters: image : ndarray
Returns: labels : ndarray of type np.uint32, of shape image.shape
New array where each pixel has the rank-order value of the corresponding pixel in image. Pixel values are between 0 and n - 1, where n is the number of distinct unique values in image.
original_values : 1-D ndarray
Unique original values of image.
Examples
>>> a = np.array([[1, 4, 5], [4, 4, 1], [5, 1, 1]])
>>> a
array([[1, 4, 5],
       [4, 4, 1],
       [5, 1, 1]])
>>> rank_order(a)
(array([[0, 1, 2],
       [1, 1, 0],
       [2, 0, 0]], dtype=uint32), array([1, 4, 5]))
>>> b = np.array([-1., 2.5, 3.1, 2.5])
>>> rank_order(b)
(array([0, 1, 2, 1], dtype=uint32), array([-1. ,  2.5,  3.1]))
gabor_kernel

skimage.filters.gabor_kernel(frequency, theta=0, bandwidth=1, sigma_x=None, sigma_y=None, n_stds=3, offset=0)

Return complex 2D Gabor filter kernel.
Gabor kernel is a Gaussian kernel modulated by a complex harmonic function. Harmonic function consists of an imaginary sine function and a real cosine function. Spatial frequency is inversely proportional to the wavelength of the harmonic and to the standard deviation of a Gaussian kernel. The bandwidth is also inversely proportional to the standard deviation.
Parameters: frequency : float
Spatial frequency of the harmonic function. Specified in pixels.
theta : float, optional
Orientation in radians. If 0, the harmonic is in the x-direction.
bandwidth : float, optional
The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.
sigma_x, sigma_y : float, optional
Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.
n_stds : scalar, optional
The linear size of the kernel is n_stds (3 by default) standard deviations.
offset : float, optional
Phase offset of harmonic function in radians.
Returns: g : complex array
Complex filter kernel.
References
[R261] https://en.wikipedia.org/wiki/Gabor_filter
[R262] https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf
Examples
>>> from skimage.filters import gabor_kernel
>>> from skimage import io
>>> from matplotlib import pyplot as plt  # doctest: +SKIP

>>> gk = gabor_kernel(frequency=0.2)
>>> plt.figure()        # doctest: +SKIP
>>> io.imshow(gk.real)  # doctest: +SKIP
>>> io.show()           # doctest: +SKIP

>>> # more ripples (equivalent to increasing the size of the
>>> # Gaussian spread)
>>> gk = gabor_kernel(frequency=0.2, bandwidth=0.1)
>>> plt.figure()        # doctest: +SKIP
>>> io.imshow(gk.real)  # doctest: +SKIP
>>> io.show()           # doctest: +SKIP
gabor

skimage.filters.gabor(image, frequency, theta=0, bandwidth=1, sigma_x=None, sigma_y=None, n_stds=3, offset=0, mode='reflect', cval=0)

Return real and imaginary responses to Gabor filter.
The real and imaginary parts of the Gabor filter kernel are applied to the image and the response is returned as a pair of arrays.
Gabor filter is a linear filter with a Gaussian kernel which is modulated by a sinusoidal plane wave. Frequency and orientation representations of the Gabor filter are similar to those of the human visual system. Gabor filter banks are commonly used in computer vision and image processing. They are especially suitable for edge detection and texture classification.
Parameters: image : 2-D array
Input image.
frequency : float
Spatial frequency of the harmonic function. Specified in pixels.
theta : float, optional
Orientation in radians. If 0, the harmonic is in the x-direction.
bandwidth : float, optional
The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.
sigma_x, sigma_y : float, optional
Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.
n_stds : scalar, optional
The linear size of the kernel is n_stds (3 by default) standard deviations.
offset : float, optional
Phase offset of harmonic function in radians.
mode : {‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional
Mode used to convolve image with a kernel, passed to ndi.convolve.
cval : scalar, optional
Value to fill past edges of input if mode of convolution is ‘constant’. The parameter is passed to ndi.convolve.
Returns: real, imag : arrays
Filtered images using the real and imaginary parts of the Gabor filter kernel. Images are of the same dimensions as the input one.
References
[R263] https://en.wikipedia.org/wiki/Gabor_filter
[R264] https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf
Examples
>>> from skimage.filters import gabor
>>> from skimage import data, io
>>> from matplotlib import pyplot as plt  # doctest: +SKIP

>>> image = data.coins()
>>> # detecting edges in a coin image
>>> filt_real, filt_imag = gabor(image, frequency=0.6)
>>> plt.figure()          # doctest: +SKIP
>>> io.imshow(filt_real)  # doctest: +SKIP
>>> io.show()             # doctest: +SKIP

>>> # less sensitivity to finer details with the lower frequency kernel
>>> filt_real, filt_imag = gabor(image, frequency=0.1)
>>> plt.figure()          # doctest: +SKIP
>>> io.imshow(filt_real)  # doctest: +SKIP
>>> io.show()             # doctest: +SKIP
try_all_threshold

skimage.filters.try_all_threshold(image, figsize=(8, 5), verbose=True)

Returns a figure comparing the outputs of different thresholding methods.
Parameters: image : (N, M) ndarray
Input image.
figsize : tuple, optional
Figure size (in inches).
verbose : bool, optional
Print function name for each method.
Returns: fig, ax : tuple
Matplotlib figure and axes.
Notes
The following algorithms are used:
- isodata
- li
- mean
- minimum
- otsu
- triangle
- yen
Examples
>>> from skimage.data import text
>>> fig, ax = try_all_threshold(text(), figsize=(10, 6), verbose=False)
meijering

skimage.filters.meijering(image, sigmas=range(1, 10, 2), alpha=None, black_ridges=True)

Filter an image with the Meijering neuriteness filter.
This filter can be used to detect continuous ridges, e.g. neurites, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects.
Calculates the eigenvectors of the Hessian to compute the similarity of an image region to neurites, according to the method described in [R265].
Parameters: image : (N, M[, …, P]) ndarray
Array with input image data.
sigmas : iterable of floats, optional
Sigmas used as scales of filter
alpha : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.
black_ridges : boolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
Returns: out : (N, M[, …, P]) ndarray
Filtered image (maximum of pixels across all scales).
References
[R265] (1, 2) Meijering, E., Jacob, M., Sarria, J. C., Steiner, P., Hirling, H., Unser, M. (2004). Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry Part A, 58(2), 167-176. DOI:10.1002/cyto.a.20022
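A minimal usage sketch (not in the original reference): the camera sample image and the sigma range are illustrative only; in practice the filter is applied to images containing neurite-like structures.
>>> from skimage import data, filters
>>> image = data.camera()
>>> neuriteness = filters.meijering(image, sigmas=range(1, 5), black_ridges=True)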
sato

skimage.filters.sato(image, sigmas=range(1, 10, 2), black_ridges=True)

Filter an image with the Sato tubeness filter.
This filter can be used to detect continuous ridges, e.g. tubes, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects.
Defined only for 2-D and 3-D images. Calculates the eigenvectors of the Hessian to compute the similarity of an image region to tubes, according to the method described in [R266].
Parameters: image : (N, M[, P]) ndarray
Array with input image data.
sigmas : iterable of floats, optional
Sigmas used as scales of filter.
black_ridges : boolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
Returns: out : (N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales).
References
[R266] (1, 2) Sato, Y., Nakajima, S., Shiraga, N., Atsumi, H., Yoshida, S., Koller, T., …, Kikinis, R. (1998). Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical image analysis, 2(2), 143-168. DOI:10.1016/S1361-8415(98)80009-1
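A minimal usage sketch (not in the original reference): the camera sample image and the sigma range are illustrative only; the filter is typically applied to images containing tubular structures.
>>> from skimage import data, filters
>>> image = data.camera()
>>> tubeness = filters.sato(image, sigmas=range(1, 5), black_ridges=True)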
frangi

skimage.filters.frangi(image, sigmas=range(1, 10, 2), scale_range=None, scale_step=None, beta1=None, beta2=None, alpha=0.5, beta=0.5, gamma=15, black_ridges=True)

Filter an image with the Frangi vesselness filter.
This filter can be used to detect continuous ridges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects.
Defined only for 2-D and 3-D images. Calculates the eigenvectors of the Hessian to compute the similarity of an image region to vessels, according to the method described in [R267].
Parameters: image : (N, M[, P]) ndarray
Array with input image data.
sigmas : iterable of floats, optional
Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)
scale_range : 2-tuple of floats, optional
The range of sigmas used.
scale_step : float, optional
Step size between sigmas.
alpha : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.
beta = beta1 : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.
gamma = beta2 : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure.
black_ridges : boolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
Returns: out : (N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales).
Notes
Written by Marc Schrijver, November 2001. Re-written by D. J. Kroon, University of Twente, May 2009, [R268]. Adaptation of the 3D version from D. G. Ellis, January 2017, [R269].
References
[R267] (1, 2) Frangi, A. F., Niessen, W. J., Vincken, K. L., & Viergever, M. A. (1998). Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 130-137). Springer Berlin Heidelberg. DOI:10.1007/BFb0056195
[R268] (1, 2) Kroon, D. J.: Hessian based Frangi vesselness filter.
[R269] (1, 2) Ellis, D. G.: https://github.com/ellisdg/frangi3d/tree/master/frangi
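A minimal usage sketch (not in the original reference): the camera sample image and the sigma range are illustrative only; the filter is typically applied to images containing vessel-like structures.
>>> from skimage import data, filters
>>> image = data.camera()
>>> vesselness = filters.frangi(image, sigmas=range(1, 5), black_ridges=True)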
hessian

skimage.filters.hessian(image, sigmas=range(1, 10, 2), scale_range=None, scale_step=None, beta1=None, beta2=None, alpha=0.5, beta=0.5, gamma=15, black_ridges=True)

Filter an image with the Hybrid Hessian filter.
This filter can be used to detect continuous edges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects.
Defined only for 2-D and 3-D images. Almost equal to Frangi filter, but uses alternative method of smoothing. Refer to [R270] to find the differences between Frangi and Hessian filters.
Parameters: image : (N, M[, P]) ndarray
Array with input image data.
sigmas : iterable of floats, optional
Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)
scale_range : 2-tuple of floats, optional
The range of sigmas used.
scale_step : float, optional
Step size between sigmas.
beta = beta1 : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.
gamma = beta2 : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure.
black_ridges : boolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
Returns: out : (N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales).
Notes
Written by Marc Schrijver (November 2001). Re-written by D. J. Kroon, University of Twente (May 2009) [R271].
References
[R270] (1, 2) Ng, C. C., Yap, M. H., Costen, N., & Li, B. (2014). Automatic wrinkle detection using hybrid Hessian filter. In Asian Conference on Computer Vision (pp. 609-622). Springer International Publishing. DOI:10.1007/978-3-319-16811-1_40
[R271] (1, 2) Kroon, D. J.: Hessian based Frangi vesselness filter.
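A minimal usage sketch (not in the original reference; the camera sample image and the sigma range are illustrative only):
>>> from skimage import data, filters
>>> image = data.camera()
>>> ridges = filters.hessian(image, sigmas=range(1, 5), black_ridges=True)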
threshold_otsu

skimage.filters.threshold_otsu(image, nbins=256)

Return threshold value based on Otsu’s method.
Parameters: image : (N, M) ndarray
Grayscale input image.
nbins : int, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
Returns: threshold : float
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.
Raises: ValueError
If image only contains a single grayscale value.
Notes
The input image must be grayscale.
References
[R272] Wikipedia, https://en.wikipedia.org/wiki/Otsu's_Method
Examples
>>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_otsu(image)
>>> binary = image <= thresh
threshold_yen

skimage.filters.threshold_yen(image, nbins=256)

Return threshold value based on Yen’s method.
Parameters: image : (N, M) ndarray
Input image.
nbins : int, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
Returns: threshold : float
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.
References
[R273] Yen J.C., Chang F.J., and Chang S. (1995) “A New Criterion for Automatic Multilevel Thresholding” IEEE Trans. on Image Processing, 4(3): 370-378. DOI:10.1109/83.366472
[R274] Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165, DOI:10.1117/1.1631315 http://www.busim.ee.boun.edu.tr/~sankur/SankurFolder/Threshold_survey.pdf
[R275] ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold
Examples
>>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_yen(image)
>>> binary = image <= thresh
threshold_isodata

skimage.filters.threshold_isodata(image, nbins=256, return_all=False)

Return threshold value(s) based on ISODATA method.
Histogram-based threshold, known as Ridler-Calvard method or inter-means. Threshold values returned satisfy the following equality:
threshold = (image[image <= threshold].mean() + image[image > threshold].mean()) / 2.0
That is, returned thresholds are intensities that separate the image into two groups of pixels, where the threshold intensity is midway between the mean intensities of these groups.
For integer images, the above equality holds to within one; for floating-point images, the equality holds to within the histogram bin-width.
Parameters: image : (N, M) ndarray
Input image.
nbins : int, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
return_all: bool, optional
If False (default), return only the lowest threshold that satisfies the above equality. If True, return all valid thresholds.
Returns: threshold : float or int or array
Threshold value(s).
References
[R276] Ridler, TW & Calvard, S (1978), “Picture thresholding using an iterative selection method” IEEE Transactions on Systems, Man and Cybernetics 8: 630-632, DOI:10.1109/TSMC.1978.4310039
[R277] Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165, http://www.busim.ee.boun.edu.tr/~sankur/SankurFolder/Threshold_survey.pdf DOI:10.1117/1.1631315
[R278] ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold
Examples
>>> from skimage.data import coins
>>> image = coins()
>>> thresh = threshold_isodata(image)
>>> binary = image > thresh
threshold_li

skimage.filters.threshold_li(image, *, tolerance=None, initial_guess=None, iter_callback=None)

Compute threshold value by Li’s iterative Minimum Cross Entropy method.
Parameters: image : ndarray
Input image.
tolerance : float, optional
Finish the computation when the change in the threshold in an iteration is less than this value. By default, this is half the smallest difference between intensity values in image.
initial_guess : float or Callable[[array[float]], float], optional
Li’s iterative method uses gradient descent to find the optimal threshold. If the image intensity histogram contains more than two modes (peaks), the gradient descent could get stuck in a local optimum. An initial guess for the iteration can help the algorithm find the globally-optimal threshold. A float value defines a specific start point, while a callable should take in an array of image intensities and return a float value. Example valid callables include numpy.mean (default), lambda arr: numpy.quantile(arr, 0.95), or even skimage.filters.threshold_otsu().
iter_callback : Callable[[float], Any], optional
A function that will be called on the threshold at every iteration of the algorithm.
Returns: threshold : float
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.
References
[R279] Li C.H. and Lee C.K. (1993) “Minimum Cross Entropy Thresholding” Pattern Recognition, 26(4): 617-625 DOI:10.1016/0031-3203(93)90115-D
[R280] Li C.H. and Tam P.K.S. (1998) “An Iterative Algorithm for Minimum Cross Entropy Thresholding” Pattern Recognition Letters, 18(8): 771-776 DOI:10.1016/S0167-8655(98)00057-9
[R281] Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165 DOI:10.1117/1.1631315
[R282] ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold
Examples
>>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_li(image)
>>> binary = image > thresh
threshold_local

skimage.filters.threshold_local(image, block_size, method='gaussian', offset=0, mode='reflect', param=None, cval=0)

Compute a threshold mask image based on local pixel neighborhood.
Also known as adaptive or dynamic thresholding. The threshold value is the weighted mean for the local neighborhood of a pixel subtracted by a constant. Alternatively the threshold can be determined dynamically by a given function, using the ‘generic’ method.
Parameters: image : (N, M) ndarray
Input image.
block_size : int
Odd size of pixel neighborhood which is used to calculate the threshold value (e.g. 3, 5, 7, …, 21, …).
method : {‘generic’, ‘gaussian’, ‘mean’, ‘median’}, optional
Method used to determine adaptive threshold for local neighbourhood in weighted mean image.
- ‘generic’: use custom function (see param parameter)
- ‘gaussian’: apply gaussian filter (see param parameter for custom sigma value)
- ‘mean’: apply arithmetic mean filter
- ‘median’: apply median rank filter
By default the ‘gaussian’ method is used.
offset : float, optional
Constant subtracted from weighted mean of neighborhood to calculate the local threshold value. Default offset is 0.
mode : {‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘reflect’.
param : {int, function}, optional
Either specify sigma for ‘gaussian’ method or function object for ‘generic’ method. This function takes the flat array of the local neighbourhood as a single argument and returns the calculated threshold for the centre pixel.
cval : float, optional
Value to fill past edges of input if mode is ‘constant’.
Returns: threshold : (N, M) ndarray
Threshold image. All pixels in the input image higher than the corresponding pixel in the threshold image are considered foreground.
References
[R283] https://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#adaptivethreshold
Examples
>>> from skimage.data import camera
>>> image = camera()[:50, :50]
>>> binary_image1 = image > threshold_local(image, 15, 'mean')
>>> func = lambda arr: arr.mean()
>>> binary_image2 = image > threshold_local(image, 15, 'generic',
...                                         param=func)
threshold_minimum

skimage.filters.threshold_minimum(image, nbins=256, max_iter=10000)

Return threshold value based on minimum method.
The histogram of the input image is computed and smoothed until there are only two maxima. Then the minimum in between is the threshold value.
Parameters: image : (M, N) ndarray
Input image.
nbins : int, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
max_iter: int, optional
Maximum number of iterations to smooth the histogram.
Returns: threshold : float
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.
Raises: RuntimeError
If unable to find two local maxima in the histogram or if the smoothing takes more than 1e4 iterations.
References
[R284] C. A. Glasbey, “An analysis of histogram-based thresholding algorithms,” CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993.
[R285] Prewitt, JMS & Mendelsohn, ML (1966), “The analysis of cell images”, Annals of the New York Academy of Sciences 128: 1035-1053 DOI:10.1111/j.1749-6632.1965.tb11715.x
Examples
>>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_minimum(image)
>>> binary = image > thresh
threshold_mean

skimage.filters.threshold_mean(image)

Return threshold value based on the mean of grayscale values.
Parameters: image : (N, M[, …, P]) ndarray
Grayscale input image.
Returns: threshold : float
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.
References
[R286] C. A. Glasbey, “An analysis of histogram-based thresholding algorithms,” CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993. DOI:10.1006/cgip.1993.1040
Examples
>>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_mean(image)
>>> binary = image > thresh
threshold_niblack

skimage.filters.threshold_niblack(image, window_size=15, k=0.2)

Applies Niblack local threshold to an array.
A threshold T is calculated for every pixel in the image using the following formula:
T = m(x,y) - k * s(x,y)
where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation.
Parameters: image: ndarray
Input image.
window_size : int, or iterable of int, optional
Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).
k : float, optional
Value of parameter k in threshold formula.
Returns: threshold : (N, M) ndarray
Threshold mask. All pixels with an intensity higher than this value are assumed to be foreground.
Notes
This algorithm is originally designed for text recognition.
The Bradley threshold is a particular case of the Niblack one, being equivalent to
>>> from skimage import data
>>> image = data.page()
>>> q = 1
>>> threshold_image = threshold_niblack(image, k=0) * q
for some value q. By default, Bradley and Roth use q=1.
References
[R287] W. Niblack, An introduction to Digital Image Processing, Prentice-Hall, 1986.
[R288] D. Bradley and G. Roth, “Adaptive thresholding using Integral Image”, Journal of Graphics Tools 12(2), pp. 13-21, 2007. DOI:10.1080/2151237X.2007.10129236
Examples
>>> from skimage import data
>>> image = data.page()
>>> threshold_image = threshold_niblack(image, window_size=7, k=0.1)
threshold_sauvola

skimage.filters.threshold_sauvola(image, window_size=15, k=0.2, r=None)

Applies Sauvola local threshold to an array. Sauvola is a modification of Niblack technique.
In the original method a threshold T is calculated for every pixel in the image using the following formula:
T = m(x,y) * (1 + k * ((s(x,y) / R) - 1))
where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation. R is the maximum standard deviation of a greyscale image.
Parameters: image: ndarray
Input image.
window_size : int, or iterable of int, optional
Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).
k : float, optional
Value of the positive parameter k.
r : float, optional
Value of R, the dynamic range of standard deviation. If None, set to the half of the image dtype range.
Returns: threshold : (N, M) ndarray
Threshold mask. All pixels with an intensity higher than this value are assumed to be foreground.
Notes
This algorithm is originally designed for text recognition.
References
[R289] J. Sauvola and M. Pietikainen, “Adaptive document image binarization,” Pattern Recognition 33(2), pp. 225-236, 2000. DOI:10.1016/S0031-3203(99)00055-2
Examples
>>> from skimage import data
>>> image = data.page()
>>> t_sauvola = threshold_sauvola(image, window_size=15, k=0.2)
>>> binary_image = image > t_sauvola
threshold_triangle

skimage.filters.threshold_triangle(image, nbins=256)

Return threshold value based on the triangle algorithm.
Parameters: image : (N, M[, …, P]) ndarray
Grayscale input image.
nbins : int, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
Returns: threshold : float
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.
References
[R290] Zack, G. W., Rogers, W. E. and Latt, S. A., 1977, Automatic Measurement of Sister Chromatid Exchange Frequency, Journal of Histochemistry and Cytochemistry 25 (7), pp. 741-753 DOI:10.1177/25.7.70454
[R291] ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold
Examples
>>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_triangle(image)
>>> binary = image > thresh
threshold_multiotsu

skimage.filters.threshold_multiotsu(image, classes=3, nbins=256)

Generate classes-1 threshold values to divide gray levels in image.
The threshold values are chosen to maximize the total sum of pairwise variances between the thresholded graylevel classes. See Notes and [R292] for more details.
Parameters: image : (N, M) ndarray
Grayscale input image.
classes : int, optional
Number of classes to be thresholded, i.e. the number of resulting regions.
nbins : int, optional
Number of bins used to calculate the histogram. This value is ignored for integer arrays.
Returns: idx_thresh : array
Array containing the threshold values for the desired classes.
Notes
This implementation relies on a Cython function whose complexity is \(O\left(\frac{Ch^{C-1}}{(C-1)!}\right)\), where \(h\) is the number of histogram bins and \(C\) is the number of classes desired.
References
[R292] (1, 2) Liao, P-S., Chen, T-S. and Chung, P-C., “A fast algorithm for multilevel thresholding”, Journal of Information Science and Engineering 17 (5): 713-727, 2001. Available at: <http://ftp.iis.sinica.edu.tw/JISE/2001/200109_01.pdf>
[R293] Tosa, Y., “Multi-Otsu Threshold”, a java plugin for ImageJ. Available at: <http://imagej.net/plugins/download/Multi_OtsuThreshold.java>
Examples
>>> from skimage.color import label2rgb
>>> from skimage import data
>>> image = data.camera()
>>> thresholds = threshold_multiotsu(image)
>>> regions = np.digitize(image, bins=thresholds)
>>> regions_colorized = label2rgb(regions)
apply_hysteresis_threshold

skimage.filters.apply_hysteresis_threshold(image, low, high)

Apply hysteresis thresholding to image.
This algorithm finds regions where image is greater than high OR image is greater than low and that region is connected to a region greater than high.
Parameters: image : array, shape (M,[ N, …, P])
Grayscale input image.
low : float, or array of same shape as image
Lower threshold.
high : float, or array of same shape as image
Higher threshold.
Returns: thresholded : array of bool, same shape as image
Array in which True indicates the locations where image was above the hysteresis threshold.
References
[R294] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1986; vol. 8, pp.679-698. DOI:10.1109/TPAMI.1986.4767851
Examples
>>> image = np.array([1, 2, 3, 2, 1, 2, 1, 3, 2])
>>> apply_hysteresis_threshold(image, 1.5, 2.5).astype(int)
array([0, 1, 1, 1, 0, 0, 0, 1, 1])
unsharp_mask

skimage.filters.unsharp_mask(image, radius=1.0, amount=1.0, multichannel=False, preserve_range=False)

Unsharp masking filter.
The sharp details are identified as the difference between the original image and its blurred version. These details are then scaled, and added back to the original image.
Parameters: image : [P, …, ]M[, N][, C] ndarray
Input image.
radius : scalar or sequence of scalars, optional
If a scalar is given, then its value is used for all dimensions. If sequence is given, then there must be exactly one radius for each dimension except the last dimension for multichannel images. Note that 0 radius means no blurring, and negative values are not allowed.
amount : scalar, optional
The details will be amplified with this factor. The factor could be 0 or negative. Typically, it is a small positive number, e.g. 1.0.
multichannel : bool, optional
If True, the last image dimension is considered as a color channel, otherwise as spatial. Color channels are processed individually.
preserve_range : bool, optional
Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html
Returns: output : [P, …, ]M[, N][, C] ndarray of float
Image with unsharp mask applied.
Notes
Unsharp masking is an image sharpening technique. It is a linear image operation, and numerically stable, unlike deconvolution which is an ill-posed problem. Because of this stability, it is often preferred over deconvolution.
The main idea is as follows: sharp details are identified as the difference between the original image and its blurred version. These details are added back to the original image after a scaling step:
enhanced image = original + amount * (original - blurred)
When applying this filter to several color layers independently, color bleeding may occur. A more visually pleasing result can be achieved by processing only the brightness/lightness/intensity channel in a suitable color space such as HSV, HSL, YUV, or YCbCr.
Unsharp masking is described in most introductory digital image processing books. This implementation is based on [R295].
References
[R295] (1, 2) Maria Petrou, Costas Petrou “Image Processing: The Fundamentals”, (2010), ed ii., page 357, ISBN 13: 9781119994398 DOI:10.1002/9781119994398
[R296] Wikipedia. Unsharp masking https://en.wikipedia.org/wiki/Unsharp_masking
Examples
>>> array = np.ones(shape=(5,5), dtype=np.uint8)*100
>>> array[2,2] = 120
>>> array
array([[100, 100, 100, 100, 100],
       [100, 100, 100, 100, 100],
       [100, 100, 120, 100, 100],
       [100, 100, 100, 100, 100],
       [100, 100, 100, 100, 100]], dtype=uint8)
>>> np.around(unsharp_mask(array, radius=0.5, amount=2),2)
array([[ 0.39,  0.39,  0.39,  0.39,  0.39],
       [ 0.39,  0.39,  0.38,  0.39,  0.39],
       [ 0.39,  0.38,  0.53,  0.38,  0.39],
       [ 0.39,  0.39,  0.38,  0.39,  0.39],
       [ 0.39,  0.39,  0.39,  0.39,  0.39]])

>>> array = np.ones(shape=(5,5), dtype=np.int8)*100
>>> array[2,2] = 127
>>> np.around(unsharp_mask(array, radius=0.5, amount=2),2)
array([[ 0.79,  0.79,  0.79,  0.79,  0.79],
       [ 0.79,  0.78,  0.75,  0.78,  0.79],
       [ 0.79,  0.75,  1.  ,  0.75,  0.79],
       [ 0.79,  0.78,  0.75,  0.78,  0.79],
       [ 0.79,  0.79,  0.79,  0.79,  0.79]])

>>> np.around(unsharp_mask(array, radius=0.5, amount=2, preserve_range=True), 2)
array([[ 100.  ,  100.  ,   99.99,  100.  ,  100.  ],
       [ 100.  ,   99.39,   95.48,   99.39,  100.  ],
       [  99.99,   95.48,  147.59,   95.48,   99.99],
       [ 100.  ,   99.39,   95.48,   99.39,  100.  ],
       [ 100.  ,  100.  ,   99.99,  100.  ,  100.  ]])
LPIFilter2D

class skimage.filters.LPIFilter2D(impulse_response, **filter_params)

Bases: object
Linear Position-Invariant Filter (2-dimensional)
__init__(impulse_response, **filter_params)

Parameters: impulse_response : callable f(r, c, **filter_params)
Function that yields the impulse response. r and c are 1-dimensional vectors that represent row and column positions, in other words coordinates are (r[0],c[0]),(r[0],c[1]) etc. **filter_params are passed through.
In other words, impulse_response would be called like this:
>>> def impulse_response(r, c, **filter_params):
...     pass
>>>
>>> r = [0,0,0,1,1,1,2,2,2]
>>> c = [0,1,2,0,1,2,0,1,2]
>>> filter_params = {'kw1': 1, 'kw2': 2, 'kw3': 3}
>>> impulse_response(r, c, **filter_params)
Examples
Gaussian filter: Use a 1-D gaussian in each direction without normalization coefficients.
>>> def filt_func(r, c, sigma=1):
...     return np.exp(-np.hypot(r, c)/sigma)
>>> filter = LPIFilter2D(filt_func)