
Thursday, 8 March 2018

Image Processing using MATLAB: Basic operations (Part 2 of 4)

Image processing covers a wide and diverse array of techniques and algorithms. Fundamental processes underlying these techniques include sharpening, noise removal, deblurring, edge extraction, binarisation, contrast enhancement, and object segmentation and labeling.
Sharpening enhances the edges and fine details of an image for human viewing. It increases the contrast between bright and dark regions to bring out image features. In essence, sharpening applies a high-pass filter to the image.
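As a minimal sketch, assuming the Image Processing Toolbox is available and using MATLAB's bundled 'cameraman.tif' test image:

I = imread('cameraman.tif');        % greyscale test image shipped with MATLAB
sharpened = imsharpen(I);           % unsharp masking: boosts high-frequency detail
% The same idea with an explicit high-pass (unsharp) kernel:
h = fspecial('unsharp');            % 3x3 unsharp contrast-enhancement filter
sharpened2 = imfilter(I, h);
imshowpair(I, sharpened, 'montage');
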
Noise removal techniques reduce the amount of noise in an image before it is processed any further; this is necessary if useful information is to be extracted during later processing and interpretation. Images from both digital cameras and conventional film cameras pick up noise from a variety of sources. Common types include salt-and-pepper noise (sparse light and dark disturbances) and Gaussian noise (each pixel value in the image changes by a small amount). In either case, the noise at different pixels can be correlated or uncorrelated; in many cases, noise values at different pixels are modelled as independent and identically distributed, and hence uncorrelated. In selecting a noise-reduction algorithm, one must consider the available computing power and time, and whether sacrificing some image detail is acceptable if it allows more noise to be removed.
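A possible sketch of both cases, again assuming the toolbox and the bundled 'cameraman.tif' image (imnoise is used here only to synthesise the noisy inputs):

I  = imread('cameraman.tif');
sp = imnoise(I, 'salt & pepper', 0.05);   % sparse light/dark disturbances
g  = imnoise(I, 'gaussian', 0, 0.01);     % small perturbation at every pixel
clean_sp = medfilt2(sp, [3 3]);           % median filter suits salt-and-pepper noise
clean_g  = wiener2(g, [5 5]);             % adaptive Wiener filter suits Gaussian noise
figure
subplot(2,2,1), imshow(sp),       title('Salt & pepper')
subplot(2,2,2), imshow(clean_sp), title('Median filtered')
subplot(2,2,3), imshow(g),        title('Gaussian')
subplot(2,2,4), imshow(clean_g),  title('Wiener filtered')

The median filter discards extreme outliers outright, which is why it suits salt-and-pepper noise, whereas the adaptive Wiener filter smooths in proportion to the local variance, which suits Gaussian noise.
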
Deblurring is the process of removing blurring artifacts (such as blur caused by defocus aberration or motion blur) from images. The blur is typically modelled as the convolution of a point-spread function with a hypothetical sharp input image, where both the sharp input image (which is to be recovered) and the point-spread function are unknown. Deblurring is usually an iterative process, and you may need to repeat it several times until the final image is the best approximation of the original image.
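One such iterative method is Lucy-Richardson deconvolution; here is a sketch that assumes a known point-spread function (applied synthetically, so the true image is available for comparison):

I = im2double(imread('cameraman.tif'));
psf = fspecial('motion', 21, 11);          % assumed PSF: 21-pixel motion blur at 11 degrees
blurred = imfilter(I, psf, 'conv', 'circular');
restored = deconvlucy(blurred, psf, 10);   % 10 Lucy-Richardson iterations; raise the count if still soft
imshowpair(blurred, restored, 'montage');
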
Edge extraction or edge detection is used to separate objects from one another before identifying their contents. It includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply.
Edge detection approaches can be categorised into search-based and zero-crossing-based approaches. Search-based methods detect edges by first computing a measure of edge strength (usually a first-order derivative function such as the gradient magnitude) and then searching for local directional maxima of that measure, using a computed estimate of the local edge orientation, usually the gradient direction. Zero-crossing methods look for zero-crossings in a second-order derivative function computed from the image. First-order edge detectors include the Canny edge detector and the Prewitt and Sobel operators.
Other approaches include the second-order differential approach of detecting zero-crossings, phase congruency (or phase coherence) methods, and the phase-stretch transform (PST). The second-order differential approach detects zero-crossings of the second-order directional derivative in the gradient direction. Phase congruency methods attempt to find locations in an image where all sinusoids in the frequency domain are in phase. PST transforms the image by emulating propagation through a diffractive medium with an engineered 3D dispersive property (refractive index).
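MATLAB's edge function implements several of these detectors; a quick comparison sketch on the bundled 'cameraman.tif' image:

I = imread('cameraman.tif');
bw_sobel = edge(I, 'sobel');    % first-order: thresholded gradient magnitude
bw_canny = edge(I, 'canny');    % first-order with non-maximum suppression and hysteresis
bw_log   = edge(I, 'log');      % second-order: zero-crossings of the Laplacian of Gaussian
figure
subplot(1,3,1), imshow(bw_sobel), title('Sobel')
subplot(1,3,2), imshow(bw_canny), title('Canny')
subplot(1,3,3), imshow(bw_log),   title('LoG zero-crossings')
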
Binarisation refers to reducing a greyscale image to only two levels of grey, i.e., black and white. Thresholding is a popular technique for converting any greyscale image into a binary image.
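A common choice is Otsu's method for picking the threshold automatically; a sketch (imbinarize needs R2016a or later, and older releases can use im2bw instead):

I = imread('cameraman.tif');
level = graythresh(I);            % Otsu's method: global threshold in [0, 1]
bw = imbinarize(I, level);        % pixels above the threshold become white (1)
imshowpair(I, bw, 'montage');
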
Contrast enhancement improves an image both for human viewing and for subsequent image processing tasks. It makes image features stand out more clearly by making optimal use of the colours available on the display or output device. Contrast manipulation involves changing the range of intensity values in an image.
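Three common toolbox approaches, sketched on MATLAB's bundled low-contrast 'pout.tif' image:

I = imread('pout.tif');             % low-contrast sample image shipped with MATLAB
stretched = imadjust(I);            % stretch intensities to fill the display range
equalised = histeq(I);              % histogram equalisation
adaptive  = adapthisteq(I);         % contrast-limited adaptive histogram equalisation (CLAHE)
figure
subplot(2,2,1), imshow(I),         title('Original')
subplot(2,2,2), imshow(stretched), title('imadjust')
subplot(2,2,3), imshow(equalised), title('histeq')
subplot(2,2,4), imshow(adaptive),  title('adapthisteq')
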
Segmentation and labeling of objects within a scene is a prerequisite for most object recognition and classification systems. Segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels) by assigning each pixel to one of two or more classes. The goal is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Once the relevant objects have been segmented and labelled, their relevant features can be extracted and used to classify, compare, cluster or recognise the desired objects.
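A minimal end-to-end sketch on the bundled 'coins.png' image: threshold to segment, label the connected components, then extract per-object features:

I = imread('coins.png');                   % sample image with distinct objects
bw = imbinarize(I, graythresh(I));         % segment the foreground by Otsu thresholding
bw = imfill(bw, 'holes');                  % fill holes inside the segmented coins
[labels, n] = bwlabel(bw);                 % assign a distinct label to each connected object
stats = regionprops(labels, 'Area', 'Centroid');   % per-object features for later classification
imshow(label2rgb(labels));                 % visualise the labelled objects in colour
fprintf('Found %d objects\n', n);
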

Types of images

The MATLAB toolbox supports four types of images, namely, grey-level images, binary images, indexed images and RGB images. A brief description of each image type is given below.

Grey-level images

Also referred to as monochrome images, these use 8 bits per pixel, where a pixel value of 0 corresponds to ‘black,’ a pixel value of 255 corresponds to ‘white,’ and intermediate values indicate varying shades of grey. A grey-level image is encoded as a 2D array of pixels, with 8 bits per pixel.
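A quick way to verify this representation in MATLAB, using the bundled 'cameraman.tif' image:

I = imread('cameraman.tif');   % bundled 8-bit greyscale image
class(I)                       % 'uint8': 8 bits per pixel
size(I)                        % a 2D array, e.g. 256-by-256
[min(I(:)) max(I(:))]          % extremes lie within the 0 (black) to 255 (white) range
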

Binary images

These images use 1 bit per pixel, where a 0 usually means ‘black’ and a 1 means ‘white.’ These are represented as a 2D array. Small size is the main advantage of binary images.
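In MATLAB, a simple comparison against a threshold produces exactly this representation:

I = imread('cameraman.tif');
bw = I > 128;                  % logical (binary) image: 0 means black, 1 means white
class(bw)                      % 'logical'
imshow(bw)
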
