Introduction
Among the various biometric traits, iris recognition has attracted considerable attention because it offers greater speed, simplicity, and accuracy than most other traits. Iris recognition relies on the unique patterns of the human iris to identify or verify the identity of an individual.
For iris recognition, an input image is captured using an infrared-sensitive CCD camera and a frame grabber. The eye is localized in this image using various image processing algorithms, the area of interest, i.e. the iris, is then detected within the eye, and its features are extracted. These features are encoded into a pattern that is stored in the database during enrollment and matched against the database during authentication.
Our iris recognition system is based on three novel algorithms. The first extracts circular and radial features using edge detection; in the second, features are extracted using Fourier transforms along the radial direction; and the third extracts features with Circular-Mellin operators. These operators are scale invariant and rotation invariant, and convolution with them yields shift-invariant features.
Methodology
The biometric trait iris - the colored circle that surrounds the pupil - contains many randomly distributed immutable structures, which make each iris distinct from every other. Like a fingerprint, the iris does not change with time. Since the 1990s, many researchers have worked on this problem and designed algorithms for iris recognition. Generally, iris identification is divided into four steps, namely localization, normalization, feature extraction, and matching. In localization, the inner and outer boundaries of the iris are extracted and only the iris pattern is retained. In the normalization step, various normalization algorithms handle differences in size, variation in illumination, and other factors. The main step is feature extraction, where a feature vector is formed from the ordered sequence of features extracted from various representations of the iris image. Finally, in the matching step, the feature vectors are classified using different thresholding schemes, such as Hamming distance, weight vector and winner selection, or a dissimilarity function, and verification and identification are carried out.
The first known algorithm for iris recognition is due to Daugman. It is based on Iris Codes generated using a 2-D Gabor wavelet, and the accuracy of the resulting recognition system is reported to exceed 99.9%. Another major contribution is due to Wildes, whose algorithm uses an isotropic band-pass decomposition derived from applying Laplacian of Gaussian filters to the image data. This algorithm explicitly models the upper and lower eyelids with parabolic arcs, whereas Daugman simply excludes the upper and lower portions of the image. The system recognizes individuals accurately and in little time.
Boles and Boashash have presented an algorithm based on zero-crossings, in which the zero-crossings of the wavelet transform are calculated at various resolution levels over concentric circles on the iris. The resulting one-dimensional (1-D) signals are then compared with model features using different dissimilarity functions. A similar system, based on a zero-crossing discrete dyadic wavelet transform representation, has also been presented and has shown a high level of accuracy.
Multi-resolution Independent Component Identification (M-ICA), which has good properties for representing signals in time-frequency, has been used to extract features from iris signals; the resulting accuracy is low because M-ICA gives poor class separability. Dargham et al. used self-organizing map networks to recognize iris patterns, obtaining an accuracy of around 83%. In another algorithm, by Li Ma et al., circular symmetry filters capture local texture information of the iris, which is then used to construct a fixed-length feature vector, and the nearest feature line algorithm is used for iris matching; the reported rates are a 0.01% false match rate and a 2.17% false non-match rate. Chen and Yuan have developed an algorithm for extracting iris features based on fractal dimension: the iris zone is partitioned into small blocks in which local fractal dimension features are computed as the iris code, and the patterns are finally matched using k-means and neural networks, giving an 8.2% False Accept Rate (FAR) and a 0% False Rejection Rate. Gabor filters and 2-D wavelet transforms were used by Wang et al. for feature extraction, with weighted Euclidean distance classification for identification; the algorithm is invariant to translation and rotation and tolerant to illumination, with a classification rate of 98.3% using Gabor filters and 82.51% using wavelets. Robert et al. have presented a new algorithm for localization and extraction of the iris: a combination of integro-differential operators with a Hough transform is used for localization, while the concept of instantaneous phase, or emergent frequency, is used for feature extraction.
The iris code is generated by thresholding both the models of emergent frequency and the real and imaginary parts of the instantaneous phase, and Hamming distance is used for matching; the false rejection rate obtained is 11%. Lim et al. have used the Haar wavelet transform to extract features from iris images: the transformation is applied four times to an image of size 450×60 to obtain an 87-bit feature vector, and the feature vectors are classified using a weight vector initialization and winner selection strategy, giving a recognition rate of around 98.4%. Two further algorithms, based partly on correlation analysis and partly on the median binary code of commensurable regions of the digitized iris image, have also been presented. Similarly, an algorithm for characterizing the eye-iris structure using statistical and spectral analysis of color iris images has been considered, which uses Wiener spectra to characterize iris patterns, and the human iris structure has been explained and classified using coherent Fourier spectra of the optical transmission. An efficient biometric security algorithm for iris recognition with high performance and high confidence has also been demonstrated. That system is based on an empirical analysis of the iris image, split into several steps using local image properties; it uses the wavelet transform for texture analysis together with knowledge of the general structure of the human iris, and has been implemented and tested on a dataset of 240 iris samples of varying contrast quality. The algorithms above use iris images captured under infrared lighting; according to the available literature, results on iris images without infrared lighting have not been reported. Similarly, little work has been done to de-noise iris images taken from a distance and to handle the various illumination problems.
Thus an iris recognition system can be divided into two modules: (i) a Database Preparation Module and (ii) a Verification Module. The Database Preparation Module consists of four sub-modules: (a) Enrollment, (b) Deletion, (c) Modification, and (d) View. The Verification Module is divided into two sub-modules: (a) Matching and (b) Decision.
Our Approach For Iris Recognition
The iris is gaining much attention due to its accuracy, reliability, and simplicity compared to other biometric traits. The human iris is an annular region between the pupil (generally the darkest portion of the eye) and the sclera, and it contains many interlacing minute characteristics such as freckles, coronas, stripes, furrows, and crypts. These minute patterns are unique to each individual, and their acquisition is non-invasive to the user. These properties make iris recognition a particularly promising solution for society. An iris recognition system is broadly divided into the following modules:
Iris Localization
Normalization
Feature Extraction & Matching
Iris Localization
The iris is localized by first finding the inner (pupil) boundary and then drawing concentric circles of different radii from the pupil center. The intensities lying on the perimeter of each circle are summed. Among the candidate iris circles, the circle with the maximum change in intensity with respect to the previously drawn circle is the outer iris boundary. Figure 1 shows an example of a localized iris image.
Figure 1: (a) Contrast Enhanced Image (b) Concentric circles of different radii (c) Localized Iris image
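The concentric-circle search described above can be sketched as follows. This is a minimal illustration, not our exact implementation: it assumes the pupil center and pupil radius have already been found, uses the mean intensity over each circle, and ignores eyelid occlusion.

```python
import numpy as np

def localize_iris(eye, pupil_center, r_pupil, r_max, n_samples=360):
    """Estimate the outer iris boundary radius.

    Sums (averages) the gray levels sampled on concentric circles of
    increasing radius and returns the radius at which the mean intensity
    jumps the most relative to the previous circle (iris/sclera edge).
    """
    cx, cy = pupil_center
    angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    means = []
    for r in range(r_pupil + 1, r_max):
        xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, eye.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, eye.shape[0] - 1)
        means.append(eye[ys, xs].mean())
    # largest change in circle intensity between consecutive radii
    jumps = np.abs(np.diff(np.array(means)))
    return r_pupil + 2 + int(np.argmax(jumps))
```

On a synthetic eye image with a dark pupil, mid-gray iris, and bright sclera, the returned radius lands at the iris/sclera transition to within pixel-quantization error.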
Iris Normalization
Localizing the iris delineates the annular iris portion from the rest of the image. The rubber sheet model suggested by J. Daugman takes into account pupil dilation and the iris appearing at different sizes in different images. To this end, the coordinate system is changed by unwrapping the iris and mapping all points within the iris boundary to their polar equivalents.
Figure 2. Iris Normalization (a) Localized iris image (b) Normalized iris image (c) Removing of eyelids from the strip
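The unwrapping step can be sketched as below. This is a simplified version of the rubber sheet model, assuming concentric pupil and iris circles and nearest-neighbor sampling (Daugman's model also handles non-concentric boundaries and interpolation):

```python
import numpy as np

def rubber_sheet(eye, pupil_center, r_pupil, r_iris,
                 radial_res=64, angular_res=360):
    """Map the annular iris region to a fixed-size rectangular strip.

    Rows of the strip correspond to radii from the pupil boundary to the
    iris boundary; columns correspond to angles from 0 to 2*pi.
    """
    cx, cy = pupil_center
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, radial_res)
    strip = np.empty((radial_res, angular_res), dtype=eye.dtype)
    for i, r in enumerate(radii):
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, eye.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, eye.shape[0] - 1)
        strip[i] = eye[ys, xs]  # nearest-neighbor sample along the circle
    return strip
```

Because the output size is fixed regardless of pupil dilation or image scale, codes computed from the strip are directly comparable across captures.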
Feature Extraction and Matching
In order to extract features in the spatial domain, we use a novel fusion of two approaches, viz. the Haar wavelet and the Circular-Mellin operator. Matching between the iris codes generated for enrollment and verification (by the Haar wavelet and the Circular-Mellin operator) is done using the Hamming distance. Each Hamming distance is compared with its corresponding threshold, and fusion is performed at the decision level using the conjunction rule.
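The matching and decision-level fusion can be illustrated as follows. This is a sketch under the stated scheme; the threshold values are hypothetical placeholders, and the generation of the two iris codes themselves is omitted:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes."""
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(a ^ b) / a.size

def conjunction_decision(hd_haar, hd_mellin, t_haar, t_mellin):
    """Decision-level fusion with the conjunction (AND) rule.

    Each matcher accepts when its Hamming distance is at or below its
    own threshold; the fused system accepts only if BOTH accept.
    """
    return (hd_haar <= t_haar) and (hd_mellin <= t_mellin)
```

The conjunction rule trades a higher false rejection rate for a lower false acceptance rate, since an impostor must now defeat both matchers simultaneously.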