Suppose that for a particular writer we take 10 pages to train the system (to build the database), and from each page, after preprocessing, we choose two blocks at random. Initially the block size is 256 x 256, but if the image is smaller than this, the block size is reduced automatically. Each block is then convolved with a Gabor filter under different parameter settings. The parameters are the frequency f (4, 8, 16, 32) and the orientation theta (0°, 45°, 90°, 135°). For each block we therefore get 16 Gabor output images (4 x 4 combinations). We then calculate the mean and standard deviation of the pixel values of each output image, which gives 32 features (means and standard deviations) per block. We write these to a file, say "database", one block per line. So for a particular writer we get 10 x 2 = 20 lines, each containing 32 values.
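To make the feature extraction concrete, here is a minimal Python sketch of the per-block computation. The original code is not shown on this page, so the use of skimage's gabor filter and the assumption that f is given in cycles per block width (hence the division by the block size) are ours:

    import numpy as np
    from skimage.filters import gabor

    FREQUENCIES = (4, 8, 16, 32)        # f, as listed above
    ORIENTATIONS = (0, 45, 90, 135)     # theta, in degrees

    def block_features(block):
        """Return the 32 features (a mean and a standard deviation for
        each of the 16 Gabor output images) of one grayscale block."""
        size = max(block.shape)
        features = []
        for f in FREQUENCIES:
            for theta_deg in ORIENTATIONS:
                # Convolve the block with the Gabor filter; we keep the
                # real part of the response as the "output image".
                real, _imag = gabor(block,
                                    frequency=f / size,  # assumption: f in cycles per block width
                                    theta=np.deg2rad(theta_deg))
                features.append(real.mean())
                features.append(real.std())
        return np.array(features)       # one line of the "database" file

Each call to block_features yields one 32-value line; writing 20 such lines (10 pages x 2 blocks) produces the "database" file for one writer.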
Finally, we calculate the mean nf(k) and the standard deviation nv(k) of each feature across those lines (2 x 32 values, where k denotes the k-th writer) and keep them in another file, say "final data"; these are used in the later calculations. The point of keeping the intermediate file "database" is to give the software more flexibility.
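A similarly hedged sketch of this second stage, assuming "database" holds one writer's 20 lines of 32 whitespace-separated values:

    import numpy as np

    def writer_statistics(database_path="database"):
        """Compute the per-feature mean nf(k) and standard deviation
        nv(k) over all blocks of one writer (2 x 32 values)."""
        rows = np.loadtxt(database_path)   # shape (20, 32)
        nf = rows.mean(axis=0)             # 32 feature means
        nv = rows.std(axis=0)              # 32 feature standard deviations
        return nf, nv

    nf, nv = writer_statistics()
    # Store both rows (2 x 32) in "final data" for the later matching step.
    np.savetxt("final data", np.vstack([nf, nv]))

Keeping "database" separate means these statistics can be recomputed, for example with a different block selection, without redoing the Gabor filtering, which is presumably the flexibility mentioned above.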