IJournals: International Journal of Software & Hardware Research in Engineering, ISSN 2347-4890, Volume 3, Issue 4, April 2015

Enhanced Algorithm for Tracking Number Plate from Vehicle Using Blob Analysis

Authors: Shashikumar Naubadkar1; Dr. A Meera2
PG Scholar1, ECE Department, BMSCE Bengaluru; Professor2, ECE Department, BMSCE Bengaluru
shashiln5051@gmail.com1; amira.ece@bmsce.ac.in2

ABSTRACT
This paper presents an enhanced algorithm to extract the text from vehicle number plates. The algorithm is based on the Prewitt edge detector, and text extraction is carried out by BLOB analysis. Experiments were performed on many number plates, and the results show that the algorithm overcomes problems with images where earlier methods failed to extract the text, either because the number plate could not be extracted from the grayscale image under certain luminance conditions or because of a problematic background. The extracted text is stored in a text file.

Keywords: number plate detection, BLOB analysis, text extraction, Prewitt edge detector

1. INTRODUCTION
Number plate extraction is a hotspot research area in the field of image processing. Many automated systems have been developed, each with its own advantages and disadvantages. Earlier methods assumed that images were captured from a fixed angle parallel to the horizon under varying luminance conditions, that the vehicle was stationary, and that images were captured at a fixed distance. [1] developed an algorithm, applied to car park systems to monitor and manage parking services, that is based on morphological operations and is used for number plate recognition; optical character recognition is then used to recognize the characters on the number plate. [2] proposed a methodology that finds the region of interest (ROI) using morphological processing and directional segmentation; the ROI is the area that includes the number plate, from which the alphanumeric characters are recognized.
2. PROPOSED ENHANCED ALGORITHM
An automated system is developed in MATLAB in which the image captured by the camera is converted to a grayscale image and then to a binary image for pre-processing. After conversion, the Prewitt edge detector is applied, then dilation is applied and unwanted holes in the image are filled. After dilation, blob analysis is applied, which filters out unwanted regions and noise from the image, and the image is segmented. After segmentation, each alphanumeric character on the number plate is extracted and then recognized with the help of template images of alphanumeric characters. Each alphanumeric character is stored in a file, and the whole number plate is extracted successfully.

Figure 1. Block diagram of the proposed algorithm

The enhanced algorithm uses the Prewitt edge detector [3] and blob analysis:
1. Convert the image into a monochrome image by thresholding.
2. Filter the image to remove noise, using a Gaussian lowpass filter.
3. Apply the Prewitt edge detector to the filtered image.
4. Apply appropriate morphological operations, i.e., dilation, to make clusters of text regions.
5. Apply blob analysis to segment and extract the text from the number plate.
6. Apply the segmented and extracted text image to OCR and convert it into a .txt file.

3. BLOB ANALYSIS
The Blob Analysis block is used to calculate statistics for labeled regions in a binary image. The block returns quantities such as the centroid, bounding box, label matrix, and blob count, and supports variable-size input and output signals. Feature extraction converts each BLOB into a few representative numbers, keeping the relevant information and ignoring the rest. Area of a BLOB: the number of pixels the BLOB consists of. This feature is often used to remove BLOBs that are too small or too big from the image.
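The pre-processing pipeline listed in Section 2 (thresholding is implicit; Gaussian smoothing, Prewitt edge detection, and dilation are shown) can be sketched as follows. This is an illustrative Python sketch, not the authors' MATLAB implementation; kernel sizes and the edge threshold are assumptions.

```python
import numpy as np

# Prewitt kernels for horizontal and vertical gradients
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T
# 3x3 Gaussian lowpass kernel (step 2 of the algorithm)
GAUSSIAN_3x3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

def convolve2d(img, kernel):
    """Naive zero-padded 2-D cross-correlation (kernel not flipped; fine here
    because we only use the Gaussian, which is symmetric, and the Prewitt
    gradient magnitude, which is sign-insensitive)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def prewitt_edges(gray, thresh=1.0):
    """Step 3: Prewitt gradient magnitude, thresholded to a binary edge map."""
    gx = convolve2d(gray, PREWITT_X)
    gy = convolve2d(gray, PREWITT_Y)
    return np.hypot(gx, gy) > thresh

def dilate(binary, iterations=1):
    """Step 4: dilation with a 3x3 structuring element, merging nearby
    character strokes into clusters of text regions."""
    out = binary.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1)
        grown = np.zeros_like(out)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                grown[i, j] = padded[i:i + 3, j:j + 3].any()
        out = grown
    return out
```

A typical call chain on a grayscale image `g` would be `dilate(prewitt_edges(convolve2d(g, GAUSSIAN_3x3)))`, after which blob analysis (Section 3) is applied.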
Figure 2. Blob Analysis block

Count is a scalar giving the number of labeled regions in each image. BW is the vector or matrix that represents a binary image. Area is a vector giving the number of pixels in each labeled region. BBox (bounding box) is an M-by-4 matrix of [x y width height] bounding box coordinates, where M is the number of blobs and [x y] is the upper-left corner of the bounding box. For example, with two blobs the block outputs a 2-by-4 matrix at the BBox port, where each row's x and y give the upper-left corner of that blob's bounding box and w and h give its width and height.

Bounding box of a BLOB: the minimum rectangle that contains the BLOB (see Figure 4). It is found by going through all pixels of a BLOB and finding the pixels with the minimum x-value, maximum x-value, minimum y-value, and maximum y-value, respectively. From these values the width of the bounding box is w = Xmax − Xmin and the height is h = Ymax − Ymin. A bounding box can be used as a region of interest (ROI).

Figure 4. Bounding box of the letter G

First the different objects in the image must be separated, and then each object must be evaluated to decide whether it is the one we are looking for. The former process is known as BLOB extraction and the latter as BLOB classification. A BLOB (Binary Large OBject) consists of a group of connected pixels; the term "large" indicates that only objects of a certain size are of interest and that small binary objects are usually noise. A number of different algorithms exist for finding the BLOBs. One of these is the grassfire algorithm; we use 4-connectivity for simplicity. The recursive grassfire algorithm starts in the upper-left corner of the binary image and scans the entire image from left to right and from top to bottom.

Bounding box ratio of a BLOB: the height of the bounding box divided by its width.
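The grassfire labeling and the per-blob Area/BBox statistics described above can be sketched as follows. This is an illustrative Python sketch (an iterative stack replaces the recursion to avoid recursion-depth limits); it is not the paper's MATLAB code.

```python
import numpy as np

def grassfire_label(binary):
    """Label 4-connected components, scanning the image left-to-right and
    top-to-bottom from the upper-left corner, as in the grassfire algorithm."""
    h, w = binary.shape
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                count += 1
                stack = [(y, x)]  # the spreading "fire front"
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and binary[cy, cx] and labels[cy, cx] == 0:
                        labels[cy, cx] = count
                        # 4-connectivity: up, down, left, right neighbours
                        stack += [(cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)]
    return labels, count

def blob_features(labels, count):
    """Per-blob Area and [x y width height] bounding box with upper-left
    corner [x y]. Note the +1 so a single-pixel blob has width 1; the paper
    writes w = Xmax - Xmin."""
    feats = []
    for lab in range(1, count + 1):
        ys, xs = np.nonzero(labels == lab)
        w = xs.max() - xs.min() + 1
        h = ys.max() - ys.min() + 1
        feats.append({"area": len(xs), "bbox": (xs.min(), ys.min(), w, h)})
    return feats
```

Filtering out blobs whose area or bounding-box ratio falls outside the expected range of plate characters is then a simple list comprehension over `blob_features(...)`.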
This feature indicates the elongation of the BLOB, i.e., whether the BLOB is long, high, or neither.

Compactness of a BLOB: the ratio of the BLOB's area to the area of its bounding box. This can be used to distinguish compact BLOBs from non-compact ones, for example a fist versus a hand with outstretched fingers.

Compactness = Area of BLOB / (Width × Height)

Figure 3. 4-connectivity

BLOB extraction: the purpose of BLOB extraction is to isolate the BLOBs (objects) in a binary image.

4. OPTICAL CHARACTER RECOGNITION
OCR, in principle, classifies optical patterns corresponding to alphanumeric or other characters. OCR electronically converts a scanned image of text, printed or handwritten, into a machine-editable text format. In simple terms, it refers to the conversion of images of handwritten, typewritten, or printed text, usually captured by a scanner or a suitable camera, into machine-editable form [4].

Initially a library is created with bitmaps of 26 uppercase letters, 26 lowercase letters, and 10 digits. These bitmaps are binary replicas of the alphanumeric characters, each stored in matrix form, and all of equal size and dimension. Figure 5 shows the bitmap stored in the library for the letter 'X'.

Figure 5. Bitmap image corresponding to the letter 'X'

5. SIMULATION RESULTS
The algorithm was implemented on the MATLAB platform. Figure 6 shows the result of the proposed algorithm.

Figure 6. Text extraction from the number plate: (a) original image, (b) histogram of the original image, (c) extracted text, (d) extracted text segmented, (e) extracted text in a .txt file.
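The compactness measure and the bitmap-library template matching described in Sections 3 and 4 can be sketched as follows. This is an illustrative Python sketch with a hypothetical toy 3x3 "library" of two characters; the paper's MATLAB library holds full-size bitmaps for all 62 alphanumeric characters.

```python
import numpy as np

def compactness(blob_area, bbox_w, bbox_h):
    """Compactness = area of BLOB / (width * height); 1.0 means the BLOB
    completely fills its bounding box."""
    return blob_area / float(bbox_w * bbox_h)

def match_template(glyph, library):
    """Classify a binary glyph as the library character whose bitmap agrees
    with it on the most pixels. Assumes glyph and bitmaps share one size,
    as the paper's equal-size library does."""
    scores = {ch: int(np.sum(glyph == bmp)) for ch, bmp in library.items()}
    return max(scores, key=scores.get)

# Hypothetical 3x3 bitmaps standing in for the paper's character library.
I_BMP = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
O_BMP = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
LIBRARY = {"I": I_BMP, "O": O_BMP}
```

In the full system, each segmented character blob would be resized to the library's bitmap dimensions before calling `match_template`, and the winning characters concatenated into the output .txt file.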
In Figure 6, (a) is the original color image of a car; (b) is the histogram of the color image, whose right-hand side has higher frequency, representing the light and pure-white areas of the image; (c) is the text extracted from the original image; (d) is the extracted text after segmentation; and (e) is the result of applying OCR, saved in a text file.

Figure 7 shows a comparison of the proposed algorithm on a complex number plate from which the text was not extracted by [5]. The proposed algorithm is successful in extracting the text from such complex number plates.

Figure 7. Comparison of the proposed algorithm on a complex number plate: (a) original image, (b) text not extracted by [5], (c) text extracted by the proposed algorithm.

6. CONCLUSION
This paper has implemented an enhanced algorithm for text extraction from complex number plates and tested it on nearly 20 images. The results show that the proposed algorithm overcomes the problems of the existing methods.

7. REFERENCES
[1] J. S. Chittode and R. Kate, "Number Plate Recognition Using Segmentation," International Journal of Engineering Research & Technology, Vol. 1, Issue 9, November 2012.
[2] C. N. Paunwala and S. Patnaik, "A Novel Multiple License Plate Extraction Technique for Complex Background in Indian Traffic Conditions," International Journal of Image Processing, Vol. 4, Issue 2, pp. 106-118.
[3] Amit Choksi, Nihar Desai, and Ajay Chauhan, "Text Extraction from Natural Scene Images using Prewitt Edge Detection Method," International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, Issue 12, December 2013.
[4] Disha Bhattacharjee and Deepti Tripathi, "A Novel Approach for Character Recognition," International Journal of Engineering Trends and Technology (IJETT), Vol. 10, No. 6, April 2014.
[5] Manisha Rathore and Saroj Kumari, "Tracking Number Plate from Vehicle Using MATLAB," International Journal in Foundations of Computer Science & Technology (IJFCST), Vol. 4, No. 3, May 2014.
[6] Neha Gupta and V. K. Banga, "Image Segmentation for Text Extraction," 2nd International Conference on Electrical, Electronics and Civil Engineering (ICEECE 2012), Singapore, April 2012.
[7] Amit Choksi, Nihar Desai, Ajay Chauhan, Vishal Revdiwala, and Kaushal Patel, BVM Engineering College, Anand, India, "Text Extraction from Natural Scene Images using Prewitt Edge Detection Method," International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, Issue 12, December 2013.
[8] C. P. Sumathi, T. Santhanam, and G. Gayathri Devi, "A Survey on Various Approaches of Text Extraction in Images," International Journal of Computer Science & Engineering Survey (IJCSES), Vol. 3, No. 4, August 2012.
[9] Yungang Zhang and Changshui Zhang, "A New Algorithm for Character Segmentation of License Plate."
[10] Hyung Il Koo and Duck Hoon Kim, "Scene Text Detection via Connected Component Clustering and Nontext Filtering," IEEE Transactions on Image Processing, Vol. 22, No. 6, June 2013.

© 2015, IJournals. All Rights Reserved. www.ijournals.in