Reading is essential in today’s society. Printed text is everywhere: reports, receipts, bank statements, medicine bottles, and so on. Optical aids, video magnifiers, and screen readers can help blind users and those with low vision access documents, but few devices provide good access to common hand-held objects such as product packages and text-printed containers such as prescription medication bottles. The device discussed in this computer science seminar topic is Portable Camera-Based Assistive Text and Product Label Reading From Hand-Held Objects for Blind Persons.
The ability of people who are blind or have significant visual impairments to read printed labels and product packages will enhance independent living and foster economic and social self-sufficiency. Some reading-assistive systems, such as pen scanners, can be employed in these and similar situations. Such systems integrate OCR (optical character recognition) software to scan and recognize text, and some include voice output.
A number of portable reading assistants have been designed specifically for the visually impaired. KReader Mobile, for example, runs on a cell phone and allows the user to read mail, receipts, and many other documents. However, the document to be read must be nearly flat, placed on a clear, dark surface (i.e., a noncluttered background), and contain mostly text; KReader Mobile accurately reads black print on a white background. Furthermore, these systems in most cases require a blind user to manually localize the areas of interest and text regions on the objects.
On hand-held objects, text information can appear in multiple scales, fonts, colors, and orientations. To help blind persons read text from these kinds of objects, we have conceived a camera-based assistive text reading framework that tracks the object of interest within the camera view and extracts printed text information from it.
Because the hand-held object can appear anywhere in the camera view, we use a camera with a sufficiently wide angle to accommodate users with only approximate aim. This often results in other text objects also appearing in the camera’s view. To extract the hand-held object from the camera image, we develop a motion-based method that obtains a region of interest (ROI) for the object. Regarding text orientation, we assume that text strings in scene images maintain approximately horizontal alignment.
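The paper does not spell out the motion-based ROI step, but its core idea can be sketched with simple frame differencing: pixels that change between consecutive frames mark the moving hand-held object, and their bounding box gives the ROI. The function name and threshold below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def motion_roi(prev_frame, curr_frame, thresh=25):
    """Return a bounding box (x0, y0, x1, y1) around moving pixels.

    Frames are 2-D grayscale uint8 arrays; pixels whose absolute
    difference between frames exceeds `thresh` count as motion.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None  # no motion detected
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

# Example: a static background, then a "hand-held object" entering the view.
prev = np.zeros((100, 100), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 30:70] = 200  # object appears in this region
print(motion_roi(prev, curr))  # (30, 40, 70, 60)
```

In practice the mask would be cleaned with morphological operations before taking the bounding box, so that camera noise does not inflate the ROI.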
Many algorithms have been developed for localization of text regions in scene images.
Rule-based algorithms apply pixel-level image processing to extract text information based on predefined text-layout features such as character size, aspect ratio, edge density, character structure, and color uniformity of the text string. One such method analyzes edge pixel density with the Laplacian operator and employs maximum gradient differences to identify text regions.
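The edge-density cue can be sketched as follows: apply a discrete Laplacian, then score image tiles by their mean absolute response, since text regions are dense in strokes and therefore in edges. This is a simplified illustration of the rule, not the cited method; the window size and synthetic image are assumptions.

```python
import numpy as np

def laplacian_edge_density(gray, win=15):
    """Score non-overlapping win x win tiles by mean |Laplacian| response.

    `gray` is a 2-D float array; text-like regions, rich in strokes,
    produce high edge density and therefore high tile scores.
    """
    # 4-neighbour discrete Laplacian (np.roll wraps at the borders,
    # which is harmless here because the borders are flat).
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    edge = np.abs(lap)
    h, w = gray.shape
    density = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            density[i, j] = edge[i*win:(i+1)*win, j*win:(j+1)*win].mean()
    return density

# Synthetic image: flat background plus a high-frequency striped patch
# standing in for text strokes.
img = np.zeros((60, 60))
img[15:30, 15:45] = np.tile([0.0, 1.0], (15, 15))
d = laplacian_edge_density(img, win=15)
print(d[1, 1] > d[0, 0])  # True: the "text" tile scores far higher
```

A real pipeline would threshold this density map and then verify candidates with the other rules (aspect ratio, color uniformity) before passing regions to OCR.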
In color-based approaches, text segmentation is performed through a Gaussian mixture model that computes a confidence value for candidate text regions. This type of algorithm tries to define a universal feature descriptor of text. Learning-based algorithms, by contrast, model text structure and extract representative text features to build text classifiers.
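The Gaussian-mixture confidence can be sketched directly from the mixture density: each pixel's confidence is the weighted sum of Gaussian densities for the learned text-color components. The function below is a minimal numpy illustration with hand-picked toy parameters, not the model fitted in the cited work (which would learn the means, covariances, and weights from data, e.g. via EM).

```python
import numpy as np

def text_color_confidence(pixels, means, covs, weights):
    """Mixture-density confidence that RGB pixels have a text color.

    pixels: (N, 3); means: (K, 3); covs: (K, 3, 3); weights: (K,).
    Returns (N,) densities, which can be thresholded to label text pixels.
    """
    conf = np.zeros(len(pixels))
    for j in range(len(weights)):
        d = pixels - means[j]
        inv = np.linalg.inv(covs[j])
        norm = 1.0 / np.sqrt((2 * np.pi) ** 3 * np.linalg.det(covs[j]))
        expo = -0.5 * np.einsum('ni,ij,nj->n', d, inv, d)
        conf += weights[j] * norm * np.exp(expo)
    return conf

# Toy model: one dark "ink" component and one light "background" component.
means = np.array([[20.0, 20.0, 20.0], [230.0, 230.0, 230.0]])
covs = np.stack([np.eye(3) * 100.0, np.eye(3) * 100.0])
dark_pixel = np.array([[22.0, 18.0, 21.0]])
ink = text_color_confidence(dark_pixel, means[:1], covs[:1], np.array([1.0]))
bg = text_color_confidence(dark_pixel, means[1:], covs[1:], np.array([1.0]))
print(ink[0] > bg[0])  # True: the dark pixel fits the ink model far better
```

Thresholding the per-pixel confidence yields a binary text mask that downstream OCR can consume.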
Please go through the attached PPT for complete info.