Blind Navigator


  • Sachin Bharambe (3rd year BTech in Electronics and Communication, VNIT Nagpur)
  • Harsharanga (3rd year BTech in Computer Science, VNIT Nagpur)
  • Rohan Thakker (3rd year BTech in Mechanical, VNIT Nagpur)


According to a survey conducted by the WHO, 285 million people worldwide are visually impaired: 39 million are blind and 246 million have low vision. The 39 million who are completely blind cannot be cured, so it is up to us to use technology for their betterment.

Project Description:

The aim of the project is to build an embedded device that gives a blind person details of his environment (gathered using a camera and range sensors) in the form of acoustic and haptic feedback. At the same time, it can give the user instant directions to his destination, so that he does not need to depend on others for help while walking. Another aim is to make the device affordable as well as small in size. Android devices have become very common and cheap in our country, so we will use Android to find the path to the destination, calculate the position and orientation of the person, and generate voice commands. But Android alone cannot be used by the blind to maneuver about, due to the presence of local traffic and obstacles. We will therefore build an embedded device connected to the Android phone over Bluetooth. The embedded device will be made in two versions:-

Version 1:-

This version will consist of an array of ultrasonic range finders mounted on the person's head or on a belt (which can be worn with pants), to detect the distance and approximate position of obstacles.

Version 2:-

A camera will be mounted on the person's head (or designed to be worn like goggles). The camera feed will be given to an embedded board, where details of the environment will be extracted using image processing (face recognition, Canny edge detection, and color detection algorithms). The BeagleBoard will also receive input from the range sensors used in Version 1 and will create a map of the environment using the data from the camera and the range sensors.

Then follows the main part of the project: communicating the details of the environment to the blind. We will do this by giving haptic and acoustic feedback, using haptic actuators and speakers or headphones.
Based on the work involved, we have divided the project into the following three parts, on which we will work simultaneously:-
1) Making the Embedded Device
2) Making the Android App
3) Making the feedback actuators.

Here is a detailed description of each part:-
1) The Embedded Device:-

Version 1:-

This version of the device will consist of the ultrasonic range sensors. We want the device to be comfortable and stylish so that users don't feel embarrassed or bad while wearing it; hence we will mount the sensors on a belt that the blind can wear. The range sensor outputs will be fed to a microcontroller such as the MSP430 through its ADC, and based on the measured values we will generate haptic feedback.
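As a rough illustration of this processing chain, the sketch below converts a raw ADC sample into a distance and then a haptic intensity. The calibration constants (counts per cm, maximum range) are assumptions that would have to be tuned for the actual sensor and reference voltage:

```c
#include <assert.h>

#define ADC_MAX           1023  /* MSP430 10-bit ADC full scale          */
#define ADC_COUNTS_PER_CM 2     /* assumed calibration constant         */
#define MAX_RANGE_CM      400   /* beyond this we treat as "no obstacle" */

/* Distance in cm from a raw ADC sample (assumes a linear analog output). */
unsigned adc_to_cm(unsigned adc)
{
    if (adc > ADC_MAX)
        adc = ADC_MAX;
    return adc / ADC_COUNTS_PER_CM;
}

/* Haptic level: 0 (off, obstacle far) .. 255 (strongest, obstacle near). */
unsigned char haptic_level(unsigned distance_cm)
{
    if (distance_cm >= MAX_RANGE_CM)
        return 0;
    return (unsigned char)(255u - (distance_cm * 255u) / MAX_RANGE_CM);
}
```

On the real MSP430 the ADC sample would come from an interrupt-driven conversion and the level would feed a PWM output, but the mapping itself stays this simple.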

Version 2:-

This version will consist of a camera interfaced with a BeagleBoard. Using real-time image processing algorithms (face recognition, edge detection, color detection) we will extract further details about the environment. The purpose of this version is to give the blind more information than just how far or close an obstacle is; it is dedicated not to navigation but to building a better idea of the surroundings.
The camera will be small and designed so that it can be worn like goggles or a headband, in order to make it more comfortable.
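On the BeagleBoard the edge detection itself would be done by OpenCV's Canny implementation; the toy function below only illustrates the gradient step that underlies it, computing the Sobel magnitude at one pixel of a grayscale image:

```c
#include <stdlib.h>

/* Sobel gradient magnitude at pixel (x, y) of a row-major grayscale
 * image that is w pixels wide. (x, y) must not lie on the border.
 * A large value means a strong edge passes through the pixel. */
int sobel_mag(const unsigned char *img, int w, int x, int y)
{
    const unsigned char *p = img + y * w + x;
    /* Horizontal kernel: left column -1,-2,-1; right column +1,+2,+1. */
    int gx = -p[-w - 1] - 2 * p[-1] - p[w - 1]
             + p[-w + 1] + 2 * p[1] + p[w + 1];
    /* Vertical kernel: top row -1,-2,-1; bottom row +1,+2,+1. */
    int gy = -p[-w - 1] - 2 * p[-w] - p[-w + 1]
             + p[w - 1] + 2 * p[w] + p[w + 1];
    return abs(gx) + abs(gy);  /* L1 approximation of the magnitude */
}
```

A vertical black-to-white boundary gives a large gx and zero gy; a flat region gives zero for both, which is why thresholding this magnitude picks out edges.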

2) Android App:-

The Google Maps API will be used to obtain directions to the destination once it is selected by the user. The directions will then be updated using the user's current location and orientation, and communicated to the user by voice using the Voice Commands API.

Obtaining user location and orientation:

Knowing where the user is allows the application to be smarter and deliver better information. When developing a location-aware application for Android, we can use GPS and Android's Network Location Provider to acquire the user's location. Although GPS is the most accurate, it only works outdoors, quickly consumes battery power, and doesn't return a location as quickly as users want. Android's Network Location Provider determines the user's location from cell tower and Wi-Fi signals; it works both indoors and outdoors, responds faster, and uses less battery power. To obtain the user's location in our application, we can combine GPS, the Network Location Provider, and Wi-Fi. The orientation will be obtained from the magnetic compass (magnetometer) inside the Android device.
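One simple way to combine the providers is to prefer the most accurate recent fix. The struct and policy below are our own sketch, modelled loosely on the accuracy and age fields Android reports for each fix:

```c
typedef struct {
    double lat, lon;    /* position in degrees                    */
    float  accuracy_m;  /* reported error radius of the fix       */
    long   age_ms;      /* time elapsed since the fix was taken   */
} fix_t;

#define MAX_AGE_MS 30000L  /* assumed staleness cutoff: 30 seconds */

/* Return the index of the best usable fix (smallest accuracy radius
 * among fixes that are not stale), or -1 if none qualifies. */
int best_fix(const fix_t *f, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (f[i].age_ms > MAX_AGE_MS)
            continue;
        if (best < 0 || f[i].accuracy_m < f[best].accuracy_m)
            best = i;
    }
    return best;
}
```

So a fresh GPS fix with a 5 m radius beats a network fix with a 40 m radius, but an old GPS fix is ignored in favour of whatever is current.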

3) Haptic and acoustic feedback
As mentioned in the Android app section, the voice commands for the GPS directions will be given to the blind from the mobile. Apart from that, to give feedback about the local environment we will use haptics: we plan to attach very small DC motors to different parts of a glove, and driving the motors at different speeds will generate distinguishable vibrations.
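A minimal sketch of the driving logic, assuming one motor per sensor zone (e.g. left, centre, right) and a PWM duty cycle that grows as the obstacle gets closer; the zone count and distance thresholds are illustrative placeholders to be tuned:

```c
#define N_ZONES    3     /* e.g. left, centre, right sensors      */
#define PWM_PERIOD 1000  /* timer counts per PWM period           */
#define NEAR_CM    30    /* full vibration at or below this range */
#define FAR_CM     200   /* no vibration at or beyond this range  */

/* PWM duty in timer counts (0..PWM_PERIOD) for a measured distance,
 * interpolated linearly between the near and far thresholds. */
unsigned duty_for_distance(unsigned cm)
{
    if (cm <= NEAR_CM)
        return PWM_PERIOD;
    if (cm >= FAR_CM)
        return 0;
    return (unsigned)((FAR_CM - cm) * (unsigned long)PWM_PERIOD
                      / (FAR_CM - NEAR_CM));
}

/* Refresh every motor's duty from the latest per-zone readings. */
void update_motors(const unsigned cm[N_ZONES], unsigned duty[N_ZONES])
{
    for (int i = 0; i < N_ZONES; i++)
        duty[i] = duty_for_distance(cm[i]);
}
```

Because each zone maps to a fixed motor, which finger vibrates tells the user where the obstacle is, and how strongly it vibrates tells him how close it is.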

The red shaded region shows the position of the haptic actuators.
For acoustic feedback about the local environment, we will use a buzzer that generates sounds at a frequency based on the distance of the object from the blind.
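The distance-to-pitch mapping can be as simple as the sketch below, where the pitch rises as the obstacle gets closer and the buzzer falls silent when nothing is in range. The frequency band is an assumption that would be tuned by listening tests:

```c
#define F_MIN_HZ 200u   /* pitch for the farthest detectable obstacle */
#define F_MAX_HZ 2000u  /* pitch for an obstacle right in front       */
#define RANGE_CM 400u   /* sensor range; beyond this, stay silent     */

/* Buzzer frequency in Hz for a measured distance; 0 means silence.
 * The PWM timer period would then be set to timer_clock / frequency. */
unsigned buzzer_freq_hz(unsigned cm)
{
    if (cm >= RANGE_CM)
        return 0;
    return F_MIN_HZ + ((RANGE_CM - cm) * (F_MAX_HZ - F_MIN_HZ)) / RANGE_CM;
}
```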


The above block diagram shows the working of the parts.

Initially, the destination point is selected on the Android device. The Android app then calculates the coordinates of the path along which the user needs to travel.
The Android device calculates the best approximation of the blind user's location coordinates using the GPS, Wi-Fi, and Cell-ID inputs and compares it with the path that needs to be followed. Based on this, it gives voice commands to the blind through the speakers of the Android device (giving the user the freedom to use headphones if desired).
Simultaneously, the embedded device looks for local obstacles with the camera and the range sensors and gives feedback to the blind in the form of variable-frequency sound from the buzzer (generated using PWM) and vibrations on the hand glove created by the motors.
We will be using motor drivers, voltage regulators, op-amps, range sensors, and visual sensors as the analog devices from TI. Apart from this, we will also use an ARM processor board running Linux for the embedded device, on which the image processing will be done with OpenCV.
The feedback actuators described here are just initial ideas. We will first test how comfortable they are, and if they are not up to requirements we will look for better substitutes, as we don't want to compromise on the comfort of the user.

This project is to be submitted for participation in the Texas Instruments Analog Design Contest 2012 and will be completed by 26th January, 2013.

The device will be made using an ARM Cortex-A8 development board running Linux, worth $199, funded by Texas Instruments.
