V.A.V.I: Voice Assistance for the Visually Impaired.

Pranshur Goel
Nov 18, 2020

Blindness and vision impairment affect at least 2.2 billion people around the world, making improvements in assistive technology vital. Globally, around 39 million people are blind, and one of their major challenges is using modern technology such as Android phones. In today's world, people want to be self-sufficient, so we are trying to help visually impaired people use a mobile phone independently, without anyone else's help. We also want to give blind people an overall sense of their environment by helping them know what is in their surroundings.

The Idea:

My teammates and I were contemplating what project to build this semester. Although we came across a wide variety of project ideas on topics such as machine learning, we wanted to build something innovative that could help society. Eventually, we found a project that helped visually impaired people walk along a preset path using a phone's camera. We started thinking about how that idea could be extended, and after much brainstorming, we arrived at our current project.

The Solution:

We thought about how, in today's world, a mobile phone is essential for staying connected, calling in a time of need, or asking for help in an emergency. This is just as true for a blind person. We wanted to give them a tool with which they could effortlessly use their phones and also get a feel for their surroundings through object detection. We also wanted to help them identify currency notes and the total amount they had in hand. My teammates and I feel we have achieved these objectives, and that this will be of tremendous help to a visually impaired person.

The Journey:

The team used an agile approach. First, we researched what we would need to complete the project, and the work was discussed and distributed amongst us. Then we spent time learning the different tools involved. After a few days, we started working in Android Studio and learned how to build a machine learning model with TensorFlow Lite.
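As a rough illustration of what such a detection model's output looks like: an SSD-style TensorFlow Lite detector typically returns parallel arrays of class indices and confidence scores, which an app can turn into a spoken description. The label map, threshold, and function below are hypothetical, not our exact code:

```python
# Hedged sketch: filtering raw detector output into a spoken sentence.
# A typical SSD-style TensorFlow Lite detection model returns parallel
# arrays of class indices and confidence scores; the exact tensor layout
# depends on the model, so this is an illustration only.

CONFIDENCE_THRESHOLD = 0.5  # keep only reasonably confident detections

# Hypothetical label map (a real model ships with its own labels file).
LABELS = {0: "person", 1: "bicycle", 2: "car", 3: "chair"}

def detections_to_speech(class_ids, scores, threshold=CONFIDENCE_THRESHOLD):
    """Turn parallel (class, score) arrays into a short spoken sentence."""
    names = [LABELS.get(c, "object")
             for c, s in zip(class_ids, scores) if s >= threshold]
    if not names:
        return "Nothing detected nearby."
    return "I can see: " + ", ".join(names) + "."

# Two confident detections; the car (0.34) falls below the threshold.
print(detections_to_speech([0, 2, 3], [0.91, 0.34, 0.77]))
# → I can see: person, chair.
```

The resulting sentence can then be handed to the phone's text-to-speech engine.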

We set up automatic Bluetooth connection and completed the front-end design of the application. The Raspberry Pi was configured, and we began collecting datasets and labels to train the object detection model. The backend code in the Android app was completed; it creates a Bluetooth terminal in the background to send data to and receive data from the Raspberry Pi. A working object detection model was built with passable accuracy. The channel between the Raspberry Pi and the app was set up, and data could be sent and received on both ends. The remaining features, such as calling, sending an emergency alert, the gallery, and the login page, were completed at the end.
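The Bluetooth terminal described above is essentially a serial stream between the app and the Pi. As a minimal sketch of that message flow, the snippet below uses a local TCP socket to stand in for the Bluetooth RFCOMM link; the "ack" reply and the message text are illustrative, not our actual protocol:

```python
# Hedged sketch: line-delimited messages over a stream socket, as a
# stand-in for the Bluetooth serial channel between app and Raspberry Pi.
import socket
import threading

def serve_once(server_sock):
    """Accept one client and acknowledge one newline-terminated message."""
    conn, _ = server_sock.accept()
    with conn, conn.makefile("rw") as stream:
        for line in stream:
            stream.write("ack:" + line)  # acknowledge the message
            stream.flush()
            break  # one round-trip is enough for the sketch

server = socket.socket()
server.bind(("127.0.0.1", 0))  # OS picks a free port
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The "app" side: send a detected label, read the acknowledgement.
client = socket.create_connection(server.getsockname())
with client, client.makefile("rw") as stream:
    stream.write("detected: person\n")  # e.g. a label from the detector
    stream.flush()
    reply = stream.readline().strip()
print(reply)  # → ack:detected: person
```

In the real project, the Android app opens a Bluetooth socket instead of a TCP one, but the read/write loop is the same shape.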

Remarks:

My team and I had a lot of fun building this project. It is one we would use in our day-to-day lives, and we look forward to creating more such interesting projects and applications.

Team:

Yash Dwivedi, Pranshur Goel, and Manika Mishra.

https://twitter.com/yash_dwi/status/1328557881378054144?s=08
