Who are we and what is A.Eye-Drive?
We are Imperial College graduate engineers committed to empowering everyone with independent mobility, regardless of their level of disability. A.Eye-Drive is a wheelchair interface built from low-cost sensors and the AI algorithms used in self-driving cars. It can be added to any existing wheelchair, enabling people with any level of disability to navigate seamlessly and independently.
What is wrong with the existing technologies?
As the population ages, the prevalence of neurodegenerative diseases will increase, creating a growing population in need of assistance and care. An estimated 1.85% of the world population currently requires a wheelchair, and this percentage is projected to rise. For people with severe disabilities (full-body paralysis, ALS, multiple sclerosis), no available wheelchair control interface enables independent navigation. For people with less severe disabilities, the joystick has been the primary navigation solution; however, users often report accidents, difficulty maneuvering, and a constant need to watch for obstacles and floor conditions. For manual wheelchair users, travelling long distances is arduous, especially in rain or snow.
We believe that wheelchair navigation should be as easy as walking is for an able-bodied person.
How does A.Eye-Drive work?
A.Eye-Drive combines eye-tracking technology with the AI algorithms used in self-driving cars. The sensor network tracks the user's eye movements and decodes the intended destination within the user's visual scene. The self-driving sensors map the environment in real time and recognise every object and condition in the proximity of the wheelchair. The wheelchair can therefore identify the destination or object the user wants to reach and autonomously take the user there without any further command. Meanwhile, the wheelchair continuously monitors the environment for dynamic obstacles and hazardous conditions and navigates around them without any input from the user. So if a user wants to leave a room, he/she winks at the door; the wheelchair understands this command as "get me out of the room" and autonomously navigates the user to the door.
Therefore, our system:
- Reduces the physical effort and attention the user needs to navigate.
- Improves safety by recognising and avoiding hazardous conditions and objects.
- Frees the user from interacting with the interface, so he/she can enjoy the surroundings.
- Adapts to any level of disability.
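To make the gaze-to-goal step above concrete, here is a minimal sketch of how an eye-tracking fixation might be matched to a detected object to produce a navigation goal. All names, coordinates, and the distance threshold are illustrative assumptions for this sketch, not the actual A.Eye-Drive implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class DetectedObject:
    """An object recognised by the perception stack (hypothetical type)."""
    label: str   # e.g. "door", "table"
    x: float     # position in the scene frame (metres, illustrative)
    y: float

def decode_intent(gaze_x, gaze_y, objects, max_dist=0.5):
    """Return the detected object closest to the user's gaze point,
    or None if nothing lies within max_dist (no clear intent)."""
    best, best_d = None, max_dist
    for obj in objects:
        d = math.hypot(obj.x - gaze_x, obj.y - gaze_y)
        if d < best_d:
            best, best_d = obj, d
    return best

# Example: the user fixates near the door, so the door becomes the goal.
scene = [DetectedObject("door", 2.0, 0.1), DetectedObject("table", 0.5, 1.2)]
goal = decode_intent(1.9, 0.2, scene)
```

In a real system the goal would then be handed to a path planner that drives the wheelchair while avoiding obstacles; this sketch covers only the intent-decoding idea.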
Why do we need more funds?
We have been working with our disabled pilots from day one of prototyping, listening to their needs to constantly improve the prototype. We received a $50,000 prize from the Nesta Mobility Unlimited Challenge and the Toyota Mobility Foundation for our work [https://mobilityunlimited.org/people/discoveryawardees]. With this initial funding, we drove the development of the prototype shown in the video. Since then, we have been selected as a Finalist for the AbilityNet Tech4Good Inclusive Design Award.
However, we need more funding to continue technical development with our end users, carry out clinical trials to test our prototype, and acquire the necessary certifications from regulatory bodies. We believe that, with your support, we can bring this technology to the millions of people who currently depend on someone else, and enable them to enjoy their lives independently.