Blind Prosthesis

Michael Frank Deering: AI: Blind Prosthesis

Institutions: Smith-Kettlewell Institute of Visual Sciences, U.C. Berkeley

Years of Research: 1977-1982


This was a project to build a sophisticated computer vision system that could be used as a mobility aid for the blind in the environment of city sidewalks. Dubbed “the seeing-eye computer” by the press, it was a battery-powered 68000 system in 1981 that used a 64×64 pixel digital video camera to scan the area in front of a blind person and (verbally) inform them of any potential obstacles, curbs, and other situations. Unlike many university research computer vision systems of the time, which processed carefully pre-recorded scenes, my system was made to operate in real time in a live outdoor environment. (OK, so I usually ran it close to noon on sunny days when the shadows were simpler.)


This work was the basis for my Ph.D. thesis, although I started it while an undergraduate and continued to do some work with the system after finishing my Ph.D. The main work was done under an NSF grant to the Smith-Kettlewell Institute of Visual Sciences (SKIVS) in San Francisco (now known as SKERI), with Dr. Carter Compton Collins as the PI, myself as chief architect, and hardware support from several internal staff. The work was carried out both at SKIVS and at U.C. Berkeley. My formal thesis advisor was Lotfi Zadeh at Berkeley, but my committee included Dr. Collins and Marty Tenenbaum from SRI (later my first manager at Schlumberger).

The Algorithm

This was a software vision algorithm, with the inner loop written in 68000 assembler and the rest in a C superset language that I wrote just for the project, which had similarities to C++ (which didn’t exist yet). The edge extraction and object classification algorithms are described in detail in the publications cited below (which I eventually hope to get on-line). The vision algorithm was interesting because it used concepts from AI knowledge representation techniques (see my PEARL work) to represent higher level objects in the process of being recognized, tracked, and categorized in the scene. The system used an electronic compass and a gravity tilt sensor to estimate and divide out the effects of camera rotation (the blind person was wearing the camera).


The system actually worked in its target environment, but the research showed that the technology had a long way to go before commercial products would be feasible. This was highly in line with the goal of the NSF grant: not to produce a commercial product, but to do the basic research to see how feasible a computer vision based approach to blind mobility was, and what problems still needed to be solved. As an AI vision Ph.D. research project it was highly successful.


I was contacted by others working on similar projects for many years afterward, but I still thought the technology was not ready. To put this in current perspective, the DARPA Grand Challenge II held recently (Oct 2005) for autonomous vehicles in the desert is a not dissimilar problem, and the hardware involved still can’t be made cosmetically acceptable (hidden beneath the blind person’s clothing). And having just looked at the home page, I see that they are starting a project with similar goals again 30 years later.

The cost target is tough as well. Yes, seeing-eye dogs cost $12K to train, but this money is donated; the out-of-pocket cost to the blind person is only a few thousand dollars.

Another issue is the population of blind people. In the fifties there was a breakthrough in the survivability of premature babies: put them in an oxygen tent. Unfortunately this was amended a few years later to “put them in a chamber with a high but not pure oxygen content, as pure oxygen will many times cause blindness.” As a result there was a baby boom of young, otherwise healthy blind people, and in the sixties and seventies they wanted to walk around. This population drove a lot of the work on blind mobility aids. However, this population has aged, and now the majority of blind people (in the U.S.) are not these still middle-aged people, but generally the very old, and they have a lot of other things going wrong with them as well. They are nowhere near as likely to want to take unescorted strolls down downtown city streets.


Michael F. Deering, “Computer Vision Requirements in Blind Mobility Aids”, in Electronic Spatial Sensing for the Blind, 1985, Martinus Nijhoff Publishers, 65-79.

Michael F. Deering, “Real-Time Natural Scene Analysis for a Blind Prosthesis”, Ph.D. thesis, University of California at Berkeley, 1981.

Michael F. Deering and Carter C. Collins, “Real-Time Natural Scene Analysis for a Blind Prosthesis”, in Proc. of IJCAI-81, August, 1981, 704-709.