Wearable Computers for the Blind

Now that we have come to the end of the series of posts inspired by my friend and colleague Daniel Kish, it is fitting that we discuss how wearable technologies might be used by blind individuals.

What would a wearable prosthetic for the blind look like, and what options would it provide for the consumer?

Daniel Kish and I, with feedback over the years from Steve Mann, wrote a white paper about such a system in the 1990s. In this case, “white paper” refers to a speculative overview, an outline for potential development. We called the white paper “Project Hawkeye,” and referred to the wearable systems as CyberEyes. Part of what follows is taken from that earlier envisioning. Terms like “audification” and “seeing eye people” are discussed in the Hawkeye report.

1. New technologies will provide more options for blind navigation. The goal of these new tools will be to enable the blind traveler to move more efficiently and with greater safety; ease of travel will improve as these technologies mature. A brief list of systems within this category includes:

* GPS embedded in wearables
* GPS tags for locating “missing” items, embedded in personal objects
* Landmark-assist technologies: audification and embedded messages
* Memory prosthetics for routes; mapping access
* Seeing Eye People: operator assist for difficult situations
* OCR for reading signage
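
To make the first two bullets concrete, here is a minimal sketch of how a wearable's landmark-assist logic might decide when to speak: compare the current GPS fix against a table of tagged landmarks and announce the nearest one within range. The coordinates, landmark names, and 50-meter radius are illustrative assumptions, not part of the Hawkeye design.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_landmark(fix, landmarks, radius_m=50.0):
    """Return (name, distance_m) of the closest tagged landmark within
    radius_m of the current GPS fix, or None if nothing is in range."""
    best = None
    for name, (lat, lon) in landmarks.items():
        d = haversine_m(fix[0], fix[1], lat, lon)
        if d <= radius_m and (best is None or d < best[1]):
            best = (name, d)
    return best

# Illustrative landmark table (coordinates are made up).
landmarks = {
    "bus stop": (43.0731, -89.4012),
    "library entrance": (43.0735, -89.4018),
}

hit = nearest_landmark((43.0732, -89.4013), landmarks)
if hit:
    print(f"Approaching {hit[0]}, about {hit[1]:.0f} meters away")
```

In a real wearable, the announcement would be rendered as speech or an embedded audio message rather than printed, and the landmark table would come from tags in the environment.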

2. New technologies will support natural perceptual abilities, as well as offer bionic solutions.

We know from the work of Daniel Kish and his associates at World Access for the Blind that blind individuals can develop highly sophisticated auditory perceptual skills that enable self-sufficient navigation. Technologies can be developed to support and enhance this natural auditory capability.

* Audified environments
* Sonar systems for enhancing seeing with sound
* Bionic hearing: enhanced for echolocation
* Artificial vision systems (brain implants)
* Smart canes with tactile and olfactory assistive technologies
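
The sonar and audification ideas above rest on simple physics: the delay of a click's echo gives distance, and distance can be mapped to an audible pitch. A toy sketch follows; the pitch range and the 10-meter cap are arbitrary choices for illustration, not a description of any actual device.

```python
SPEED_OF_SOUND = 343.0  # meters per second in air at ~20 °C

def echo_distance_m(delay_s):
    """Distance to a reflecting surface, given the round-trip echo delay."""
    return SPEED_OF_SOUND * delay_s / 2.0

def distance_to_pitch_hz(distance_m, near_hz=1200.0, far_hz=200.0, max_m=10.0):
    """Audification: map distance to pitch so nearer obstacles sound higher.
    Distances beyond max_m are clamped to the lowest tone."""
    d = min(max(distance_m, 0.0), max_m)
    return near_hz + (far_hz - near_hz) * (d / max_m)

d = echo_distance_m(0.02)  # an echo that returns 20 ms after the click
print(f"Obstacle at {d:.2f} m, rendered as a {distance_to_pitch_hz(d):.0f} Hz tone")
```

A wearable sonar would run this loop continuously, turning the stream of echo delays into a changing soundscape.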

3. Communication systems will continue to shrink and become available in wearable units for the blind. This is nothing more than the porting of handheld technologies to the sensory zone (the ring of sensation on the head that includes the eyes, ears, vestibular organs, and the nose). Additional solutions will be provided for the blind, including:

* Face recognition and memory for faces
* Translation of facial expression and body language
* Voice capability for email, text, social networking, conferencing
* OCR units embedded in wearables will enable reading of print

4. The environment will get more accessible:

* Technologies will enable the reading of touch screens
* Technologies will evolve to allow voice control of touch screens
* Objects will talk on demand (as in a kindergarten classroom)
* Environments will be prescribed (smart rooms), especially for children.
* OCR will read Braille, signage, textbooks, wall posters, and calendars.
* Landmarks will contain embedded messages
* Magic mirror technologies will teach blind kids (“robotic” mirrors)
* Knowledge access will be embedded in objects (the painting that tells its history, etc.)
* Robot assistance will be available as needed–in the form of toys and magic friends for children.

All of the above suggestions can only come to fruition if:

* Professionals understand the technologies and can prescribe them.
* Training is ongoing as developmental abilities evolve, or as aging or disease causes decline; adaptations to technologies and training have to occur.
* Upgrades are ongoing as technology evolves.
* Repair is available and affordable.
* Tech support is available and affordable.
* Training manuals and curricula are available and affordable.
* Technologies are tailored and prescribed for individuals.
* Technologies are under the control of the consumer.

Prosthetics for children in special education should be developed, in my opinion, around a concept called Humanistic Intelligence. This term was coined by Professor Steve Mann at the University of Toronto. See his presentation on Vimeo.

Within Humanistic Intelligence (HI) theory, the computer is understood to be a second brain. Its sensory modalities are considered to be additional senses. A synthetic synesthesia results when “computer senses” merge with the wearer’s native senses. The computer uses the human brain as one of its peripherals, just as the human brain uses the computer as a peripheral. This symbiotic relationship is the heart of Humanistic Intelligence.

Because of the make-up of the human brain, the human being has to be in the decision loop. This is because perception and all higher-level processing work through a “motor-first” neural mechanism.

For example, Braille can only be perceived after the motor act of running fingertips over raised dots–motor precedes sensory/perceptual pattern recognition. Echolocation requires a tongue click, like a sonar blip, to be generated by the human being before returning sound reflections can be perceived–again, motor activity precedes sensation and perception.

Natural human processing requires cognition or movement to occur before perception can happen. The human being has to be in the loop; otherwise the system will be weak, faulty, or non-functional. Technologies cannot be “passively layered”; they must be integrated with the brain. And the way to do this is to keep the user in charge as much as possible.

How to bring about blind prosthetics

For over thirty years, Daniel Kish, Steve Mann, and many others have tried to bring some form of perceptual prosthetic to the blind consumer. This never came about and still awaits a breakthrough.

Perhaps the technology had not evolved sufficiently, or perhaps the rehabilitation field was reticent about the technologies until they were more promising, or perhaps blind consumer organizations were reluctant to follow the lead of engineers unfamiliar with the larger picture. The truth is probably a combination of factors.

Here are some suggestions for bringing these wearable technologies to children in special education:

1. Steve Mann and Neil Harbisson are pioneering cyborgs. They wear the technologies that address their personal needs. Perhaps the blindness field needs one or more individuals who are passionate about being a cyborg. Daniel Kish once said to me that trickle-down does not work; revolutions like Braille and echolocation are grassroots movements.

2. Perhaps an X-Prize?

3. Perhaps regular conferences, modeled after the Eye and Chip, could be held to showcase the development of prosthetics for the blind. Or the conference could be broader and showcase all prosthetics for children in special education.

4. Perhaps the “new” digital eyeglasses (Project Glass and Eyetap are examples) could be the substrate/frame for prosthetic applications (similar to phone apps). This route enables open source solutions.

5. Perhaps a university program, or a non-profit agency, or a commercial business could set up a lab or a project. Then, all of the above four suggestions could be addressed under one umbrella.


The Higher Functions of the Navigational Brain

Perception and Consciousness

If we watch our mind work, we see that almost the whole of its activity is an insistent voice that rattles on and on inside the skull–we are constantly talking to ourselves. And what are these thoughts that won’t shut up?

Our waking brain is constantly working on one of three things:

1- What happened in the past.

2- What will happen in the future.

3- What’s going on right now.

Notice that these are all sophisticated navigation questions: Where have I been? Where am I going? Where am I now?


Perception is “what’s going on right now.” According to Daniel Kish, perceptual systems, in general, ask three questions:

1- What is around me?

2- What can I do with what is around me? What meaning does this environment have for me?

3- How do I get to what is around me? How do I use what is meaningful for me?

Perception is the surveying of the immediate environment, in the moment. It is the “getting ready to navigate” system.

Just as the perceptual window that floats in front of our eyes is an amalgamation of brain inputs (this window is really a multi-sensory “reality,” rather than a single sense called “vision”), so too is perception an amalgamation of sensations and motor flow, a mix (result) of incoming and outgoing signals–a snapshot of the moment.


Consciousness is a highly evolved ability of our navigational brains. We became captains of the ship called “our body” when we became aware that we were aware. We began the quest to navigate through new worlds, and we became hungry to do so, because our entire brain is designed to navigate.

So, consciousness is the beginning of motivation, the beginning of the will. It is a navigational system that can decide for itself where it wants to go, where it chooses to stay, and it can reflect on where it has been.

Perception is the window that consciousness uses to survey the moment. Consciousness has the same three purposes as perception, and is actually a further evolution of perception.

Just as perception answers the question “what’s going on right now,” consciousness is asking the same question. Except consciousness is aware of itself and brings in the highly evolved tools of willpower and intention. Consciousness is concerned with:

1- What is around me?

Perception “knows subconsciously” what is in the immediate environment, and can, through reflex actions, react to needs in the moment. These are auto-pilot, sensorimotor reactions. Consciousness knows that it knows, and so it enables choice. Consciousness can call on search patterns to verify redundancy or variation in the landscape of the moment.

2- What can I do with what is around me? What meaning does this environment have?

We call the act of being conscious about meaning “thinking.” Thinking is rare because it requires awareness that you are aware, i.e. mindfulness, being awake in the moment–something human beings rarely do well. Most of what the mind does is monkey-brain daydreaming–obsessing about the past or being anxious about the future.

3- How do I get to what is around me? How do I use what is around me?

Consciousness can consider probabilities, play out imaginary scenarios, and can recall past movements and scenes. Consciousness can also override auto reflexes.

If we try to define consciousness as anything more than “knowing that we know,” we end up with mass confusion. Marvin Minsky calls consciousness a suitcase word–there are bits and pieces of “consciousness” all over the brain. When we try to study its parts, we end up confusing consciousness with perception, attention, intention, imagination, even with sensory systems like vision.

Consciousness is one of the new, unknown frontiers. It is a recent fascination of the navigational brain. Ironically, studying consciousness using our minds is a questionable conundrum, like “water” trying to comprehend “ocean.”


I am aware that the viewpoint of the brain as a navigational entity is cold-hearted materialism–the stuff of analytical minds. Our evolution has given us love, compassion, empathy, dialogue, language, community, a pantheon of positive capability. Even if the brain came about historically to enable movement, the modern human today is a language using, emotional miracle.

The analytical approach also completely leaves out the spiritual. Science has little to no patience with things that cannot be dissected and analyzed. But human beings with daily emotional issues, relationships to nurture, cannot and will not exclude love and spirituality–humanity is more than the science of body parts.

There is also a very real, sometimes dramatic difference between male and female brains anatomically, physiologically, and behaviorally, that results in a difference in consciousness and perception. This is an oversimplification, but there is a deep difference between male and female brains that is, in my opinion, too often disregarded by the male-dominated sciences.

Males appear to be genetically programmed to navigate outwardly into unknown worlds. Females have highly evolved social brains. They navigate inwardly and they explore relationships. They sail on the sea of empathy, they connect human beings together. They nurture and attract. Male and female brains are different. This gender difference cannot be ignored as we study the brain and the new frontier of consciousness.

Neither can the unique individuality that transcends gender be ignored–gender is only one defining characteristic of the human species; there are poetic social males and analytical, pragmatic females.


Everything said above I created using the overarching perspective that brains are navigational “machines.” I am, I assure you, no expert on the brain, nor am I a philosopher.

Please don’t quote my words as if there were a body of research supporting any of these speculations. I really do like the internal logic, but I have spent little time poring over the scientific literature.

I do read a lot about the brain, and I apply the knowledge to students in special education. There is a pragmatic “validity” to my musings.


The Navigational Brain

How can a person like Daniel Kish navigate so well without sight? If you have no eyes, how can you navigate at all?

Why do we have brains?

Compare a tree, a member of the plant kingdom, with yourself, a human being, a member of the animal kingdom. We humans have brains and nervous systems; trees have neither. Why is that?

Just like human beings, trees are born from the union of two energies that come together to make a seed embryo. The embryos have a life cycle that begins with infancy, then continues through maturity to old age. Eventually trees die, returning into the soil.

Like us, trees have a rich genetic endowment reaching millions of years into the past. They are magnificent creations of nature, rivaling our own evolution. Yet they have no nervous systems and no brains.

The life cycle of an ocean creature called the sea squirt provides further insight into the mystery. The sea squirt swims freely, like a tadpole, for the first part of its life cycle. When it reaches maturity, it fixes itself to a rock on the ocean floor and morphs, in effect, from an animal into a plant. As it does so, it absorbs its crude brain and nervous system, since it no longer needs them.

What was the sea squirt using a brain for? Why did it suddenly and emphatically rid itself of a brain the moment it became a plant?

What is it that animals do that plants don’t?

Animals move. Animals navigate. Brains evolved, and are primarily used, for self-guided, purposeful movement.

Therefore, Daniel Kish, or any blind individual with normal cognitive abilities, can navigate without vision because they have lost only one component of a whole system. The brain receiving no visual input uses the rest of the sensory systems to construct purposeful movement.

The idea that the brain evolved to enable movement is based on the research and conclusions of Cambridge University Professor Daniel Wolpert. His TED Talk on the subject can be accessed on YouTube.

Dr. Wolpert states that brains evolved to enable “adaptable and complex movement.” For my own purposes, as an orientation and mobility specialist, I re-stated Dr. Wolpert’s language, and use the term “the navigational brain.”

What is Vision?

It is clear that vision is basically a navigation system. The reason laymen come to believe that blind individuals can’t navigate is that they confuse vision with navigation. Vision is the most powerful component of the navigational brain, but only a component.

It is helpful to get a more detailed look at visual processing. When we do this, we see that there are two vision systems. This division into two visual processing streams begins at the retinal level with a central vision processing network, and a peripheral vision processing network.

From the retina, these separate neural systems travel different processing pathways through the brain. Central vision goes from the occipital region, passes through the temporal cortex, and ends in the prefrontal brain. The peripheral system starts in the occipital lobe but then flows up over the top of the brain, through the parietal and frontal lobes, and ends at the prefrontal cortex, where it merges with the temporal stream.

The central processing stream has been aptly named the “What is it” processing stream. The peripheral system is called the “Where is it” stream. If we are right to think that the brain evolved for navigation, it becomes clear that the peripheral processing stream is the “ground,” while the central stream is the “figure.” The peripheral system contains dynamic global maps, overhead views, the ever-changing terrain. The central system contains the landmarks, the things we name and turn into concepts.

In other words, vision is clearly a navigation system. To navigate with great precision, we need a map (a substrate), and then we need details on the map. The peripheral vision system is a holistic background, while the central system contains the patterns (the figures).

An interesting but relevant aside here is that bats, using echolocation to navigate, also have a central processing and a peripheral processing system for auditory perception. Bats vary their clicking patterns depending upon what they are doing. They use an orienting sonar, a parietal process, to get a holistic view. When they are targeting prey, they increase clicking frequency to improve spatial resolution using a central processing system.
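
The bat's rising click rate follows from a simple timing constraint: each echo should return before the next click goes out, so a nearer target permits, and a precise intercept demands, a faster click rate. A rough sketch under that single assumption (the 20% safety margin is an invented parameter, not a measured value):

```python
SPEED_OF_SOUND = 343.0  # meters per second in air

def min_ping_interval_s(target_distance_m, margin=1.2):
    """Shortest interval between clicks such that each echo returns
    before the next click is emitted (with a 20% safety margin)."""
    round_trip_s = 2.0 * target_distance_m / SPEED_OF_SOUND
    return round_trip_s * margin

# Click rate rises as the target gets closer, like a bat closing on prey.
for d in (10.0, 2.0, 0.5):
    rate = 1.0 / min_ping_interval_s(d)
    print(f"target at {d:4.1f} m -> up to ~{rate:5.1f} clicks per second")
```

The same trade-off would apply to any human-worn sonar: finer temporal resolution is physically available only at close range.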

The visual and auditory senses are the two largest contributors to the navigational brain. Their primary function, working together synchronously, is to allow the human being to move with purpose.

Navigational Disability

I noticed as a special educator that many of my students had difficulty navigating. I don’t mean just the blind kids, or the severely visually impaired children. I mean as well, the autistic kids, the physically impaired students, the hearing impaired kids–all of them. The more severe the brain damage, the more severe the navigational disability. If we understand that the brain evolved as a powerful navigational computer, then we can see why this would be the case. Damage anywhere in the human brain results in some degree of navigational difficulty.

Daniel Kish says that the traditional approach to teaching blind children to travel has emphasized guiding, explaining, and demonstrating. Using this traditional approach, he says, turns blind kids into trees– they become frozen in space, never developing the perceptual skills that would allow for independent travel. They grow up waiting to be guided, waiting for help–they develop learned dependence.

The navigational brain needs to move. It needs to explore. It has a hard time parked in idle. The approach that makes sense for a brain that needs to move purposefully is what author Richard Mettler termed “structured discovery learning.”

Basically, structured discovery means letting blind kids explore for themselves. It is a method founded on an understanding that self-exploration is what works best for the navigational brain.

Daniel Kish uses his own variation of structured discovery to teach students. His philosophy calls for self-development through self-initiated purposeful movement. He is building perceptual systems, he says. His students develop a high degree of sound perception as they explore.

Navigation and Development

Human development follows a navigational time line. Infants can manage a crib. Toddlers can be left to play in their bedrooms. Preteens can be trusted in their homes, to cross the street to the park, to visit friends in the neighborhood. Teenagers leave the nest and explore the whole town. College kids roam the planet, if they are lucky. Adults go to the moon and send space probes to Saturn. Great human navigators explore consciousness, spiritual realms, virtual reality, artificial brains. They go to the bottom of oceans and climb the highest mountains. If they could figure out some new landscape to explore, they would go there in a heartbeat.

If we can explain so much using this hypothesis about the evolution of the brain, if indeed Professor Daniel Wolpert is correct that the brain primarily evolved for purposeful movement, then we should also be able to explain consciousness and perception as navigational systems (see the next post).

“I believe that to
understand movement
is to understand the brain”

Professor Daniel Wolpert


Amygdala For Sale

As scientists and doctors look to chip implantation to address major concerns, like blindness and Parkinson’s disease, they are starting down a road that must be carefully monitored. The problem is not that scientists and doctors are unaware or not careful enough; they are indeed aware, and they are especially mindful to protect their patients.

Within the corporate world, however, old ways of operating that ignore public welfare can take us into a future that alters what it means to be human, in ways few of us would condone.

From a corporate perspective, the human mind is a frontier waiting to be colonized. Indeed, the Maginot Line has been breached–we are already in a war for control of the human mind.

The brain is real estate to be patented, in the same way that corporations are now battling to own the human genome.

If nothing is done to change business as usual, then we might just find ourselves in a Brave New World filled with corporately controlled cyborgs.

I can think of several reasons why the human brain might end up on the real estate market:

* Because brain organs, like the amygdala, could have their function “enhanced” if they contained implants. The amygdala chip would be patented and owned by a corporation.

* Internal chips can be programmed from a desktop computer (or mobile phone, perhaps). This means that outside re-programming becomes a real possibility, not science fiction. Corporations will own the implanted chips as well as the external control chips–hardware and software.

* “Executive function chips” implanted in the prefrontal cortex would have direct control over all brain functions.

* Because we are a drug culture, we pop pills like candy. We want to take our medicines, and get cured. It’s an easy step to “chips as drugs”. We will want the “experience nirvana” amygdala chip, and the executive enhancement chip, and the “weight reduction” chip.

* Hive brain, mind-to-mind social networking, will be enabled by brain chip networks. What gets in, and what gets filtered out, will be decided by the corporations that build the technologies.

Processing chips implanted in the brain can be networked with other implanted brain chips. When this occurs, the network becomes an internal area net (IaN)–an artificial universe unto itself. Computer processing units rarely stand alone–they eventually get networked with other processing units.

In short, computer networks are artificial mini-brains. When we place these processing networks into our own neuronal structure –brains inside brains– we alter what it means to be alive, what it means to be a human being. That isn’t necessarily a bad thing, of course, since there are diseases to cure, and there might be enhancements to memory and processing speed, for example, that we could consider–but we need to have the debate before unilateral decisions get made by default.

Since we hardly know how the brain works now, it might be a tad premature to start putting for sale signs up in the amygdala.

We ought not to rush into this brave new world without thinking it through.

People with less than social justice on their agendas (including some corporations and governments) use networks like television and the internet to influence the thoughts of the populace. That won’t change when we create new internal area networks. The propaganda mentality will still exist, except now we will have created an ability to simply pour advertisements and propaganda directly into minds.

Notice also that networks tend to link with other networks. Not only do we have the internet, a so-called wide area network (WaN), but we also are creating personal area networks (PaNs). Personal networks include the linking of all the technologies we hold in our hands (the mobile stuff) and all the systems we wear–like the new digital eyeglasses that will soon explode on the market. PaNs have the ability to link with IaNs, and with the world wide web.

These networks will also intermesh with other net-webs, leading to an evolution of a meta-net, comprised of at least the following set of existing and evolving communication systems:

* Smart spaces linked together (spatial area nets, called SaNs) will combine with IaNs, PaNs, and the Internet. SaNs are chips embedded in the environment.

* SaNs will link with object nets (object area networks, called OaNs). Smart objects will link with other smart objects, and these will link with all other networks.

* There will also be virtual area networks (VaNs) wherein we link virtual worlds with “real” worlds, further blurring what we mean by “reality.”

* We will also create molecular area nets (MaNs) as computers shrink to molecular size.

* As computer processors shrink to sub-molecular size, we will create nano area networks (NaNs).

* And finally, we will evolve quantum nets (QaNs) as we get down to quantum-sized processing units.

All the above networks will mesh into a meta net.

Hook that up to a brain filled with chips and you might just get a science fiction horror novel.

Now imagine a virus-impregnated meta-net, or an advertisement-saturated meta-net, or a meta-net full of invasive opinions coming from who-knows-where.

We have started down a road fraught with dangers.

We ought not to put blinders on, thinking that we are only doing one innocent piece of good work for the benefit of mankind, when in fact we are helping redefine what it means to be human.

Web to weave
and corn to grind
Things are in the saddle
and ride mankind

Ralph Waldo Emerson


The Evolution of Vision Implants

During our recent lunch (see yesterday’s blog post), Dr. Phil Hessburg emphasized two technologies that model how visual implants will evolve: the intra-ocular lens (IOL), and the cochlear implant.

The intra-ocular lens is the most successful visual prosthetic developed to date. It is a manufactured plastic lens used to replace the biological lens of the eye after cataract surgery. Vision with intra-ocular lenses is actually better than with the biological lens! These prosthetic lenses are an improvement on nature’s design, since they don’t denature and turn opaque over time. There has been steady improvement in design and function since their inception, and the cost has gradually but continually declined. The IOL is safe, and it has become completely accepted by the medical community and the general population.

Early IOLs were primitive, and there were many competing varieties. Doctors worried the body might reject the foreign substance or that undue or unusual infections might result after implantation. The research to develop the IOL was very expensive, and critics wondered whether all this money might better be spent elsewhere.

In short, the development of the IOL mirrors the development of retinal chips–or any other internal implantation. And if IOLs are a good benchmark, then the future of implantation looks very promising and safe.

The cochlear implant is a major miracle of modern science. In one important way, it is more sophisticated than the IOL because it is an active computer chip, which means that it can be adjusted and programmed from a computer outside the body. As a processing chip, the cochlear implant evolves at the same rate as computer technology, becoming faster, smaller, cheaper, and more powerful with each new generation of computer architecture or software upgrade. And that is exactly what has happened with the cochlear implant–it has become increasingly more sophisticated every year.

However, because the vision system is more complex than the auditory system, the engineering challenges for vision chip researchers are of a magnitude far greater than those faced by the pioneers who created the cochlear implant. Furthermore, the retina is a very complex computing tissue. Placing processing chips in contact with the retina is far more complicated than placing active processing units against the auditory nerve.

Despite these challenges, the assumption is that vision chips will eventually eliminate blindness, and then varieties of chips will eliminate vision impairments–but these advances are a long way off. On the other hand, if IOLs and cochlear implants are models, there is no doubt that we will continue the march to load the human body with very successful and medically safe perceptual processing chips. Current technological advancements certainly point to the powerful idea that we will eventually eradicate blindness, and we will reshape the human brain. A huge new arsenal of tools will become available for future specialists.

The speed of technological change, which has been so eloquently described by futurists–most notably Ray Kurzweil–is so rapid that these implanted processing chips will arrive sooner than most of us can envision.

This bodes well for human evolution.

Beyond the wonderful work being done by pioneers like Phil Hessburg, there are social issues, worries and dangers, that chip implantation brings to the surface.

Personally, I believe that a momentum will continue as chips continue to evolve and be used inside human bodies and especially inside brains. I think this momentum is unstoppable, actually, no matter what moral qualms we have. Still, we ought to be having a debate as these technologies roll over us.

These “moral qualms” and the need for debate are the focus of the next post.


Medicine Meets Rehabilitation Over Lunch


When I joined the program committee for the IEEE conference, my mind immediately leaped to involving two individuals: Daniel Kish, because his contributions to blind rehabilitation on a global scale have become legendary, and Dr. Phil Hessburg, because his contributions to visual prosthetic research and development on a global scale have become legendary. What we had over lunch, in January 2013, was a legendary meeting of medicine and special education.

When I was director of the Institute for Innovative Blind Navigation, I used to attend the famous Eye and Chip conferences that Dr. Hessburg organized. Dr. Hessburg set up the Eye and Chip gatherings 15 years ago because, as he stated during lunch, “Collegiality facilitates collaboration, which accelerates progress.”

That has certainly been true for the Eye and Chip conferences. The gatherings, which take place every two years in Detroit, have brought together top research teams working on vision implant technologies. Because of the Eye and Chip conferences, researchers have bonded, and teams have openly and kindly shared their ideas and technologies with each other. This year, for the first time in history, a retinal prosthesis passed the first round of FDA review–and today we are one step closer to a retinal implant.

In his article titled “Artificial Vision, Will There Be Such a Thing?” for the Eye on Michigan Magazine, a publication of the Michigan Society of Eye Physicians and Surgeons, Dr. Hessburg wrote the following:

“We are not there yet with visual neuro-prosthetic devices. But the proof of principle is at hand and tremendous progress has been made. On September 28 (2012), a U.S. Food and Drug Administration (FDA) Ophthalmic Devices Advisory Panel unanimously voted 19-0 that the probable benefit of the Argus II Retinal Prosthetic System outweighs the risks to health, an important step toward the FDA market approval of this product manufactured by Second Sight Medical Products, Inc., of Sylmar, California. That device is already being implanted in Europe . . .”

Dr. Hessburg, Daniel, and I share the common accomplishment of having set up and administered non-profit agencies. Being a successful eye surgeon, Dr. Hessburg did not have to found the Detroit Institute of Ophthalmology (DIO). He did not have to set up three internationally acclaimed conferences: The Eye and Chip, Eyes on Design, and The Eye and the Auto. But he did, so I asked him for the motivation behind all that determined work.

“Well,” he said, “As a doctor, I hated using the words ‘Nothing more can be done.’ I hate those words!”

Phil Hessburg is obviously a very compassionate individual. Daniel remarked after lunch that it was a special pleasure and an inspiration to have met him. I feel the same.

During our lunch with Dr. Hessburg, Daniel remarked that the parents of some of his clients had actually said there would soon be no need for echolocation, rehabilitation, or even special education, citing news reports that artificial vision “had arrived.” These parents hold the common conviction that their blind kids will soon see and that the rehabilitation crisis will have passed.

Dr. Hessburg was somewhat horrified to hear this. He made it very clear to Daniel that we are a long, long way from artificial vision. While there was reason to rejoice because we had set out on the correct research pathways, the road ahead was so complicated, and the goal so distant, that these sensationalist news reports were more harmful than helpful.

Our lunch together was much too short–it is rare to get two such unique individuals at the same table. Each of these men represents the very best in his respective profession. It would have been interesting to witness and document a much more in-depth conversation between these remarkable men. Perhaps collegiality might have facilitated collaboration had there been extended dialogue between Daniel Kish and Dr. Hessburg.

I could see that these two leaders represented two parallel services in today’s world, medicine and rehabilitation. There is tension on a global scale between these worlds because there is constant competition for funding. Rehabilitation (special education) always seems to be the poorer cousin. I would have enjoyed listening to a discussion about this problem.

From my perspective (this is a soap box for me), I can see that the longstanding struggle between rehabilitation and medicine has left needy children without sophisticated technologies.

For example, getting the retinal implant Argus II from concept to stage-one FDA approval cost approximately $200 million, according to Dr. Hessburg’s article: ” . . . about $100 million of government money coupled with at least $100 million private investor funding.”

Meanwhile, as money is thrown into experimental and speculative medical solutions, it is increasingly difficult to obtain funding for viable technologies and services in special education that already exist, or could be brought to market with a mere fraction of the cost of Argus II.

In order to write a basic grant for $3000 to purchase a technology that enables blind kids to see with sound, I had to set up an entire non-profit agency. Likewise, Daniel struggles to keep his non-profit agency World Access for the Blind afloat on a daily basis.

It is clear that our medically-oriented mindset values the big bang of a sensational solution over the simpler and lesser touted practical solutions that can today give blind students eye-hand coordination and the ability to perceive through sound.

More needs to be done to provide affordable access to these existing services and technologies, even while searching for the big bang solution.

Both opportunities need to exist, and be equally funded, side-by-side.


Conversations with Daniel Kish

Daniel Kish

There was a young blind man named Ben Underwood who made a big media splash many years ago. Oprah Winfrey featured him on her show and gave him global recognition. Ben moved about with the same fearless attitude that Daniel displays. He had discovered echolocation on his own and perfected it on his own, just as Daniel had done when he was a boy.

Both Ben and Daniel had a rare form of cancer called retinoblastoma. It is a vicious beast, this cancer; it often recurs, and it sometimes refuses to yield to technology. Retinoblastoma took Ben Underwood’s life when he was only a teenager. I have no doubt whatsoever that Ben would have made a significant contribution to our culture had he lived. Daniel’s genetics are obviously more protective; he has lived for 46 healthy years. His vegan diet, active mind, and energetic lifestyle contribute to his good health.

Both Ben Underwood and Daniel Kish tell us, with their fearless, fluid, sightless navigation, that our approach to educating blind children is antiquated. Changes need to be made so that generations of blind children can walk beside their sighted peers without anyone thinking it unusual.

When you witness a group of blind mountain bikers weaving through terrain that most of us would fear, then you begin to understand that our low expectations of blind individuals, across every culture on the planet, are seriously flawed. I remember many images of Daniel during our years of friendship, when he would do things that made me shake my head and wonder. I would stand there, mouth slightly open, and murmur to myself, “What the hell!”

In Metro Airport in Detroit, for example, after skillfully getting myself and Daniel lost in the wrong, huge, and mostly empty terminal–he had a plane to catch–he suddenly turned to me–we were on the second floor–and said “Well, Doug, it’s been great again. I really enjoy our conversations. I’ll take it from here.”

He gave me a hug, swung around, went directly down a nearby escalator, walked across a huge empty expanse on the first floor– there were glass windows all around, I could see his movements from a block away– walked directly out a door and flagged a bus. I eventually found my way back to the parking lot and after a frustrating search found my car. I have two perfectly fine myopic eyes. Daniel has two plastic prosthetics.

Television crews are always filming Daniel and the small group of young blind people he mentors. These young blind employees of World Access have become experts and celebrities themselves. The BBC seems to regularly show up, as do Japanese, French, German, Dutch, Indian, and Italian crews. You can watch Daniel and his team on YouTube or check out the World Access website. Articles appear a few times every year in major magazines and newspapers. The lay public is pretty well informed about Daniel Kish and his magic lifestyle. But it’s a different story in Daniel’s professional world.

Daniel is a global curiosity to most of the planet while, at the same time, he struggles to convince his own professional colleagues to adopt his curriculum for teaching echolocation. He would also like the negative attitudes towards blindness that exist everywhere in the world to be replaced by his philosophy of “no limits”. “It’s about being free,” he says, emphatically.

Parents of young blind kids have no problem understanding the significance of Daniel’s insights. These parents simply bypass the education and rehabilitation infrastructures of their cultures; they go directly to Daniel. It is parents in Scotland, Michigan, Canada, South Africa, Mexico (all over the planet) who find the money to bring Daniel directly to their blind children.

Change within professional training institutions is slowly evolving, but parents cannot wait–and they don’t.

Daniel is a respected member of a profession called orientation and mobility (O&M) – he has a degree in blind rehabilitation. He also has a second master’s degree, in developmental psychology. O&M professionals work with blind and visually impaired individuals to help them navigate through the world efficiently and safely. Like all professions, orientation and mobility is evolving; the field has a momentum established by its unique history.

Orientation and mobility was developed after World War Two. It came out of the military, shaped through the compassionate energy of a few pioneers. Because it had its beginnings in the military, it targeted blind veterans, males roughly between the ages of 18 and 30 who were primarily blind, i.e. did not have secondary health issues.

Orientation and mobility only reached the public schools in the 1960s. Later, the field expanded to include severely visually impaired children, the elderly, and people with multiple impairments. The scope of the profession has undergone massive change over the past fifty years.

Daniel is on the leading wave of the evolution of his profession. His teaching approach and philosophy are part of the future of orientation and mobility. As a leader, way out in front of the pack, he is naturally frustrated with the rate of change. He sees from a global perspective because his teaching takes him into every corner of the globe. But the notion that he is unappreciated by his peers, is simply not the case. Daniel is highly regarded and admired by his colleagues, especially by those of us who know him personally and understand what he so expertly demonstrates.

The world as a whole is still a mess when it comes to people with “disabilities.” The attitudes held in the third world are downright harmful, dreadful even. The misperceptions and dumbing-down attitude of the industrial societies are not a whole lot better. Blind children are born every day. These infants could lead free and happy lives, with the same potential as any sighted kid, if there weren’t barriers, behind barriers, behind a lot of bad attitudes.

Daniel is a wave of new hope sweeping over the planet.


Conversations with Daniel Kish

Daniel Kish

Daniel and I visited Toronto on a couple of occasions in the 1990s to meet with University of Toronto professor Steve Mann. Steve had been part of the first listserv I set up, back before the world wide web was created. The listserv brought together inventors working on high technologies for blind navigation. Steve was on that list with Daniel Kish, Leslie Kay, Peter Meijer, Greg Downey, and Michael May, to name a few of the brilliant guys gathered there.

Steve won us over early on, and we remained fascinated with the things he suggested we might do. On one occasion, Daniel and I joined Steve at the University to discuss a possible collaboration. We wondered if all the high technologies for blind navigation might be housed in a single wearable substrate. We brought along some current technologies as an example of what might be incorporated in a wearable system.

We showed Steve Dr. Leslie Kay’s invention called KASPA, “Kay’s Advanced Spatial Perception Aid”. Leslie, the former Dean of the Engineering Department at Cambridge University in New Zealand, was knighted by the Queen of England for this invention. Later, companies used the technology to build range finders into their cameras–the autofocus of modern point-and-shoot devices.

Steve put KASPA on his head and didn’t take it off for weeks! He had no problem being some new variety of cyborg.

KASPA sends sonar from its transmitter over the nose. The returning sound waves come into receivers that send signals to the ears. Students are then able to localize sounds.
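As I understand Kay’s designs, the transmitted ultrasound sweeps in frequency, so the beat between the outgoing tone and a returning echo turns range into audible pitch. The sketch below illustrates that principle only; the sweep rate and names are my own assumptions, not KASPA’s actual specifications.

```python
# Illustrative sketch of frequency-modulated sonar audification (my own
# simplification, not KASPA's real parameters): the transmitter sweeps
# upward in frequency, so by the time an echo returns the outgoing tone
# has moved on, and the audible beat frequency is proportional to range.

SPEED_OF_SOUND_M_PER_S = 343.0    # approximate, in air at 20 C
SWEEP_RATE_HZ_PER_S = 100_000.0   # assumed sweep rate, illustration only

def beat_frequency_hz(range_m: float) -> float:
    """Pitch heard for an object at the given range, under this model."""
    round_trip_s = 2.0 * range_m / SPEED_OF_SOUND_M_PER_S
    return SWEEP_RATE_HZ_PER_S * round_trip_s

# Nearer objects produce lower pitches; farther objects, higher ones.
```

Under this model, a student scanning toward a wall would hear the pitch fall steadily as the wall drew closer.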

I used one of the first KASPAs with my students in Saginaw, when the Institute for Innovative Blind Navigation (IIBN) was in its heyday. We wrote a grant for the three thousand dollars the hardware cost back then (my school budget for technology was a mere $300–that’s one reason IIBN was established, to get technologies for blind kids).

I will never forget the first time I put KASPA on one of my totally blind students.

Seven quarter-inch dowel rods, mounted on small wooden platforms, were arranged in a semicircle on a table–my student Jerry sat facing them. He was wearing KASPA for the first time. While I was setting up the dowel rod array, I noticed Jerry was waving his fingers in front of his face.

“Hey, Doug,” he said. “I can see my hands!”

I stopped, somewhat stunned, and watched him. Sonar signals were bouncing off Jerry’s hands. Sound waves were then reflected back into lateral pickups that allowed him to localize his hands in space. The signals changed as he wiggled his fingers or moved his hands in and out, or side to side.

Jerry’s lesson task was to discern a series of dowel rods set at different distances. He had already shocked me moments earlier when he reached out and grasped a single dowel rod.

Jerry had to initially learn what “hand and eye coordination” was. But once he got his hands and the KASPA signals coordinated in his mind, he was able to reach with pinpoint accuracy. It took a remarkably short period of time to teach him hand and eye coordination.

I told Jerry that the rods were placed at various distances in a line on the table and asked him to scan from left to right. He moved his head in a wide sweep and then gradually slowed and narrowed the head turning. He simply started counting and accurately pointing.

“One . . . two . . . three . . .” He counted all seven, and when he got the last one correct he scanned all over the table looking for an eighth. Finally, he turned his head, looked at me, and asked, “Seven?”

Jerry hardly ever made eye contact. His head was always drooping, much to the chagrin of his teachers who were constantly on him to hold his head up and face the person talking. KASPA, just naturally, led to eye contact.

I got called over to talk to a colleague for a few moments and then suddenly noticed that Jerry was picking up the dowel rods and putting them away in a box for me! The environment was calling out to him and he was responding.

Leslie Kay had an arsenal of videotapes showing blind infants reaching for their bottles, kids running through playgrounds, other students putting pegs in a pegboard. The thing was an amazing breakthrough for the blind.

Another one of my students, Lisa, was equally good with KASPA. When she was done with her lesson one day, while I was packing up, she scanned the wall and saw her cane leaning against a door frame. Any other day she would be groping along the wall to find the thing. Then she noticed something on the wall, a picture in a frame. I watched while she slowly moved her head side to side, eventually seeing the width and length of the frame.

A few moments later she was studying the door and remarked that there was a picture frame on the door–there was a small window in the door that she had not perceived as a blind person. With KASPA, the hard glass came back with a sharper echo, and the edges of the window frame were also discernible.

As revolutionary as KASPA was and still is, it was never mass produced — there is no money to be made giving blind kids the gift of hand and eye coordination. There is no profit to be made helping blind kids “see with sound.” There were other problems as well, of course.

KASPA was always breaking. It had to be shipped back to New Zealand for repairs. Customs people used to hold shipments up for months because they couldn’t wrap their minds around what the thing did. At any rate, they tacked on a small fortune just to get the thing through officialdom.

And KASPA was heavy; it had a small lead battery with delicate wires coming out of it. I remember with horror the day I put it on a two-year-old. He yanked that sucker off his head so fast, when I flipped the on switch, that I didn’t have time to save the delicate wiring.

KASPA was always sending a steady stream of sound patterns to the ears when it was on. It was not like Daniel’s clicks that are emitted as needed. KASPA was like the vision system. It flooded the brain with information, most of which had to be ignored.

It took me years to figure out why the blindness community and institutions for the blind rejected the device . . . Yes, they really did . . . They had the usual complaints, cost being the main factor. The mythology of the blindness community was that any technology costing more than two hundred dollars was outside the financial range of the average blind individual. It was also about all rehab agencies and schools would pay for something new.

KASPA was too heavy, people said, too delicate, and required special training to use. My favorite bad excuse was that it brought undue attention to the person wearing the device–it was ugly, people told me.

Those were the arguments that justified non-support. Despite these semi-legitimate concerns, the biggest part of the problem was a culture afraid of change.

I used to give presentations using a small red plastic phone that I pulled out of my kid’s toy box. I would stand in front of audiences pretending to be Louis Braille talking on the phone to the director of an institute for the blind, trying to get my new invention introduced as a tool to help the blind “see with touch.”

The Director would chuckle every now and then, and explain to Louis in a patronizing voice why his passion was misdirected. The Director would conclude the conversation with a suggestion that Louis go into social work because he was “obviously a good-hearted guy.”

I really believe that if Louis Braille showed up today with the idea that blind people could learn to read using raised dots on paper, he would be politely shown the door:

“Where will the dots be manufactured, Louis?”

“Who will teach dot reading?”

“Where is the curriculum, the training manuals, the dot decoding system?”

“Where will the funding come from for the Brailling machines, and to pay these new teachers?”

“Where are the engineers who will design Brailling technologies?”

“Honestly, Louis, this is one of those cockamamie ideas . . .”

“Social work, Louis, it’s your calling, young man.”


Conversations with Daniel Kish

Daniel Kish

Echolocation is a fascinating phenomenon. Many creatures use the skill, including bats, dolphins, whales, porpoises, shrews, oilbirds, swiftlets, whirligig beetles, and night-flying moths. Daniel emits a well-practiced, very clear, sharp click–sound leaves his lips traveling at roughly 1,100 feet per second.

Sound waves from Daniel’s clicks spread out through the spaces around him contacting everything within the environment. These sound waves bounce off solid objects and return to Daniel’s ears. Every object has properties that affect how sound is reflected. The result is that Daniel can “see” auditory patterns similar to the visual patterns sighted people experience.
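The arithmetic behind a single click is simple: the echo’s round-trip delay, halved and multiplied by the speed of sound, gives the distance to the reflecting surface. A minimal sketch (the constant and function name are mine, for illustration):

```python
# One-way range from an echo: the click travels out and back, so
# distance = speed_of_sound * round_trip_time / 2.
SPEED_OF_SOUND_FT_PER_S = 1125.0  # approximate, in air at about 20 C

def echo_range_ft(round_trip_s: float) -> float:
    """Distance in feet to the surface that reflected the click."""
    return SPEED_OF_SOUND_FT_PER_S * round_trip_s / 2.0

# A wall 10 feet away answers in under 20 milliseconds:
delay_s = 2.0 * 10.0 / SPEED_OF_SOUND_FT_PER_S
```

Those millisecond-scale delays, together with the timbre and loudness of each return, are the raw material the echolocating brain turns into a scene.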

I remember smiling when Daniel told me that he had just given a speech in Denmark at an echolocation conference. The audience of professors and researchers, experts in whales, dolphins, and bats, sat there, amused to discover that human beings could also be expert echolocation navigators. Daniel said he felt like a bug under a huge microscope, surrounded by enthusiastic scientists.

These scientists convinced Daniel to undergo MRI studies to determine what his brain was doing while he was echolocating. What they found was amazing. Daniel’s brain processes sound patterns in the occipital lobe, the so-called “visual cortex.” When Daniel claims to be teaching blind kids to “see” using echolocation, he is scientifically correct. What he and others do when they use echolocation is “see with sound.”

I remember walking through the streets of Toronto one summer day with Daniel when I asked him to explain to me what he was processing as he marched down the sidewalk at his fast pace.

“Well,” he said, “I keep track of what I’m passing, if I feel like it. For example, there’s a parked car, now a gap, another car, another gap, car–like that. Now we are passing a low hedge on the right. There’s a tree. Across the street is a tall building set back from the road.”

In an article for Future Reflections Magazine, Daniel writes “. . . a parked car, detectable from six or seven yards away, may be perceived as a large object that starts out low at one end, rises in the middle, and drops off again at the other end. The differentiation in the height and slope pitch at either end can identify the front from the back; typically, the front will be lower, with a more gradual slope up to the roof. Distinguishing between types of vehicles is also possible. A pickup truck, for instance, is usually tall, with a hollow sound reflecting from its bed. An SUV is usually tall and blocky overall, with a distinctly blocky geometry at the rear. A tree is imaged according to relatively narrow and solid characteristics at the bottom, broadening in all directions and becoming more sparse toward the top. More specific characteristics, such as size, leafiness, or height of the branches can also be determined. Using this information in synergy with other auditory perceptions as well as touch and the long cane, a scene can be analyzed and imaged, allowing the listener to establish orientation and guide movement within the scene.”

The thing that amazes me most is that Daniel gets whole scenes, complex spatial patterns, from a single click of his tongue. He can analyze an environment as fast as I can. He gets back enough spatial detail to navigate with no help.

In the Future Reflections article Daniel says: “In our work with blind students we use the term Flashsonar because the most effective echo signals resemble a flash of sound, much like the flash of a camera. The brain captures the reflection of the signal, much like a camera’s film.”

Our world is very redundant. Buildings are rectangles usually, with four walls to a room, a ceiling, floor, windows, and doors. The insides of these rooms are characterized by function. Kitchens have stoves, refrigerators, sinks, and tables. Bathrooms have toilet stalls, urinals, showers, and sinks. Some houses have basements and second floors. Layouts repeat. It doesn’t take long to learn the redundant spatial designs of our built environment.

The outside world is more complex, but it also has redundant layouts like sidewalks, streets, neighborhoods full of houses, yards, parkways, and trees and hedges in places where you expect to find them. There are also redundant spatial areas, like parks, parking lots, gas stations, and high rise buildings.

This universal redundancy allows our perceptual systems to identify recurring patterns. These patterns are stored in memory and become easy to retrieve, especially after repeated verification.

I will never forget the first blind student I ever worked with in the school system. He was about ten by the time the school system realized they needed an orientation and mobility specialist. After working a few days with him, I noticed he was using various echolocation strategies: whacking his cane on a hard surface, clapping his hands together, clicking with his mouth.

So I asked Chevelle about his use of echolocation. What was he getting from the whacking and clicking?

“Well,” he said, “when I was a young boy, laying in bed, I got this weird sensation. It scared me a little. After a while I realized that I was hearing walls. With practice, I could quickly tell the size of rooms.

“Right now, for example, I can hear the wall on my left clearly because it is closer to me than the other walls.”

Then Chevelle turned, faced this wall, and began to move slowly towards it.

“I am about five feet away, four, three, two . . .”

When he stood beside the wall, without touching it, he nodded in the direction of the wall and said- “Here it is.” He then reached up and gently touched the wall with his pointer finger.

Chevelle told me that he walked down the center of hallways by balancing the sound between his two ears. He could also hear the openings to doors very clearly, and of course hearing hallways open up on either side of him was a piece of cake. School classrooms had a teacher’s desk, and rows of student desks. Shelves lined the walls. There were a few windows on the outside wall, one door for entering and exiting, and calendars, pictures, and chalkboards hanging from walls– no big deal.

Historically, rehabilitation and educational institutions in every country on the planet have used strategies for teaching the blind that do not take advantage of this alternative perceptual system. We failed to understand “blind perception” in the past, and only now are we recognizing its potential.

People like Daniel Kish, Ben Underwood, and my student Chevelle, can get very expert at “perceiving blind.” Imagine how much we could enhance blind perception with wearable technologies.

There is a revolution here waiting to happen. It is frustratingly overdue, if you want my opinion!


Conversations with Daniel Kish

Daniel Kish

As I said in an earlier post, Daniel is working on his memoirs and I am helping. The tone and content of the following posts, over the next few days, are in a style targeted for the book project. I wrote the following years ago, however, and shared it with Daniel during his visit to my home in January, 2013. After I read this to him, he said, much to my surprise and delight, “Why don’t we do a book together!”


One thing about Daniel Kish, you never see him with his head down or his shoulders hunched over. He decided as a young man to accept his blindness, to walk through the sighted world with confidence. All his life, Daniel refused to be perceived as helpless or unfortunate. He is a role model, a mentor, and a source of inspiration.

When he was in his thirties, Daniel created a non-profit organization called World Access for the Blind. The motto of World Access is “No Limits”–he is not kidding.

A conversation with Daniel is always worth slowing down for. He talks with a cadence that reminds you of an enlightened person measuring every word; he stays awake, focused in the moment.

Daniel and I are colleagues and friends. He stops by my house in Saginaw whenever his travels take him near Michigan. Our conversations range over a wide landscape, but one theme recurs–Daniel’s observation that vision can be a serious disability!

The memory of the first time I heard him express his opinion about the limits of vision makes me smile. It’s a Zen smile, a roar of pure laughter just beneath the grin. Daniel can explain, with stunning logic, why vision can be a serious problem.

I’m pretty sure Buddha would agree: human vision can sometimes be a nasty form of mind blindness. This is an insight, a wisdom, that can help us become more compassionate. Daniel is right, human beings are visually dependent. It took me a long time to understand and to finally agree with Daniel.

Daniel didn’t resent his twelve years of special education classes in public school; he was grateful for the kindness, but it did bother him that the natives–his teachers–didn’t consider their own need for special help.

Human beings are mostly blind. Vision is actually a perception, a window that floats in front of the face. However, the rest of the body is blind: no eyes behind the head, no eyes on the side of the head, no eyes on top of the head.

Furthermore, the vision system perceives only one octave of the electromagnetic spectrum. However, the auditory system can perceive over a ten octave range. Hearing also penetrates the environment to the sides and behind, which means it has a much broader spatial range than vision. This is why Daniel does not feel disabled by the loss of his eyesight; he has perfected auditory perception to a very high degree.
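The octave comparison is easy to verify, since an octave is just a doubling; the wavelength and frequency limits below are the commonly cited ones:

```python
import math

def octaves(low: float, high: float) -> float:
    """Number of doublings between two values on the same scale."""
    return math.log2(high / low)

# Visible light spans roughly 390-700 nm in wavelength: under one octave.
visible = octaves(390e-9, 700e-9)

# Human hearing spans roughly 20 Hz to 20 kHz: nearly ten octaves.
hearing = octaves(20.0, 20_000.0)
```

By this measure, hearing samples more than ten times the span of its medium that vision does, which is the point Daniel keeps making.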


When I was a young hippy, with my purse of Spanish leather and hair down to my shoulders, I drifted all over the planet, soaking up a peculiar variety of sophistication. Those experiences in the world defined me, gave me my eagle’s wings. I was proud of myself for walking away from my culture long enough to absorb the complexity of human societies different from my own. But all my self-congratulations cannot begin to hold a candle to the travels Daniel undertakes routinely.

Daniel travels with no guide dog and few technologies–mostly, he travels alone. Yet he flows through the world without falling down steps or crashing into walls. When we are together, walking through the streets of Toronto or Saginaw, I find myself jogging, trying to maintain his pace. He is careful, alert, but he is also fearless.

Daniel travels the world giving lectures and demonstrating how echolocation can be used by blind individuals to navigate fluidly through any kind of environment. He has walked alone through the streets of Calcutta. He has strolled without a sighted guide along the dusty pathways in remote villages in India. He has walked at his fast pace along the teeming roads of Jakarta in Indonesia; through the streets of Moscow and Edinburgh; in Cape Town, South Africa; and in remote regions of Mexico–to name just a few of his travels.

Yet airports frustrate him; he will readily admit that. It’s not that he can’t navigate the spaces. It is simply that airports lack signage.

“The world is designed to assist visually dependent people,” he tells me. “We post signs, arrows, and symbols all over the landscape so these sighted creatures don’t get lost.”

With today’s technology, indoor talking electronic navigation systems could produce a similar type of signage for visually impaired travelers. For example, Talking Signs is a company that uses remote infrared audible signage (RIAS) to manufacture systems for people unable to use visual signs and symbols.

As Daniel puts it: “If they put up signage for blind people, we wouldn’t get lost either.”
