The White Cane as Technology

A conversation with scholar Georgina Kleege about what her cane tells her, how tech designers should think about visual impairments, and why "Bluetooth shoes for the blind" are a terrible idea


A couple of months ago, a cartoon appeared in the New Yorker magazine depicting people crossing a busy New York street, staring at their smartphones, swinging white canes to sense their surroundings.

The image got me thinking about the purported “inattentional blindness” induced by smartphones, and the poorly understood functions of the white cane. So I emailed with Georgina Kleege, a literature scholar, professor at UC Berkeley, and a daily user of both a smartphone and a white cane. I asked her about these technologies, 20/20 vision, app design, and rethinking navigational smarts.

Sara Hendren: Whenever I see someone using a cane on the street, I perceive it as such an elegant wayfinding device—that is, when paired with the magnificent sensing system of the human body. Can you talk a little bit about how a person learns to deploy a cane for maximal sensitivity?

Georgina Kleege: I think there's a popular misconception that blind people use a cane as an extension of the hand to feel the space around us. But, along with my cane, I use hearing, touch, and sometimes even olfactory perception in combination to get me where I want to go.

The cane’s tip sweeps the ground before my feet to alert me to obstacles and curbs, and to announce details about the texture of the surface underfoot. On regular routes, changes in the pavement’s texture signal that I am approaching a destination or turning point. But while I attend to this tactile information I am very conscious of sounds: both the echoes of the sound the cane makes, which can sometimes tell me something about my surroundings, and the sound of traffic, children at play in a schoolyard, footsteps behind or coming toward me, music playing at a corner bar, and so forth. Restaurants, bakeries, flower stands, drugstores, and bookshops all exhale their particular scents. To take advantage of all this information, I direct my attention outward in all directions, creating a sort of sphere of perception to surround me as I move.

The cartoon seems to imply that a dependence on phones for information and social interaction necessitates a new prosthetic. But my guess is that a smartphone and a cane are an interesting combination when used in tandem. Do you use one more than the other, or both in different ways, or something else?

Since I need to rely on my hearing to get around, I tend not to use my phone when I'm in motion. I sometimes use GPS navigation with turn-by-turn directions spoken out loud, but it can be tricky if I'm also listening for traffic sounds and other people I might walk into. When I use GPS, I prefer to get the maximum amount of information; I want to hear all the street names, all the businesses I pass. I retain a memory of this for future reference: "Oh, there's a Thai restaurant across the street from that movie theater," that kind of thing.

Not long ago, I came across a project: “Bluetooth shoes for the blind.” The designers put sensors in the soles of a pair of shoes; a blind person would type a destination into the smartphone's GPS, and the shoes would vibrate to signal when to turn. The inventors admitted that these shoes would not help with maneuvering through crowded city streets. Nor would they distinguish between a curb and an open manhole. So I say—who are they kidding? It's an example of a kind of technology that’s supposed to be attractive because it would replace the cane, making the blindness less visible, and allowing the blind person to "pass" more successfully as sighted.

It's also an example of the well-meaning but utterly wrong-headed notion that "ubiquitous computing" will save the world. I see a troubling number of technologies now touting themselves as prototypes for blind or deaf users—as an afterthought application. They've figured out a gadget that's tactile in its sensing and response functions, say, and then they think: Great! Now—what's the application? And the go-to becomes disability tech. It takes a more challenging, sustained effort to look closely at the tools already in use, especially ones that support the dynamic work all human bodies do in making choices and judgments as complex as those required by spatial navigation.

And yes, "passing" is always driving too much of the design impetus around disability tech. I'm curious if there are new tools or ideas that do seem promising to you?

In general, I find that the pared-down format of apps versus their corresponding websites actually makes them easier to use. On a website for a bank or whatever, I sometimes have to tab around quite a bit to find what I'm looking for. But on the phone there's less information displayed on each page, so it's easier to find.

I have one blind-specific app that's pretty useful. It's called BlindSquare, and it's the equivalent of Foursquare, in that it tells me about businesses and points of interest near my current location, or a location I plan to visit. My favorite function is the "Look Around" feature. I can turn around and point the phone in different directions and it will tell me what's there. There's also a "Simulate Location" feature that lets me do this when I'm at a distance. So if I'm going to meet you at a coffee shop, I can check out what other businesses are nearby before I actually get there.

Since the screen reader is built into the iPhone rather than something that needs to be added on, I often recommend it to sighted people who find it hard to read the teeny tiny print on a web page, or in an iBook, etc. The same goes for the dictation function: It may have been put there originally for a certain population, but why shouldn't everyone use it?

There are now some apps that allow the user to take a picture of something, such as a restaurant menu or a food package in the supermarket, and then have the text read out loud. So far these are add-ons and thus expensive. And I've heard that they don't work so well, but I hold out hope.

You've written critically about the oversimplified notion of blindness as a state of having no vision at all, when in fact only 10-20 percent of people who are "legally blind"—in cultures where such a measure exists—see nothing. Most people, in other words, use the vision they have in connection with the sphere of perception you were describing. Some see high-contrast lights and darks, for example, or objects directly in front of them but not in their peripheral vision. And this is a spectrum that mirrors that of sightedness, too. There's a really big difference between someone seeing at 20/40 and someone seeing at 20/10—and to possess 20/20 vision, which is often associated with "seeing clearly" in every sense, is actually just to be statistically average. What implications does a more nuanced understanding of the relativity of vision have for technology design, either so-called "assistive" tech or personal devices?

I’d encourage tech designers to think in terms of a spectrum of human visual experience rather than in terms of a simple binary: blindness versus sightedness. And to think about the age of onset. How is it different to develop a visual impairment late in life (which is quite common) than to be born with one? Answering that question need not be a matter of degrees of tragedy—which state is “worse”—but simply a matter of difference.

I like the term “people with print disabilities,” which encompasses people with visual impairments as well as people with a wide range of cognitive processing issues—various learning disabilities—that affect the ability to process standard print material. The term forces us to think of print as the problem, rather than looking at the individual human being and his or her individual sensory and cognitive apparatus. So the technology solution needs to be flexible enough and diverse enough to allow users to pick and choose when and how they might use it.

Processing print material can be a problem for anyone depending on the situation, so many of us want devices where it’s easy to turn on text-to-speech, or text enlargement, or text highlighting, or some combination of these. The point is to recognize that there are all sorts of people who might prefer to receive information in non-normative ways but who would not identify as people with disabilities.

You cite Helen Keller's subdividing of her sense of touch into specific registers: the concrete differences among texture, temperature, and vibration, for example, each carried a powerful and distinct way of perceiving for her. You've written that it might be useful to unpack all of the five senses in this way, to understand each broad area as a complicated armature for sensing the world. Especially now that one of the central insights of recent neuroscience is the brain's apparently endless plasticity, the time may really have come for such a reconsideration. And for designed tools to better reflect that understanding.

In addition to those three aspects that Keller describes, we could perhaps add kinesthesia, meaning tactile sensation combined with motion; proprioception, meaning the sense of one’s body in contact with or proximity to objects; plus more general states of “feeling” having to do with energy levels: fatigue, repose, excitement, anxiety, playfulness. We could break down each of the traditional five senses in this way, including sight, since sight is not simply a matter of opening your eyes and seeing. There is such a complex array of different visual activities—reading and other fine-grain pattern recognition tasks, tracking objects, color perception, peripheral versus central awareness.

The other thing to think about is how the senses interact, overlap, and occur simultaneously or in sequence. Previously, I described how my navigation through space involves auditory, tactile, and olfactory experience occurring simultaneously and sequentially as I move from here to there. I can think of other instances where two of the traditional five senses work together to create a different kind of sensory experience. For instance, the sound of one's shoes interacting with the surface underfoot offers a different kind of tactile awareness. Similarly, running one’s hands over different substances—paper, silk, etc.—can involve both texture and sound. When you start thinking of all the combinations of sense experience, the possibilities are endless.

On a recent solo trip to San Diego, I realized that I was using my phone's map application, with spoken cues, to make the same short trip between my hotel and a conference center just five minutes away, several days in a row. I was really just willfully turning off the sphere of perception that I've relied on heavily most of my life: I made no attempt to remember landmarks and relationships and the look or feel of roads and such. I always reject the Luddite's knee-jerk certainty that the good life is being eroded by new technologies, but I think it's reasonable to say that outsourcing my multi-modal responsiveness and memory by asking a machine to tell me "how to get from here to there" may be impoverishing my overall sensory experience.

Well, it might be impoverishing your sensory experience, or it might be allowing you to save sensory aesthetic pleasure for other contexts. It may be the case that as people become more reliant on GPS turn-by-turn directions, they lose the ability to read maps. But I’m reminded that before there were maps as many of us know them today, maps recreating an aerial view of geography, people gave each other turn-by-turn directions. Medieval pilgrimage routes were a list of directions: You start at such-and-such a place, you travel south for half a day until you get to such-and-such a landmark, then you turn left, and so on. The authors of these directions did not necessarily need to indicate the relationship of the particular routes to roads and paths leading to other locations.
 
As you probably know, Josh Miele has been doing a lot of fascinating work creating tactile maps for blind people, and I suppose for sighted people who might find them useful. I have found that this way of representing space is quite alien to me. I get the concept: the map represents an aerial view, showing you the lay of the land as if you were looking down on it from above. But since I don’t really have actual experience of looking down at a landscape or a cityscape from above, I find it hard to reconcile the experience of walking from here to there with the bird’s-eye view of a map. And then there’s the issue of scale, which requires one to translate one’s actual scale to the scale of the map.

None of these different ways of representing space is any more or less intrinsic or natural to human beings. Designers of way-finding technologies should embrace these different ways of conceptualizing space and human movement through it.

Sara Hendren is an artist, researcher, and writer based in Cambridge, Massachusetts. She edits Abler and lectures at the Rhode Island School of Design.