Strategies for Addressing User Needs


There are many ways to adapt an electronic device's behavior to the varying needs of its users. The following describes strategies for addressing the needs of users with a wide range of abilities and limitations.

If the user cannot see the device, make it say things so they can use their ears

Things that cannot be seen can be spoken, using synthesized or pre-recorded speech. For example, items on a touchscreen interface can be pressed and their names and contents said aloud, and button names on a device can be announced so that a user can explore an interface before selecting anything. Because the user needs to explore the interface by ear, it is important that touched items are not immediately selected, and an alternative means of selection is necessary: either requiring confirmation with another button (off to the side or edge, where it can easily be found and used without pressing other buttons), or pressing and holding a button down for a short delay until it is selected.
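The press-and-hold variant of this strategy can be sketched as follows. This is a minimal illustration, not an implementation from the text: the function names, the dwell time, and the use of `print` as a stand-in for speech output are all hypothetical.

```python
# Sketch of "explore first, select later": touching an item only announces
# it; selection requires holding past a dwell threshold, so the user can
# safely feel around the interface without activating anything.

DWELL_SECONDS = 1.0  # hypothetical hold time before a touch becomes a selection

def speak(text):
    """Placeholder for synthesized or pre-recorded speech output."""
    print(text)

def handle_touch(item_name, hold_seconds):
    """Announce the touched item; select it only after the dwell delay."""
    speak(item_name)                   # exploring: always say what was touched
    if hold_seconds >= DWELL_SECONDS:
        speak(item_name + " selected")
        return item_name               # selected
    return None                        # explored only, nothing activated
```

A brief touch merely speaks the item's name and returns nothing; only a sustained hold activates it.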

If the user cannot understand things that are said by the device, let them change the way it says it

Synthesized speech is malleable in the same way that text is. It can be made faster or slower, its pitch can be raised or lowered, and the basic voice can be changed (e.g. male, female, robotic, etc.).

If the user has difficulty seeing the device, let them change the way it looks

Text is malleable within the constraints of the visual interface. For example, fonts can be enlarged, switched between serif and sans serif, or made white on black or any other color combination. A visual interface may be constrained in the maximum size of the text, the colors that are available, and the clarity (resolution) possible.

If the user cannot locate the buttons on the device, let them use a list with only 3 buttons

If the user is unfamiliar with an interface, or cannot reliably find or remember where all of its buttons are, an alternative is to put all of the items into a list. The list contains all of the interface items (buttons, switches, text, etc.) arranged logically from top to bottom, and can be presented visually or auditorily. It can be accessed with 3 buttons (up / down / select), or by sliding a finger along an edge (e.g. of a touchscreen). The list works because it takes a two-dimensional interface and makes it one-dimensional. Although this is a cognitively more complex interface strategy, it does allow access by people who are unable to locate interface elements independently.
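The three-button access described above can be sketched as a simple cursor over the linearized items. The class and item names here are hypothetical; a real device would also highlight or speak the current item as the cursor moves.

```python
class ListNavigator:
    """Sketch of 3-button (up / down / select) access to an interface
    whose items have been flattened into a single top-to-bottom list."""

    def __init__(self, items):
        self.items = list(items)
        self.index = 0  # start at the top of the list

    def up(self):
        """Move the highlight up, stopping at the top of the list."""
        self.index = max(0, self.index - 1)
        return self.items[self.index]   # this item would be highlighted/spoken

    def down(self):
        """Move the highlight down, stopping at the bottom of the list."""
        self.index = min(len(self.items) - 1, self.index + 1)
        return self.items[self.index]

    def select(self):
        """Activate the currently highlighted item."""
        return self.items[self.index]
```

For example, `ListNavigator(["Volume", "Channel", "Power"])` starts on "Volume"; one press of down moves to "Channel", and select then activates it.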

If the user has difficulty hearing the device, let them change the way it sounds

Sound contains properties that can be altered, such as volume (loudness) and pitch. Modifying these can help users who are unable to hear a device operating normally. In addition, it is possible to directly connect hearing aids to sound sources, providing a better listening system (e.g. headphone jack connection or telephone hearing aid T-coil connection).

If the user cannot hear the sounds from the device, show the sounds visually

Any sounds that a device makes can be shown visually, for example by making a display or indicator light flash when a sound is made. Spoken text and sounds can be shown in “caption” form, enabling someone who cannot hear at all to have access to the same information as people who can easily hear.

If the user cannot be sure of pressing the right button, allow them to confirm button presses

If someone cannot reliably press individual buttons (for example because they are too small), it may be easier to let them confirm button selections. This can be done by highlighting the pressed button, or announcing it with synthesized or pre-recorded speech, before it takes effect. For example, when using a cellular phone outdoors in winter with large gloves on, you might aim for the “dial” button but hit “cancel” instead; nothing happens, so you try again until “dial” is highlighted, then press the confirm button, which sits off to the side of the phone away from all of the other buttons.
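This two-step pattern can be sketched as follows. The class and button names are hypothetical; the point is that a press only highlights, and a separate confirm key activates.

```python
class ConfirmedKeypad:
    """Sketch of two-step selection: pressing a button only highlights it
    (and could speak its name); a separate confirm key activates it."""

    def __init__(self):
        self.highlighted = None

    def press(self, button):
        """Highlight/announce the pressed button without activating it."""
        self.highlighted = button
        return button

    def confirm(self):
        """Activate whatever is highlighted (None if nothing is)."""
        activated = self.highlighted
        self.highlighted = None
        return activated
```

In the gloves example, an accidental press of “cancel” followed by a corrected press of “dial” means confirm activates only “dial”.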

If the user cannot provide speech input, allow them to use buttons instead

If someone cannot speak but an interface uses speech input, an alternative can be to press buttons or keys. For example, at the beginning of a computer-controlled operator telephone call, a starting prompt could be “press ‘1’ to use your touch-tone phone to control this call, or say ‘OK’ now to use speech control”.
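The starting prompt from the example can be sketched as a handler that accepts either modality. The prompt wording is from the text; the function name and return values are hypothetical.

```python
def choose_control_mode(user_input):
    """Sketch: "press '1' to use your touch-tone phone to control this
    call, or say 'OK' now to use speech control". Accepts either a key
    press or a recognized spoken keyword; returns the chosen mode."""
    if user_input == "1":
        return "touch-tone control"
    if user_input.strip().lower() == "ok":
        return "speech control"
    return None  # unrecognized: the system would repeat the prompt
```

Either input path leads to the same call-control features, so a user who cannot speak is not excluded.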

If the user cannot reach or touch the device, let them give commands by speech

If someone is unable to reach or see an interface, they can control it using speech input together with appropriate output (either audible (speech) or visual). Words spoken by the user are interpreted by the device and used as commands to control the interface.

If the user can see, but can only use one or two switches for input, let them step around the buttons using scanning

If someone can only use one or two switches (for example because they are paralyzed from the neck down), the interface can be controlled by highlighting (or saying aloud) each item one by one. When the item the user wants is highlighted, they select it with a single switch. With two switches, one switch advances the highlight and the other selects. Two switches give more flexibility and control, but not everyone can operate two switches, which is why single-switch scanning is also available. Note: it is possible to scan using auditory feedback, but such a user would more likely use speech output and a list to interact with the device.
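The two-switch case can be sketched as follows. Class and item names are hypothetical; in the one-switch case, `advance` would instead be called automatically on a timer, with the single switch bound to `select`.

```python
class TwoSwitchScanner:
    """Sketch of two-switch scanning: one switch steps the highlight
    through the items in turn, the other selects the highlighted item."""

    def __init__(self, items):
        self.items = list(items)
        self.index = 0  # scanning starts with the first item highlighted

    def advance(self):
        """Step the highlight to the next item, wrapping at the end."""
        self.index = (self.index + 1) % len(self.items)
        return self.items[self.index]   # this item is now highlighted/spoken

    def select(self):
        """Activate the currently highlighted item."""
        return self.items[self.index]
```

Because the highlight wraps around, a missed item simply comes around again on the next pass.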

If the user wants to use their own customized type of input and output, let them use Remote Control

If a user cannot reach an interface, they can control it remotely using a standardized infrared link. The user points the remote control at the device, and the device sends the remote control its available commands. The commands can be presented as a list (which enables them to be converted to hand-held speech output, Braille output, large-text output, etc.), or as a graphical image with buttons similar to those on the device itself. The remote control can be configured to meet the needs of the individual user: different displays and levels of information can be shown on the remote control, making the interface simpler to use.
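The idea of a device advertising its commands so the remote can render them in any form can be sketched as below. The dictionary format, function name, and the microwave example are all hypothetical illustrations, not a real protocol.

```python
# Sketch: the device sends the remote control a description of itself
# (here, a plain dictionary); the remote flattens it into a list that
# can be rendered as speech, Braille, large text, or on-screen buttons.

def render_as_list(device):
    """Flatten a device's advertised commands into a one-dimensional list
    suitable for the user's chosen output modality."""
    return [device["name"] + ": " + cmd for cmd in device["commands"]]

# Hypothetical device description, as it might arrive over the infrared link.
microwave = {"name": "Microwave", "commands": ["Start", "Stop", "Add 30 seconds"]}
```

The same description could instead drive a graphical rendering with buttons; the device does not need to know which presentation the user chose.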

If the user cannot use the standard language, let them change the language to one they can understand

All of the above means of interaction are of no use if the device's standard language is not the user's own and they cannot understand it, or understand it only with difficulty. Multi-language support will depend upon the location of the device: in the USA, for example, the two predominant languages are English and Spanish, but if a public information device were placed in a tourist area, then Chinese, Japanese, German, French, Italian, etc. would be useful to give visitors access to the information.
