SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Hakkinen MT, Williges BH. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 1982; 26(3): 204.

Copyright

(Copyright © 1982, Human Factors and Ergonomics Society, Publisher SAGE Publishing)

DOI

10.1177/154193128202600301

PMID

unavailable

Abstract

The use of Voice Synthesis Technology (VST) in human/computer dialogues has received little attention in the human factors literature; guidelines for implementing VST systems must therefore be based on data from natural-speech situations published in human factors handbooks. One of the more promising environments for VST is a high-workload situation in which the user's visual channel is heavily burdened with information related to the primary task while the auditory channel carries comparatively little. VST allows information presented visually to be shifted to the less loaded auditory channel; it can also serve as a redundant source of information to increase the probability that the user correctly receives a message. Characteristics of auditory signals, such as their alerting function, make them well suited to the presentation of urgent and time-critical messages. Although human factors design handbooks recommend that voice messages be preceded by an alerting tone (e.g., Woodson, 1981), recent research (Simpson and Williams, 1980) refutes this design guideline. The present study examined the effectiveness of cueing signals for auditory warning messages as a function of the amount of information presented by voice synthesis and of workload in a primarily visual task. At issue is whether information is lost because of the need to shift attention from one sensory channel to another.
Subjects performed a task similar to that found in a simplified air traffic control environment: they monitored two visual displays and entered commands via a standard keyboard. Workload was varied by changing the number of aircraft a subject had to control simultaneously. Emergency messages were always presented by synthesized speech; the presence of an alerting cue (light and tone) before the emergency messages and the presentation mode (visual or auditory) of non-critical messages were varied experimentally. When non-critical messages were presented by synthesized speech, subjects monitored only one display. Response times to detect and respond to emergency messages and to system-generated information requests, accuracy of message transcriptions, aircraft control performance, and subjective ratings of the presentation modes were analyzed. Recommendations for human factors engineers designing systems that use VST will be presented.


Language: en
