
Journal Article

Citation

Kaji AH, Lewis RJ. Ann. Emerg. Med. 2008; 52(3): 204-10, 210.e1-8.

Affiliation

Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, CA 90509, USA. akaji@emedharbor.edu

Comment In:

Ann. Emerg. Med. 2008; 52(3): 230-231.

Copyright

(Copyright © 2008, American College of Emergency Physicians; published by Elsevier)

DOI

10.1016/j.annemergmed.2007.07.025

PMID

17933427

Abstract

STUDY OBJECTIVE: The Joint Commission requires hospitals to implement 2 disaster drills per year to test the response phase of their emergency management plans. Despite this requirement, there is no direct evidence that such drills improve disaster response. Furthermore, there is no generally accepted, validated tool to evaluate hospital performance during disaster drills. We characterize the internal and interrater reliability of a hospital disaster drill performance evaluation tool developed by the Johns Hopkins University Evidence-based Practice Center, under contract from the Agency for Healthcare Research and Quality (AHRQ).

METHODS: We evaluated the reliability of the Johns Hopkins/AHRQ drill performance evaluation tool by applying it to multiple hospitals in Los Angeles County, CA, participating in the November 2005 California statewide disaster drill. Thirty-two fourth-year medical student observers were deployed to specific zones (incident command, triage, treatment, and decontamination) in participating hospitals. Each observer completed common tool items, as well as tool items specific to their hospital zone. Two hundred items from the tool were dichotomously coded as indicating better versus poorer preparedness. An unweighted "raw performance" score was calculated by summing these dichotomous indicators. To quantify internal reliability, we calculated the Kuder-Richardson interitem consistency coefficient, and to assess interrater reliability, we computed the kappa coefficient for each of the 11 pairs of observers who were deployed within the same hospital and zone.

RESULTS: Of 17 invited hospitals, 6 agreed to participate. The raw performance scores for the 94 common items ranged from 18 (19%) to 63 (67%) across hospitals and zones. The raw performance scores of zone-specific items ranged from 14 of 45 (31%) to 30 of 45 (67%) in the incident command zone, from 2 of 17 (12%) to 15 of 17 (88%) in the triage zone, from 19 of 26 (73%) to 22 of 26 (85%) in the treatment zone, and from 2 of 18 (11%) to 10 of 18 (56%) in the decontamination zone. The Kuder-Richardson internal reliability, by zone, ranged from 0.72 (95% confidence interval [CI] 0.58 to 0.87) in the treatment zone to 0.97 (95% CI 0.95 to 0.99) in the incident command zone. The interrater reliability ranged, across hospital zones, from 0.24 (95% CI 0.09 to 0.38) to 0.72 (95% CI 0.63 to 0.81) for the 11 pairs of observers.

CONCLUSION: We found a high degree of internal reliability in the AHRQ instrument's items, suggesting the underlying construct of hospital preparedness is valid. Conversely, we found substantial variability in interrater reliability, suggesting that the instrument needs revision or substantial user training, as well as verification of interrater reliability in a particular setting before use.
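
For readers unfamiliar with the statistics named in the abstract, the sketch below (Python, using made-up 0/1 item codings; not the study's own code or data) illustrates the three quantities described in the METHODS: an unweighted raw performance score obtained by summing dichotomous indicators, the Kuder-Richardson (KR-20) interitem consistency coefficient, and an unweighted Cohen's kappa for a pair of observers coding the same items. All array shapes and variable names are illustrative assumptions.

```python
# A minimal sketch (not the study's code) of the scoring and reliability
# statistics described in the abstract, using made-up 0/1 item codings.
import numpy as np


def raw_performance_score(items):
    """Unweighted raw score: the sum of dichotomous (0/1) item indicators."""
    return int(np.sum(items))


def kuder_richardson_20(responses):
    """KR-20 internal consistency for an (observations x items) 0/1 matrix."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion coded 1 per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1.0)) * (1.0 - np.sum(p * q) / total_var)


def cohens_kappa(rater_a, rater_b):
    """Unweighted kappa for two observers coding the same dichotomous items."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_observed = np.mean(a == b)
    # Chance agreement from each observer's marginal proportion of 1s and 0s
    p_chance = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (p_observed - p_chance) / (1.0 - p_chance)


# Illustrative data only: 6 "hospitals" x 10 items, plus one observer pair
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(6, 10))
print("Raw score, hospital 0:", raw_performance_score(scores[0]))
print("KR-20:", round(kuder_richardson_20(scores), 2))
print("Kappa:", round(cohens_kappa(scores[0], scores[1]), 2))
```

In the study itself these statistics would be computed per zone and per observer pair; the confidence intervals reported in the RESULTS would require an additional procedure (for example, a bootstrap), which is omitted from this sketch.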


Language: en
