Pilot Testing the Debriefing for Meaningful Learning Evaluation Scale

Cynthia Sherraden Bradley, Kristina Thomas Dreifuerst

Research output: Contribution to journal › Article › peer-review


Abstract

Background: Debriefing for Meaningful Learning (DML), an evidence-based debriefing method, promotes thinking like a nurse through reflective learning. Despite widespread adoption of DML, little is known about how well it is implemented. To assess the effectiveness of DML implementation, an evaluative rubric was developed and tested.

Sample: Three debriefers who had been trained to use DML at least one year previously each submitted five recorded debriefings for evaluation.

Methods: Three raters who were experts in DML scored each of the 15 recorded debriefing sessions using the DML Evaluation Scale (DMLES). Observable behaviors were scored with binary options. These raters also assessed the items in the DMLES for content validity.

Results: Cronbach's alpha, intraclass correlation coefficients, and Content Validity Index scores were calculated to determine reliability and validity.

Conclusion: Use of the DMLES could support quality improvement, teacher preparation, and faculty development. Future testing is warranted to investigate the relationship between DML implementation and clinical reasoning.

Original language: English (US)
Pages (from-to): 277-280
Number of pages: 4
Journal: Clinical Simulation in Nursing
Volume: 12
Issue number: 7
DOIs
State: Published - Jul 1 2016
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2016 International Nursing Association for Clinical Simulation and Learning.

Keywords

  • DML
  • Debriefing
  • Debriefing evaluation
  • Effective debriefing
  • Measurement

