Scientific Foundations for Assessment of Surgical Technical Skills

Project: Research project

Project Details

Description

PROJECT SUMMARY

Surgical technical skills directly impact patient outcomes, yet there remains a need for objective, accurate, and inexpensive methods to measure such skills in a manner that can scale to the large number of surgical residents and practitioners. The long-term goal of this research program is to improve surgical training and assessment by establishing more scientifically rigorous foundations for accurate, objective evaluation of surgical technical skills and their relationship to surgical outcomes. The overall objective of this proposal is to determine the biases, limitations, and absolute accuracy inherent in the putative gold standard of surgical technical skill evaluation, namely review of video footage by a panel of human raters, across representative procedures in three surgical specialties: urology, gynecology, and orthopedic surgery. The central hypothesis is that both expert and non-expert raters are subject to unconscious bias and to limitations in their capacity to evaluate surgical technical skills objectively and accurately. Testing this hypothesis will advance the science and improve the practice of surgical skill evaluation in surgical residency programs.

The research will answer the following questions, at least for representative procedures from urology (robotic prostatectomy), gynecology (robotic hysterectomy), and orthopedic surgery (hip fracture fixation and pedicle screw placement).

1. What is the magnitude of identity bias (gender, ethnicity, etc.) in the evaluation of surgical technical skills?
   a. How do ratings change from the control condition (identity-blind) to identity-visible conditions for gender, race, age, or perceived reputation?
   b. How does this bias change across skill levels? (For example, preliminary evidence shows that perceived females are penalized more severely than males at novice skill levels, but less so at proficient levels.)
   c. How much do faculty semester evaluations of resident technical skill (current widespread practice) differ from identity-blind skill evaluation, i.e., anonymized review of video, particularly across gender?
2. What user interface and review conditions for identity-blind, web-enabled review of surgical video best maximize skill discrimination while minimizing reviewer resource cost?
3. What is the absolute accuracy of human raters? That is, how imperfect is the gold standard of technical skill evaluation?
Status: Active
Effective start/end date: 8/18/21 – 7/31/24

Funding

  • National Institute of Biomedical Imaging and Bioengineering: $341,357.00
  • National Institute of Biomedical Imaging and Bioengineering: $353,732.00
