Predicting length of stay at the post anesthesia care unit using machine learning and identifying associated risk factors using local explanation models
PhD project
within the Industrial Doctoral School at Umeå University
The purpose of the project is to ease the growing healthcare queues for emergency and planned surgery, and to optimise postoperative care, by predicting the expected treatment time at the postoperative care unit (PACU) at the University Hospital of northern Sweden.
In Sweden, postoperative care is a prerequisite for planned surgery to be carried out. Postoperative care resources therefore affect the flow of patients and can become a bottleneck if not utilised well. By predicting surgery times and postoperative treatment times using machine learning (ML), existing resources can be allocated optimally, increasing patient throughput and shortening healthcare queues.
Today, postoperative care is planned manually, based on patient data collected before the planned surgery. The data contains specific information about each individual patient, and it is up to the PACU staff to interpret this information and estimate the resources each patient will require.
Since treatment time can depend on several underlying factors, interpreting these data can become complicated and difficult to manage. An ML model is therefore proposed to uncover the relationships and patterns in the underlying data that drive treatment times. The model will serve as decision support in the planning of postoperative care, and its decisions must therefore be explainable to such an extent that all decision-makers, independently of one another, arrive at the same interpretation.
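As a concrete illustration of what such a model could look like, the sketch below trains a gradient-boosted regressor to predict PACU treatment time from preoperative data. It is a minimal sketch only: the feature names, the synthetic data, and the choice of model are assumptions for demonstration, not the project's actual data or method.

```python
# Minimal sketch, assuming tabular preoperative data and a gradient-boosted
# regressor. The feature names and synthetic values are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "asa_class": rng.integers(1, 5, n),      # ASA physical status (hypothetical feature)
    "planned_op_minutes": rng.integers(30, 300, n),
    "bmi": rng.normal(27, 5, n),
})
# Synthetic target: PACU treatment time in minutes, loosely driven by the features.
y = (60 + 0.5 * X["age"] + 15 * X["asa_class"]
     + 0.2 * X["planned_op_minutes"] + rng.normal(0, 20, n))

model = GradientBoostingRegressor()
# Out-of-sample mean absolute error: how far, on average, predicted treatment
# times deviate from the observed ones.
mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.1f} minutes")
```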
The goal is to create a predictive decision-support system using ML, and to use local explanation methods (LEMs) to make its decisions explainable in the form of individual risk factors for each intended patient. Explainable decisions are essential for building acceptance of and trust in ML methods in clinical environments, enabling their integration into those environments. The project thus contributes to technical development in a heavily burdened field through resource optimisation and, ultimately, increased societal benefit in the form of reduced healthcare queues.
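To make the explanation step concrete, the sketch below applies one local explanation method, SHAP, to a fitted model and reads off per-patient feature contributions as risk factors. SHAP is chosen here only as a familiar example of an LEM; the project does not commit to a specific method, and the features and data are the same illustrative assumptions as above.

```python
# Minimal sketch of a local explanation: SHAP attributes one patient's
# predicted PACU stay to individual features, which can be read as that
# patient's risk factors. The model, features, and data are illustrative
# assumptions, not the project's actual setup.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "asa_class": rng.integers(1, 5, n),
    "planned_op_minutes": rng.integers(30, 300, n),
    "bmi": rng.normal(27, 5, n),
})
y = (60 + 0.5 * X["age"] + 15 * X["asa_class"]
     + 0.2 * X["planned_op_minutes"] + rng.normal(0, 20, n))

model = GradientBoostingRegressor().fit(X, y)

explainer = shap.TreeExplainer(model)
patient = X.iloc[[0]]                      # one intended patient
contributions = explainer.shap_values(patient)[0]

# Each contribution is the number of minutes the feature adds to (or subtracts
# from) this patient's prediction relative to the average patient.
for name, minutes in sorted(zip(X.columns, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {minutes:+.1f} min")
print(f"predicted PACU stay: {model.predict(patient)[0]:.0f} min")
```

Ranked this way, the per-patient contributions play the role of the individual risk factors described above, giving PACU staff a patient-specific rationale for each predicted treatment time.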