Hospitalized adult patients registered in the Trauma Registry System between January 2009 and December 2015 were enrolled in this study. Only patients with head injuries with an Abbreviated Injury Scale (AIS) score ≥ 3 were included. A total of 1734 patients (1564 survivors and 170 non-survivors) and 325 patients (293 survivors and 32 non-survivors) were included in the training and test sets, respectively.
Using demographics and injury characteristics, as well as patient laboratory data, five predictive models (logistic regression [LR], support vector machine [SVM], decision tree [DT], naive Bayes [NB], and artificial neural network [ANN]) were used to predict the mortality of individual patients. Predictive performance was evaluated by accuracy, sensitivity, and specificity, as well as by the area under the curve (AUC) of the receiver operating characteristic curve. In the training set, all five ML models had a specificity of more than 90%, and all except the NB achieved an accuracy of more than 90%. Among them, the ANN had the highest sensitivity (80.59%) for mortality prediction. The ANN also had the highest AUC (0.968), followed by the LR (0.942), SVM (0.935), NB (0.908), and DT (0.872). In the test set, the ANN again had the highest sensitivity (84.38%) for mortality prediction, followed by the SVM (65.63%), LR (59.38%), NB (59.38%), and DT (43.75%).
The ANN model provided the best prediction of mortality for patients with isolated moderate and severe TBI 1).
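The metrics reported above (accuracy, sensitivity, specificity, AUC) can all be computed from binary predictions and ranked risk scores. A minimal stdlib-Python sketch follows; the function names are illustrative, not from the study, and the AUC uses the rank-based Mann-Whitney formulation rather than trapezoidal integration of the ROC curve:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity from binary labels (1 = death)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # recall on non-survivors
        "specificity": tn / (tn + fp),  # recall on survivors
    }


def auc(y_true, y_score):
    """AUC as the probability that a randomly chosen non-survivor is
    assigned a higher risk score than a randomly chosen survivor
    (ties count half)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This view of the AUC also explains why a model can have high specificity yet low sensitivity, as the DT does in the test set: the two are computed on disjoint subgroups (survivors vs. non-survivors).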
A study aimed to validate the use of an ANN model for predicting in-hospital mortality after traumatic brain injury (TBI) surgery and to compare its predictive accuracy with that of a logistic regression model.
The authors of this study retrospectively analyzed 16,956 patients with TBI nationwide who were surgically treated in Taiwan between 1998 and 2009. For 1000 pairs of ANN and logistic regression models, the area under the receiver operating characteristic curve (AUC), the Hosmer-Lemeshow statistic, and the accuracy rate were calculated and compared using paired t-tests. A global sensitivity analysis was also performed to assess the relative importance of input parameters in the ANN model and to rank the variables in order of importance.
The ANN model outperformed the logistic regression model in terms of accuracy in 95.15% of cases, in terms of Hosmer-Lemeshow statistics in 43.68% of cases, and in terms of the AUC in 89.14% of cases. The global sensitivity analysis of in-hospital mortality also showed that the most influential (sensitive) parameters in the ANN model were surgeon volume followed by hospital volume, Charlson comorbidity index score, length of stay, sex, and age.
This work supports the continued use of ANNs for predictive modeling of neurosurgery outcomes. However, further studies are needed to confirm the clinical efficacy of the proposed model 2).
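The pairwise comparison used in this study, fitting both models on each of many resampled splits and running a paired t-test on the per-split performance values, can be sketched in stdlib Python. The AUC lists below are illustrative stand-ins, not the study's data, and `paired_t` is a hypothetical helper name:

```python
import math
import statistics


def paired_t(a, b):
    """Paired t-statistic for matched samples, e.g. per-split AUCs of two
    models fitted on the same data splits. Significance would be read off
    Student's t distribution with n - 1 degrees of freedom."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))


# Illustrative per-split AUCs for two models (hypothetical numbers).
ann_auc = [0.89, 0.91, 0.90, 0.92, 0.88, 0.90]
lr_auc = [0.87, 0.88, 0.86, 0.90, 0.84, 0.88]
t_stat = paired_t(ann_auc, lr_auc)  # large positive t favors the ANN
```

Pairing the splits is what allows the test to use the per-split differences, removing split-to-split variability that would otherwise swamp a small but consistent performance gap.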
Rughani et al. described the artificial neural network (ANN) as an innovative and powerful modeling tool that can be increasingly applied to develop predictive models in neurosurgery. They aimed to demonstrate the utility of an ANN in predicting survival following traumatic brain injury and compare its predictive ability with that of regression models and clinicians.
The authors designed an ANN to predict in-hospital survival following traumatic brain injury. The model was generated with 11 clinical inputs and a single output. Using a subset of the National Trauma Database, the authors “trained” the model to predict outcome by providing the model with patients for whom 11 clinical inputs were paired with known outcomes, which allowed the ANN to “learn” the relevant relationships that predict outcome. The model was tested against actual outcomes in a novel subset of 100 patients derived from the same database. For comparison with traditional forms of modeling, 2 regression models were developed using the same training set and were evaluated on the same testing set. Lastly, the authors used the same 100-patient testing set to evaluate 5 neurosurgery residents and 4 neurosurgery staff physicians on their ability to predict survival on the basis of the same 11 data points that were provided to the ANN. The ANN was compared with the clinicians and the regression models in terms of accuracy, sensitivity, specificity, and discrimination.
Compared with regression models, the ANN was more accurate (p < 0.001), more sensitive (p < 0.001), as specific (p = 0.260), and more discriminating (p < 0.001). There was no difference between the neurosurgery residents and staff physicians, and all clinicians were pooled to compare with the 5 best neural networks. The ANNs were more accurate (p < 0.0001), more sensitive (p < 0.0001), as specific (p = 0.743), and more discriminating (p < 0.0001) than the clinicians.
When given the same limited clinical information, the ANN significantly outperformed the regression models and the clinicians on multiple performance measures. While this paradigm certainly does not adequately reflect a real clinical scenario, this form of modeling could ultimately serve as a useful clinical decision support tool. As the model evolves to include more complex clinical variables, it remains to be seen whether its performance advantage over clinicians and logistic regression models will persist or increase 3).
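The architecture described above, a fixed set of clinical inputs feeding a network with a single survival output, can be illustrated with a toy one-hidden-layer network trained by stochastic gradient descent. This is a minimal stdlib-Python sketch of the general technique, not the authors' actual model, and the AND-gate data stand in for real clinical inputs:

```python
import math
import random


def train_ann(X, y, hidden=4, lr=0.5, epochs=2000, seed=0):
    """Train a one-hidden-layer sigmoid network with a single output
    (squared-error loss, per-sample gradient descent). Returns a
    predict(x) function yielding a probability-like score in (0, 1)."""
    rng = random.Random(seed)
    n_in = len(X[0])
    W1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0

    def sig(z):
        return 1.0 / (1.0 + math.exp(-z))

    for _ in range(epochs):
        for x, t in zip(X, y):
            # forward pass
            h = [sig(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            o = sig(sum(w * hi for w, hi in zip(W2, h)) + b2)
            # backward pass (chain rule on squared error)
            do = (o - t) * o * (1 - o)
            for j in range(hidden):
                dh = do * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * do * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * do

    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sig(sum(w * hi for w, hi in zip(W2, h)) + b2)

    return predict
```

The hidden layer is what lets such a model capture nonlinear interactions among inputs that a logistic regression, which is effectively this network with the hidden layer removed, cannot represent.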