
Convolutional neural network

In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery.

CNNs are regularized versions of multilayer perceptrons. Multilayer perceptrons usually mean fully connected networks, in which each neuron in one layer is connected to all neurons in the next layer. The fully connected nature of these networks makes them prone to overfitting. Typical regularization approaches add some form of magnitude penalty on the weights to the loss function. CNNs take a different approach: they exploit the hierarchical pattern in data and assemble complex patterns from smaller, simpler ones. On the scale of connectedness and complexity, CNNs therefore sit at the lower extreme.
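The contrast between full connectivity and a convolution's local, shared weights can be made concrete by counting parameters. The layer sizes below (a 32x32 RGB input, 16 output channels, a 3x3 kernel) are illustrative assumptions, not values from the text:

```python
# Parameter counts for a fully connected layer vs. a convolutional layer,
# both mapping a 32x32 RGB input to 16 output channels / feature maps.
# Sizes are arbitrary assumptions for illustration; biases are ignored.

in_h, in_w, in_c = 32, 32, 3   # input: 32x32 RGB image
out_c = 16                     # 16 output feature maps (or units per pixel)

# Fully connected: every output unit has a weight for every input value.
fc_params = (in_h * in_w * in_c) * (in_h * in_w * out_c)

# Convolution: one 3x3 kernel per (input, output) channel pair,
# shared across all spatial positions.
k = 3
conv_params = k * k * in_c * out_c

print(fc_params)    # 50331648 weights
print(conv_params)  # 432 weights
```

The five-orders-of-magnitude gap is exactly the "lower extreme of connectedness" the paragraph describes: weight sharing acts as a built-in structural regularizer.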


Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique, but require large annotated datasets for training. Annotation of 3D medical images is time-consuming, requires highly trained raters and may suffer from high inter-rater variability. Self-supervised learning strategies can leverage unlabeled data for training.


Previous studies have used artificial intelligence to attempt to expedite the diagnosis of intracranial hemorrhage (ICH) on neuroimaging. However, these studies used local, institution-specific data for network training, which limits deployment across broader hospital networks or regions because of data biases.

The objective was to create a neural network trained on openly available imaging data and tested on data from the authors' institution, demonstrating a high-efficacy, institution-agnostic network.

A data set was created from publicly available noncontrast computed tomography images of known ICH. These data were used to train a neural network using distinct windowing and augmentation. This network was then validated in 2 phases using cohort-based (phase 1) and longitudinal (phase 2) approaches.
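The "distinct windowing" step above refers to clipping CT Hounsfield units to diagnostic intensity ranges before feeding slices to the network. A hedged sketch follows; the three window settings (brain, subdural, bone) are common radiology defaults stacked as channels, an assumption about the approach rather than values reported in the cited study:

```python
import numpy as np

def window_ct(hu, center, width):
    """Clip Hounsfield units to [center - width/2, center + width/2]
    and rescale the result to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def three_channel(hu_slice):
    """Stack three standard windows as channels, mimicking an RGB
    input for a 2D CNN. Window settings are assumed defaults."""
    return np.stack([
        window_ct(hu_slice, 40, 80),     # brain window
        window_ct(hu_slice, 80, 200),    # subdural window
        window_ct(hu_slice, 600, 2800),  # bone window
    ], axis=-1)

# Synthetic slice standing in for a real noncontrast head CT.
slice_hu = np.random.randint(-1000, 2000, size=(512, 512))
x = three_channel(slice_hu)
print(x.shape)  # (512, 512, 3)
```

Stacking windows this way lets a network pretrained on RGB images ingest CT data while preserving the contrast ranges radiologists actually read.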

The convolutional neural network was trained on 752 807 openly available slices, which included 112 762 slices containing intracranial hemorrhage. In phase 1, the final network performance for intracranial hemorrhage showed an area under the receiver operating characteristic curve (AUC) of 0.99. At the inflection point, the model showed a sensitivity of 98% at a threshold specificity of 99%. In phase 2, the authors obtained an AUC of 0.98 after analysis of 726 scans, with a negative predictive value of 99.70% (n = 726).
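The two reported metrics, AUC and sensitivity at a fixed 99% specificity, can be sketched from first principles. The scores below are synthetic stand-ins for real model outputs, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
neg = rng.normal(0.2, 0.1, 1000)   # scores for scans without hemorrhage
pos = rng.normal(0.8, 0.1, 1000)   # scores for scans with hemorrhage

# AUC = probability that a randomly chosen positive outranks a
# randomly chosen negative (the rank interpretation of ROC area).
auc = (pos[:, None] > neg[None, :]).mean()

# Pick the threshold achieving 99% specificity on the negatives,
# then measure sensitivity on the positives at that threshold.
threshold = np.quantile(neg, 0.99)
sensitivity = (pos > threshold).mean()

print(round(auc, 3), round(sensitivity, 3))
```

Reporting sensitivity at a fixed high specificity, as the study does, reflects the screening use case: the threshold is tuned so that false alarms on negative scans stay rare.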

Hopkins et al. demonstrated an effective neural network trained on completely open data for screening ICH at an unrelated institution. This study demonstrates a proof of concept for screening networks for multiple sites while maintaining high efficacy 1).


Pérez-García et al. developed an algorithm to simulate resections on preoperative magnetic resonance images (MRIs). They performed self-supervised training of a 3D CNN for resection cavity (RC) segmentation using their own simulation method. They curated EPISURG, a dataset comprising 430 postoperative and 268 preoperative MRIs from 430 refractory epilepsy patients who underwent resective neurosurgery. They fine-tuned the model on three small annotated datasets from different institutions and on the annotated images in EPISURG, comprising 20, 33, 19 and 133 subjects.

The model trained on data with simulated resections obtained median (interquartile range) Dice score coefficients (DSCs) of 81.7 (16.4), 82.4 (36.4), 74.9 (24.2) and 80.5 (18.7) for each of the four datasets. After fine-tuning, DSCs were 89.2 (13.3), 84.1 (19.8), 80.2 (20.1) and 85.2 (10.8). For comparison, inter-rater agreement between human annotators from their previous study was 84.0 (9.9).
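The Dice coefficient quoted throughout these abstracts is a simple overlap measure between two binary masks. A minimal implementation on toy masks (assumed data, not from the study):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), in [0, 1].
    Two empty masks are treated as a perfect match."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), dtype=int)
truth = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1    # 16 predicted voxels
truth[3:7, 3:7] = 1   # 16 reference voxels, overlapping in a 3x3 block

print(dice(pred, truth))  # 2*9 / (16 + 16) = 0.5625
```

A DSC of 84.0 between two human raters, as reported above, is why the fine-tuned model's 89.2 can fairly be called comparable to expert performance.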

They presented a self-supervised learning strategy for 3D CNNs using simulated RCs to accurately segment real RCs on postoperative MRI. The method generalizes well to data from different institutions, pathologies and modalities. Source code, segmentation models and the EPISURG dataset are available at https://github.com/fepegar/resseg-ijcars 2)


It is difficult to generate a large amount of high-quality annotation data to train fully convolutional networks (FCNs) for medical image segmentation. Thus, it is desirable to achieve high segmentation performance even from incomplete training data 3).


In long-term video monitoring, automatic seizure detection holds great promise as a means to reduce the workload of the epileptologist. A convolutional neural network (CNN) designed to process images of EEG plots demonstrated high performance for seizure detection, but still has room to reduce the false-alarm rate.

Methods: We combined a CNN that processed images of EEG plots with patient-specific autoencoders (AE) of EEG signals to reduce false alarms during seizure detection. The AE automatically logged abnormalities, i.e., both seizures and artifacts. Based on seizure logs compiled by expert epileptologists and errors made by the AE, we constructed a CNN with 3 output classes: seizure, non-seizure-but-abnormal, and non-seizure. A count of consecutive seizure labels was used to decide when to issue a seizure alarm.

Results: The second-by-second classification performance of AE-CNN was comparable to that of the original CNN. False-positive seizure labels in AE-CNN were more likely to be interleaved with "non-seizure-but-abnormal" labels than true-positive seizure labels were. Consequently, "non-seizure-but-abnormal" labels interrupted runs of false-positive seizure labels before an alarm was triggered. The median false alarm rate with the AE-CNN was reduced to 0.034 h⁻¹, one-fifth that of the original CNN (0.17 h⁻¹).
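The alarm rule described above, where an alarm fires only after a run of consecutive per-second "seizure" labels and any interleaved "non-seizure-but-abnormal" label resets the run, can be sketched as follows. The label names and the run-length threshold are illustrative assumptions, not the study's parameters:

```python
SEIZURE, ABNORMAL, NORMAL = "sz", "abn", "ok"

def alarm_times(labels, min_run=5):
    """Return the indices at which a seizure alarm would be issued:
    whenever a run of consecutive SEIZURE labels reaches min_run."""
    alarms, run = [], 0
    for i, lab in enumerate(labels):
        run = run + 1 if lab == SEIZURE else 0  # any other label resets
        if run == min_run:
            alarms.append(i)
    return alarms

# A true seizure: an uninterrupted run of seizure labels triggers an alarm.
true_sz = [NORMAL] * 3 + [SEIZURE] * 6
# An artifact: seizure labels interleaved with "abnormal" never accumulate.
artifact = [SEIZURE, SEIZURE, ABNORMAL, SEIZURE, SEIZURE, ABNORMAL, SEIZURE]

print(alarm_times(true_sz))   # [7]
print(alarm_times(artifact))  # []
```

This is why the third class pays off: the AE's unsupervised "abnormal" labels break up spurious runs without suppressing genuine, sustained seizure activity.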

Conclusions: A label of “non-seizure-but-abnormal” offers practical benefits for seizure detection. The modification of a CNN with an AE is worth considering because AEs can automatically assign “non-seizure-but-abnormal” labels in an unsupervised manner with no additional demands on the time of the epileptologist 4).


Bae et al. investigated a novel method using a 2D convolutional neural network (CNN) to identify superior and inferior vertebrae in a single slice of CT images, and a post-processing method for 3D segmentation and separation of the cervical vertebrae.

The cervical spines of patients (N = 17, 1684 slices) from Severance and Gangnam Severance Hospitals (S/GSH) and healthy controls (N = 24, 3490 slices) from Seoul National University Bundang Hospital (SNUBH) were scanned by using various volumetric CT protocols. To prepare gold standard masks of the cervical spine in CT images, each spine was segmented by using conventional image-processing methods and manually corrected by an expert. The gold standard masks were preprocessed and labeled into superior and inferior cervical vertebrae separately in the axial slices. The 2D U-Net model was trained by using the disease dataset (S/GSH) and additional validation was performed by using the healthy control dataset (SNUBH), and then the training and validation were repeated by switching the two datasets.

When the model was trained with the disease dataset (S/GSH) and validated with the healthy control dataset (SNUBH), the mean and standard deviation (SD) of the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), mean surface distance (MSD), and Hausdorff surface distance (HSD) were 94.37 ± 1.45%, 89.47 ± 2.55%, 0.33 ± 0.12 mm and 20.89 ± 3.98 mm, and 88.67 ± 5.82%, 80.83 ± 8.09%, 1.05 ± 0.63 mm and 29.17 ± 19.74 mm, respectively. When the model was trained with the healthy control dataset (SNUBH) and validated with the disease dataset (S/GSH), the mean and SD of DSC, JSC, MSD, and HSD were 96.23 ± 1.55%, 92.95 ± 2.58%, 0.39 ± 0.20 mm and 16.23 ± 6.72 mm, and 93.15 ± 3.09%, 87.54 ± 5.11%, 0.38 ± 0.17 mm and 20.85 ± 7.11 mm, respectively.
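The DSC and JSC reported together above are not independent: for any single pair of masks, JSC = DSC / (2 − DSC). A quick sanity check against the reported means (which obey the identity only approximately, since it holds per subject rather than for averages):

```python
def jaccard_from_dice(dsc):
    """For one pair of binary masks, the Jaccard index follows
    directly from the Dice coefficient: J = D / (2 - D)."""
    return dsc / (2.0 - dsc)

# Reported (mean DSC, mean JSC) pairs from the two training directions.
for dsc, jsc_reported in [(0.9437, 0.8947), (0.9623, 0.9295)]:
    print(round(jaccard_from_dice(dsc), 4), "vs reported", jsc_reported)
```

The computed values (about 0.8934 and 0.9273) land close to the reported means, which is the expected behavior when the identity is averaged over subjects with similar scores.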

The results demonstrated that the fully automated method achieved accuracies comparable to the inter- and intra-observer variabilities of manual segmentation by human experts, which is time-consuming 5).


1)
Hopkins BS, Murthy NK, Texakalidis P, Karras CL, Mansell M, Jahromi BS, Potts MB, Dahdaleh NS. Mass Deployment of Deep Neural Network: Real-Time Proof of Concept With Screening of Intracranial Hemorrhage Using an Open Data Set. Neurosurgery. 2022 Feb 10. doi: 10.1227/NEU.0000000000001841. Epub ahead of print. PMID: 35132970.
2)
Pérez-García F, Dorent R, Rizzi M, Cardinale F, Frazzini V, Navarro V, Essert C, Ollivier I, Vercauteren T, Sparks R, Duncan JS, Ourselin S. A self-supervised learning strategy for postoperative brain cavity segmentation simulating resections. Int J Comput Assist Radiol Surg. 2021 Jun 13. doi: 10.1007/s11548-021-02420-2. Epub ahead of print. PMID: 34120269.
3)
Sugino T, Suzuki Y, Kin T, Saito N, Onogi S, Kawase T, Mori K, Nakajima Y. Label cleaning and propagation for improved segmentation performance using fully convolutional networks. Int J Comput Assist Radiol Surg. 2021 Mar 3. doi: 10.1007/s11548-021-02312-5. Epub ahead of print. PMID: 33655468.
4)
Takahashi H, Emami A, Shinozaki T, Kunii N, Matsuo T, Kawai K. Convolutional neural network with autoencoder-assisted multiclass labelling for seizure detection based on scalp electroencephalography. Comput Biol Med. 2020 Sep 26;125:104016. doi: 10.1016/j.compbiomed.2020.104016. Epub ahead of print. PMID: 33022521.
5)
Bae HJ, Hyun H, Byeon Y, Shin K, Cho Y, Song YJ, Yi S, Kuh SU, Yeom JS, Kim N. Fully automated 3D segmentation and separation of multiple cervical vertebrae in CT images using a 2D convolutional neural network. Comput Methods Programs Biomed. 2019 Oct 4;184:105119. doi: 10.1016/j.cmpb.2019.105119. Epub ahead of print. PMID: 31627152.