ECR 2019 / C-2866
Cloud-based semi-automated liver segmentation: analytical study to compare its speed and accuracy with a semi-automated workstation based software
Congress:
ECR 2019
Poster Number:
C-2866
Type:
Scientific Exhibit
Keywords:
Artificial Intelligence, CT, Segmentation
Authors:
V. Venugopal1, A. Chunduru2, M. Barnwal3, D. S. Mahra4, A. Raj2, K. Vaidhya2, A. Rangasai Devalla2, V. Mahajan3, H. Mahajan 3; 1Aligarh/IN, 2Bangalore/IN, 3New Delhi/IN, 4Bengaluru, Karnataka/IN
DOI: 10.26044/ecr2019/C-2866
DOI-Link: https://dx.doi.org/10.26044/ecr2019/C-2866

Aims and objectives
Living Donor Liver Transplantation (LDLT) is the most common form of liver transplantation in Asia. While Deceased Donor Liver Transplantation (DDLT) constitutes more than 90% of liver transplants in the Western world, in India and many other Asian countries the majority of transplants performed are LDLT.
LDLT is now a well-established procedure and has reduced liver transplant waiting-list mortality. The complexity of the procedure, along with the risks to the donor, is the biggest obstacle to widespread use of this valuable treatment option. Pre-operative surgical evaluation and preparation is perhaps the single most important determinant of successful outcomes for both donors and recipients.
For that reason, all LDLT preoperative studies are designed to provide the most accurate information about anatomy, volume and function of the graft and remnant donor liver. These data are integrated with recipient clinical information to determine the optimal surgical strategy.
Preoperative analysis primarily involves segmentation of the liver and associated structures to calculate their volumes. Recently, deep neural networks have been shown to achieve outstanding performance in many computer vision tasks, including image classification[1], object detection[2], object segmentation[3], and instance segmentation[4]. They have also been used for medical image segmentation tasks, including the liver[5], brain tumours[6], multiple sclerosis lesions[7], and ischemic stroke[8].
The purpose of this study is to evaluate the performance of a fully automated, deep-neural-network-based post-processing solution for liver segmentation on MDCT image datasets. We compare the time taken to perform liver volumetry between (1) manual segmentation using commercially available software and (2) automated segmentation with manual refinement.
Methods and materials

Data Preparation
A test dataset of 15 multi-phasic contrast-enhanced CT scans was provided by Centre for Advanced Research in Imaging, Neurosciences and Genomics (CARING), New Delhi, India.
Exclusion criteria were:
- Morphologic features of cirrhosis
- History of prior liver/biliary surgery or liver tumor ablation procedures
- One or more liver lesions greater than 3 cm in size identified by CT or MRI
- Portal or hepatic vein thrombosis
All studies were acquired on a 128-slice GE Discovery IQ MDCT scanner. Images were acquired with a 512 x 512 matrix at an in-plane pixel size of 0.76 mm, reconstructing 0.6 mm thin slices. Individual contrast bolus tracking was performed using repetitive low-dose acquisitions (120 kVp / 40 mAs) with a threshold region-of-interest (ROI) placed within the abdominal aorta at the level of the diaphragm. Contrast wash-in was monitored until the ROI reached 150 HU, following administration of 100 ml of contrast agent (320 mg I/ml) injected at 4 ml/sec into a right antecubital vein using a CTA injector. The diagnostic arterial and portal-venous cranio-caudal helical hepatic MDCT acquisitions commenced 12 seconds and 60 seconds after the 150 HU wash-in, respectively.
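The bolus-tracking timing described above can be sketched as a simple threshold trigger: monitoring acquisitions sample the mean HU in the aortic ROI, and the two diagnostic phases start at fixed delays after the 150 HU wash-in point. All function names and the 1-second monitoring interval below are illustrative assumptions, not the scanner's actual implementation.

```python
# Hypothetical sketch of bolus-tracking phase timing. The ROI HU samples
# would come from the repetitive low-dose monitoring acquisitions.

def phase_start_times(roi_hu_samples, sample_interval_s=1.0,
                      threshold_hu=150.0,
                      arterial_delay_s=12.0, venous_delay_s=60.0):
    """Return (arterial_start, venous_start) in seconds after injection,
    or None if the ROI never reaches the wash-in threshold."""
    for i, hu in enumerate(roi_hu_samples):
        if hu >= threshold_hu:
            t_washin = i * sample_interval_s
            return t_washin + arterial_delay_s, t_washin + venous_delay_s
    return None

# Example: aortic enhancement crossing 150 HU at t = 14 s
samples = [40, 45, 50, 60, 70, 80, 95, 110, 120, 130, 138, 144, 148, 149, 152]
print(phase_start_times(samples))  # (26.0, 74.0)
```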
Manual vs Automated Liver Volumetry
We performed liver volumetry on two setups: (1) a commercially available CT Volume Viewer package and (2) PredibleLiver (Predible Health, Bengaluru, India), a liver-volumetry software package whose segmentations are initialized using DNNs (Fig. 2). All quantitative volumetric evaluations were performed by a radiologist (MD) with 7 years of experience. The radiologist performed manual and automated volumetry with an interval of 2 months between the two sessions.
Manual Volumetry
All studies in the test set were loaded into the CT Volumetric Viewer application and made available in axial, sagittal and coronal reformations. A seed pointer was placed centrally over internal portions of the liver, with an interactively controlled growing color-overlay region-of-interest (ROI) visible to the radiologist; region-growing speed (100 mL/sec), seed size (20 mm²) and sensitivity to attenuation differences (sensitivity 5, range 1–10) were standardized for the in-vitro phantom and in-vivo patient datasets. If color-overlay ROIs were noticed outside the liver on axial, sagittal or coronal reformations, an eraser tool with identical settings was used. This was repeated until the radiologist deemed the volumetric assessment appropriate. The CT Volume Viewer application was then prompted to provide the whole-liver volume.
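The seeded region-growing workflow above can be sketched in miniature. The 2D toy below is an illustrative assumption: the commercial tool operates in 3D with interactive speed and sensitivity controls, and the `tolerance` parameter only loosely stands in for its "sensitivity" setting.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance):
    """Grow a binary mask from `seed`, accepting 4-connected neighbours
    whose intensity is within `tolerance` of the seed intensity."""
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= tolerance):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy "CT slice": a bright square (liver-like) on a dark background
img = np.zeros((6, 6))
img[1:5, 1:5] = 100
print(region_grow(img, (2, 2), tolerance=10).sum())  # 16
```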
Automated Volumetry
Venous-phase CT scans of the abdomen were loaded into the PredibleLiver application, which performs liver segmentation without any user input. PredibleLiver's automated segmentation is based on deep neural networks (DNNs): it uses a 3D UNet[9] (Fig. 1), a popular DNN architecture for medical image segmentation, to segment the liver region in the venous-phase abdominal CT. The neural network takes less than a minute to generate the liver segmentation. The generated segmentation could then be modified by the radiologist using the region-grow and erase tools described in the previous section. The software was then prompted to provide the whole-liver volume.
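Once a binary liver mask exists (DNN-initialized and then manually refined), the whole-liver volume reduces to voxel count times voxel volume. A minimal sketch, assuming the mask is a boolean array with known voxel spacing; the spacing values in the example are illustrative, not the study protocol:

```python
import numpy as np

def liver_volume_ml(mask, spacing_mm):
    """Volume in millilitres of a boolean segmentation mask.
    spacing_mm = (z, y, x) voxel spacing in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Hypothetical mask: a 60x60x60-voxel cube inside a 100^3 volume
mask = np.zeros((100, 100, 100), dtype=bool)
mask[20:80, 20:80, 20:80] = True
print(liver_volume_ml(mask, (0.6, 0.76, 0.76)))  # ~74.9 mL
```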
DNN Training
A CNN based on the UNet[9] architecture (Fig. 1) was trained on 324 triphasic contrast-enhanced abdominal CT scans. The data were annotated by a radiology technician and the annotations were approved by a radiologist. The contrast-enhancement protocol varied across the dataset, which included images of different slice thicknesses and pixel spacings. This training dataset was independent of the test dataset.
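The poster does not state the training objective; the soft Dice score below is a common choice for UNet-style liver segmentation and is shown purely as an illustrative sketch (numpy stands in for a deep learning framework; `1 - soft_dice` would serve as the loss).

```python
import numpy as np

def soft_dice(pred, target, eps=1e-6):
    """Soft Dice overlap between a predicted probability map and a binary
    ground-truth mask; 1.0 means perfect overlap."""
    pred = pred.astype(float).ravel()
    target = target.astype(float).ravel()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

gt = np.zeros((8, 8))
gt[2:6, 2:6] = 1
print(round(soft_dice(gt, gt), 3))                # 1.0 (perfect overlap)
print(round(soft_dice(np.zeros((8, 8)), gt), 3))  # 0.0 (no overlap)
```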
Evaluation
We measured the time taken to perform volumetry on both setups, along with the final volumes obtained, in order to assess the consistency of the volumetric measurements.
Results


Consistency in Liver Volumes
Fig. 4 shows the volumes (in ml) obtained on the two setups. The liver volumes are consistent across the two setups, with a maximum variation of 2.3% and an average variation of 0.9%.
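One simple way to express the per-study agreement above is the absolute difference between the two volume estimates relative to their mean. The definition and the volumes below are illustrative assumptions, since the individual study values appear only in Fig. 4.

```python
def percent_variation(v_manual_ml, v_auto_ml):
    """Absolute volume difference as a percentage of the mean volume."""
    mean = (v_manual_ml + v_auto_ml) / 2.0
    return abs(v_manual_ml - v_auto_ml) / mean * 100.0

# Hypothetical study: 1500 ml manual vs 1470 ml automated
print(round(percent_variation(1500.0, 1470.0), 2))  # 2.02
```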
Duration of Segmentation : Manual vs Automated
Fig. 3 shows the time taken (in minutes) to perform liver volumetry on the two setups. Automation accelerated the post-processing significantly: automated liver volumetry on PredibleLiver took an average of 3.5 minutes, compared to 14.6 minutes on the commercial CT Volume Viewer.
Conclusion
This study shows that liver volumetry post-processing can be significantly accelerated by initializing with deep-learning-based segmentation. We compared two setups: (1) a commercial CT Volume Viewer and (2) PredibleLiver. Across 15 studies, PredibleLiver required less time for volumetric assessment because its segmentations come pre-initialized by deep learning.
Personal information
Please do not hesitate to reach out to us in case you have any questions or comments:
Vasantha Venugopal, MD
Centre for Advanced Research in Imaging, Neuroscience & Genomics, (CARING),
Mahajan Imaging, E-19, Defence Colony,
New Delhi, INDIA
+91 9871438999
drvasanth@mahajanimaging.com
www.caring-research.com
Abhijit Chunduru, BTech
Predible Health
IKP Eden, #16, Bhuvanappa Layout, Adugodi,
Bangalore, INDIA
+91 9790742906
abhijith@prediblehealth.com
www.prediblehealth.com
References
1. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. CoRR abs/1512.00567 (2015)
2. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: NIPS. (2015)
3. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proc. CVPR. pp. 3431–3440 (2015)
4. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic image segmentation with deep convolutional nets and fully connected crfs. In: ICLR. (2015)
5. Ben-Cohen, A., Diamant, I., Klang, E., Amitai, M., Greenspan, H.: Fully convolutional network for liver segmentation and lesions detection. In: DLMIA / LABELS Workshops, Lect Notes Comput Sci, vol. 10008, pp. 77–85 (2016)
6. Vaidhya, K., Thirunavukkarasu, S., Varghese, A., Krishnamurthi, G.: Multi-modal brain tumor segmentation using stacked denoising autoencoders. In: BrainLes@MICCAI (2015)
7. Vaidya, S., et al.: Longitudinal multiple sclerosis lesion segmentation using 3D convolutional neural networks. In: Proc. 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge, pp. 1–2 (2015)
8. Chen, L., Bentley, P., Rueckert, D.: Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks. Neuroimage Clin. 15, 633–643 (2017)
9. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Proc. MICCAI, pp. 234–241 (2015)


