Empirical Analysis of Deep Convolutional Generative Adversarial Network for Ultrasound Image Synthesis
Dheeraj Kumar1, Mayuri A. Mehta2, Indranath Chatterjee3
Identifiers and Pagination:
Year: 2021
Issue: Suppl-1, M3
First Page: 71
Last Page: 77
Publisher ID: TOBEJ-15-71
Article History:
Received Date: 13/9/2020
Revision Received Date: 01/2/2021
Acceptance Date: 7/2/2021
Electronic publication date: 18/10/2021
Collection year: 2021
open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Recent research on Generative Adversarial Networks (GANs) in the biomedical field has proven their effectiveness in generating synthetic images across different modalities. Ultrasound imaging is one of the primary imaging modalities for diagnosis in the medical domain. In this paper, we present an empirical analysis of the state-of-the-art Deep Convolutional Generative Adversarial Network (DCGAN) for generating synthetic ultrasound images.
This work aims to explore the use of deep convolutional generative adversarial networks for the synthesis of ultrasound images and to leverage their capabilities.
Ultrasound imaging plays a vital role in healthcare for timely diagnosis and treatment. Increasing interest in automated medical image analysis for precise diagnosis has expanded the demand for a large number of ultrasound images. Generative adversarial networks have been proven beneficial for increasing the size of data by generating synthetic images.
Our main purpose in generating synthetic ultrasound images is to produce a sufficient amount of ultrasound images with varying representations of a disease.
DCGAN has been used to generate synthetic ultrasound images. It is trained on two publicly available ultrasound image datasets, namely, the common carotid artery dataset from the Signal Processing Lab and the nerve dataset from Kaggle.
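A minimal sketch of the DCGAN generator used in this kind of setup is shown below in PyTorch. The layer widths, 100-dimensional latent vector, and 64x64 single-channel (grayscale ultrasound) output are illustrative assumptions following the standard DCGAN recipe, not the exact configuration reported in the paper.

```python
# Minimal DCGAN generator sketch (PyTorch). Layer widths and the
# 64x64 single-channel output are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, base=64, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            # z: (N, z_dim, 1, 1) -> (N, base*8, 4, 4)
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8),
            nn.ReLU(True),
            # -> (N, base*4, 8, 8)
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4),
            nn.ReLU(True),
            # -> (N, base*2, 16, 16)
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2),
            nn.ReLU(True),
            # -> (N, base, 32, 32)
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base),
            nn.ReLU(True),
            # -> (N, channels, 64, 64), pixel values in [-1, 1]
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

G = Generator()
fake = G(torch.randn(2, 100, 1, 1))  # two synthetic 64x64 images
```

During training, this generator is pitted against a mirror-image convolutional discriminator; after convergence, sampling fresh latent vectors yields new synthetic ultrasound images.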
Results show that good quality synthetic ultrasound images are generated within 100 epochs of training of DCGAN. The quality of synthetic ultrasound images is evaluated using Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM). We have also presented some visual representations of the slices of generated images for qualitative comparison.
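The three metrics above can be computed directly from pixel arrays. A minimal NumPy sketch follows; note it uses the simplified global SSIM formula rather than the windowed variant typically reported by image libraries.

```python
# Image-quality metrics sketch: MSE, PSNR, and a simplified global SSIM
# (production evaluations usually use a windowed SSIM; this is the global form).
import numpy as np

def mse(x, y):
    return float(np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2))

def psnr(x, y, peak=255.0):
    e = mse(x, y)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def ssim_global(x, y, peak=255.0):
    x = x.astype(np.float64); y = y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Toy comparison: a reference image vs. a noisy copy standing in for a
# generated image (illustrative data, not from the paper's experiments).
rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
fake = np.clip(real + rng.normal(0, 5, size=(64, 64)), 0, 255)
```

Lower MSE, higher PSNR, and SSIM closer to 1 all indicate a synthetic image closer to the reference.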
Our empirical analysis reveals that synthetic ultrasound image generation using DCGAN is an efficient approach.
In future work, we plan to compare the quality of images generated by other adversarial methods, such as conditional GAN and progressive GAN.