REVIEW ARTICLE


https://doi.org/10.5005/jp-journals-10009-1702
Donald School Journal of Ultrasound in Obstetrics and Gynecology
Volume 15 | Issue 3 | Year 2021

Artificial Intelligence and Obstetric Ultrasound


Ryu Matsuoka1

Department of Obstetrics and Gynecology, Showa University School of Medicine, Hatanodai, Shinagawa, Tokyo, Japan

Corresponding Author: Ryu Matsuoka, Department of Obstetrics and Gynecology, Showa University School of Medicine, Hatanodai, Shinagawa, Tokyo, Japan, Phone: +81-(0)3-3784-8551, e-mail: ryu@med.showa-u.ac.jp

How to cite this article: Matsuoka R. Artificial Intelligence and Obstetric Ultrasound. Donald School J Ultrasound Obstet Gynecol 2021;15(3):218–222.

Source of support: Nil

Conflict of interest: None

ABSTRACT

Artificial intelligence (AI) technology is currently in its third era. Current AI technology is driven by machine learning (ML), particularly deep learning (DL). Deep learning is a computer technology that allows a computational model with multiple processing layers to learn the features of data. Convolutional neural networks have led to breakthroughs in the processing of images, video, and audio. In medical imaging, computer-aided diagnosis algorithms for diabetic retinopathy, diabetic macular edema, tuberculosis, skin lesions, and colonoscopy are highly accurate, with performance comparable to that of clinicians. Although the application of AI technology in the field of ultrasound (US) has lagged behind other modalities such as radiography, computed tomography (CT), and magnetic resonance imaging (MRI), it has been applied rapidly in obstetrics and gynecology in recent years. AI analysis of US images to determine the malignancy of ovarian tumors performs comparably to the International Ovarian Tumor Analysis models, and it is now possible to identify each part of the fetal body and to calculate the estimated fetal weight from US video. However, the application of AI to the fetal central nervous system and, in particular, to the fetal heart, the core of the fetal morphological US examination, is only beginning to progress.

Keywords: Artificial intelligence, Computer-aided diagnosis, Convolutional neural network, Deep learning, Fetal ultrasound, Ovarian tumor.

INTRODUCTION

Artificial intelligence (AI) technology is currently in its third era: the first era of inference and search in the 1960s, the second era of knowledge-based systems in the late 1980s, and the current third boom. Current AI technology is driven by machine learning (ML), particularly deep learning (DL). Deep learning is a computer technology that allows a computational model with multiple processing layers to learn the features of data.1 Convolutional neural networks (CNNs) are designed to automatically extract image features2 and have led to breakthroughs in the processing of images, video, and audio.1 Convolutional neural networks are the algorithms most commonly applied to images. Since their first introduction in 1989,3 CNNs have been widely applied to the classification and segmentation of photographic images,4,5 and more than 80% of the research in medical image analysis uses CNN approaches.6 Research applying AI technology to ultrasound (US) images has been increasing rapidly, as seen from the number of peer-reviewed publications found in a literature search on PubMed (Fig. 1).
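The query used for Figure 1, [(deep learning) AND (ultrasound)], can be run programmatically. The sketch below is one possible way to reproduce the yearly counts using Biopython's Entrez interface; it assumes the Biopython package, network access, and a contact e-mail address (the address shown is a placeholder).

```python
# Count PubMed records per year for the query used in Fig. 1:
# "(deep learning) AND (ultrasound)", restricted by publication date.
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

def yearly_count(year: int) -> int:
    handle = Entrez.esearch(
        db="pubmed",
        term="(deep learning) AND (ultrasound)",
        mindate=str(year), maxdate=str(year), datetype="pdat",
        retmax=0,                      # only the total count is needed
    )
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for y in range(2000, 2021):
    print(y, yearly_count(y))
```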

Trials of using computers to automatically analyze medical images began in the 1960s.7–9 Doi started the systematic development of ML and image analysis techniques for medical images in the 1980s.10 The first commercial computer-aided diagnosis (CAD) system was approved by the United States Food and Drug Administration in 1998 for use as a diagnostic aid in screening mammography. Research on CAD has progressed, but its application in clinical practice has not progressed as expected. The main reason is that CAD tools developed with conventional ML methods did not reach a level of performance sufficient to meet physicians’ needs for diagnostic accuracy and work efficiency. The major difference between ML and DL is that, whereas conventional ML relies on image features dictated by humans, DL learns the features and makes image-based decisions on its own. This has opened up the possibility of discovering new findings that cannot be detected by humans and would otherwise be missed. The drawback, however, is that learning requires a large amount of training data. In medical imaging, CAD classifiers for diabetic retinopathy,11 diabetic macular edema,11 tuberculosis,12 skin lesions,13 and colonoscopy14 are highly accurate, with performance similar to that of clinicians.
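A minimal, hypothetical illustration of this difference is sketched below (random arrays stand in for US image patches; the feature function and tiny network are illustrative and are not taken from any of the cited studies). The conventional ML pipeline computes human-designed descriptors before classification, whereas the CNN consumes raw pixels and learns its own features end to end.

```python
# Conventional ML: features handcrafted by humans, then a classifier.
# Deep learning: a CNN learns the features directly from the pixels.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

images = np.random.rand(100, 1, 64, 64).astype("float32")   # stand-in US patches
labels = np.random.randint(0, 2, size=100)                   # stand-in labels

# --- Conventional ML: handcrafted descriptors (simple intensity statistics) ---
def handcrafted_features(img: np.ndarray) -> np.ndarray:
    return np.array([img.mean(), img.std(),
                     np.abs(np.diff(img, axis=-1)).mean()])   # crude edge measure

X = np.stack([handcrafted_features(im[0]) for im in images])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# --- Deep learning: the network learns its own features end to end ---
cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
logits = cnn(torch.from_numpy(images))   # features are never specified by hand
```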

Fig. 1: Literature search on PubMed for publications in peer-reviewed journals published from 2000 to 2020 using the following keywords: [(deep learning) AND (ultrasound)]. The number of reports published since 2016 has risen exponentially

Ultrasound imaging is the first-choice diagnostic imaging modality because it is non-invasive, convenient, and cost-effective compared with other medical imaging modalities such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), and it is widespread in most medical fields. On the other hand, US images are prone to artifacts that result in noisy images, and small findings and structures can be difficult to see. Thus, the application of CAD to US images lags behind that of CT and MRI.

In this review article, we present the current status of the use of AI technology in obstetric and gynecological ultrasonography.

OVARIAN TUMOR

Only about 10% of ovarian tumors cause clinical symptoms in postmenopausal women; they are often discovered incidentally during physical examination, and only 1% are malignant.15 Over 50% of ovarian tumors occur in women of childbearing potential, in whom unnecessary or overly extensive surgery may result in loss of fertility.16 The average 5-year survival rate for ovarian cancer is 45%, making it the gynecological malignancy with the worst prognosis.17 Its imaging findings are diverse, and malignancy is difficult to estimate. Thus, if an ovarian tumor is detected, its malignant status needs to be determined accurately. With individualized evaluation, benign masses can be managed conservatively with US follow-up or treated with minimally invasive laparoscopy to preserve fertility.17 Evaluation of the malignancy of ovarian tumors using US images has been performed for many years. Unfortunately, the rate of correct diagnosis varies between experienced and inexperienced examiners and is often based on subjective judgment. In addition, earlier multimodal scoring systems, such as the risk of malignancy index (RMI), morphology scores, and models based on logistic regression analysis or ML, were developed in small, single-institution populations, and the heterogeneity of the tumor populations and variability in the definition of the US terms used contributed to their limited efficacy. The International Ovarian Tumor Analysis (IOTA) Collaborative Group was established in 1999 to investigate a large number of ovarian tumors recruited at different centers using a clearly defined and standardized US protocol, with the aim of creating a predictive model that performs as well as assessment by experienced experts.18 Currently, the IOTA models are used in 47 centers in 17 countries, primarily in Europe, China, and Canada. The IOTA’s latest model is the Simple Rules (SR), a set of rules based on five US features suggestive of a benign lesion (B-features) and five features suggestive of a malignant lesion (M-features).19 The sensitivity, specificity, and accuracy for the detection of malignancy were 91.66, 84.84, and 86.66%, respectively.20
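The SR decision logic itself is simple enough to express directly. The sketch below follows the rule structure described by the IOTA group (a mass is called benign when only B-features are present, malignant when only M-features are present, and inconclusive when both or neither apply); the feature labels are abbreviated and the example inputs are hypothetical.

```python
# Sketch of the IOTA Simple Rules decision logic.
from typing import Set

B_FEATURES = {"B1_unilocular", "B2_solid_lt_7mm", "B3_acoustic_shadows",
              "B4_smooth_multilocular_lt_100mm", "B5_no_blood_flow"}
M_FEATURES = {"M1_irregular_solid", "M2_ascites", "M3_ge_4_papillations",
              "M4_irregular_multilocular_solid_ge_100mm", "M5_very_strong_flow"}

def simple_rules(features: Set[str]) -> str:
    has_b = bool(features & B_FEATURES)
    has_m = bool(features & M_FEATURES)
    if has_b and not has_m:
        return "benign"
    if has_m and not has_b:
        return "malignant"
    return "inconclusive"   # both or neither: refer for expert assessment

print(simple_rules({"B1_unilocular", "B5_no_blood_flow"}))   # -> benign
print(simple_rules({"M2_ascites", "B3_acoustic_shadows"}))   # -> inconclusive
```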

Some studies have used ML-based CAD to assess the malignant status of ovarian tumors.21,22 However, these had limitations, such as the small number of cases and the use of handcrafted image descriptors. To develop CAD using DL to discriminate between benign and malignant ovarian tumors, Christiansen et al. applied transfer learning to three pre-trained DL algorithms using 3,077 US images of ovarian tumors from 758 patients and compared the diagnostic accuracy with that of subjective assessment by US experts, showing that identification of malignant and borderline malignant tumors by the DL models was comparable to expert assessment.23 The sensitivity, specificity, and accuracy for discriminating between benign and malignant tumors were 96.0, 86.7, and 91.3%, respectively, for the DL model; 96.0, 88.0, and 92.0%, respectively, for the experts; and 96.0, 66.7, and 81.3%, respectively, for the IOTA SR; there were no significant differences among the methods.23
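Transfer learning of this kind typically reuses a network pre-trained on natural images and re-trains only the task-specific layers. The sketch below uses a torchvision ResNet-18 purely as an illustration (it is not one of the architectures used by Christiansen et al.) and replaces the final layer with a two-class benign/malignant head; the training batch is a dummy stand-in for real US images.

```python
# Minimal transfer-learning sketch: start from an ImageNet-pretrained backbone,
# replace the classification head, and fine-tune on (hypothetical) US images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # benign vs malignant head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (8 RGB images, 224 x 224).
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```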

FETAL ULTRASOUND

In recent years, several methods have been proposed to detect fetal standard planes in US images.24–26 Wu et al. proposed a DL model to assess the image quality of fetal abdominal images.27 Chen et al. devised a multi-task learning framework based on a CNN to automatically identify different standard planes from US videos.28 Baumgartner et al. used a single CNN to detect standard planes from US sweeps and used saliency maps to localize fetal structures in real time.29 Ryou et al. proposed an automated system to localize and extract fetal biometric planes from 3D US volumes; they localized fetal structures using a classical random forest and classified them using a transfer-trained CNN.30 One of the goals of CNN-based CAD of fetal US images is to automatically display fetal structures to assist clinicians in their evaluation. Sridar et al. proposed a decision fusion classification model using CNNs to classify 2D fetal US planes; 14 different fetal structures (abdomen, arm, blood vessels, cord insertion, face, femur, foot, genitals, head, heart, kidney, leg, spine, and hand) were classified with an accuracy of 97.1%.31
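Decision fusion in this context usually means combining the class probabilities of several independently trained classifiers. The sketch below shows one common form, softmax averaging of two hypothetical CNNs, and is not the specific fusion scheme of Sridar et al.

```python
# Decision-level fusion sketch: average the softmax outputs of two classifiers
# and take the argmax over the fused probabilities.
import torch
import torch.nn.functional as F

def tiny_cnn(num_classes: int = 14) -> torch.nn.Module:
    return torch.nn.Sequential(
        torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(8, num_classes),
    )

model_a, model_b = tiny_cnn(), tiny_cnn()   # stand-ins for two trained networks

batch = torch.rand(4, 1, 128, 128)          # hypothetical fetal US planes
with torch.no_grad():
    p_a = F.softmax(model_a(batch), dim=1)
    p_b = F.softmax(model_b(batch), dim=1)
    fused = (p_a + p_b) / 2                 # decision-level (late) fusion
    predicted_class = fused.argmax(dim=1)
print(predicted_class)
```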

BODY WEIGHT ESTIMATION

Wu et al. compared their CNN model with assessments by three doctors and concluded that the performance of the CNN model was comparable to that of the doctors’ evaluations.27 Chen et al. achieved an area under the curve (AUC) of 0.95 for detecting the cross-section used for estimated fetal weight measurement in US videos with their DL model.28 Yu et al. proposed a method for identifying the cross-sections used for US-based estimation of fetal weight and obtained an accuracy of 93.03%, which was better than that of conventional ML methods.27
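As a reminder of what such an AUC summarizes, the receiver operating characteristic curve is built from the classifier’s scores and the ground-truth labels. The short sketch below computes it with scikit-learn on hypothetical labels and scores.

```python
# Compute ROC AUC for a (hypothetical) standard-plane detector whose output is
# a per-image probability that the image is the correct measurement plane.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]            # 1 = correct plane, 0 = other view
y_score = [0.92, 0.30, 0.81, 0.65, 0.12, 0.45, 0.77, 0.52]   # model probabilities

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```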

FETAL CENTRAL NERVOUS SYSTEM

The number of reports on the automatic recognition of the fetal head on US images using DL has increased in the past few years, and accuracy has also improved.32–34 The fetal central nervous system is a complex three-dimensional organ that continues to change throughout the fetal period. The International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) guidelines provide reference cross-sections for ultrasonographic screening.35 Standard axial US planes of the fetal brain are key to classifying potential abnormalities. Xie et al. collected 15,372 normal and 14,047 abnormal fetal brain images in standard axial planes; classification performance was assessed with sensitivity and specificity for abnormal images, and lesion localization with precision, recall, and the Dice coefficient (DICE).36 They reported that the sensitivity and specificity for the identification of abnormal images were 96.9 and 95.9%, respectively.36 This was the first study in which DL was applied to detect fetal central nervous system abnormalities on US imaging.
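These metrics are all simple ratios of confusion-matrix counts and, for DICE, of overlapping pixels. The sketch below computes them on made-up predictions and masks, purely to fix the definitions.

```python
# Sensitivity, specificity, and precision from classification counts, plus the
# Dice coefficient for a pair of (hypothetical) segmentation masks.
import numpy as np

# Image-level classification: 1 = abnormal, 0 = normal (made-up labels).
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0])

tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
fn = int(np.sum((y_pred == 0) & (y_true == 1)))

sensitivity = tp / (tp + fn)      # recall for the abnormal class
specificity = tn / (tn + fp)
precision = tp / (tp + fp)

# Pixel-level overlap between a predicted and a reference lesion mask.
def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred_mask = np.zeros((64, 64), bool); pred_mask[20:40, 20:40] = True
true_mask = np.zeros((64, 64), bool); true_mask[25:45, 22:42] = True
print(sensitivity, specificity, precision, dice(pred_mask, true_mask))
```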

FETAL ECHOCARDIOGRAPHY

Fetal cardiac screening is a mainstay of fetal morphology examination for the following reasons: (1) congenital heart disease is common, (2) it shows a wide variety of presentations, and (3) the heart is a vital organ. The incidence of congenital heart disease (CHD) is estimated to be 4 to 13 per 1,000 live births, meaning that approximately 1 in 100 infants is born with CHD.37–39 In addition, about one-third of all cases are severe CHD, which is a major cause of infant mortality. For these reasons, it has been thought that observation of the fetal heart during pregnancy, prenatal diagnosis of CHD, and planned neonatal treatment will contribute to an improved neonatal prognosis.40 In a meta-analysis of prenatal diagnosis of severe CHD, 1,373 patients with CHD (hypoplastic left heart, double-outlet right ventricle, aortic stenosis, severe aortic stenosis, pulmonary atresia, and common arterial trunk) were studied. Neonatal mortality occurred in 2 (0.7%) of 297 cases with a prenatal diagnosis versus 31 (3.0%) of 1,047 cases diagnosed postnatally, giving an odds ratio of 0.25 [95% confidence interval (CI), 0.08–0.84] and indicating that treatment planning based on prenatal diagnosis contributes to neonatal prognosis. Thus, the usefulness and significance of fetal echocardiography have been demonstrated.13 The heart consists of four chambers (right atrium, right ventricle, left atrium, and left ventricle), vessels flowing into the heart (pulmonary veins, superior and inferior venae cavae, and ductus venosus), and vessels flowing out of the heart (aorta, pulmonary artery, and ductus arteriosus). As methods of observing this complex three-dimensional structure with US B-mode cross-sectional images, Yoo et al. proposed the three-vessel view in 1997,16,17 and Yagel et al. proposed the three-vessel and trachea view in 2002.18 These views were groundbreaking US sections because the positional relationship of the pulmonary artery and aorta, which cross as they leave the heart, can be confirmed in a single cross-section. The detection rate of CHD improved when not only the four-chamber view but also the great vessels above and below it were observed, scanning from the stomach to the upper mediastinum.41–43 To date, guidelines for fetal cardiac US examinations have been published by various academic societies. In 2004, the American Society of Echocardiography published guidelines.44 The ISUOG published guidelines for cardiac screening examination of the fetus in 2006, which were updated in 2013. The American Institute of Ultrasound in Medicine published practice guidelines for the performance of fetal cardiac US examinations in 2010 and 2013. The American Heart Association (AHA) published a scientific statement on the diagnosis and treatment of fetal cardiac disease in 2014.

Despite the establishment of US screening methods for fetal CHD as described above, the prenatal detection rate remains low, at 30–60%.45 van Nisselrooij et al. investigated the reasons for this by scoring the quality of the views obtained at mid-trimester ultrasonography in 114 cases of isolated severe CHD. They identified the following four causes: (1) insufficient technique (images with artifacts from which no judgment can be made), (2) insufficient knowledge of CHD (failure to recognize an abnormality, i.e., a miss), (3) correct cross-sectional images in which the abnormality is nevertheless not displayed (abnormalities outside the specified cross-sections), and (4) the examiner’s screening frequency, particularly when examinations are concentrated into a limited period rather than performed regularly throughout the examiner’s period of engagement.46 These are human factors that AI could help to overcome. Yeo and Romero47 used an intelligent navigation technique called FINE to automatically acquire screening US images of the fetal heart and identify cardiac anatomical abnormalities; this tool was able to show findings of fetal cardiac anatomical abnormalities in four CHD cases.47 However, identification of the cross-sections requires manual positioning of anatomical landmarks, so the method remains semi-automatic. More recently, Arnaout et al.48 proposed a supervised CNN approach, using 685 echocardiograms of fetuses between 18 and 24 weeks of gestational age, to (1) identify the five most important views of the fetal heart, (2) segment and measure cardiac structures, and (3) distinguish the healthy heart from tetralogy of Fallot and hypoplastic left heart syndrome.48 The sensitivity and specificity were 100% and 90%, respectively, for the diagnosis of hypoplastic left heart syndrome. Although these results are promising, the main limitations of this study are that only two CHDs were evaluated and that the DL system was trained on images from a single US device, without accounting for echocardiographic variability. Therefore, further studies using larger datasets from different US devices are needed.

FUTURE USE OF AI IN OBSTETRIC ULTRASOUND

In the future, DL methods for US will yield promising results. However, intra- and inter-reader variability in the acquisition and interpretation of US images, as well as degradation of image quality due to artifacts, are important issues that must be resolved and are major reasons why AI for US still lags far behind AI for CT and MRI. Fetal ultrasonography is performed in real time; i.e., the examination is observed as moving images rather than still images. In particular, fetal echocardiography shows a variety of images along the time axis because of the beating of the heart. Therefore, the characteristics of video must be addressed. Bridge et al. extracted key information from 2D high-frame-rate US videos of the fetal heart and proposed a model for automating the interpretation of fetal heart US videos.49 Acoustic shadowing is an artifact of US imaging that significantly reduces diagnostic efficiency. The same problem exists in image processing, and it is an unavoidable problem in the AI processing of US images: such shadows reduce the performance of image recognition methods for US images.50–52 Yasutomi et al. proposed a method to estimate not only the location of acoustic shadows in US images but also their intensity by using a CNN with an auto-encoding structure.53 Such quality assessment is useful for pre-processing in US CAD based on DL. This kind of image analysis technology is expected to contribute significantly to improving the detection rate of CHD in the future.
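An auto-encoding structure of this kind maps an input image through an encoder-decoder back to a map of the same size. The sketch below is a generic convolutional encoder-decoder producing a per-pixel shadow-intensity map on a dummy frame; it illustrates the general idea only and is not the specific architecture of Yasutomi et al.

```python
# Generic encoder-decoder (autoencoder-style) CNN that maps a single-channel US
# image to a per-pixel map in [0, 1], interpreted here as shadow intensity.
import torch
import torch.nn as nn

class ShadowEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 1/2 resolution
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ShadowEstimator()
dummy_us_image = torch.rand(1, 1, 256, 256)     # hypothetical B-mode frame
shadow_map = model(dummy_us_image)               # same spatial size, values in [0, 1]
print(shadow_map.shape)                          # torch.Size([1, 1, 256, 256])
```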

References

1. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521 (7553):436–444. DOI: 10.1038/nature14539.

2. Akkus Z, Galimzianova A, Hoogi A, et al. Deep learning for brain MRI segmentation: state of the art and future directions. J Digit Imaging 2017;30(4):449–459. DOI: 10.1007/s10278-017-9983-4.

3. LeCun Y, Boser B, Denker JS, et al. Backpropagation applied to handwritten ZIP code recognition. Neural Comput 1989;1(4):541–551. DOI: 10.1162/neco.1989.1.4.541.

4. Deng J, Dong W, Socher R, et al. ImageNet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition. Available at: https://ieeexplore.ieee.org/document/5206848. Accessed June 18, 2019.

5. Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis 2015;115(3):211–252. DOI: 10.1007/s11263-015-0816-y.

6. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60–88. DOI: 10.1016/j.media.2017.07.005.

7. Winsberg F, Elkin M, Macy J, et al. Detection of radiographic abnormalities in mammograms by means of optical scanning and computer analysis. Radiology 1967;89(2):211–215. DOI: 10.1148/89.2.211.

8. Spiesberger W. Mammogram inspection by computer. IEEE Trans Biomed Eng 1979;26(4):213–219. DOI: 10.1109/tbme.1979.326560.

9. Semmlow JL, Shadagopappan A, Ackerman LV, et al. A fully automated system for screening mammograms. Comp Biomed Res 1980;13(4):350–362. DOI: 10.1016/0010-4809(80)90027-0.

10. Doi K. Chapter 1. Historical overview. In: Li Q, Nishikawa RM, eds. Computer-Aided Detection and Diagnosis in Medical Imaging. Boca Raton, FL: Taylor and Francis Group, LLC, CRC Press; 2015. pp. 1–17.

11. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016;316(22):2402–2410. DOI: 10.1001/jama.2016.17216.

12. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 2017;284(2):574–582. DOI: 10.1148/radiol.2017162326.

13. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542(7639):115–118. DOI: 10.1038/nature21056.

14. Mori Y, Kudo SE, Wakamura K, et al. Novel computer-aided diagnostic system for colorectal lesions by using endocytoscopy (with videos). Gastrointest Endosc 2015;81(3):621–629. DOI: 10.1016/j.gie.2014.09.008.

15. Sharma A, Apostolidou S, Burnell M, et al. Risk of epithelial ovarian cancer in asymptomatic women with ultrasound-detected ovarian masses: a prospective cohort study within the UK collaborative trial of ovarian cancer screening (UKCTOCS). Ultrasound Obstet Gynecol 2012;40(3):338–344. DOI: 10.1002/uog.12270.

16. Froyman W, Landolfo C, De Cock B, et al. Risk of complications in patients with conservatively managed ovarian tumours (IOTA5): a 2-year interim analysis of a multicentre, prospective, cohort study. Lancet Oncol 2019;20(3):448–458. DOI: 10.1016/S1470-2045(18)30837-4.

17. Webb PM, Jordan SJ. Epidemiology of epithelial ovarian cancer. Best Pract Res Clin Obstet Gynaecol 2017;41:3–14. DOI: 10.1016/j.bpobgyn.2016.08.006.

18. Timmerman D, Valentin L, Bourne TH, et al. Terms, definitions and measurements to describe the sonographic features of adnexal tumors: a consensus opinion from the international ovarian tumor analysis (IOTA) group. Ultrasound Obstet Gynecol 2000;16(5):500–505. DOI: 10.1046/j.1469-0705.2000.00287.x.

19. Froyman W, Timmerman D. Methods of assessing ovarian masses: international ovarian tumor analysis approach. Obstet Gynecol Clin North Am 2019;46(4):625–641. DOI: 10.1016/j.ogc.2019.07.003.

20. Garg S, Kaur A, Mohi JK, et al. Evaluation of IOTA simple ultrasound rules to distinguish benign and malignant ovarian tumours. J Clin Diagn Res 2017;11(8):TC06–TC09. DOI: 10.7860/JCDR/2017/26790.10353.

21. Khazendar S, Al-Assam H, Du H, et al. Automated classification of static ultrasound images of ovarian tumours based on decision level fusion. 6th Computer Science and Electronic Engineering Conference (CEEC), Colchester, UK, 2014; 148–153.

22. Khazendar S, Sayasneh A, Al-Assam H, et al. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator. Facts Views Vis Obgyn 2015;7(1):7–15.

23. Christiansen F, Epstein EL, Smedberg E, et al. Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors: comparison with expert subjective assessment. Ultrasound Obstet Gynecol 2021;57(1):155–163. DOI: 10.1002/uog.23530.

24. Chen H, Ni D, Qin J, et al. Standard plane localization in fetal ultrasound via domain transferred deep neural networks. IEEE J Biomed Health Inform 2015;19(5):1627–1636. DOI: 10.1109/JBHI.2015.2425041.

25. Kwitt R, Vasconcelos N, Razzaque S, et al. Localizing target structures in ultrasound video-a phantom study. Med Image Anal 2013;17(7):712–722. DOI: 10.1016/j.media.2013.05.003.

26. Lei B, Zhuo L, Chen S, et al. Automatic recognition of fetal standard plane in ultrasound image. 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI). Piscataway, NJ: IEEE; 2014. pp. 85–88.

27. Wu L, Cheng JZ, Li S, et al. FUIQA: fetal ultrasound image quality assessment with deep convolutional networks. IEEE Trans Cybern 2017;47(5):1336–1349. DOI: 10.1109/TCYB.2017.2671898.

28. Chen H, Wu L, Dou Q, et al. Ultrasound standard plane detection using a composite neural network framework. IEEE Trans Cybern 2017;47(6):1576–1586. DOI: 10.1109/TCYB.2017.2685080.

29. Baumgartner CF, Kamnitsas K, Matthew J, et al. SonoNet: real-time detection and localisation of fetal standard scan planes in freehand ultrasound. IEEE Trans Med Imaging 2017;36(11):2204–2215. DOI: 10.1109/TMI.2017.2712367.

30. Ryou H, Yaqub M, Cavallaro A, et al. Automated 3D ultrasound image analysis for first trimester assessment of fetal health. Phys Med Biol 2019;64(18): 185010. DOI: 10.1088/1361-6560/ab3ad1.

31. Sridar P, Kumar A, Quinton A, et al. Decision fusion-based fetal ultrasound image plane classification using convolutional neural networks. Ultrasound Med Biol 2019;45(5):1259–1273. DOI: 10.1016/j.ultrasmedbio.2018.11.016.

32. Yu Z, Tan EL, Ni D, et al. A deep convolutional neural network-based framework for automatic fetal facial standard plane recognition. IEEE J Biomed Health Inform 2018;22(3):874–885. DOI: 10.1109/JBHI.2017.2705031.

33. Van den Heuvel TLA, Petros H, Santini S, et al. Ultrasound Med Biol 2019;45(3):773–785. DOI: 10.1016/j.ultrasmedbio.2018.09.015.

34. Kim HP, Lee SM, Kwon JY, et al. Automatic evaluation of fetal head biometry from ultrasound images using machine learning. Physiol Meas 2019;40(6): 065009. DOI: 10.1088/1361-6579/ab21ac.

35. Malinger G, Paladini D, Haratz KK, et al. ISUOG practice guidelines (updated): sonographic examination of the fetal central nervous system. Part 1: performance of screening examination and indications for targeted neurosonography. Ultrasound Obstet Gynecol 2020;56(3):476–484. DOI: 10.1002/uog.22145.

36. Xie HN, Wang N, He M, et al. Using deep-learning algorithms to classify fetal brain ultrasound images as normal or abnormal. Ultrasound Obstet Gynecol 2020;56(4):579–587. DOI: 10.1002/uog.21967.

37. Petrini JR, Broussard CS, Gilboa SM, et al. Racial differences by gestational age in neonatal deaths attributable to congenital heart defects—United States, 2003–2006. MMWR Morb Mortal Wkly Rep 2010;59:1208–1211.

38. Wren C, Richmond S, Donaldson L. Temporal variability in birth prevalence of cardiovascular malformations. Heart 2000;83(4):414–419. DOI: 10.1136/heart.83.4.414.

39. Meberg A, Otterstad JE, Froland G, et al. Outcome of congenital heart defects—a population-based study. Acta Paediatr 2000;89(11):1344–1351. DOI: 10.1080/080352500300002552.

40. Holland BJ, Myers JA, Woods CR. Prenatal diagnosis of critical congenital heart disease reduces risk of death from cardiovascular compromise prior to planned neonatal cardiac surgery: a meta-analysis. Ultrasound Obstet Gynecol 2015;45(6):631–638. DOI: 10.1002/uog.14882.

41. Kirk JS, Riggs TW, Comstock CH, et al. Prenatal screening for cardiac anomalies: the value of routine addition of the aortic root to the four-chamber view. Obstet Gynecol 1994;84(3):427–431.

42. DeVore GR. The aortic and pulmonary outflow tract screening examination in the human fetus. J Ultrasound Med 1992;11(7):345–348. DOI: 10.7863/jum.1992.11.7.345.

43. Zhang YF, Zeng XL, Zhao EF, et al. Diagnostic value of fetal echocardiography for congenital heart disease: a systematic review and meta-analysis. Medicine (Baltimore) 2015;94(42):e1759. DOI: 10.1097/MD.0000000000001759.

44. Rychik J, Ayres N, Cuneo B, et al. American society of echocardiography guidelines and standards for performance of the fetal echocardiogram. J Am Soc Echocardiogr 2004;17(7):803–810. DOI: 10.1016/j.echo.2004.04.011.

45. van Velzen CL, Ket JCF, van de Ven PM, et al. Systematic review and meta-analysis of the performance of second-trimester screening for prenatal detection of congenital heart defects. Int J Gynaecol Obstet 2018;140(2):137–145. DOI: 10.1002/ijgo.12373.

46. van Nisselrooij AEL, Teunissen AKK, Clur SA, et al. Why are congenital heart defects being missed? Ultrasound Obstet Gynecol 2020;55(6):747–757. DOI: 10.1002/uog.20358.

47. Yeo L, Romero R. Fetal intelligent navigation echocardiography (FINE): a novel method for rapid, simple, and automatic examination of the fetal heart. Ultrasound Obstet Gynecol 2013;42(3):268–284. DOI: 10.1002/uog.12563.

48. Arnaout R, Curran L, Zhao Y, et al. Expert-level prenatal detection of complex congenital heart disease from screening ultrasound using deep learning. medRxiv 2020.

49. Bridge CP, Ioannou C, Noble JA. Automated annotation and quantitative description of ultrasound videos of the fetal heart. Med Image Anal 2017;36:147–161. DOI: 10.1016/j.media.2016.11.006.

50. Noble JA, Boukerroui D. Ultrasound image segmentation: a survey. IEEE Trans Med Imaging 2006;25(8):987–1010. DOI: 10.1109/tmi.2006.877092.

51. Brattain LJ, Telfer BA, Dhyani M, et al. Machine learning for medical ultrasound: status, methods, and future opportunities. Abdom Radiol 2018;43(4):786–799. DOI: 10.1007/s00261-018-1517-0.

52. Liu S, Wang Y, Yang X, et al. Deep learning in medical ultrasound analysis: a review. Engineering 2019;5(2):261–275. DOI: 10.1016/j.eng.2018.11.020.

53. Yasutomi S, Arakaki T, Matsuoka R, et al. Shadow estimation for ultrasound images using auto-encoding structures and synthetic shadows. Appl Sci 2021;11(3):1127. DOI: 10.3390/app11031127.

________________________
© The Author(s). 2021 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and non-commercial reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.