Radiology Thesis – More than 400 Research Topics (2022)!

Introduction

A thesis or dissertation, as some people would like to call it, is an integral part of the Radiology curriculum, be it MD, DNB, or DMRD. We have tried to aggregate radiology thesis topics from various sources for reference.

Not everyone is interested in research, and writing a Radiology thesis can be daunting. But there is no escape from preparing one, so it is better to accept this bitter truth and start working on it instead of cribbing about it (like other things in life. #PhilosophyGyan!)

Start working on your thesis as early as possible and finish your thesis well before your exams, so you do not have that stress at the back of your mind. Also, your thesis may need multiple revisions, so be prepared and allocate time accordingly.

Tips for Choosing Radiology Thesis and Research Topics

Keep it simple, silly (KISS).

Retrospective > Prospective

Retrospective studies are better than prospective ones for a thesis, as you already have the data you need when you choose a retrospective study. Prospective studies are of better quality, but as a resident, you may not have the time (or energy and enthusiasm) to complete them.

Choose a simple topic that answers a single/few questions

Original research is challenging, especially if you do not have prior experience. I would suggest you choose a topic that answers a single or few questions. Most topics that I have listed are along those lines. Alternatively, you can choose a broad topic such as “Role of MRI in evaluation of perianal fistulas.”

You can choose a novel topic if you are genuinely interested in research AND have a good mentor who will guide you. Once you have done that, make sure that you publish your study once you are done with it.

Get it done ASAP.

In most cases, it makes sense to stick to a thesis topic that will not take much time. That does not mean you should ignore your thesis and ‘Ctrl C + Ctrl V’ one from a friend at another university. Thesis writing is your first step toward research methodology, so do it as sincerely as possible. Do not procrastinate in preparing the thesis. As soon as you have been allotted a guide, start researching topics and writing a review of the literature.

At the same time, do not invest excessive time in writing and collecting data for your thesis. You should not still be busy finishing your thesis a few months before the exam. Some people could not appear for the exam because they could not submit their thesis in time. So DO NOT take your thesis lightly.

Do NOT Copy-Paste

Reiterating once again, do not simply choose someone else’s thesis topic. Find out what kind of cases your hospital caters to. It is better to do a good thesis on a common topic than a crappy one on a rare one.

Books to help you write a Radiology Thesis

Every country/university has a different format for the thesis; hence these book recommendations may not work for everyone.

How to Write the Thesis and Thesis Protocol: A Primer for Medical, Dental, and Nursing Courses

  • Amazon Kindle Edition
  • Gupta, Piyush (Author)
  • English (Publication Language)
  • 206 Pages - 10/12/2020 (Publication Date) - Jaypee Brothers Medical Publishers (P) Ltd. (Publisher)


List of Radiology Research / Thesis / Dissertation Topics

  • State of the art of MRI in the diagnosis of hepatic focal lesions
  • Multimodality imaging evaluation of sacroiliitis in newly diagnosed patients of spondyloarthropathy
  • Multidetector computed tomography in oesophageal varices
  • Role of positron emission tomography with computed tomography in the diagnosis of thyroid cancer
  • Evaluation of focal breast lesions using ultrasound elastography
  • Role of MRI diffusion tensor imaging in the assessment of traumatic spinal cord injuries
  • Sonographic imaging in male infertility
  • Comparison of color Doppler and digital subtraction angiography in occlusive arterial disease in patients with lower limb ischemia
  • The role of CT urography in Haematuria
  • Role of functional magnetic resonance imaging in making brain tumor surgery safer
  • Prediction of pre-eclampsia and fetal growth restriction by uterine artery Doppler
  • Role of grayscale and color Doppler ultrasonography in the evaluation of neonatal cholestasis
  • Validity of MRI in the diagnosis of congenital anorectal anomalies
  • Role of sonography in assessment of clubfoot
  • Role of diffusion MRI in preoperative evaluation of brain neoplasms
  • Imaging of upper airways for pre-anaesthetic evaluation purposes and for laryngeal afflictions.
  • A study of multivessel (arterial and venous) Doppler velocimetry in intrauterine growth restriction
  • Multiparametric 3-Tesla MRI of suspected prostatic malignancy.
  • Role of sonography in characterization of thyroid nodules for differentiating benign from malignant lesions: a prospective study
  • Role of advanced magnetic resonance imaging sequences in multiple sclerosis
  • Role of multidetector computed tomography in evaluation of jaw lesions
  • Role of Ultrasound and MR Imaging in the Evaluation of Musculotendinous Pathologies of Shoulder Joint
  • Role of perfusion computed tomography in the evaluation of cerebral blood flow, blood volume and vascular permeability of cerebral neoplasms
  • MRI flow quantification in the assessment of the commonest CSF flow abnormalities
  • Role of diffusion-weighted MRI in evaluation of prostate lesions and its histopathological correlation
  • CT enterography in evaluation of small bowel disorders
  • Comparison of perfusion magnetic resonance imaging (PMRI), magnetic resonance spectroscopy (MRS) in and positron emission tomography-computed tomography (PET/CT) in post radiotherapy treated gliomas to detect recurrence
  • Role of multidetector computed tomography in evaluation of paediatric retroperitoneal masses
  • Role of Multidetector computed tomography in neck lesions
  • Estimation of standard liver volume in Indian population
  • Role of MRI in evaluation of spinal trauma
  • Role of modified sonohysterography in female factor infertility: a pilot study.
  • The role of pet-CT in the evaluation of hepatic tumors
  • Role of 3D magnetic resonance imaging tractography in assessment of white matter tracts compromise in supratentorial tumors
  • Role of dual phase multidetector computed tomography in gallbladder lesions
  • Role of multidetector computed tomography in assessing anatomical variants of nasal cavity and paranasal sinuses in patients of chronic rhinosinusitis.
  • Magnetic resonance spectroscopy in multiple sclerosis
  • Evaluation of thyroid nodules by ultrasound elastography using acoustic radiation force impulse (ARFI) imaging
  • Role of Magnetic Resonance Imaging in Intractable Epilepsy
  • Evaluation of suspected and known coronary artery disease by 128 slice multidetector CT.
  • Role of regional diffusion tensor imaging in the evaluation of intracranial gliomas and its histopathological correlation
  • Role of chest sonography in diagnosing pneumothorax
  • Role of CT virtual cystoscopy in diagnosis of urinary bladder neoplasia
  • Role of MRI in assessment of valvular heart diseases
  • High resolution computed tomography of temporal bone in unsafe chronic suppurative otitis media
  • Multidetector CT urography in the evaluation of hematuria
  • Contrast-induced nephropathy in diagnostic imaging investigations with intravenous iodinated contrast media
  • Comparison of dynamic susceptibility contrast-enhanced perfusion magnetic resonance imaging and single photon emission computed tomography in patients with Little’s disease
  • Role of Multidetector Computed Tomography in Bowel Lesions.
  • Role of diagnostic imaging modalities in evaluation of post liver transplantation recipient complications.
  • Role of multislice CT scan and barium swallow in the estimation of oesophageal tumour length
  • Value of ultrasonography in assessment of acute abdominal diseases in pediatric age group
  • Role of three dimensional multidetector CT hysterosalpingography in female factor infertility
  • Comparative evaluation of multi-detector computed tomography (MDCT) virtual tracheo-bronchoscopy and fiberoptic tracheo-bronchoscopy in airway diseases
  • Role of Multidetector CT in the evaluation of small bowel obstruction
  • Sonographic evaluation in adhesive capsulitis of shoulder
  • Utility of MR Urography Versus Conventional Techniques in Obstructive Uropathy
  • MRI of the postoperative knee
  • Role of 64 slice-multi detector computed tomography in diagnosis of bowel and mesenteric injury in blunt abdominal trauma.
  • Sonoelastography and triphasic computed tomography in the evaluation of focal liver lesions
  • Evaluation of Role of Transperineal Ultrasound and Magnetic Resonance Imaging in Urinary Stress incontinence in Women
  • Multidetector computed tomographic features of abdominal hernias
  • Evaluation of lesions of major salivary glands using ultrasound elastography
  • Transvaginal ultrasound and magnetic resonance imaging in female urinary incontinence
  • MDCT colonography and double-contrast barium enema in evaluation of colonic lesions
  • Role of MRI in diagnosis and staging of urinary bladder carcinoma
  • Spectrum of imaging findings in children with febrile neutropenia.
  • Spectrum of radiographic appearances in children with chest tuberculosis.
  • Role of computerized tomography in evaluation of mediastinal masses in pediatric patients
  • Diagnosing renal artery stenosis: Comparison of multimodality imaging in diabetic patients
  • Role of multidetector CT virtual hysteroscopy in the detection of the uterine & tubal causes of female infertility
  • Role of multislice computed tomography in evaluation of crohn’s disease
  • CT quantification of parenchymal and airway parameters on 64 slice MDCT in patients of chronic obstructive pulmonary disease
  • Comparative evaluation of MDCT and 3T MRI in radiographically detected jaw lesions.
  • Evaluation of diagnostic accuracy of ultrasonography, colour Doppler sonography and low dose computed tomography in acute appendicitis
  • Ultrasonography and magnetic resonance cholangiopancreatography (MRCP) in assessment of pediatric biliary lesions
  • Multidetector computed tomography in hepatobiliary lesions.
  • Evaluation of peripheral nerve lesions with high resolution ultrasonography and colour Doppler
  • Multidetector computed tomography in pancreatic lesions
  • Multidetector Computed Tomography in Paediatric abdominal masses.
  • Evaluation of focal liver lesions by colour Doppler and MDCT perfusion imaging
  • Sonographic evaluation of clubfoot correction during Ponseti treatment
  • Role of multidetector CT in characterization of renal masses
  • Study to assess the role of Doppler ultrasound in evaluation of arteriovenous (AV) hemodialysis fistulas and the complications of hemodialysis vascular access
  • Comparative study of multiphasic contrast-enhanced CT and contrast-enhanced MRI in the evaluation of hepatic mass lesions
  • Sonographic spectrum of rheumatoid arthritis
  • Diagnosis & staging of liver fibrosis by ultrasound elastography in patients with chronic liver diseases
  • Role of multidetector computed tomography in assessment of jaw lesions.
  • Role of high-resolution ultrasonography in the differentiation of benign and malignant thyroid lesions
  • Radiological evaluation of aortic aneurysms in patients selected for endovascular repair
  • Role of conventional MRI, and diffusion tensor imaging tractography in evaluation of congenital brain malformations
  • To evaluate the status of coronary arteries in patients with non-valvular atrial fibrillation using 256 multirow detector CT scan
  • A comparative study of ultrasonography and CT – arthrography in diagnosis of chronic ligamentous and meniscal injuries of knee
  • Multi detector computed tomography evaluation in chronic obstructive pulmonary disease and correlation with severity of disease
  • Diffusion weighted and dynamic contrast enhanced magnetic resonance imaging in chemoradiotherapeutic response evaluation in cervical cancer.
  • High resolution sonography in the evaluation of non-traumatic painful wrist
  • The role of trans-vaginal ultrasound versus magnetic resonance imaging in diagnosis & evaluation of cancer cervix
  • Role of multidetector row computed tomography in assessment of maxillofacial trauma
  • Imaging of vascular complication after liver transplantation.
  • Role of magnetic resonance perfusion weighted imaging & spectroscopy for grading of glioma by correlating perfusion parameter of the lesion with the final histopathological grade
  • Magnetic resonance evaluation of abdominal tuberculosis.
  • Diagnostic usefulness of low dose spiral HRCT in diffuse lung diseases
  • Role of dynamic contrast enhanced and diffusion weighted magnetic resonance imaging in evaluation of endometrial lesions
  • Contrast-enhanced digital mammography and digital breast tomosynthesis in early diagnosis of breast lesions
  • Evaluation of Portal Hypertension with Colour Doppler flow imaging and magnetic resonance imaging
  • Evaluation of musculoskeletal lesions by magnetic resonance imaging
  • Role of diffusion magnetic resonance imaging in assessment of neoplastic and inflammatory brain lesions
  • Radiological spectrum of chest diseases in HIV-infected children
  • High-resolution ultrasonography in neck masses in children
  • Sonographic evaluation of peripheral nerves in type 2 diabetes mellitus.
  • Role of perfusion computed tomography in the evaluation of neck masses and correlation with surgical findings
  • Role of ultrasonography in the diagnosis of knee joint lesions
  • Role of ultrasonography in evaluation of various causes of pelvic pain in first trimester of pregnancy.
  • Role of Magnetic Resonance Angiography in the Evaluation of Diseases of Aorta and its Branches
  • MDCT fistulography in evaluation of fistula-in-ano
  • Role of multislice CT in diagnosis of small intestine tumors
  • Role of high resolution CT in differentiation between benign and malignant pulmonary nodules in children
  • A study of multidetector computed tomography urography in urinary tract abnormalities
  • Role of high resolution sonography in assessment of ulnar nerve in patients with leprosy.
  • Pre-operative radiological evaluation of locally aggressive and malignant musculoskeletal tumours by computed tomography and magnetic resonance imaging.
  • The role of ultrasound & MRI in acute pelvic inflammatory disease
  • Ultrasonography compared to computed tomographic arthrography in the evaluation of shoulder pain
  • Role of Multidetector Computed Tomography in patients with blunt abdominal trauma.
  • The Role of Extended field-of-view Sonography and compound imaging in Evaluation of Breast Lesions
  • Evaluation of focal pancreatic lesions by Multidetector CT and perfusion CT
  • Evaluation of breast masses on sono-mammography and colour Doppler imaging
  • Role of CT virtual laryngoscopy in evaluation of laryngeal masses
  • Triple phase multi detector computed tomography in hepatic masses
  • Role of transvaginal ultrasound in diagnosis and treatment of female infertility
  • Role of ultrasound and color Doppler imaging in assessment of acute abdomen due to female genital causes
  • High resolution ultrasonography and color Doppler ultrasonography in scrotal lesion
  • Evaluation of diagnostic accuracy of ultrasonography with colour Doppler vs low dose computed tomography in salivary gland disease
  • Role of multidetector CT in diagnosis of salivary gland lesions
  • Comparison of diagnostic efficacy of ultrasonography and magnetic resonance cholangiopancreatography in obstructive jaundice: A prospective study
  • Evaluation of varicose veins-comparative assessment of low dose CT venogram with sonography: pilot study
  • Role of mammotome in breast lesions
  • The role of interventional imaging procedures in the treatment of selected gynecological disorders
  • Role of transcranial ultrasound in diagnosis of neonatal brain insults
  • Role of multidetector CT virtual laryngoscopy in evaluation of laryngeal mass lesions
  • Evaluation of adnexal masses on sonomorphology and color Doppler imaging
  • Role of radiological imaging in diagnosis of endometrial carcinoma
  • Comprehensive imaging of renal masses by magnetic resonance imaging
  • The role of 3D & 4D ultrasonography in abnormalities of fetal abdomen
  • Diffusion weighted magnetic resonance imaging in diagnosis and characterization of brain tumors in correlation with conventional MRI
  • Role of diffusion weighted MRI imaging in evaluation of cancer prostate
  • Role of multidetector CT in diagnosis of urinary bladder cancer
  • Role of multidetector computed tomography in the evaluation of paediatric retroperitoneal masses.
  • Comparative evaluation of gastric lesions by double-contrast barium upper GI study and multidetector computed tomography
  • Evaluation of hepatic fibrosis in chronic liver disease using ultrasound elastography
  • Role of MRI in assessment of hydrocephalus in pediatric patients
  • The role of sonoelastography in characterization of breast lesions
  • The influence of volumetric tumor doubling time on survival of patients with intracranial tumours
  • Role of perfusion computed tomography in characterization of colonic lesions
  • Role of proton MR spectroscopy in the evaluation of temporal lobe epilepsy
  • Role of Doppler ultrasound and multidetector CT angiography in evaluation of peripheral arterial diseases.
  • Role of multidetector computed tomography in paranasal sinus pathologies
  • Role of virtual endoscopy using MDCT in detection & evaluation of gastric pathologies
  • High resolution 3 Tesla MRI in the evaluation of ankle and hindfoot pain.
  • Transperineal ultrasonography in infants with anorectal malformation
  • CT portography using MDCT versus color Doppler in detection of varices in cirrhotic patients
  • Role of CT urography in the evaluation of a dilated ureter
  • Characterization of pulmonary nodules by dynamic contrast-enhanced multidetector CT
  • Comprehensive imaging of acute ischemic stroke on multidetector CT
  • The role of fetal MRI in the diagnosis of intrauterine neurological congenital anomalies
  • Role of Multidetector computed tomography in pediatric chest masses
  • Multimodality imaging in the evaluation of palpable & non-palpable breast lesion.
  • Sonographic Assessment Of Fetal Nasal Bone Length At 11-28 Gestational Weeks And Its Correlation With Fetal Outcome.
  • Role Of Sonoelastography And Contrast-Enhanced Computed Tomography In Evaluation Of Lymph Node Metastasis In Head And Neck Cancers
  • Role Of Renal Doppler And Shear Wave Elastography In Diabetic Nephropathy
  • Evaluation Of Relationship Between Various Grades Of Fatty Liver And Shear Wave Elastography Values
  • Evaluation and characterization of pelvic masses of gynecological origin by USG, color Doppler and MRI in females of reproductive age group
  • Radiological evaluation of small bowel diseases using computed tomographic enterography
  • Role of coronary CT angiography in patients of coronary artery disease
  • Role of multimodality imaging in the evaluation of pediatric neck masses
  • Role of CT in the evaluation of craniocerebral trauma
  • Role of magnetic resonance imaging (MRI) in the evaluation of spinal dysraphism
  • Comparative evaluation of triple phase CT and dynamic contrast-enhanced MRI in patients with liver cirrhosis
  • Evaluation of the relationship between carotid intima-media thickness and coronary artery disease in patients evaluated by coronary angiography for suspected CAD
  • Assessment of hepatic fat content in fatty liver disease by unenhanced computed tomography
  • Correlation of vertebral marrow fat on spectroscopy and diffusion-weighted MRI imaging with bone mineral density in postmenopausal women.
  • Comparative evaluation of CT coronary angiography with conventional catheter coronary angiography
  • Ultrasound evaluation of kidney length & descending colon diameter in normal and intrauterine growth-restricted fetuses
  • A prospective study of hepatic vein waveform and splenoportal index in liver cirrhosis: correlation with Child-Pugh classification and presence of esophageal varices.
  • CT angiography to evaluate coronary artery bypass graft patency in symptomatic patients
  • Functional assessment of myocardium by cardiac MRI in patients with myocardial infarction
  • MRI evaluation of HIV positive patients with central nervous system manifestations
  • MDCT evaluation of mediastinal and hilar masses
  • Evaluation of rotator cuff & labro-ligamentous complex lesions by MRI & MRI arthrography of shoulder joint
  • Role of imaging in the evaluation of soft tissue vascular malformation
  • Role of MRI and ultrasonography in the evaluation of multifidus muscle pathology in chronic low back pain patients
  • Role of ultrasound elastography in the differential diagnosis of breast lesions
  • Role of magnetic resonance cholangiopancreatography in evaluating dilated common bile duct in patients with symptomatic gallstone disease.
  • Comparative study of CT urography & hybrid CT urography in patients with haematuria.
  • Role of MRI in the evaluation of anorectal malformations
  • Comparison of ultrasound-Doppler and magnetic resonance imaging findings in rheumatoid arthritis of hand and wrist
  • Role of Doppler sonography in the evaluation of renal artery stenosis in hypertensive patients undergoing coronary angiography for coronary artery disease.
  • Comparison of radiography, computed tomography and magnetic resonance imaging in the detection of sacroiliitis in ankylosing spondylitis.
  • MR evaluation of painful hip
  • Role of MRI imaging in pretherapeutic assessment of oral and oropharyngeal malignancy
  • Evaluation of diffuse lung diseases by high resolution computed tomography of the chest
  • MR evaluation of brain parenchyma in patients with craniosynostosis.
  • Diagnostic and prognostic value of cardiovascular magnetic resonance imaging in dilated cardiomyopathy
  • Role of multiparametric magnetic resonance imaging in the detection of early carcinoma prostate
  • Role of magnetic resonance imaging in white matter diseases
  • Role of sonoelastography in assessing the response to neoadjuvant chemotherapy in patients with locally advanced breast cancer.
  • Role of ultrasonography in the evaluation of carotid and femoral intima-media thickness in predialysis patients with chronic kidney disease
  • Role of H1 MR spectroscopy in focal bone lesions of the peripheral skeleton
  • Choline detection by MR spectroscopy in breast cancer and its correlation with biomarkers and histological grade.
  • Ultrasound and MRI evaluation of axillary lymph node status in breast cancer.
  • Role of sonography and magnetic resonance imaging in evaluating chronic lateral epicondylitis.
  • Comparative evaluation of sonography, including Doppler and sonoelastography, in cervical lymphadenopathy.
  • Evaluation of Umbilical Coiling Index as Predictor of Pregnancy Outcome.
  • Computerized Tomographic Evaluation of Azygoesophageal Recess in Adults.
  • Lumbar Facet Arthropathy in Low Backache.
  • Urethral injuries after pelvic trauma: evaluation with urethrography
  • Role of CT in diagnosis of inflammatory renal diseases
  • Diagnostic role of diffusion-weighted MR imaging in neck masses
  • USG and MRI correlation of congenital CNS anomalies
  • HRCT in interstitial lung disease
  • X-Ray, CT and MRI correlation of bone tumors
  • Study on the diagnostic and prognostic utility of X-rays for cases of pulmonary tuberculosis under RNTCP
  • Role of magnetic resonance imaging in the characterization of female adnexal pathology
  • CT angiography of carotid atherosclerosis and NECT brain in cerebral ischemia: a correlative analysis
  • Role of CT scan in the evaluation of paranasal sinus pathology
  • USG and MRI correlation of shoulder joint pathology
  • Radiological evaluation of a patient presenting with extrapulmonary tuberculosis
  • CT and MRI correlation in focal liver lesions
  • Comparison of MDCT virtual cystoscopy with conventional cystoscopy in bladder tumors
  • Bleeding vessels in life-threatening hemoptysis: comparison of 64-detector-row CT angiography with conventional angiography prior to endovascular management
  • Role of transarterial chemoembolization in unresectable hepatocellular carcinoma
  • Comparison of color flow duplex study with digital subtraction angiography in the evaluation of peripheral vascular disease
  • A study to assess the efficacy of magnetization transfer ratio in differentiating tuberculoma from neurocysticercosis
  • MR evaluation of uterine mass lesions in correlation with transabdominal and transvaginal ultrasound, using HPE as the gold standard
  • The role of power Doppler imaging with transrectal ultrasonogram-guided prostate biopsy in the detection of prostate cancer
  • Lower limb arteries assessed with Doppler angiography: a prospective comparative study with multidetector CT angiography
  • Comparison of sildenafil with papaverine in penile Doppler by assessing hemodynamic changes
  • Evaluation of the efficacy of sonosalpingogram for assessing tubal patency in infertile patients, with hysterosalpingogram as the gold standard
  • Role of CT enteroclysis in the evaluation of small bowel diseases
  • MRI colonography versus conventional colonoscopy in the detection of colonic polyposis
  • Magnetic resonance imaging of the anteroposterior diameter of the midbrain: differentiation of progressive supranuclear palsy from Parkinson disease
  • MRI evaluation of anterior cruciate ligament tears with arthroscopic correlation
  • The clinicoradiological profile of cerebral venous sinus thrombosis with prognostic evaluation using MR sequences
  • Role of MRI in the evaluation of pelvic floor integrity in stress incontinent patients
  • Doppler ultrasound evaluation of hepatic venous waveform in portal hypertension before and after propranolol
  • Role of transrectal sonography with colour Doppler and MRI in evaluation of prostatic lesions with TRUS-guided biopsy correlation
  • Ultrasonographic evaluation of painful shoulders and correlation of rotator cuff pathologies with clinical examination
  • Colour Doppler evaluation of common adult hepatic tumors more than 2 cm, with HPE and CECT correlation
  • Clinical relevance of MR urethrography in obliterative posterior urethral stricture
  • Prediction of adverse perinatal outcome in growth-restricted fetuses with antenatal Doppler study
  • Radiological evaluation of spinal dysraphism using CT and MRI
  • Evaluation of the temporal bone in cholesteatoma patients by high-resolution computed tomography
  • Radiological evaluation of primary brain tumours using computed tomography and magnetic resonance imaging
  • Three-dimensional colour Doppler sonographic assessment of changes in volume and vascularity of fibroids, before and after uterine artery embolization
  • In-phase/opposed-phase imaging of bone marrow in differentiating neoplastic lesions
  • Role of dynamic MRI in replacing the isotope renogram in the functional evaluation of PUJ obstruction
  • Characterization of adrenal masses with contrast-enhanced CT: washout study
  • A study on the accuracy of magnetic resonance cholangiopancreatography
  • Evaluation of the median nerve in carpal tunnel syndrome by high-frequency ultrasound and colour Doppler in comparison with nerve conduction studies
  • Correlation of Agatston score in patients with obstructive and nonobstructive coronary artery disease following STEMI
  • Doppler ultrasound assessment of tumor vascularity in locally advanced breast cancer at diagnosis and following primary systemic chemotherapy.
  • Validation of two-dimensional perineal ultrasound and dynamic magnetic resonance imaging in pelvic floor dysfunction.
  • Role of MR urethrography compared to conventional urethrography in the surgical management of obliterative urethral stricture.

Search Diagnostic Imaging Research Topics

You can also search research-related resources on our custom search engine.

A Search Engine for Radiology Presentations

Free Resources for Preparing Radiology Thesis

  • Radiology thesis topics- Benha University – Free to download thesis
  • Radiology thesis topics – Faculty of Medical Science Delhi
  • Radiology thesis topics – IPGMER
  • Fetal Radiology thesis Protocols
  • Radiology thesis and dissertation topics
  • Radiographics

Proofreading Your Thesis:

Make sure you use Grammarly to check the spelling, grammar, and plagiarism in your thesis. Grammarly has affordable paid subscriptions, Windows/macOS apps, and FREE browser extensions. It is an excellent tool to avoid inadvertent spelling mistakes in your research projects. It has an extensive built-in vocabulary, but you should create an account and add your own medical glossary to it.


Guidelines for Writing a Radiology Thesis:

These are general guidelines and are not specific to radiology. You can share these with colleagues from other departments as well. Special thanks to Dr. Sanjay Yadav sir for these. Here are a couple of handy presentations to start writing a thesis:

Read the general guidelines for writing a thesis (the page will take some time to load: more than 70 pages!).

A format for thesis protocol with a sample patient information sheet, sample patient consent form, sample application letter for thesis, and sample certificate.

Resources and References:

  • Guidelines for thesis writing.
  • Format for thesis protocol
  • Thesis protocol writing guidelines DNB
  • Informed consent form for Research studies from AIIMS 
  • Radiology Informed consent forms in local Indian languages.
  • Sample Informed Consent form for Research in Hindi
  • Guide to write a thesis by Dr. P R Sharma
  • Guidelines for thesis writing by Dr. Pulin Gupta.
  • Preparing MD/DNB thesis by A Indrayan
  • Another good thesis reference protocol

Hopefully, this post will make the tedious task of writing a Radiology thesis a little bit easier for you. Best of luck with writing your thesis and your residency too!

More guides for residents:

  • Guide for the MD/DMRD/DNB radiology exam!
  • Guide for First-Year Radiology Residents

  • FRCR Exam: THE Most Comprehensive Guide (2022)!

  • Radiology Practical Exams Questions compilation for MD/DNB/DMRD!
  • Radiology Exam Resources (Oral Recalls, Instruments, etc.)!
  • Tips and Tricks for DNB/MD Radiology Practical Exam

  • FRCR 2B Exam - Tips and Tricks!

  • FRCR exam preparation – An alternative take!
  • Why did I take up Radiology?
  • Radiology Conferences – A comprehensive guide!
  • ECR (European Congress Of Radiology)
  • European Diploma in Radiology (EDiR) – The Complete Guide!
  • Radiology NEET PG guide – How to select THE best college for post-graduation in Radiology (includes personal insights)!
  • Interventional Radiology – All Your Questions Answered!
  • What It Means To Be A Radiologist: A Guide For Medical Students!
  • Radiology Mentors for Medical Students (Post NEET-PG)
  • MD vs DNB Radiology: Which Path is Right for Your Career?
  • DNB Radiology OSCE – Tips and Tricks

More radiology resources here: Radiology resources. This page will be updated regularly. Kindly leave your feedback in the comments or send us a message here. Also, you can comment below regarding your department’s thesis topics.

Note: All topics have been compiled from available online resources. If anyone has an issue with any radiology thesis topics displayed here, you can message us here, and we can delete them. These are only sample guidelines. Thesis guidelines differ from institution to institution.

Image source: Thesis complete! (2018). Flickr. Retrieved 12 August 2018, from https://www.flickr.com/photos/cowlet/354911838 by Victoria Catterson

About The Author

Dr. Amar Udare, MD



A holistic overview of deep learning approach in medical imaging

  • Regular Paper
  • Published: 21 January 2022
  • Volume 28, pages 881–914 (2022)


  • Rammah Yousef
  • Gaurav Gupta
  • Nabhan Yousef
  • Manju Khari (ORCID: orcid.org/0000-0001-5395-5335)


Abstract

Medical images are a rich source of invaluable information for clinicians. Recent technologies have introduced many advancements for exploiting this information to the fullest and using it to generate better analyses. Deep learning (DL) techniques have been applied to medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements for radiologists and other specialists analyzing these images. In this paper, we present a survey of DL techniques used for a variety of tasks across the different medical image modalities, providing a critical review of the recent developments in this direction. We have organized the paper to explain the significant traits and concepts of deep learning in a way that is helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection, etc.) that are commonly used for clinical purposes at different anatomical sites, and we also present the main key terms of DL, such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to be at the core of medical image analysis. We conclude by addressing some research challenges, the solutions suggested for them in the literature, and promising future directions for further development.


1 Introduction

Health is undoubtedly at the top of the hierarchy of concerns in our lives. Throughout history, humans have struggled with diseases that cause death; within our lifetimes, we are fighting an enormous number of diseases while also improving life expectancy and health status significantly. Historically, medicine could not find cures for numerous diseases for many reasons, ranging from clinical equipment and sensors to the analytical tools applied to the collected medical data. The fields of big data, AI, and cloud computing have played a massive role in every aspect of handling these data. Worldwide, Artificial Intelligence (AI) has become common and well known to most people due to the rapid progress achieved in almost every domain of our lives. The importance of AI stems from the remarkable progress of just the last two decades, and the field is still growing, with specialists from different disciplines investing in it. AI's algorithmic advances are attributable to the availability of big data and the efficiency of the modern computing infrastructure provided lately.

This paper aims to give a holistic overview of healthcare as an application of AI, and of deep learning in particular. The paper starts with an overview of medical imaging as an application of deep learning and then moves to why we need AI in healthcare; in this section, we give the key terms for how AI is used with the two main types of medical data, medical images and medical signals. To provide a moderately rich general perspective, we mention the well-known datasets that are widely used for generalization and the main pathologies, starting from classification and detection of a disease, to segmentation and treatment, and finally survival rates and prognostics. We discuss each pathology in detail with the relevant key features and the significant results found in the literature. In the last section, we discuss the challenges of deep learning and the future scope of AI in healthcare. Generally, AI has become a fundamental part of modern medicine: in short, software that can learn from data like a human being, systematically develop experience, and finally deliver a solution or diagnosis, often faster than humans. AI has become an assistive tool in medicine, introducing benefits such as error reduction, improved accuracy, fast computing, and better diagnosis to help doctors work efficiently. From a clinical perspective, AI is now used to help doctors in decision-making thanks to faster pattern recognition from medical data, which are also registered more precisely in computers than by humans; moreover, AI can manage and monitor patients' data and create personalized medical plans for future treatments. Ultimately, AI has proved helpful in the medical field at different levels, such as telemedicine diagnosis of diseases, decision-making assistance, and drug discovery and development. Machine learning (ML) and deep learning (DL) have tremendous uses in healthcare, such as clinical decision support (CDS) systems, which incorporate human knowledge or large datasets to provide clinical recommendations. Another application is to analyze large amounts of historical data and extract insights that can predict the future course of a patient's case using pattern identification. In this paper, we highlight the top deep learning advancements and applications in medical imaging. Figure 1 shows the workflow chart of the paper's highlights.

Fig. 1: Deep learning implementation and traits for medical imaging applications

2 Background concepts

2.1 Medical imaging

Deep learning in medical imaging [ 1 ] is the contemporary face of AI, responsible for top breakthroughs in numerous scientific domains including computer vision [ 2 ], Natural Language Processing (NLP) [ 3 ], and chemical structure analysis, where deep learning excels at highly complicated processes. Owing to its robustness when dealing with images, deep learning has lately attracted great interest in medical imaging, and it holds a promising future for this field. The main reason DL is preferable is that medical data are large and varied, spanning medical images, medical signals, and medical logs of patient-monitoring data from body sensors. Analyzing these data, especially historical data, by learning very complex mathematical models and extracting meaningful information is the key area where DL schemes have outperformed humans. In other words, a DL framework will not replace doctors, but it will assist them in decision-making and enhance the accuracy of the final diagnosis. Our workflow procedure is shown in Fig. 1.

2.1.1 Types of medical imaging

There are plenty of medical image types, and selecting the type depends on the usage. A study held in the US [ 4 ] found that a few basic modalities are the most widely used and still increasing in use: Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, and Positron Emission Tomography (PET) at the top, along with other common modalities such as X-ray, ultrasound, and histology slides. Medical images are known to be complicated; in some cases, acquisition of these images is a long process with specific technical implications, e.g., a single MRI study may need over 100 megabytes of storage.

Because of a lack of standardization during image acquisition and diversity in scanning devices' settings, a phenomenon called "distribution drift" may arise and cause non-standard acquisitions. From a clinical perspective, medical images are a key part of diagnosing a disease and then of treatment as well. In traditional diagnosis, a radiologist reviews the image and then provides the doctors with a report of his findings. Images are also an important part of invasive procedures used in further treatment, e.g., surgical operations or radiation therapy [ 5 , 6 ].

2.2 DL frameworks

Conceptually, Artificial Neural Networks (ANNs) mimic the human nervous system in structure and operation. Medical imaging [ 7 ] is the field that specializes in observing and analyzing the physical status of the human body by generating visual representations, such as images of internal tissues or organs, through either invasive or non-invasive procedures.

2.2.1 Key technologies and deep learning

Historically, the AI paradigm was proposed in the 1970s, and it has two major subcategories: Machine Learning (ML) and Deep Learning (DL). Early AI used heuristics-based techniques for extracting features from data; later developments moved to handcrafted feature extraction and finally to supervised learning. Convolutional Neural Networks (CNNs) [ 8 ] are the architecture used on images, and specifically on medical images. CNNs are known to be data-hungry, which makes them well suited to images, and the recent developments in hardware specifications and GPUs have helped greatly in running CNN algorithms for medical image analysis. The generalized formulation of how CNNs work was proposed by LeCun et al. [ 9 ], who used error backpropagation for the first example of handwritten digit recognition. Ultimately, CNNs have become the predominant architecture among all AI algorithms; the volume of CNN research has increased, especially in medical image analysis, and many new variants have been proposed. In this section, we explain the fundamentals of DL and its algorithmic path in medical imaging. The commonly known categories of deep learning and their subcategories are discussed in this section and shown in Fig. 2.

Fig. 2: DL basic categories as per paper organization

2.2.2 Supervised learning

Convolutional neural networks: CNNs [ 10 ] have taken a major role in many aspects and lead the work in image-based tasks, including image reconstruction, enhancement, classification, segmentation, registration, and localization. CNNs are considered the foremost deep learning algorithm for images and visual processing because of their robustness in reducing image dimensionality without losing the image's important features; this way, a CNN deals with fewer parameters, which increases computational efficiency. Another key point about CNNs is that the architecture is suitable for hospital use because it can handle both 2D and 3D images: some medical image modalities, such as X-ray images, are 2D, while MRI and CT scans are 3-dimensional. In this section, we explain the CNN architecture as the heart of deep learning in medical imaging.

Convolutional layer: Before deep learning and CNNs, convolution was used in image processing to extract specific features from an image, such as corners, edges (e.g., with a Sobel filter), or noise, by applying a particular filter or kernel to the image. The operation is performed by sliding the filter over the image in a sliding-window fashion until the whole image is covered. In a CNN, the early layers are usually designed to extract low-level features, such as lines and edges, while the deeper layers build up to extract higher-level features, such as full objects within an image. An advantage of modern CNNs is that the filters can be 2D or 3D, with multiple filters stacked to form a volume, depending on the application. The main distinction of a CNN is that the architecture makes the elements of each filter the network's weights. The idea behind the CNN architecture is the convolution operation, denoted by the symbol *. Equation (1) represents the (discrete) convolution operation

$$s(t) = (I * K)(t) = \sum_{a} I(a)\,K(t-a) \qquad (1)$$

where s(t) is the output feature map and I(t) is the original image to be convolved with the filter K(a).
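To make Eq. (1) concrete in two dimensions, here is a minimal sketch (assuming PyTorch is available) that slides a hand-crafted 3×3 Sobel kernel over a toy grayscale image with torch.nn.functional.conv2d; in a trained CNN, the kernel values would be learned weights rather than fixed numbers, and the image here is just random data for illustration.

```python
import torch
import torch.nn.functional as F

# A toy 1x1x8x8 "grayscale image" (batch, channels, height, width).
image = torch.rand(1, 1, 8, 8)

# A 3x3 Sobel kernel that responds to vertical edges; in a CNN these
# weights would be learned rather than hand-crafted.
sobel = torch.tensor([[-1., 0., 1.],
                      [-2., 0., 2.],
                      [-1., 0., 1.]]).view(1, 1, 3, 3)

# Slide the kernel over the image (stride 1, no padding): the discrete
# 2-D analogue of Eq. (1), producing a 6x6 feature map.
feature_map = F.conv2d(image, sobel)
print(feature_map.shape)  # torch.Size([1, 1, 6, 6])
```

With an 8×8 input and a 3×3 kernel (stride 1, no padding), the output feature map shrinks to 6×6, which is exactly the sliding-window behavior described above.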

Activation function: Activation functions are the enable button of a neuron. In CNNs, several activation functions are popular and widely used, such as sigmoid, tanh, ReLU, Leaky ReLU, and Randomized ReLU. In medical imaging especially, most papers found in the literature use the ReLU activation function, which is defined by the formula

$$f(x) = \max(0, x)$$

where x represents the input of a neuron. Other activation functions used in CNNs include sigmoid, tanh, and Leaky ReLU.
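For concreteness, the snippet below evaluates the activation functions named above on a few sample values using PyTorch's built-ins; the input values are arbitrary.

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])

print(torch.relu(x))           # ReLU: max(0, x), zeroes out negatives
print(F.leaky_relu(x, 0.01))   # Leaky ReLU: small slope for negatives
print(torch.sigmoid(x))        # squashes values into (0, 1)
print(torch.tanh(x))           # squashes values into (-1, 1)
```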

Pooling layer: This layer is mainly used to reduce the number of parameters to be computed; it reduces the spatial size of the image but not the number of channels. There are a few pooling layers, such as max-pooling, average-pooling, and L2-normalization pooling, with max-pooling the most widely used. Max-pooling takes the maximum value at each position of the feature map after the convolution operation.

Fully connected layer: This is the same layer used in a conventional ANN, where each neuron is typically connected to all neurons in both the previous and the next layer, making the computation very expensive. A CNN model can use stochastic gradient descent to learn significant associations from the training examples. The benefit of a CNN is that it gradually reduces the feature-map size before the features are finally flattened to feed the fully connected layer, which in turn computes the probability scores of the target classes for classification. The fully connected layer is the last layer in a CNN model; it processes the features extracted by the preceding convolutional and pooling layers and finally indicates which class an image belongs to.
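Putting the three layer types together, here is a minimal, hypothetical PyTorch classifier in the spirit of the pipeline just described (convolution → ReLU → max-pooling → fully connected); the input size (single-channel 64×64 images) and the two output classes are illustrative assumptions, not a model from the paper.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 64x64 -> 64x64
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 32x32
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # -> 32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x16
        )
        # 16 channels x 16 x 16 spatial positions after pooling.
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)  # keep the batch dimension
        return self.classifier(x)          # raw class scores (logits)

model = TinyCNN()
logits = model(torch.rand(4, 1, 64, 64))   # a batch of 4 fake images
print(logits.shape)                        # torch.Size([4, 2])
```

Note how each max-pooling stage halves the spatial size while the channel count grows, so the flattened feature vector fed to the fully connected layer stays manageable.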

Recurrent neural networks: The RNN is a major supervised deep learning model specialized in analyzing sequential data and time series. We can imagine an RNN as an ordinary neural network in which each layer represents the observations at a particular time (t). In [ 11 ], an RNN was used for text generation, which further connects to speech recognition, text prediction, and other applications. RNNs are called recurrent because the same operation is performed for every element of a sequence, with the output depending on the computation for the previous element; in general, the output of a layer is fed back as an input to the same layer, as shown in Fig. 3. Moreover, since backpropagation through time suffers from vanishing gradients, a network evolved from the RNN, the Long Short-Term Memory (LSTM), is commonly used, sometimes with bidirectional gated recurrent units (BGRUs), to help the network hold long-term dependencies.

Fig. 3: Basic common deep learning architectures. A Restricted Boltzmann Machine. B Recurrent Neural Network (RNN). C Autoencoders. D GANs

A few papers in the literature use RNNs in medical imaging, particularly for segmentation: in [ 12 ], Chen et al. used an RNN along with a CNN for segmenting fungal and neuronal structures from 3D images. Another application of RNNs is image caption generation [ 13 ], where such models can be used to annotate medical images like X-rays with text captions extracted and trained from radiologists' reports [ 14 ]. Ruoxuan Cui et al. [ 15 ] used a combination of a CNN and an RNN for diagnosing Alzheimer's disease: their CNN model performed the classification task, after which the CNN's output was fed to an RNN with cascaded bidirectional gated recurrent unit (BGRU) layers to extract the longitudinal features of the disease. In summary, RNNs are commonly used together with a CNN model in medical imaging. In [ 16 ], the authors developed a novel RNN for speeding up an iterative MAP estimation algorithm.
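The following is a loose sketch of that CNN-plus-RNN pattern, not the architecture of any cited paper: a small CNN encodes each time point of a longitudinal series of randomly generated 32×32 "scans" into a feature vector, and an LSTM consumes the resulting sequence; all sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """Encodes one single-channel 32x32 image into a feature vector."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(8 * 16 * 16, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = CNNEncoder()
rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

scans = torch.rand(2, 5, 1, 32, 32)       # 2 patients, 5 visits each
# Encode each visit, then stack along the time dimension.
feats = torch.stack([encoder(scans[:, t]) for t in range(5)], dim=1)
outputs, (h_n, c_n) = rnn(feats)          # h_n summarizes the series
print(outputs.shape, h_n.shape)           # (2, 5, 64), (1, 2, 64)
```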

2.2.3 Unsupervised deep learning

Besides the CNN as a supervised machine learning algorithm in medical imaging, there are a few unsupervised learning algorithms for this purpose as well, such as Deep Belief Networks (DBNs), autoencoders, and Generative Adversarial Networks (GANs); the last has been used not only for image-based tasks but also for data synthesis and augmentation. Unsupervised learning models have been used for different medical imaging applications, such as motion tracking [ 17 ], general modeling, classification improvement [ 18 ], artifact reduction [ 19 ], and medical image registration [ 20 ]. In this section, we list the most used unsupervised learning structures.

2.2.3.1 Autoencoders

Autoencoders [ 21 , 22 ] are an unsupervised deep learning algorithm in which the model keeps the important features of the input data and dismisses the rest. These learned feature representations are called 'codings', and the approach is commonly called representation learning. The basic architecture is shown in Fig. 3. The robustness of autoencoders stems from their ability to reconstruct output data similar to the input data, because the cost function penalizes the model when the output and input differ. Moreover, autoencoders act as automatic feature detectors, because in their unsupervised manner they do not need labeled data to learn from. The autoencoder architecture is similar to a standard CNN model, with the constraint that the number of input neurons must equal the number in the output layer. Reducing the dimensionality of raw input data is one use of autoencoders; in some cases they are used for denoising [ 23 ], in which case they are called denoising autoencoders. In general, there are a few kinds of autoencoders used for different purposes. For example, in sparse autoencoders [ 24 ], neurons in the hidden layer are deactivated through a threshold, limiting the number of active neurons so that the output representation stays similar to the input; to extract the most salient features from the input, most of the hidden-layer neurons are set to zero. Variational autoencoders (VAEs) [ 25 ] are generative models with two networks (encoder and decoder), where the encoder projects the input into a latent representation using a Gaussian-distribution approximation, and the decoder maps the latent representation back to the output data. Contractive autoencoders [ 26 ] and adversarial autoencoders are mostly similar to a Generative Adversarial Network (GAN).
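A minimal sketch of such an autoencoder in PyTorch, assuming flattened 28×28 inputs (784 values) and a 16-dimensional coding; note the reconstruction (MSE) loss that penalizes outputs differing from their inputs, as described above.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, coding_dim=16):
        super().__init__()
        # Encoder: compress 784 inputs down to a small "coding".
        self.encoder = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(),
            nn.Linear(128, coding_dim),
        )
        # Decoder: reconstruct the 784 values from the coding.
        self.decoder = nn.Sequential(
            nn.Linear(coding_dim, 128), nn.ReLU(),
            nn.Linear(128, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(8, 784)               # 8 fake flattened images
loss = nn.MSELoss()(model(x), x)     # penalize output != input
loss.backward()                      # gradients for a training step
```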

2.2.3.2 Generative Adversarial Networks

GANs [27, 28] were first introduced by Ian Goodfellow in 2014. A GAN consists of a combination of two networks: a generative model and a discriminator model. To understand how GANs work, the two networks are often described as two players competing against each other: the generator tries to fool the discriminator by producing near-authentic data (e.g., artificial images), while the discriminator tries to distinguish the generator's output from real data (Fig. 3). The network's name is inspired by the generator's objective of overcoming the discriminator. During training, both networks improve: the generator produces increasingly realistic data, and the discriminator learns to differentiate the two ever better, until the end-point of the whole process, where the discriminator can no longer distinguish real from artificial data (images). Both networks learn via backpropagation, with Markov chains and dropout also used. Recently, GANs have seen tremendous use across medical imaging applications, such as synthesizing new images to enhance deep learning models' efficiency by enlarging the training set [29], classification [30, 31], detection [32], segmentation [33, 34], image-to-image translation [35], and other applications. In a study by Kazeminia et al. [36], the applications of GANs in medical imaging are surveyed, and the two most used applications of these unsupervised models are image synthesis and segmentation.
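The adversarial game can be sketched in a few lines of PyTorch; the fully connected generator and discriminator below are deliberately tiny stand-ins for the CNNs used in practice.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())        # generator
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())           # discriminator
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(16, img_dim) * 2 - 1    # stand-in for a batch of real images
ones, zeros = torch.ones(16, 1), torch.zeros(16, 1)

# Discriminator step: push real -> 1 and fake -> 0
fake = G(torch.randn(16, latent_dim))
d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to fool D into outputting 1 on fakes
g_loss = bce(D(fake), ones)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```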

2.2.3.3 Restricted Boltzmann machines

Ackley et al. were the first to introduce Boltzmann machines in 1985 [37] (Fig. 3), also known as the Gibbs distribution, and Smolensky later modified them into what are known as Restricted Boltzmann Machines (RBMs) [38]. RBMs consist of two layers of neural networks with stochastic, generative, and probabilistic capabilities, and they can learn probability distributions and internal representations from a dataset. RBMs propagate the input data back through the network to generate and estimate the probability distribution of the original input, trained with a gradient-descent loss. These unsupervised models are used mostly for dimensionality reduction, filtering, classification, and feature representation learning. In medical imaging, Tulder et al. [39] modified RBMs and introduced novel convolutional RBMs for lung tissue classification from CT scans; they extracted features using different methodologies (generative, discriminative, or mixed) to construct the filters, after which a Random Forest (RF) classifier performed the classification. Finally, a stacked version of RBMs is called a Deep Belief Network (DBN) [40]. Each RBM performs a non-linear transformation whose output becomes the input to the next RBM; performing this process progressively gives the network considerable flexibility as it expands.

DBNs are generative models, which allows them to be used in supervised or unsupervised settings. Feature learning is done in an unsupervised manner through layer-by-layer pre-training. For a classification task, backpropagation (gradient descent) through the RBM stack then fine-tunes the network on the labeled dataset. DBNs have been used widely in medical imaging applications; for example, Khatami et al. [41] used this model to classify X-ray images by anatomic region and orientation; in [42], Reddy et al. proposed a hybrid deep belief network (DBN) for glioblastoma tumor classification from MRI images. Another significant application of DBNs was reported in [43], where a novel DBN framework was used for medical image fusion.
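The greedy layer-by-layer idea can be approximated with scikit-learn's BernoulliRBM, as in the sketch below. Note that this pipeline fits each stage greedily and trains only the final logistic layer on labels, so it is a rough stand-in for true DBN fine-tuning, which backpropagates through the whole stack.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Stand-in data: 200 flattened 8x8 "images" scaled to [0, 1], two classes
rng = np.random.default_rng(0)
X = rng.random((200, 64))
y = rng.integers(0, 2, 200)

# Two stacked RBMs perform layer-by-layer unsupervised feature learning;
# the logistic layer is then fit on labels as a supervised read-out.
dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=10)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))
```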

2.2.4 Self-supervised learning

Self-supervised learning is essentially a subtype of unsupervised learning in which feature representations are learned through a proxy task for which the data themselves contain the supervisory signals. After representation learning, the model is fine-tuned on annotated data. The benefit of self-supervised learning is that it eliminates the need for humans to label the data: the system extracts naturally relevant context from the data and assigns metadata to the representations as supervisory signals. It matches unsupervised learning in that both learn representations without explicitly provided labels, but it differs in that self-supervised learning does not target the inherent structure of the data and is not centered around clustering, anomaly detection, dimensionality reduction, or density estimation. The Models Genesis approach can retrieve an original image from a distorted one (e.g., after a non-linear gray-value transformation, image in-painting, image out-painting, or pixel shuffling) as its proxy task [44]. Zhu et al. [45] used self-supervised learning with a Rubik's-cube proxy task comprising three operations (rotating, masking, and ordering); the robustness of this model comes from the network being robust to noise and learning features that are invariant to rotation and translation. Azizi et al. [46] demonstrated the effectiveness of self-supervised learning as a pre-training strategy for classifying medical images on two tasks (dermatology skin condition classification and multi-label chest X-ray classification). Their study improved classification accuracy by using two self-supervised learning stages: the first trained on the ImageNet dataset and the second on unlabeled domain-specific medical images.
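The sketch below illustrates one classic proxy task, rotation prediction, where the supervisory signal (the applied rotation) is generated from the data itself; the tiny backbone is an assumption for demonstration.

```python
import torch
import torch.nn as nn

# Pretext task: predict which of 4 rotations was applied to an unlabeled image.
# The labels come from the data itself, so no human annotation is used.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten()
)
rotation_head = nn.Linear(16, 4)

images = torch.rand(8, 1, 64, 64)                  # unlabeled scans (stand-in)
k = torch.randint(0, 4, (8,))                      # 0/90/180/270 degrees
rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                       for img, r in zip(images, k)])

logits = rotation_head(backbone(rotated))
loss = nn.functional.cross_entropy(logits, k)      # supervisory signal is free
loss.backward()
# After pretext training, `backbone` is fine-tuned on the small annotated set.
```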

2.2.5 Semi-supervised learning

Semi-supervised learning stands between supervised and unsupervised learning: it is used, for example, for a classification task (supervised learning) but without having all the data labeled (unsupervised learning). The system is trained on a small labeled dataset, then generates pseudo-labels to obtain a larger labeled dataset, and the final model is trained on the mixture of the original dataset and the generated one. Nie et al. [47] proposed a semi-supervised deep network for image segmentation: a segmentation model is trained adversarially, a confidence map is computed from it, and the semi-supervised learning strategy uses that map to generate labeled data. Another application of semi-supervised learning is cardiac MRI segmentation [48]. Liu et al. [49] presented a novel relation-driven semi-supervised model for classifying medical images; they introduced a novel Sample Relation Consistency (SRC) paradigm that exploits unlabeled data by modeling the relationship information between different samples. In their experiments, they applied the method to two medical image classification benchmarks, skin lesion diagnosis from the ISIC 2018 challenge and thorax disease classification from the public ChestX-ray14 dataset, and achieved state-of-the-art results.
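A minimal pseudo-labeling round can be sketched as follows; the confidence threshold of 0.9 is an illustrative assumption, and in practice the model is retrained on the returned mixture and the round is repeated.

```python
import torch

def pseudo_label_round(model, labeled_x, labeled_y, unlabeled_x, threshold=0.9):
    """One self-training round: keep only confident predictions as pseudo-labels."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        keep = conf > threshold                    # discard uncertain samples
    # Mix the original labeled set with the newly pseudo-labeled one
    x = torch.cat([labeled_x, unlabeled_x[keep]])
    y = torch.cat([labeled_y, pseudo_y[keep]])
    return x, y                                    # retrain the model on (x, y)
```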

2.2.6 Weakly (partially) supervised learning

Weak supervision is a branch of machine learning that labels unlabeled data by exploiting noisy, limited sources to provide a supervision signal responsible for labeling large amounts of training data in a supervised manner. In general, the new labels produced by weakly supervised learning are imperfect, but they can still be used to build a robust predictive model. Weakly supervised methods use image-level annotations and weak annotations (e.g., dots and scribbles) [50]. A weakly supervised multi-label disease system was used for chest X-ray classification [51]. Weak supervision has also been used for multi-organ segmentation [52] by learning a single multi-class network from a combination of multiple datasets, each containing partially organ-labeled data with a low sample size. Roth et al. [53] used weakly supervised learning for medical image segmentation, and their results sped up the process of generating new training datasets for the development of deep learning in medical image analysis. Schlegl et al. [54] used this type of approach to detect abnormal regions in test images. Hu et al. [55] proposed an end-to-end CNN approach for displacement field prediction that aligns multiple labeled corresponding structures; the work was applied to medical image registration of prostate cancer between T2-weighted MRI and 3D transrectal ultrasound images, reaching a mean Dice score of 0.87. Another application is diabetic retinopathy detection in a retinal image dataset [56].

2.2.7 Reinforcement learning

Reinforcement learning (RL) is a subtype of deep learning in which an agent takes the action that maximizes the reward in a given situation. The main difference between supervised learning and reinforcement learning is that in the former, the training data contain the answers, whereas in reinforcement learning, the agent decides how to act on the task and, in the absence of a training dataset, learns from its own experience. Al et al. [57] used reinforcement learning for landmark localization in 3D medical images; they introduced partial policy-based RL, which learns optimal policies over smaller partial domains. The proposed method was applied to three different localization tasks in 3D CT scans and MR images, and it showed that learning the optimal behavior requires a significantly smaller number of trials. In [58], RL was used for object detection in PET images, and it has also been used for color image classification on a neuromorphic system [59].
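The following toy sketch conveys the flavor of RL-based localization with tabular Q-learning on a 1D line: the agent learns to step toward a target "landmark". Real systems act on image patches in 2D/3D with deep Q-networks, so everything here is a simplifying assumption.

```python
import numpy as np

# Toy RL localization: an agent on positions 0..10 learns to move toward
# a target position using tabular Q-learning.
n_pos, actions = 11, (-1, +1)                 # move left or right
target = 7
Q = np.zeros((n_pos, len(actions)))
rng = np.random.default_rng(0)

for _ in range(500):                          # episodes
    s = int(rng.integers(0, n_pos))
    for _ in range(20):                       # steps per episode
        # epsilon-greedy action selection
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(Q[s].argmax())
        s2 = int(np.clip(s + actions[a], 0, n_pos - 1))
        reward = 1.0 if s2 == target else -0.01
        Q[s, a] += 0.5 * (reward + 0.9 * Q[s2].max() - Q[s, a])
        s = s2
        if s == target:
            break

# Learned policy: action 1 (right) below the target, action 0 (left) above it
print(Q.argmax(axis=1))
```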

2.2.7.1 Transfer learning

Transfer learning is one of the powerful enablers of deep learning [60]; it involves training a deep learning model by re-using an already trained model built on a related or unrelated large dataset. Medical data are often scarce and insufficient for training deep learning models from scratch, so transfer learning can supply CNN models with rich features learned from non-medical images, which in turn are useful in this setting [61]. Furthermore, transfer learning addresses the time-consuming nature of training a deep neural network, because it re-uses the frozen weights and hyperparameters of another model. In typical use, the weights already trained on different data (images) are frozen for the new CNN model, modifications are made only to the last few layers, and those layers are trained on the actual data to tune the hyperparameters and weights. For these reasons, transfer learning has been widely used in medical imaging, for example for classifying interstitial lung disease [61] and detecting thoraco-abdominal lymph nodes from CT scans; it was found to be efficient despite the disparity between medical and natural images. Transfer learning can also be applied across different CNN models (e.g., VGG-16, ResNet-50, and Inception-V3): Xue et al. [62] developed transfer-learning-based versions of these models and further proposed an Ensembled Transfer Learning (ETL) framework to enhance the classification of cervical histopathological images. Overall, in many computer vision tasks, tuning only the last classification (fully connected) layers, called "shallow tuning", is often sufficient, but medical imaging needs deeper tuning of more layers [63]. The authors of [63] studied the benefit of transfer learning in four applications across three imaging modalities (polyp detection from colonoscopy videos, segmentation of the layers of the carotid artery wall from ultrasound scans, and colonoscopy video frame classification), and found that fine-tuning more CNN layers on the medical images is more efficient than training from scratch.
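A typical recipe looks like the sketch below (the torchvision ≥ 0.13 weights API is assumed): freeze the pre-trained feature extractor, replace the classification head, and optionally unfreeze the last stage for the "deep tuning" discussed above.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained weights, freeze the feature extractor,
# and retrain only the last layers on the medical dataset.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                    # freeze transferred weights

model.fc = nn.Linear(model.fc.in_features, 2)      # new head, e.g. disease vs normal

# For "deep tuning", also unfreeze the last residual stage:
for param in model.layer4.parameters():
    param.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```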

2.3 Best deep learning models and practices

Convolutional Neural Network (CNN)-based models are used in several ways, keeping in mind that the CNN remains the heart of any model. In general, a CNN can be trained from scratch on the available dataset when that dataset is very large, to perform a specific task (e.g., segmentation, classification, detection); alternatively, a model pre-trained on a large dataset (e.g., ImageNet) can be trained on a new dataset (e.g., CT scans) with fine-tuning of only some layers, the approach called transfer learning (TL) [60]. Moreover, CNN models can be used purely to extract features with strong representational power from the input images before proceeding to the next processing stage. The literature contains several commonly used CNN models that have proven their effectiveness and on which further developments have been built; we mention the most efficient and most used deep learning models in medical image analysis. First is AlexNet, introduced by Alex Krizhevsky [64]; Lu et al. [65] used transfer learning with a pre-trained AlexNet, replacing the parameters of the last three layers with random parameters, for pathological brain detection. Another frequently used model is the Visual Geometry Group network (VGG-16) [66], where 16 refers to the number of layers; later developments of VGG-16 include VGG-19, and [67] lists medical imaging applications using different VGGNet architectures. The Inception network [68], also known as GoogLeNet [71], is one of the most common CNN architectures and aims to limit resource consumption; further modifications of this basic network have been reported as newer versions [69]. Gao et al. [70] proposed a new Residual Inception Encoder-Decoder Neural Network (RIEDNet) architecture for medical image synthesis. ResNet [72] is a powerful architecture for very deep networks, sometimes over 100 layers; it limits the vanishing of gradients in the deeper layers by adding residual connections between convolutional layers (Fig. 4). ResNet models in medical imaging are mostly used for robust classification, for example of pulmonary nodules and intracranial hemorrhage [73, 74].
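The residual connection at the heart of ResNet can be written compactly; the block below is a minimal sketch (the fixed channel count and omitted downsampling are simplifications).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Identity shortcut: the block learns a residual F(x) and outputs F(x) + x,
    which keeps gradients flowing even in very deep networks."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)       # residual connection

x = torch.rand(1, 32, 64, 64)
print(ResidualBlock(32)(x).shape)        # torch.Size([1, 32, 64, 64])
```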

Fig. 4 The basic models used in medical imaging: A ResNet architecture, B U-Net architecture [75], C CNN AlexNet architecture for breast cancer [76], and D DenseNet architecture [77]

DenseNet exploits the same idea as residual CNNs (ResNet) but in a more compact form, achieving good representations and feature extraction. Each layer receives as input the outputs of all previous layers, so compared with a traditional CNN with L layers and L connections, DenseNet has L(L + 1)/2 connections. DenseNet is widely used with medical images: Mahmood et al. [78] proposed a Multimodal DenseNet for fusing multimodal data, giving the model the flexibility to combine information from multiple sources, and used this novel model for polyp characterization and landmark identification in endoscopy. Another application used transfer learning with DenseNet for fundus medical images [79].

U-Net [80] is one of the most popular network architectures, used mostly for segmentation (Fig. 4). The reason it dominates in medical imaging is its ability to localize and highlight the borders between classes (e.g., normal brain tissue and malignant tissue) by classifying each pixel. It is called U-Net because the architecture takes the shape of the letter U, and it contains concatenation (skip) connections; Fig. 4 shows the basic structure. One development of U-Net is U-Net++ [75], a new architecture for medical image segmentation; in the authors' experiments, U-Net++ outperformed both U-Net and wide U-Net on multiple medical image segmentation tasks, such as liver segmentation from CT scans, polyp segmentation in colonoscopy videos, and nuclei segmentation from microscopy images. From these popular and basic DL models, other models have been inspired, and some even rely on insights from several of them (e.g., Inception and ResNet); Fig. 5 shows a timeline of the mentioned models and other popular models.
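The essence of the U shape, downsampling, upsampling, and a concatenation skip that preserves boundary detail, fits in a one-level sketch; a real U-Net stacks four or five such levels.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One encoder/decoder level with a skip (concatenation) connection,
    ending in per-pixel class scores for segmentation."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)    # pixel-wise classification

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        u = torch.cat([u, e], dim=1)               # skip connection preserves borders
        return self.head(self.dec(u))

print(TinyUNet()(torch.rand(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```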

Fig. 5 Timeline of the most used DL models in medical imaging

3 Deep learning applications in medical imaging

To study the most common applications of deep learning in medical imaging, we organized a survey based on the most-cited papers in the literature from 2015 to 2021; the numbers of surveyed papers for segmentation, detection, classification, registration, and characterization are 30, 20, 30, 10, and 10, respectively. Figure 6 shows a pie chart of these applications.

Fig. 6 Surveyed DL applications in medical imaging

3.1 Image segmentation

Deep learning is used to segment different body structures from different imaging modalities, such as MRI, CT, PET, and ultrasound images. Segmentation means partitioning an image into different segments, where each segment usually belongs to a specific class (tissue class, organ, or biological structure) [81]. At a general level, CNN models take two main approaches to segmenting a medical image: the first uses the entire image as input, and the second uses patches from the image. The segmentation of a liver tumor using a CNN architecture, following Li et al., is shown in Fig. 7; both approaches work well in generating an output map that provides the segmented image. Segmentation is valuable for surgical planning and for determining the exact boundaries of sub-regions (e.g., tumor tissue) for better guidance during surgical resection. Segmentation is most common in neuroimaging, with brain segmentation studied more than any other organ. Akkus et al. [82] reviewed different DL models for segmentation of different organs along with their datasets. Since CNN architectures can handle both 2-dimensional and 3-dimensional images, they are well suited to MRI, which is inherently 3D; Milletari et al. [83] applied a 3D CNN to segment prostate MRI images. They proposed a new CNN architecture, V-Net, which builds on the insights of U-Net [80], and their results achieved a Dice similarity coefficient of 0.869, an efficient result given the small dataset (50 MRIs for training and 30 for testing). Havaei et al. [84] worked on glioma segmentation from BRATS-2013 with a 2D CNN model that took only 3 min to run. From a clinical point of view, organ segmentation is used to calculate clinical parameters (e.g., volume) and to improve the performance of Computer-Aided Detection (CAD) by defining regions accurately. Taghanaki et al. [85] listed the segmentation challenges from 2007 to 2020 across different imaging modalities; Fig. 8 shows the number of these challenges. We summarize deep learning models for segmenting different organs of the body, based on highly cited papers and the variety of deep learning models, in Table 1.
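The patch-based approach can be sketched generically as follows; `predict_patch` stands in for any trained segmentation model, and the non-overlapping stride is a simplifying assumption (overlapping patches with averaging are common in practice).

```python
import numpy as np

def segment_by_patches(image, predict_patch, patch=64, stride=64):
    """Tile a 2D scan into patches, segment each, and stitch the output map.
    `predict_patch` is any model returning a binary mask for one patch."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out[y:y+patch, x:x+patch] = predict_patch(image[y:y+patch, x:x+patch])
    return out

# Toy usage: "segment" bright regions of a random scan
scan = np.random.rand(256, 256)
mask = segment_by_patches(scan, lambda p: (p > 0.5).astype(np.uint8))
```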

Fig. 7 Liver tumor segmentation using a CNN architecture [86]

Fig. 8 The number of segmentation challenges in medical imaging from 2007 to 2020 listed on Grand Challenges, by imaging modality

3.2 Image detection/localization

Detection means identifying a specific region of interest in an image and drawing a bounding box around it. Localization is another term for detection, meaning determining the location of a particular structure in images. In deep learning for medical image analysis, detection is referred to as Computer-Aided Detection (CAD), Fig. 9. CAD is commonly divided into anatomical structure detection and lesion (abnormality) detection. Anatomical structure detection is a crucial task in medical image analysis: determining the locations of organ substructures and landmarks guides better organ segmentation, radiotherapy planning, and further surgical purposes. Deep learning for organ or lesion detection can use either classification-based or regression-based methods; the first discriminates body parts, while the second determines more detailed location information. In fact, most deep learning tasks are connected; for example, Yang et al. [114] proposed a custom CNN classifier for locating landmarks as the initialization step for femur bone segmentation. Lesion detection is clinically time-consuming for radiologists and physicians and may lead to errors, both because of the shortage of data needed to find abnormalities and because of the visual similarity of normal and abnormal tissues in some cases (e.g., low-contrast lesions in mammography). The potential of CAD systems comes from overcoming these drawbacks: they reduce the time needed and the computational cost, provide an alternative for people living in areas that lack specialists, and improve efficiency by streamlining the clinical workflow. Some custom CNN models have been developed specifically for lesion detection [115, 116]. Both anatomical structure and lesion detection are applicable to almost all of the body's organs (e.g., brain, eye, chest, abdomen), and CNN architectures are used for both 2D and 3D medical images. When using 3D volumes such as MRI, it is better to use a patch-based fashion, because it is more efficient than a sliding-window fashion; in this way, the whole CNN architecture is trained on patches before the fully connected layer [117]. Table 2 shows top-cited papers with different deep learning models for both structure and lesion detection in different organs.
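Detected boxes are usually matched to ground truth via intersection-over-union, which can be computed as follows.

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2);
    the usual criterion for matching a predicted bounding box to ground truth."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(box_iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.143
```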

Fig. 9 Lesion detection algorithm flowchart [118]

3.3 Image classification

Classification is the fundamental task of computer-aided diagnosis (CADx), and it aims to discover the presence of disease indicators. Commonly in medical imaging, a deep learning classification model's output is a number representing the presence or absence of disease. A subtype of classification called lesion classification is applied to segmented regions of the body [136]. Traditionally, classification relied on color, shape, texture, and similar attributes, but medical images are more complicated to categorize with such low-level features, which leads to poor model generalization because medical images require high-level features. Recently, deep learning has provided an efficient way of building end-to-end models that produce classification labels from different medical imaging modalities. The high resolution of medical images brings expensive computational costs and limits the number of layers and channels in a deep model; Lai et al. [137] proposed the Coding Network with Multilayer Perceptron (CNMP) to overcome these problems by combining high-level features extracted by a CNN with other manually selected common features. Xiao et al. [138] used a parallel attention module (PAM-DenseNet) for COVID-19 diagnosis; their model learns strong features automatically, channel-wise and spatial-wise, helping the network detect infected areas in lung CT scans without manual delineation. As with any deep learning application, classification is performed on different body organs to detect disease patterns. Back in 1995, a CNN model was developed for detecting lung nodules from chest X-rays [139]. Classifying medical images is an essential part of clinical decision support and further treatment, for example detecting and classifying the presence of pneumonia from chest X-ray scans [140]. CNN-based models have introduced various strategies to improve classification performance, especially on small datasets, for example data augmentation [141, 142]; GANs have been widely used for data augmentation and image synthesis [143]; and transfer learning is another robust strategy [61]. Rajpurkar et al. used a custom DenseNet to classify 14 different diseases using chest X-rays from the ChestX-ray14 dataset [129]. Li et al. used a 3D CNN to interpolate the missing voxel data between MRI and PET modalities, reconstructing PET images from MRI images in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, which contains both MRI and PET images [144]. Bi et al. [31] also worked on Alzheimer's disease diagnosis, using a CNN architecture for feature extraction and an unsupervised predictor for the final diagnosis on the ADNI-1 1.5 T dataset, achieving accuracies of 97.01% for AD vs. MCI and 92.6% for MCI vs. NC. A 3D CNN embedded in an autoencoder architecture has also been used to classify Alzheimer's disease via transfer learning from a model pre-trained on the CADDementia dataset; the authors reported 99% accuracy on the public ADNI dataset, with the fine-tuning done in a supervised manner [145]. Diabetic retinopathy (DR) can be diagnosed from fundus photographs of the eye: Abramoff et al. [146] used a custom CNN inspired by AlexNet and VGGNet to train a device (IDx-DR, version X2.1) on a dataset of 1.2 million DR images, recording an AUC of 0.98. Figure 10 shows the classification of medical images. A few notable results from the literature are summarized in Table 3.

Fig. 10 Classification of a brain tumor using a general CNN architecture

3.4 Image registration

Image registration spatially aligns images to a common anatomical frame. Previously, image registration was done manually by clinical experts, but deep learning has changed that [176, 177, 178]. In practice, this task is a mainstay of medical imaging and relies on aligning and establishing accurate anatomical correspondences between a source image and a target image using transformations. In the classical scheme, both handcrafted and selected features are employed in a supervised manner. Wu et al. [179, 180] employed an unsupervised deep learning approach to learn basis filters that represent image patches and perform correspondence detection for image registration. Yang et al. [177] used an autoencoder architecture for predicting large deformation diffeomorphic metric mapping (LDDMM) to obtain fast deformable image registration, with results showing improvements in computational time. Commonly, image registration is employed in spinal surgery or neurosurgery for localizing spinal bony landmarks or tumor landmarks to facilitate spinal screw implantation or tumor removal. Miao et al. [181] trained a customized CNN on X-ray images to register 3D models of a hand implant and a knee implant onto 2D X-ray images for pose estimation. An overview of registration is given in Table 4, which summarizes medical image registration as an application of deep learning.
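As a toy illustration of intensity-based registration (not any of the learning-based methods above), the sketch below recovers a pure translation by brute-force search over normalized cross-correlation.

```python
import numpy as np

def register_translation(fixed, moving, max_shift=10):
    """Brute-force rigid (translation-only) registration: find the (dy, dx)
    that maximizes normalized cross-correlation between the two images."""
    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

fixed = np.random.rand(64, 64)
moving = np.roll(fixed, (3, -2), axis=(0, 1))     # known misalignment
print(register_translation(fixed, moving))        # (-3, 2) undoes the shift
```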

3.5 Image characterization

Characterization of a disease within deep learning is a stage of computer-aided diagnosis (CADx) systems. For example, radiomics is an expansion of CAD systems to further tasks such as prognosis, staging, and the determination of cancer subtypes. In fact, characterization of a disease depends first on the disease type and then on the clinical questions related to it. There are two ways to extract features: handcrafted feature extraction and deep learned features. In the first, radiomic features resemble a radiologist's way of interpreting and analyzing medical images; these features may include tumor size, texture, and shape. In the literature, handcrafted features have been used for many purposes, such as assessing tumor aggressiveness, the probability of developing cancer in the future, and the probability of malignancy [190, 191]. There are two main categories of characterization: lesion characterization and tissue characterization. In deep learning applications of medical imaging, every computerized medical image requires some normalization plus customization to suit the task and image modality. Conventional CAD is used for lesion characterization; for example, to track the growth of lung nodules, characterization of the nodules and of their change over time is needed, which helps reduce false positives in lung cancer diagnosis. Another example of tumor characterization is found in imaging genomics, where radiomic features are used as phenotypes for associative analysis with genomics and histopathology; a good report on the breast phenotype group was produced through a multi-institutional collaboration within TCGA/TCIA [192, 193, 194]. Tissue characterization examines cases where particular tumor areas are not the relevant target: the main focus is on healthy tissues that are susceptible to future disease, and on diffuse diseases such as interstitial lung disease and liver disease [195]. Deep learning has been combined with conventional texture analysis for lung tissue: characterizing lung patterns using patches can be informative of the disease, a task commonly interpreted by radiologists, and many researchers have employed DL models with different CNN architectures for classifying interstitial lung disease, which is characterized by lung tissue scarring [149, 196]. CADx is not only a detection/localization task; it is a classification and characterization task as well. The output of a DL model is the likelihood of a disease subtype, together with the presentation of the characteristic features of the disease. For the characterization task, especially with a limited dataset, CNN models are generally not trained from scratch, data augmentation is an essential tool, and applying CNNs to dynamic contrast-enhanced MRI is important too. For example, when using VGG-19, researchers have fed DCE-MRI temporal images, with the pre-contrast, first post-contrast, and second post-contrast MR images, into the RGB channels of the input. Antropova et al. [197] used maximum intensity projections (MIPs) as the input to their CNN model. Table 5 highlights characterization literature covering diagnosis and prognosis.

3.6 Prognosis and staging

Prognosis and staging refer to predicting the future status of a disease: for example, after cancer identification, the treatment process proceeds through biopsies, which track the stage, molecular type, and genomics, and finally provide information about prognosis and further treatment options. Since most cancers are spatially heterogeneous, specialists and radiologists are interested in the information on spatial variation that medical imaging can provide. Most imaging biomarkers include only size and other simple enhancement measures; current investigators are therefore more interested in including radiomic features and extending the knowledge extracted from medical images. Deep learning analyses of cancerous tumors for prognosis and staging have been investigated [192, 206]. The goal of prognosis is to analyze the medical images (MRI or ultrasound) of a cancer and obtain a better presentation of it by deriving prognostic biomarkers from the image phenotypes (e.g., size, margin morphology, texture, shape, kinetics, and variance kinetics). For example, Li et al. [192] found that texture phenotype enhancement can characterize tumor patterns on MRI, leading to prediction of the molecular classification of breast cancers; in other words, computer-extracted phenotypes show promise for discriminating breast cancer subtypes, leading to distinct quantitative predictions in terms of precision medicine. Moreover, with increased texture entropy, the vascular uptake pattern of the tumor becomes more heterogeneous, reflecting the heterogeneous nature of angiogenesis and the applicability of the treatment process; this has been termed a location-based virtual digital biopsy. Gonzalez et al. [204] applied DL to thoracic CT scans to predict the staging of chronic obstructive pulmonary disease (COPD). Hidenori et al. [207] used a CNN model for grading diabetic retinopathy and determining treatment and prognosis, including retinal areas not typically visualized on fundoscopy; their novel AI system suggests treatments and determines prognoses.

Another concept related to staging and prognosis is survival prediction and disease outcome. Skrede et al. [208] performed DL on a large dataset of over 12 million pathology images to predict survival outcomes for colorectal cancer in its early stages. A common evaluation metric is the hazard function, which indicates the risk for a patient after treatment; their results yielded a hazard ratio of 3.84 for poor versus good prognosis in the validation cohort of 1122 patients, and a hazard ratio of 3.04 after adjusting for established prognostic markers, including T and N stage. Saillard et al. [209] used deep learning to predict survival outcomes after hepatocellular carcinoma resection.

3.7 Medical imaging in COVID-19

COVID-19 was first identified on 31 December 2019 [210], and its diagnosis is based on the polymerase chain reaction (PCR) test. However, it was found that COVID-19 can also be analyzed and diagnosed through medical imaging, even though most radiological societies do not recommend this, because its imaging features resemble those of various other pneumonias. Simpson et al. [211] prospected a potential use of CT scans for clinical management and proposed four standardized categories of CT reporting language for COVID-19. Mahmood et al. [212] studied 12,270 patients and recommended CT screening for early detection of COVID-19 to limit the rapid spread of the disease. Another approach to COVID-19 classification uses portable chest X-ray (PCXR) scans instead of expensive CT scans, which also has the potential to minimize the chances of spreading the virus; for the identification of COVID-19, Pereira et al. [152] followed this portable chest X-ray approach. Regarding the comparison of screening methods, Soldati et al. [213] suggested that lung ultrasound (LUS) should be compared with chest X-ray and CT to help design diagnostic systems suited to the available technological resources. COVID-19 has gained the attention of deep learning researchers, who have employed different DL models across the main tasks for diagnosing this disease, using different medical imaging modalities and datasets. Starting with segmentation, a newly proposed system for screening coronavirus disease was presented by Butt et al. [214], who employed a 3D CNN architecture for segmenting multiple volumes of CT scans; a classification step categorizes patches as COVID-19 versus other pneumonias, such as influenza and viral pneumonia, after which a Bayesian function is used to compute the final analysis report. Wang et al. [215] applied their CNN model to chest X-ray images for extracting the feature map, classification, regression, and finally the mask needed for segmentation. Another DL model using chest X-ray scans was introduced by Murphy et al. [108], who used a U-Net architecture for detecting tuberculosis and classifying images, with an AUC of 0.81. For the detection of COVID-19, Li et al. [124] developed a new deep learning tool to detect COVID-19 from CT scans; the pipeline consists of a few steps, starting with extracting the lungs as the region of interest using U-Net, then generating features using ResNet-50, and finally using a fully connected layer to output the probability score of COVID-19, with a reported AUC of 0.96. Another COVID-19 detection system from X-rays and CT scans was proposed by Kassani et al. [126], who used multiple models in their strategy: DenseNet-121 achieved 99% accuracy and ResNet 98% accuracy after being trained with LightGBM, and they also used other backbone models such as MobileNet, Xception, Inception-ResNet-V2, NASNet, and VGGNet. For COVID-19 classification, Wu et al. [150] used a fusion of DL networks, starting by segmenting lung regions with a threshold-based method on CT scans, then using ResNet-50 to extract the feature map, which is fed to a fully connected layer, recording an AUC of 0.732 and 70% accuracy. Ardakani et al. [216] compared ten DL models for COVID-19 classification, including AlexNet, VGG-16, VGG-19, GoogLeNet, MobileNet, Xception, ResNet-101, ResNet-18, ResNet-50, and SqueezeNet; ResNet-101 recorded the best results for sensitivity. A few deep learning themes used for different COVID-19 applications are listed in Tables 1, 2, and 3.

4 Deep learning schemes

4.1 Data augmentation

It is clear that the deep learning approach performs better than traditional machine learning, shallow learning methods, and other handcrafted feature extraction from images, because deep learning models learn image descriptors for analysis automatically. It is also possible to combine deep learning with the knowledge gained from handcrafted features for analyzing medical images [153, 200, 217]. The key enabler of deep learning is large-scale datasets containing images from thousands of patients. Although vast quantities of clinical images, reports, and annotations are recorded and stored digitally in many hospitals, for example in Picture Archiving and Communication Systems (PACS) and Oncology Information Systems (OIS), in practice such large-scale datasets with semantic labels remain the limiting factor for deep learning models in medical image analysis. Because medical imaging faces this shortage of data, data augmentation is used to create new samples, either by transforming existing samples or by using generative models to synthesize new images. The augmented samples are merged with the original ones, so the dataset grows in size along with the variation in its data points. Data augmentation is used by default with deep learning because of its efficiency: it reduces the chance of overfitting and alleviates the class imbalance issue in multi-class datasets, because it increases the number of training samples, which also helps the models generalize and improves test results. The basic data augmentation techniques are simple and widely adopted in medical imaging, such as cropping, rotating, flipping, shearing, scaling, and translating images [80, 218, 219]. Pezeshk et al. [220] proposed a blending tool that can seamlessly merge a lesion patch into a CT scan or mammogram, so the merged lesion patches can be augmented using the basic transformations applied to the lesion shape and characteristics.
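With torchvision (a recent version assumed), such a basic augmentation pipeline is a few lines; the specific transform parameters below are illustrative assumptions, and medical-specific intensity augmentations would come from dedicated libraries.

```python
import torch
from torchvision import transforms

# Basic label-preserving augmentations listed above; each epoch sees a
# slightly different version of every training scan.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05), shear=5),
])

image = torch.rand(1, 256, 256)        # stand-in for a grayscale scan tensor
augmented = augment(image)
print(augmented.shape)                 # torch.Size([1, 224, 224])
```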

Zhang et al. [221] used a DCNN to extract features and obtain image representations and a similarity matrix; their proposed data augmentation method, called unified learning of feature representations, was trained on a seed-labeled dataset with the aim of classifying colonoscopy and upper endoscopy images. A second way to tackle limited datasets is to synthesize medical data, using an object model or the physics of image formation, or using generative modeling schemes, to serve as plausible medical examples and thereby increase the performance of the deep learning task at hand. The most used model for synthesizing medical data is the Generative Adversarial Network (GAN); for example, in [143], GANs were used to generate lesion samples, which increased CNN performance on a liver lesion classification task. Yang et al. [222] used the Radon transform on objects under different modeled conditions, adding noise to the data, to synthesize a CT dataset on which a CNN model was trained to estimate high-dose projections from low-dose ones. Synthesized medical images serve different purposes; for example, Chen et al. [223] generated training data for noise reduction in reconstructed CT scans by synthesizing noisy projections from patient images, while Cui et al. [224] used simulated dynamic PET data and stacked sparse autoencoders in a dynamic PET reconstruction framework.

4.2 Datasets

Deep learning models are famously data-hungry, and dataset quality has always been the key parameter for deep learning to learn computational models and deliver trusted results. The task is even more demanding when handling medical data, because high accuracy is essential; recently, many publicly available datasets have been released online for evaluating newly developed DL models. Several repositories provide useful compilations of public datasets (e.g., GitHub, Kaggle, and other webpages). Compared with datasets for general computer vision tasks (thousands to millions of annotated images), medical imaging datasets are considered very small. According to the Conference on Machine Intelligence in Medical Imaging (C-MIMI) held in 2016 [225], ML and DL are starving for large-scale annotated datasets, and the white paper from that meeting discusses the common regularities and specifications of medical image datasets (e.g., sample size, cataloging and discovery, pixel data, metadata, and post-processing). Consequently, the medical imaging community has started to adopt different approaches for generating and increasing the number of samples in a dataset, such as generative models, data augmentation, and weakly supervised learning, to avoid overfitting on small datasets and to provide reliable end-to-end deep learning models. Martin et al. [226] described the fundamental steps for preparing medical imaging datasets for AI applications; Fig. 11 shows the flowchart of the process. They also listed the current limitations and problems of data availability for such datasets. Examples of popular databases for medical image analysis exploiting deep learning are listed in [227]. In this paper, we present the datasets most typically used in the medical imaging literature and exploited by deep learning approaches in Table 6.

Fig. 11 Flowchart of medical image data handling

4.3 Feature extraction and selection

Feature extraction is the process of converting training data into as many informative features as possible, to make deep learning algorithms efficient and adequate. Common algorithms used as medical image feature extractors include the Gray-Level Run-Length Matrix (GLRLM), Local Binary Patterns (LBP), Local Tetra Patterns (LTrP), Completed Local Binary Patterns (CLBP), and the Gray-Level Co-occurrence Matrix (GLCM); these techniques are applied before the main DL algorithm in different medical imaging tasks.

GLCM: a commonly used feature extractor that searches for textural patterns and their nature within gray-level gradients [234]. The main features extracted through this technique are autocorrelation, contrast, dissimilarity, correlation, cluster prominence, energy, homogeneity, variance, entropy, difference variance, sum variance, cluster shade, sum entropy, and the information measure of correlation.

LBP: another well-known feature extractor, which uses locally regional statistical features [235]. The main idea of this technique is to select a central pixel and binarize the pixels along a surrounding circle: a neighbor is encoded as 0 if its value is less than the central pixel's and as 1 if its value is greater. These binary codes are then converted to decimal numbers and summarized in histogram statistics.
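Both descriptors are available in scikit-image (the `graycomatrix` spelling assumes skimage ≥ 0.19); a minimal sketch:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

image = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in patch

# GLCM: co-occurrence of gray levels at given offsets/angles,
# summarized by Haralick-style texture statistics.
glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop))

# LBP: each pixel encoded against its circular neighborhood,
# then histogrammed into a texture descriptor.
lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp, bins=10, range=(0, 10))
print("LBP histogram:", hist)
```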

Gray-Level Run-Length Matrix (GLRLM): this method extracts higher-order statistical texture information. Given a maximum number of gray levels $G$, the image is first re-quantized to that number of levels. The run-length matrix is defined as

$$R(u, v) = \#\{\text{runs of pixels with gray level } u \text{ and run length } v\}, \quad 1 \le u \le N_r, \; 1 \le v \le K_{\max},$$

where $(u, v)$ indexes the array values, $N_r$ refers to the maximum gray-level value, and $K_{\max}$ is the maximum run length.

Raj et al. [236] used both GLCM and GLRLM as the main feature extraction techniques to extract optimal features from pre-processed medical images; these optimal features further improved the final results of the classification task. Figure 12 shows the feature extraction and selection types used for dimensionality reduction.

4.3.1 Feature selection techniques

Analysis of Variance (ANOVA): a statistical model that evaluates and compares the averages of two or more experiments. The idea behind this model is to test whether the differences between means are substantial by comparing two variance estimates [237]. Surendiran et al. [238] used stepwise ANOVA Discriminant Analysis (DA) for classifying mammogram masses.

The basic steps of performing ANOVA on a data distribution are as follows (a SciPy sketch follows the list):

1. Defining the hypotheses: the null hypothesis $H_0$ states that all group means are equal, and the alternative $H_1$ states that at least one mean differs.

2. Calculating the sums of squares, which measure the dispersion of the data points. The total sum of squares is the distance of each point from the grand mean $\bar{x}$, and it decomposes into between-group and within-group parts:

$$SS_{\text{total}} = \sum_i (x_i - \bar{x})^2 = SS_{\text{between}} + SS_{\text{within}}.$$

ANOVA performs an F test to compare the variance between groups with the variance within groups.

3. Determining the degrees of freedom: $df_{\text{between}} = k - 1$ and $df_{\text{within}} = N - k$ for $k$ groups and $N$ total observations.

4. Calculating the F value:

$$F = \frac{SS_{\text{between}} / df_{\text{between}}}{SS_{\text{within}} / df_{\text{within}}}.$$

5. Accepting or rejecting the null hypothesis by comparing the resulting $F$ (or its p value) with the chosen significance level.
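The whole procedure is available as a one-call F test in SciPy; the sketch below applies it to a hypothetical texture feature measured in three diagnostic groups.

```python
import numpy as np
from scipy.stats import f_oneway

# Example: compare a texture feature across three diagnostic groups.
rng = np.random.default_rng(0)
normal = rng.normal(1.0, 0.2, 30)
benign = rng.normal(1.1, 0.2, 30)
malignant = rng.normal(1.5, 0.2, 30)

f_value, p_value = f_oneway(normal, benign, malignant)
print(f"F = {f_value:.2f}, p = {p_value:.4g}")
# A small p value -> reject the null hypothesis that all group means are equal,
# i.e., the feature discriminates between groups and is worth keeping.
```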

Principal Component Analysis (PCA): considered the most used tool for extracting structural features from potentially high-dimensional datasets. It extracts the $q$ eigenvectors associated with the $q$ largest eigenvalues of an input distribution, and the resulting components are new features that are independent of one another. The main goal of PCA is to apply a linear transformation to obtain a new set of samples whose components $y$ are uncorrelated [239]. The linear transform can be written as

$$y = W^{\top} x,$$

where $x \in \mathbb{R}^I$ is the input element vector and the columns of $W$ are the chosen eigenvectors. The PCA algorithm then keeps the most significant components of $y$; the main steps are summarized as follows (see the NumPy sketch after this list):

1. Standardize and normalize the data points, using the mean $\mu$ and standard deviation $\sigma$ of the input distribution: $\tilde{x} = (x - \mu) / \sigma$.

2. Calculate the covariance matrix of the data points: $C = \frac{1}{n} \sum_{i=1}^{n} \tilde{x}_i \tilde{x}_i^{\top}$.

3. Extract the eigenvalues and eigenvectors of the covariance matrix: $C v = \lambda v$.

4. Choose the $k$ eigenvectors with the highest eigenvalues by sorting the eigenvalues and eigenvectors, where $k$ is the reduced number of dimensions to keep.
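These steps map directly onto a few lines of NumPy, as in the following sketch.

```python
import numpy as np

def pca_reduce(X, k):
    """The steps above in NumPy: standardize, covariance, eigendecomposition,
    then project onto the k leading eigenvectors."""
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)    # step 1
    cov = np.cov(X, rowvar=False)                        # step 2
    eigvals, eigvecs = np.linalg.eigh(cov)               # step 3
    order = np.argsort(eigvals)[::-1][:k]                # step 4: top-k components
    return X @ eigvecs[:, order]

features = np.random.rand(100, 50)       # e.g. 50 radiomic features per image
reduced = pca_reduce(features, k=5)
print(reduced.shape)                     # (100, 5)
```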

Another major use of the PCA algorithm is feature dimensionality reduction.

In medical imaging, PCA has mostly been used for dimensionality reduction. Wu et al. [240] used a PCA-based nearest neighbor method to estimate the local structure distribution and extract the entire connected tree; in their results on retinal fundus data, they achieved state-of-the-art performance by producing more information about the tree structure.

PCA has also been used as a data augmentation step before training a discriminative CNN for different medical imaging tasks; to capture the important characteristics of natural images, different algorithms for performing data augmentation have been compared [241] (Fig. 12).

Fig. 12 Feature extraction and selection types used for dimensionality reduction

4.4 Evaluation metrics

To evaluate and measure the performance of deep learning models on medical images, different evaluation metrics are used according to specific conventions and criteria. Some metrics are tied to specific tasks: the Dice score and F1-score are used mostly for segmentation, while accuracy and sensitivity are used mostly for classification. Here, we focus on the performance metrics most used in the literature and cover the metrics mentioned in our comparison tables.

The Dice coefficient is the most used metric for validating medical image segmentation; it is also commonly used to measure reproducibility [242]. The general formula to calculate the Dice coefficient is

$$DSC = \frac{2\,|A \cap B|}{|A| + |B|} = \frac{2\,TP}{2\,TP + FP + FN},$$

where $A$ is the predicted segmentation and $B$ is the ground truth.

Jaccard index (similarity coefficient) [JAC]:

The Jaccard index is a statistical metric used to find the similarity between sample sets. It is defined as the ratio between the size of the intersection and the size of the union of the sample sets:

$$JAC = \frac{|A \cap B|}{|A \cup B|} = \frac{TP}{TP + FP + FN}.$$

From the formula above, we note that the JAC index is always less than or equal to the Dice score, and the relation between the two metrics is given by

$$JAC = \frac{DSC}{2 - DSC}, \qquad DSC = \frac{2\,JAC}{1 + JAC}.$$

True-Positive Rate (TPR):

Also called sensitivity or recall, this metric is used to maximize the prediction of a particular class; in segmentation, it measures the portion of positive voxels in the ground truth that were also identified as positive. It is given by the formula

$$TPR = \frac{TP}{TP + FN}.$$

True-Negative Rate (TNR):

Also called specificity, this metric measures the portion of negative (background) voxels in the ground truth that were also identified as negative after segmentation, and it is given by the formula

$$TNR = \frac{TN}{TN + FP}.$$

However, neither TPR nor TNR is commonly used for medical image segmentation, because of their sensitivity to segment size.

Accuracy [ACC]:

Accuracy measures how well the DL model guesses the right labels (ground truth). It is commonly used to validate the classification task, and it is given by the formula

$$ACC = \frac{TP + TN}{TP + TN + FP + FN}.$$

F1-score:

The F1-score balances precision and recall; it is the harmonic mean of the precision and recall values, given by the formula

$$F1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}.$$

The predictive accuracy of a classification model is related to the F1-score: a higher F1-score means better classification performance.

F-beta score:

The F-beta score combines the advantages of the precision and recall metrics when the false negatives (FN) and false positives (FP) do not have equal importance. It is obtained from the F1-score formula by including an adjustable parameter $\beta$:

$$F_\beta = (1 + \beta^2) \cdot \frac{\text{precision} \cdot \text{recall}}{\beta^2 \cdot \text{precision} + \text{recall}}.$$

This evaluation metric measures the effectiveness of a DL model for a user who attaches $\beta$ times as much importance to recall as to precision.

The Receiver-Operating Characteristic (ROC) curve is a plot of the True-Positive Rate (TPR, sensitivity) against the False-Positive Rate (FPR, 1 − specificity) that shows the performance of a classification model across different classification thresholds. The biggest advantage of the ROC curve is its independence from changes in the number of responders and in the response rate.

AUC is the area under the ROC curve: it measures the 2D area under the curve, i.e., the integral of the ROC curve from (0, 0) to (1, 1), and thus summarizes the aggregate classification performance over all possible thresholds. One way to interpret the AUC is as the probability that the model ranks a random positive sample higher than a random negative sample. The ROC curve is shown in Fig. 13.
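For reference, the segmentation-oriented metrics above can be computed from binary masks as follows.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, Jaccard, sensitivity, specificity, and accuracy from binary masks,
    matching the formulas above."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    dice = 2 * tp / (2 * tp + fp + fn)
    jac = tp / (tp + fp + fn)                # note: jac = dice / (2 - dice)
    return {"Dice": dice, "Jaccard": jac,
            "TPR": tp / (tp + fn), "TNR": tn / (tn + fp),
            "ACC": (tp + tn) / (tp + tn + fp + fn)}

pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1
truth = np.zeros((64, 64)); truth[15:45, 15:45] = 1
print(segmentation_metrics(pred, truth))
```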

Fig. 13 ROC and AUC graph

5 Discussion and conclusion

5.1 Technical challenges

In this overview paper, we have presented a review of the literature on deep learning applications in medical imaging. It comprises three main sections. First, we presented the core concepts of deep learning, highlighting the basic frameworks used in medical image analysis. The second section covered the main applications of deep learning in medical imaging (e.g., segmentation, detection, classification, and registration) with a comprehensive review of the literature; the criteria on which we built our overview were the most-cited papers, the most recent work (from 2015 to 2021), and the papers with the best results. The third major part of the paper focused on deep learning themes, covering key challenges and the future directions for addressing them. Besides focusing on the quality of the most recent works, we have highlighted suitable solutions to different challenges in this field and the future directions concluded from different scientific perspectives. Medical imaging can benefit from other areas of deep learning, encouraged by collaborative research with the computer vision community; such collaboration helps overcome the shortage of medical datasets, for example via transfer learning. Cho et al. [243] addressed the question of how large a medical dataset must be to train a deep learning model. Creating synthetic medical images using Variational Autoencoders (VAEs) and GANs is another way to tackle the scarcity of labeled medical data. For instance, Guibas et al. [244] successfully used two GANs to segment and then generate new retinal fundus images; other applications of GANs for segmentation and synthetic data generation can be found in [132, 245].

Data or class imbalance [246] is a critical problem in medical imaging: the images used for training are skewed toward non-pathological cases, while rare diseases have fewer training examples, and this imbalance leads to incorrect results. Data augmentation is a good remedy, because it increases the number of samples in the small classes. Beyond dataset-level strategies, there are algorithmic modification strategies for improving DL model performance under data imbalance [247].
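One common algorithmic remedy is to weight the loss inversely to class frequency, as in this sketch (the class counts are hypothetical).

```python
import torch
import torch.nn as nn

# Weight the loss inversely to class frequency so rare-disease samples
# are not drowned out by the majority class.
counts = torch.tensor([900., 100.])                 # non-pathological vs rare class
weights = counts.sum() / (len(counts) * counts)     # -> [0.556, 5.0]
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)                          # stand-in model outputs
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)
```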

Another important non-technical challenge is public acceptance of results being produced by DL models rather than by humans. In some papers in our report, DL models outperformed medical specialists (e.g., dermatologists and radiologists), mostly in image recognition tasks. Yet moral culpability may arise whenever a patient is misdiagnosed, and morbidity cases may result from DL-based diagnostics, since the workings of a DL algorithm are considered a black box. Nevertheless, the continued development and evolution of DL models may give them a major role in medicine, as they become involved in many facets of our lives.

From a clinical perspective, AI systems have started to appear in hospitals. Bruijne [248] presented five challenges facing machine learning, the broader family that includes deep learning, in the medical imaging field, including preprocessing data from different modalities, translating results into clinical practice, improving access to medical data, and training models with little training data. These challenges in turn point to future directions for improving DL models. Other solutions for small datasets are reported in [8, 249].

DL model architecture is not the only factor behind quality results: data augmentation and preprocessing techniques are also essential tools for a robust and efficient system. The big question is how the community can best exploit the results of DL models for medical image analysis.

Considering the historical development of ML techniques in medical imaging gives us a perspective on how DL models will continue to improve in this field. Accordingly, the quality of medical images and their annotations is crucial for proper analysis. A significant concept is the relationship between statistical significance and clinical significance: although statistical analysis is important in research, researchers in this field should not lose sight of the clinical perspective. In other words, even when a good CNN model gives good answers from the statistical perspective, this does not mean it will replace a radiologist, even after using all the helping techniques, such as data augmentation and adding more layers to obtain better accuracy.

5.2 Future promises

After reviewing the literature and the most pressing challenges facing deep learning in medical imaging, we conclude that, according to most researchers, three factors will carry the DL revolution forward: the availability of large-scale datasets, advances in deep learning algorithms, and the computational power needed to process data and evaluate DL models. Most DL techniques are therefore directed at these aspects to further improve performance; beyond them, investigation is needed into data harmonization, the development of standards for reporting and evaluation, and access to larger annotated data such as public datasets, which enable better independent benchmarks. One of the interesting applications in medical imaging was proposed by Nie et al. [ 250 ], who used GANs to generate CT scans from brain MRI; such work reduces the risk of exposing patients to ionizing radiation from CT scanners and thus preserves patient safety (a toy sketch of this cross-modality idea follows below). Another significant direction is increasing the resolution and quality of medical images, reducing the blurriness of CT scans and MRI acquired at lower field strength, which means higher resolution at lower cost with better results [ 251 ].
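
As a hedged, toy illustration of the cross-modality synthesis idea (this is a generic pix2pix-style step, not the actual architecture of Nie et al. [ 250 ]), the sketch below runs one training iteration that learns an MRI-to-CT mapping from paired slices; the tiny networks, the random tensors standing in for real slices, and the L1 weight of 100 are all placeholder assumptions.

```python
# One pix2pix-style training step for paired MRI -> CT slice translation.
import torch
import torch.nn as nn

G = nn.Sequential(  # toy generator: 1-channel MRI slice -> 1-channel CT slice
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1))
D = nn.Sequential(  # toy discriminator scoring (MRI, CT) channel-stacked pairs
    nn.Conv2d(2, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.LazyLinear(1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

mri = torch.randn(8, 1, 64, 64)  # stand-ins for a batch of paired slices
ct = torch.randn(8, 1, 64, 64)

# Discriminator step: real pairs -> 1, generated pairs -> 0.
fake_ct = G(mri).detach()
d_loss = (bce(D(torch.cat([mri, ct], dim=1)), torch.ones(8, 1)) +
          bce(D(torch.cat([mri, fake_ct], dim=1)), torch.zeros(8, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the true CT.
fake_ct = G(mri)
g_loss = (bce(D(torch.cat([mri, fake_ct], dim=1)), torch.ones(8, 1)) +
          100 * nn.functional.l1_loss(fake_ct, ct))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The paired L1 term is what keeps the generated CT anatomically faithful; unpaired settings replace it with cycle-consistency losses (the CycleGAN family).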

The new technological trends in deep learning also concern medical data collection. Wearable technologies are attracting research interest because they offer flexibility, real-time monitoring of patients, and immediate communication of the collected data. As such data become available, deep learning and AI can apply unsupervised data exploration, which in turn will provide better analytical power and suggest better treatment methodologies in healthcare. In summary, the new trends of AI in healthcare pass through stages: the quality of performance (QoP) of deep learning will drive standardization of wearable technology, which represents the next stage of healthcare applications and personalized treatment. Diagnosis and treatment still depend on specialists, but with deep learning, small changes and signs in the human body can be detected, making early detection possible and allowing treatment to start at the pre-stage of a disease. DL model optimization currently focuses mainly on the network architecture, whereas optimization in the broader sense also covers the distribution and standardization of the other parts of a DL pipeline (e.g., optimizers, loss functions, preprocessing and post-processing). In many cases, medical images alone are not sufficient for the best diagnosis, and other data must be combined (e.g., historical medical reports, genetic information, lab values, and other non-image data); linking and normalizing these data with medical images enables analysis in higher dimensions and leads to more accurate diagnosis, as the fusion sketch below illustrates.
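
To illustrate how non-image data might be linked with images, here is a hypothetical late-fusion sketch: an image embedding from a CNN backbone is concatenated with an encoded clinical-feature vector before a shared classification head. All names, dimensions, and the choice of late fusion are illustrative assumptions rather than a method from the reviewed papers.

```python
# Late fusion of image features with non-image clinical data (toy example).
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    def __init__(self, n_clinical: int, n_classes: int):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # expose the 512-dim embedding
        self.image_encoder = backbone
        self.clinical_encoder = nn.Sequential(   # e.g., lab values, age, ...
            nn.Linear(n_clinical, 32), nn.ReLU())
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, clinical):
        z = torch.cat([self.image_encoder(image),
                       self.clinical_encoder(clinical)], dim=1)
        return self.head(z)

net = FusionNet(n_clinical=10, n_classes=2)
scores = net(torch.randn(4, 3, 224, 224), torch.randn(4, 10))  # toy batch
```

Late fusion is the simplest way to analyze image and non-image data jointly; attention-based or intermediate (early) fusion are common alternatives when one modality should modulate how the other is read.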

LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521 (7553), 436–444 (2015). https://doi.org/10.1038/nature14539


Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6 , 14410–14430 (2018). https://doi.org/10.1109/ACCESS.2018.2807385

Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst. 4 (January), 3104–3112 (2014)


Smith-Bindman, R., et al.: Use of diagnostic imaging studies and associated radiation exposure for patients enrolled in large integrated health care systems, 1996–2010. JAMA 307 (22), 2400–2409 (2012). https://doi.org/10.1001/jama.2012.5960

Rubin, D.L.: Measuring and improving quality in radiology: meeting the challenge with informatics. Radiographics 31 (6), 1511–1527 (2011). https://doi.org/10.1148/rg.316105207

Recht, M.P., et al.: Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. Eur. Radiol. 30 (6), 3576–3584 (2020). https://doi.org/10.1007/s00330-020-06672-5

Bosma, M., van Beuzekom, M., Vähänen, S., Visser, J., Koffeman, E.: The influence of edge effects on the detection properties of Cadmium Telluride. In: 2011 IEEE Nuclear Science Symposium Conference Record IEEE, pp. 4812–4817 (2011)

Litjens, G., et al.: A survey on deep learning in medical image analysis. Med. Image Anal. 42 , 60–88 (2017). https://doi.org/10.1016/j.media.2017.07.005

LeCun, Y., et al.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1 (4), 541–551 (1989). https://doi.org/10.1162/neco.1989.1.4.541

Lecun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86 (11), 2278–2324 (1998). https://doi.org/10.1109/5.726791

Sutskever, I., Martens, J. and Hinton, G.: Generating text with recurrent neural networks. In: Proc. 28th Int. Conf. Mach. Learn. ICML 2011, pp. 1017–1024 (2011)

Balkanski, E., Rubinstein, A. and Singer, Y.: The power of optimization from samples. In: Advances in Neural Information Processing Systems, 2016, vol. 29. Available: https://proceedings.neurips.cc/paper/2016/file/c8758b517083196f05ac29810b924aca-Paper.pdf

Karpathy, A., Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)

Shin, H.C., Roberts, K., Lu, L., Demner-Fushman, D., Yao, J., Summers, R.M.: Learning to read chest x-rays: recurrent neural cascade model for automated image annotation. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 2016 , 2497–2506 (2016). https://doi.org/10.1109/CVPR.2016.274

Cui, R., Liu, M.: RNN-based longitudinal analysis for diagnosis of Alzheimer’s disease. Comput. Med. Imaging Graph. 73 , 1–10 (2019). https://doi.org/10.1016/j.compmedimag.2019.01.005

Zhang, J., Zuo, H.: A deep RNN for CT image reconstruction. Proc. SPIE (2020). https://doi.org/10.1117/12.2549809

Qin, C., et al.: Joint learning of motion estimation and segmentation for cardiac MR image sequences. In: LNCS, vol. 11071. Springer International Publishing (2018)

Ben-Cohen, A., Mechrez, R., Yedidia, N. and Greenspan, H.: Improving CNN training using disentanglement for liver lesion classification in CT. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019, pp. 886–889, https://doi.org/10.1109/EMBC.2019.8857465

Liao, H., Lin, W.-A., Zhou, S.K., Luo, J.: ADN: artifact disentanglement network for unsupervised metal artifact reduction. IEEE Trans. Med. Imaging 39 (3), 634–643 (2020). https://doi.org/10.1109/TMI.2019.2933425

Qin, C., Shi, B., Liao, R., Mansi, T., Rueckert, D., Kamen, A.: Unsupervised deformable registration for multi-modal images via disentangled representations. In: LNCS, vol. 11492. Springer International Publishing (2019)

Creswell, A., Bharath, A. A.: Denoising adversarial autoencoders. IEEE transactions on neural networks and learning systems, 30 (4), 968–984 (2018)

Lopez Pinaya, W.H., Vieira, S., Garcia-Dias, R., Mechelli, A.: Autoencoders. Elsevier Inc. (2019)

Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P.A.: Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 11 , 3371–3408 (2010)


Ranzato, M.A., Poultney, C., Chopra, S., LeCun, Y.: Efficient learning of sparse representations with an energy-based model. Adv. Neural Inf. Process. Syst. (2007). https://doi.org/10.7551/mitpress/7503.003.0147

Kingma, D.P., Welling, M.: Auto-encoding variational bayes. In: 2nd International Conference on Learning Representations (ICLR 2014), Conference Track Proceedings, pp. 1–14 (2014)

Rifai, S., Vincent, P., Muller, X., Glorot, X. and Bengio, Y.: Contractive auto-encoders: explicit invariance during feature extraction. In: Proceeding 28th International Conference of Machine Learning. ICML 2011, no. 1, pp. 833–840 (2011)

Li, C., Xu, K., Zhu, J., Zhang, B.: Triple generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 4089–4099 (2017)

Goodfellow, I. et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, 2014, vol. 27. Available: https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf

Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., Webb, R.: Learning from simulated and unsupervised images through adversarial training. In: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 2242–2251 (2017). https://doi.org/10.1109/CVPR.2017.241

Ren, J., Hacihaliloglu, I., Singer, E.A., Foran, D.J., Qi, X.: Adversarial domain adaptation for classification of prostate histopathology whole-slide images. In: Medical Image Computing and Computer Assisted Intervention -- MICCAI 2018. Springer International Publishing (2018)

Bi, X., Li, S., Xiao, B., Li, Y., Wang, G., Ma, X.: Computer aided Alzheimer’s disease diagnosis by an unsupervised deep learning technology. Neurocomputing 392 , 296–304 (2020). https://doi.org/10.1016/j.neucom.2018.11.111

Baumgartner, C.F., Koch, L.M., Tezcan, K.C., Ang, J.X., Konukoglu, E.: Visual feature attribution using wasserstein GANs. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (2018). https://doi.org/10.1109/CVPR.2018.00867

Son, J., Park, S.J. and Jung, K.-H.: Retinal Vessel Segmentation in Fundoscopic Images with Generative Adversarial Networks (2017). Available: http://arxiv.org/abs/1706.09318

Dou, Q., et al.: PnP-AdaNet: plug-and-play adversarial domain adaptation network with a benchmark at cross-modality cardiac segmentation. CoRR abs/1812.07907 (2018). Available: http://arxiv.org/abs/1812.07907

Welander, P., Karlsson, S., Eklund, A.: Generative adversarial networks for image-to-image translation on multi-contrast MR images - a comparison of CycleGAN and UNIT. CoRR abs/1806.07777 (2018). Available: http://arxiv.org/abs/1806.07777

Kazeminia, S., et al.: GANs for medical image analysis. Artif. Intell. Med. 109 , 101938 (2020). https://doi.org/10.1016/j.artmed.2020.101938

Ackley, D.H., Hinton, G.E., Sejnowski, T.J.: A learning algorithm for boltzmann machines. Cogn. Sci. 9 (1), 147–169 (1985). https://doi.org/10.1016/S0364-0213(85)80012-4

Smolensky, P.: Information processing in dynamical systems: foundations of harmony theory. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. MIT Press, Cambridge (1986)

van Tulder, G., de Bruijne, M.: Combining generative and discriminative representation learning for lung CT analysis with convolutional restricted boltzmann machines. IEEE Trans. Med. Imaging 35 (5), 1262–1272 (2016). https://doi.org/10.1109/TMI.2016.2526687

Hinton, G.E., Osindero, S., Teh, Y.-W.: A fast learning algorithm for deep belief nets. Neural Comput. 18 (7), 1527–1554 (2006). https://doi.org/10.1162/neco.2006.18.7.1527


Khatami, A., Khosravi, A., Nguyen, T., Lim, C.P., Nahavandi, S.: Medical image analysis using wavelet transform and deep belief networks. Expert Syst. Appl. 86 , 190–198 (2017). https://doi.org/10.1016/j.eswa.2017.05.073

Reddy, A.V.N., et al.: Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks. J. Big Data. (2020). https://doi.org/10.1186/s40537-020-00311-y

Kaur, M., Singh, D.: Fusion of medical images using deep belief networks. Cluster Comput. 23 (2), 1439–1453 (2020). https://doi.org/10.1007/s10586-019-02999-x

Zhou, Z., et al.: Models genesis: generic autodidactic models for 3D medical image analysis. In: Medical Image Computing and Computer Assisted Intervention -- MICCAI 2019, pp. 384–393 (2019)

Zhu, J., Li, Y., Hu, Y., Ma, K., Zhou, S.K., Zheng, Y.: Rubik’s Cube+: a self-supervised feature learning framework for 3D medical image analysis. Med. Image Anal. 64 , 101746 (2020). https://doi.org/10.1016/j.media.2020.101746

Azizi, S., et al.: Big self-supervised models advance medical image classification (2021). Available: http://arxiv.org/abs/2101.05224

Nie, D., Gao, Y., Wang, L. and Shen, D.: ASDNet: attention based semi-supervised deep networks for medical image segmentation. In: Medical Image Computing and Computer Assisted Intervention -- MICCAI 2018, pp. 370–378 (2018)

Bai, W., et al.: Semi-supervised learning for network-based cardiac MR image segmentation. In: Medical Image Computing and Computer-Assisted Intervention − MICCAI 2017, pp. 253–260 (2017)

Liu, Q., Yu, L., Luo, L., Dou, Q., Heng, P.A.: Semi-supervised medical image classification with relation-driven self-ensembling model. IEEE Trans. Med. Imaging 39 (11), 3429–3440 (2020). https://doi.org/10.1109/TMI.2020.2995518

Kervadec, H., Dolz, J., Tang, M., Granger, E., Boykov, Y., Ben Ayed, I.: Constrained-CNN losses for weakly supervised segmentation. Med. Image Anal. 54 , 88–99 (2019). https://doi.org/10.1016/j.media.2019.02.009

Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: ChestX-ray: hospital-scale chest x-ray database and benchmarks on weakly supervised classification and localization of common thorax diseases. Adv. Comput. Vis. Pattern Recognit. (2019). https://doi.org/10.1007/978-3-030-13969-8_18

Shi, G., Xiao, L., Chen, Y., Zhou, S.K.: Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Med. Image Anal. 70 , 101979 (2021). https://doi.org/10.1016/j.media.2021.101979

Roth, H.R., Yang, D., Xu, Z., Wang, X., Xu, D.: Going to extremes: weakly supervised medical image segmentation. Mach. Learn. Knowl. Extr. 3 (2), 507–524 (2021). https://doi.org/10.3390/make3020026

Schlegl, T., Seeböck, P., Waldstein, S.M., Langs, G., Schmidt-Erfurth, U.: f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Med. Image Anal. 54 , 30–44 (2019). https://doi.org/10.1016/j.media.2019.01.010

Hu, Y., et al.: Weakly-supervised convolutional neural networks for multimodal image registration. Med. Image Anal. 49 , 1–13 (2018). https://doi.org/10.1016/j.media.2018.07.002

Quellec, G., Laniard, M., Cazuguel, G., Abràmoff, M.D., Cochener, B. and Roux, C.: Weakly supervised classification of medical images. In: 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), pp. 110–113 (2012) doi: https://doi.org/10.1109/ISBI.2012.6235496

Abdullah Al, W., Yun, I.D.: Partial policy-based reinforcement learning for anatomical landmark localization in 3D medical images. IEEE Trans. Med. Imaging 39 (4), 1245–1255 (2020). https://doi.org/10.1109/TMI.2019.2946345

Smith, R.L., Ackerley, I.M., Wells, K., Bartley, L., Paisey, S. and Marshall, C.: Reinforcement learning for object detection in PET imaging. In: 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), pp. 1–4 (2019). doi: https://doi.org/10.1109/NSS/MIC42101.2019.9060031

Park, J., Jo, S., Lee, J., and Sun, W.: Color image classification on neuromorphic system using reinforcement learning. In: 2020 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1–2 (2020). doi: https://doi.org/10.1109/ICEIC49074.2020.9051310

Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22 (10), 1345–1359 (2010). https://doi.org/10.1109/TKDE.2009.191

Shin, H.C., et al.: Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 35 (5), 1285–1298 (2016). https://doi.org/10.1109/TMI.2016.2528162

Xue, D., et al.: An application of transfer learning and ensemble learning techniques for cervical histopathology image classification. IEEE Access 8 , 104603–104618 (2020). https://doi.org/10.1109/ACCESS.2020.2999816

Tajbakhsh, N., et al.: Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans. Med. Imaging 35 (5), 1299–1312 (2016). https://doi.org/10.1109/TMI.2016.2535302

Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60 (6), 84–90 (2017)

Lu, S., Lu, Z., Zhang, Y.D.: Pathological brain detection based on AlexNet and transfer learning. J. Comput. Sci. 30 , 41–47 (2019). https://doi.org/10.1016/j.jocs.2018.11.008

Simonyan, K. and Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: 3rd International Conference of Learning Representation. ICLR 2015 - Conf. Track Proc., pp. 1–14 (2015)

Sahiner, B., et al.: Deep learning in medical imaging and radiation therapy. Med. Phys. 46 (1), e1–e36 (2019). https://doi.org/10.1002/mp.13264


Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2818–2826 (2016). https://doi.org/10.1109/CVPR.2016.308

Szegedy, C., Ioffe, S., Vanhoucke, V. and Alemi, A.A.: Inception-v4, inception-ResNet and the impact of residual connections on learning. In: 31st AAAI Conference of Artificial Intelligence. AAAI 2017, pp. 4278–4284 (2017)

Gao, F., Wu, T., Chu, X., Yoon, H., Xu, Y., Patel, B.: Deep residual inception encoder–decoder network for medical imaging synthesis. IEEE J. Biomed. Heal. Informatics 24 (1), 39–49 (2020). https://doi.org/10.1109/JBHI.2019.2912659

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9 (2015)

He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90

Wang, Q., Shen, F., Shen, L., Huang, J., Sheng, W.: Lung nodule detection in CT images using a raw patch-based convolutional neural network. J. Digit. Imaging 32 (6), 971–979 (2019). https://doi.org/10.1007/s10278-019-00221-3

Mantas, J., Hasman, A., Househ, M.S., Gallos, P., Zoulias, E.: Preface. Stud. Health Technol. Inform. 272 , v (2020). https://doi.org/10.3233/SHTI272

Zhou, Z., Rahman Siddiquee, M. M., Tajbakhsh, N. and Liang, J.: UNet++: A nested U-net architecture for medical image segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. pp. 3–11 (2018)

Mohammed Senan, E., Waselallah Alsaade, F., Ibrahim Ahmed Al-mashhadani, M., Aldhyani, T.H.H., Hmoudal-Adhaileh, M.: Classification of histopathological images for early detection of breast cancer using deep learning. J. Appl. Sci. Eng. 24 (3), 323–329 (2021). https://doi.org/10.6180/jase.202106_24(3).0007

Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 2261–2269 (2017). https://doi.org/10.1109/CVPR.2017.243

Mahmood, F., Yang, Z., Ashley, T. and Durr, N.J.: Multimodal Densenet (2018). Available: http://arxiv.org/abs/1811.07407

Xu, X., Lin, J., Tao, Y. and Wang, X.: An improved DenseNet method based on transfer learning for fundus medical images. In: 2018 7th International Conference on Digital Home (ICDH). pp. 137–140 (2018). https://doi.org/10.1109/ICDH.2018.00033

Ronneberger, O., Fischer, P. and Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015. pp. 234–241 (2015)

Forouzanfar, M., Forghani, N., Teshnehlab, M.: Parameter optimization of improved fuzzy c-means clustering algorithm for brain MR image segmentation. Eng. Appl. Artif. Intell. 23 (2), 160–168 (2010). https://doi.org/10.1016/j.engappai.2009.10.002

Akkus, Z., Galimzianova, A., Hoogi, A., Rubin, D.L., Erickson, B.J.: Deep learning for brain MRI segmentation: state of the art and future directions. J. Digit. Imaging 30 (4), 449–459 (2017). https://doi.org/10.1007/s10278-017-9983-4

Milletari, F., Navab, N. and Ahmadi, S.-A.: V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV). pp. 565–571 (2016). https://doi.org/10.1109/3DV.2016.79

Havaei, M., et al.: Brain tumor segmentation with Deep Neural Networks. Med. Image Anal. 35 , 18–31 (2017). https://doi.org/10.1016/j.media.2016.05.004

Asgari Taghanaki, S., Abhishek, K., Cohen, J.P., Cohen-Adad, J., Hamarneh, G.: Deep semantic segmentation of natural and medical images: a review. Artif. Intell. Rev. 54 (1), 137–178 (2021). ( Springer Netherlands )

Li, W., Jia, F., Hu, Q.: Automatic segmentation of liver tumor in CT images with deep convolutional neural networks. J. Comput. Commun. 03 (11), 146–151 (2015). https://doi.org/10.4236/jcc.2015.311023

Dong, H., Yang, G., Liu, F., Mo, Y. and Guo Y.: Automatic brain tumor detection and segmentation using U-net based fully convolutional networks. In: Medical Image Understanding and Analysis, pp. 506–517 (2017)

Soltaninejad, M., Zhang, L., Lambrou, T., Allinson, N. and Ye, X.: Multimodal MRI brain tumor segmentation using random forests with features learned from fully convolutional neural network. (2017). Available: http://arxiv.org/abs/1704.08134

Ker, J., Wang, L., Rao, J., Lim, T.: Deep learning applications in medical image analysis. IEEE Access 6 , 9375–9379 (2017). https://doi.org/10.1109/ACCESS.2017.2788044

Chen, L., Bentley, P., Rueckert, D.: Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks. NeuroImage Clin. 15 (January), 633–643 (2017). https://doi.org/10.1016/j.nicl.2017.06.016

Li, Z., Wang, Y. and Yu, J.: Brain tumor segmentation using an adversarial network. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. pp. 123–132 (2018)

Bakas, S., Reyes, M., Jakab, A., Bauer, S., Rempfler, M., Crimi, A., Jambawalikar, S. R.: Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. (2018). arXiv:1811.02629

Isensee, F., Kickingereder, P., Wick, W., Bendszus, M. and Maier-Hein, K. H.: Brain tumor segmentation and radiomics survival prediction: contribution to the BRATS 2017 challenge. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, pp. 287–297 (2018)

Korfiatis, P., Kline, T.L., Erickson, B.J.: Automated segmentation of hyperintense regions in FLAIR MRI using deep learning. Tomography 2 (4), 334–340 (2016)

Liu, M., Zhang, J., Adeli, E., Shen, D.: Landmark-based deep multi-instance learning for brain disease diagnosis. Med. Image Anal. 43 , 157–168 (2018). https://doi.org/10.1016/j.media.2017.10.005

Liskowski, P., Krawiec, K.: Segmenting retinal blood vessels with deep neural networks. IEEE Trans. Med. Imaging 35 (11), 2369–2380 (2016). https://doi.org/10.1109/TMI.2016.2546227

Fang, L., Cunefare, D., Wang, C., Guymer, R.H., Li, S., Farsiu, S.: Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. Biomed. Opt. Express 8 (5), 2732–2744 (2017). https://doi.org/10.1364/BOE.8.002732

Shankaranarayana, S.M., Ram, K., Mitra, K. and Sivaprakasam, M.: Joint optic disc and cup segmentation using fully convolutional and adversarial networks. In: Fetal, Infant and Ophthalmic Medical Image Analysis. pp. 168–176 (2017)

Fu, H., Cheng, J., Xu, Y., Wong, D.W.K., Liu, J., Cao, X.: Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans. Med. Imaging 37 (7), 1597–1605 (2018). https://doi.org/10.1109/TMI.2018.2791488

Hu, P., Wu, F., Peng, J., Liang, P., Kong, D.: Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution. Phys. Med. Biol. 61 (24), 8676–8698 (2016). https://doi.org/10.1088/1361-6560/61/24/8676

Ben-Cohen, A., Diamant, I., Klang, E., Amitai, M. and Greenspan, H.: Fully convolutional network for liver segmentation and lesions detection. In: Deep Learning and Data Labeling for Medical Applications, pp. 77–85 (2016)

Yang, D., et al.: Automatic liver segmentation using an adversarial image-to-image network. In: Medical Image Computing and Computer Assisted Intervention − MICCAI 2017. pp. 507–515 (2017)

Cheng, J.Z., et al.: Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci. Rep. 6 (March), 1–13 (2016). https://doi.org/10.1038/srep24454

Al-antari, M.A., Al-masni, M.A., Choi, M.T., Han, S.M., Kim, T.S.: A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification. Int. J. Med. Inform. 117 (April), 44–54 (2018). https://doi.org/10.1016/j.ijmedinf.2018.06.003

Ait Skourt, B., El Hassani, A., Majda, A.: Lung CT image segmentation using deep neural networks. Procedia Comput. Sci. 127 , 109–113 (2018). https://doi.org/10.1016/j.procs.2018.01.104

Kalinovsky, A., Kovalev, V.: Lung image segmentation using deep learning methods and convolutional neural networks. In: International Conference on Pattern Recognition and Information Processing (PRIP), pp. 21–24 (2016)

Roy, S., et al.: Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound. IEEE Trans. Med. Imaging 39 (8), 2676–2687 (2020). https://doi.org/10.1109/TMI.2020.2994459

Murphy, K., et al.: COVID-19 on chest radiographs: a multireader evaluation of an artificial intelligence system. Radiology 296 (3), E166–E172 (2020). https://doi.org/10.1148/radiol.2020201874

Kline, T.L., et al.: Performance of an artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys. J. Digit. Imaging 30 (4), 442–448 (2017). https://doi.org/10.1007/s10278-017-9978-1

Ma, J., Wu, F., Jiang, T., Zhao, Q., Kong, D.: Ultrasound image-based thyroid nodule automatic segmentation using convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 12 (11), 1895–1910 (2017). https://doi.org/10.1007/s11548-017-1649-7

Zhang, R., Huang, L., Xia, W., Zhang, B., Qiu, B., Gao, X.: Multiple supervised residual network for osteosarcoma segmentation in CT images. Comput. Med. Imaging Graph. 63 (January), 1–8 (2018). https://doi.org/10.1016/j.compmedimag.2018.01.006

Yu, L., Guo, Y., Wang, Y., Yu, J., Chen, P.: Segmentation of fetal left ventricle in echocardiographic sequences based on dynamic convolutional neural networks. IEEE Trans. Biomed. Eng. 64 (8), 1886–1895 (2017). https://doi.org/10.1109/TBME.2016.2628401

Jafari, M.H., Nasr-Esfahani, E., Karimi, N., Soroushmehr, S.M.R., Samavi, S., Najarian, K.: Extraction of skin lesions from non-dermoscopic images for surgical excision of melanoma. Int. J. Comput. Assist. Radiol. Surg. 12 (6), 1021–1030 (2017). https://doi.org/10.1007/s11548-017-1567-8

Yang, D., Zhang, S., Yan, Z., Tan, C., Li, K. and Metaxas, D.: Automated anatomical landmark detection ondistal femur surface using convolutional neural network. In: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). pp. 17–21 (2015). doi: https://doi.org/10.1109/ISBI.2015.7163806

Orlando, J.I., Prokofyeva, E., del Fresno, M., Blaschko, M.B.: An ensemble deep learning based approach for red lesion detection in fundus images. Comput. Methods Programs Biomed. 153 , 115–127 (2018). https://doi.org/10.1016/j.cmpb.2017.10.017

Yang, X., et al.: Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI. Med. Image Anal. 42 , 212–227 (2017). https://doi.org/10.1016/j.media.2017.08.006

Dou, Q., et al.: Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans. Med. Imaging 35 (5), 1182–1195 (2016). https://doi.org/10.1109/TMI.2016.2528129

Yan, K., Wang, X., Lu, L., Summers, R.M.: DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. J. Med. Imaging 5 (3), 1–11 (2018). https://doi.org/10.1117/1.JMI.5.3.036501

Zhang, J., Liu, M., Shen, D.: Detecting anatomical landmarks from limited medical imaging data using two-stage task-oriented deep neural networks. IEEE Trans. Image Process. 26 (10), 4753–4764 (2017). https://doi.org/10.1109/TIP.2017.2721106

Nakao, T., et al.: Deep neural network-based computer-assisted detection of cerebral aneurysms in MR angiography. J. Magn. Reson. Imaging 47 (4), 948–953 (2018). https://doi.org/10.1002/jmri.25842

Tsehay, Y., et al.: Biopsy-guided learning with deep convolutional neural networks for prostate cancer detection on multiparametric MRI. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 642–645 (2017)

Sirinukunwattana, K., Raza, S.E.A., Tsang, Y.W., Snead, D.R.J., Cree, I.A., Rajpoot, N.M.: Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 35 (5), 1196–1206 (2016). https://doi.org/10.1109/TMI.2016.2525803

Setio, A.A.A., et al.: Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Trans. Med. Imaging 35 (5), 1160–1169 (2016). https://doi.org/10.1109/TMI.2016.2536809

Li, L., Qin, L., Xu, Z., Yin, Y., Wang, X., Kong, B., Bai, J., Lu, Y., Fang, Z., Song, Q., Cao, K.: Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology 296 (2), E65–E71 (2020)

Luz, E., et al.: Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images. Res. Biomed. Eng. (2021). https://doi.org/10.1007/s42600-021-00151-6

Kassania, S.H., Kassanib, P.H., Wesolowskic, M.J., Schneidera, K.A., Detersa, R.: Automatic detection of coronavirus disease (COVID-19) in X-ray and CT images: a machine learning based approach. Biocybern. Biomed. Eng. 41 (3), 867–879 (2021). https://doi.org/10.1016/j.bbe.2021.05.013

Wang, D., Khosla, A., Gargeya, R., Irshad, H. and Beck, A.H.: Deep learning for identifying metastatic breast cancer. pp. 1–6 (2016). Available: http://arxiv.org/abs/1606.05718

Dou, Q., Chen, H., Yu, L., Qin, J., Heng, P.A.: Multilevel contextual 3-D CNNs for false positive reduction in pulmonary nodule detection. IEEE Trans. Biomed. Eng. 64 (7), 1558–1567 (2017). https://doi.org/10.1109/TBME.2016.2613502

Rajpurkar, P., et al.: CheXNet: radiologist-level pneumonia detection on chest x-rays with deep learning. pp. 3–9, (2017). Available: http://arxiv.org/abs/1711.05225

Ma, J., Wu, F., Jiang, T., Zhu, J., Kong, D.: Cascade convolutional neural networks for automatic detection of thyroid nodules in ultrasound images. Med. Phys. 44 (5), 1678–1691 (2017). https://doi.org/10.1002/mp.12134

Baka, N., Leenstra, S., Van Walsum, T.: Ultrasound aided vertebral level localization for lumbar surgery. IEEE Trans. Med. Imaging 36 (10), 2138–2147 (2017). https://doi.org/10.1109/TMI.2017.2738612

Alex, V., Safwan, P.K.M., Chennamsetty, S.S., Krishnamurthi, G.: Generative adversarial networks for brain lesion detection. Med. Imaging 2017 Image Process. (2017). https://doi.org/10.1117/12.2254487

Bogunovic, H., et al.: RETOUCH: the retinal OCT fluid detection and segmentation benchmark and challenge. IEEE Trans. Med. Imaging 38 (8), 1858–1874 (2019). https://doi.org/10.1109/TMI.2019.2901398

Gulshan, V., et al.: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. J. Am. Med. Assoc. 316 (22), 2402–2410 (2016). https://doi.org/10.1001/jama.2016.17216

Nasrullah, N., Sang, J., Alam, M.S., Mateen, M., Cai, B., Hu, H.: Automated lung nodule detection and classification using deep learning combined with multiple strategies. Sensors (2019). https://doi.org/10.3390/s19173722

Domingues, I., Cardoso, J. S.: Mass detection on mammogram images: a first assessment of deep learning techniques. In: 19th Portuguese Conference on Pattern Recognition (RECPAD) (2013)

Lai, Z., Deng, H.: Medical image classification based on deep features extracted by deep model and statistic feature fusion with multilayer perceptron. Comput. Intell. Neurosci. (2018). https://doi.org/10.1155/2018/2061516

Xiao, B., et al.: PAM-DenseNet: a deep convolutional neural network for computer-aided COVID-19 diagnosis. IEEE Trans. Cybern. (2021). https://doi.org/10.1109/TCYB.2020.3042837

Lo, S.-C.B., Lou, S.-L.A., Lin, J.-S., Freedman, M.T., Chien, M.V., Mun, S.K.: Artificial convolution neural network techniques and applications for lung nodule detection. IEEE Trans. Med. Imaging 14 (4), 711–718 (1995). https://doi.org/10.1109/42.476112

World Health Organization: Standardization of interpretation of chest radiographs for the diagnosis of pneumonia in children / World Health Organization Pneumonia Vaccine Trial Investigators’ Group (2001). Available: http://www.who.int/iris/handle/10665/66956

Ding, J., Chen, B., Liu, H., Huang, M.: Convolutional neural network with data augmentation for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 13 (3), 364–368 (2016). https://doi.org/10.1109/LGRS.2015.2513754

Perez, L., Wang, J.: The effectiveness of data augmentation in image classification using deep learning. CoRR abs/1712.04621 (2017). Available: http://arxiv.org/abs/1712.04621

Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J., Greenspan, H.: GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 321 , 321–331 (2018). https://doi.org/10.1016/j.neucom.2018.09.013

Li, R., et al.: Deep learning based imaging data completion for improved brain disease diagnosis. In: Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2014. pp. 305–312 (2014)

Hosseini-Asl, E., Gimel’farb, G.L., El-Baz, A.: Alzheimer’s disease diagnostics by a deeply supervised adaptable 3D convolutional network. CoRR abs/1607.00556 (2016). Available: http://arxiv.org/abs/1607.00556

Abràmoff, M.D., et al.: Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning. Invest. Ophthalmol. Vis. Sci. 57 (13), 5200–5206 (2016). https://doi.org/10.1167/iovs.16-19964

Anthimopoulos, M., Christodoulidis, S., Ebner, L., Christe, A., Mougiakakou, S.: Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans. Med. Imaging 35 (5), 1207–1216 (2016). https://doi.org/10.1109/TMI.2016.2535865

Nibali, A., He, Z., Wollersheim, D.: Pulmonary nodule classification with deep residual networks. Int. J. Comput. Assist. Radiol. Surg. 12 (10), 1799–1808 (2017). https://doi.org/10.1007/s11548-017-1605-6

Christodoulidis, S., Anthimopoulos, M., Ebner, L., Christe, A., Mougiakakou, S.: Multisource transfer learning with convolutional neural networks for lung pattern analysis. IEEE J. Biomed. Heal. Informatics 21 (1), 76–84 (2017). https://doi.org/10.1109/JBHI.2016.2636929

Wu, X., et al.: Deep learning-based multi-view fusion model for screening 2019 novel coronavirus pneumonia: a multicentre study. Eur. J. Radiol. 128 (March), 1–9 (2020). https://doi.org/10.1016/j.ejrad.2020.109041

Farid, A.A., Selim, G.I., Khater, H.A.A.: A novel approach of CT images feature analysis and prediction to screen for corona virus disease (COVID-19). Int. J. Sci. Eng. Res. 11 (03), 1141–1149 (2020). https://doi.org/10.14299/ijser.2020.03.02

Pereira, R.M., Bertolini, D., Teixeira, L.O., Silla, C.N., Costa, Y.M.G.: COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. Comput. Methods Programs Biomed. 194 , 105532 (2020). https://doi.org/10.1016/j.cmpb.2020.105532

Huynh, B.Q., Li, H., Giger, M.L.: Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. J. Med. Imaging 3 (3), 034501 (2016). https://doi.org/10.1117/1.jmi.3.3.034501

Sun, W., Tseng, T.L.B., Zhang, J., Qian, W.: Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data. Comput. Med. Imaging Graph. 57 , 4–9 (2017). https://doi.org/10.1016/j.compmedimag.2016.07.004

Swati, Z.N.K., et al.: Brain tumor classification for MR images using transfer learning and fine-tuning. Comput. Med. Imaging Graph. 75 , 34–46 (2019). https://doi.org/10.1016/j.compmedimag.2019.05.001

Sajjad, M., Khan, S., Muhammad, K., Wu, W., Ullah, A., Baik, S.W.: Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 30 , 174–182 (2019). https://doi.org/10.1016/j.jocs.2018.12.003

Deepak, S., Ameer, P.M.: Brain tumor classification using deep CNN features via transfer learning. Comput. Biol. Med. 111 (June), 103345 (2019). https://doi.org/10.1016/j.compbiomed.2019.103345

Afshar, P., Mohammadi, A. and Plataniotis, K. N.: Brain tumor type classification via capsule networks. In: 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 3129–3133 (2018). https://doi.org/10.1109/ICIP.2018.8451379

Gao, X.W., Hui, R., Tian, Z.: Classification of CT brain images based on deep learning networks. Comput. Methods Programs Biomed. 138 , 49–56 (2017). https://doi.org/10.1016/j.cmpb.2016.10.007

Bharati, S., Podder, P., Mondal, M.R.H.: Hybrid deep learning for detecting lung diseases from X-ray images. Inform. Med. Unlocked 20 , 100391 (2020). https://doi.org/10.1016/j.imu.2020.100391

Zhou, J., et al.: Weakly supervised 3D deep learning for breast cancer classification and localization of the lesions in MR images. J. Magn. Reson. Imaging 50 (4), 1144–1151 (2019). https://doi.org/10.1002/jmri.26721

Zhang, Q., et al.: Deep learning based classification of breast tumors with shear-wave elastography. Ultrasonics 72 , 150–157 (2016). https://doi.org/10.1016/j.ultras.2016.08.004

Yang, C., Rangarajan, A., Ranka, S.: Visual explanations from deep 3D convolutional neural networks for Alzheimer’s disease classification. AMIA Annual Symposium Proceedings 2018 , 1571–1580 (2018)

Schwyzer, M., et al.: Automated detection of lung cancer at ultralow dose PET/CT by deep neural networks—initial results. Lung Cancer 126 (November), 170–173 (2018). https://doi.org/10.1016/j.lungcan.2018.11.001

de Carvalho Filho, A.O., Silva, A.C., de Paiva, A.C., Nunes, R.A., Gattass, M.: Classification of patterns of benignity and malignancy based on CT using topology-based phylogenetic diversity index and convolutional neural network. Pattern Recognit. 81 , 200–212 (2018). https://doi.org/10.1016/j.patcog.2018.03.032

Shen, W., et al.: Multi-crop Convolutional Neural Networks for lung nodule malignancy suspiciousness classification. Pattern Recognit. 61 , 663–673 (2017). https://doi.org/10.1016/j.patcog.2016.05.029

Ozturk, T., Talo, M., Yildirim, E.A., Baloglu, U.B., Yildirim, O., Rajendra Acharya, U.: Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 121 (April), 103792 (2020). https://doi.org/10.1016/j.compbiomed.2020.103792

Ucar, F., Korkmaz, D.: COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images. Med. Hypotheses 140 (April), 109761 (2020). https://doi.org/10.1016/j.mehy.2020.109761

Wang, L., Lin, Z.Q., Wong, A.: COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 10 (1), 1–12 (2020). https://doi.org/10.1038/s41598-020-76550-z

Rezvantalab, A., Safigholi, H. and Karimijeshni, S.: Dermatologist level dermoscopy skin cancer classification using different deep learning convolutional neural networks algorithms. (2018). Available: http://arxiv.org/abs/1810.10348

Dorj, U.O., Lee, K.K., Choi, J.Y., Lee, M.: The skin cancer classification using deep convolutional neural network. Multimed. Tools Appl. 77 (8), 9909–9924 (2018). https://doi.org/10.1007/s11042-018-5714-1

Esteva, A., et al.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (7639), 115–118 (2017). https://doi.org/10.1038/nature21056

Awais, M., Muller, H., Tang, T.B., Meriaudeau, F.: Classification of SD-OCT images using a deep learning approach. In: Proceedings of the 2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA 2017), pp. 489–492 (2017). https://doi.org/10.1109/ICSIPA.2017.8120661

Ting, D.S.W., et al.: Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. J. Am. Med. Assoc. 318 (22), 2211–2223 (2017). https://doi.org/10.1001/jama.2017.18152

Frid-Adar, M., Klang, E., Amitai, M., Goldberger, J. and Greenspan, H.: Synthetic data augmentation using GAN for improved liver lesion classification. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 289–293 (2018). https://doi.org/10.1109/ISBI.2018.8363576

Krebs, J., et al.: Robust non-rigid registration through agent-based action learning. In: Medical Image Computing and Computer Assisted Intervention − MICCAI 2017. pp. 344–352 (2017)

Yang, X., Kwitt, R., Styner, M., Niethammer, M.: Quicksilver: fast predictive image registration—a deep learning approach. Neuroimage 158 , 378–396 (2017). https://doi.org/10.1016/j.neuroimage.2017.07.008

Sokooti, H., de Vos, B., Berendsen, F., Lelieveldt, B. P. F., Išgum, I. and Staring, M.: Nonrigid image registration using multi-scale 3D Convolutional Neural Networks. In: Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, pp. 232–239 (2017)

Wu, G., Kim, M., Wang, Q., Gao, Y., Liao, S. and Shen, D.: Unsupervised deep feature learning for deformable registration of MR brain images. In: Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2013, pp. 649–656 (2013)

Wu, G., Kim, M., Wang, Q., Munsell, B.C., Shen, D.: Scalable high-performance image registration framework by unsupervised deep feature representations learning. IEEE Trans. Biomed. Eng. 63 (7), 1505–1516 (2016). https://doi.org/10.1109/TBME.2015.2496253

Miao, S., Wang, Z.J., Liao, R.: A CNN regression approach for real-time 2D/3D registration. IEEE Trans. Med. Imaging 35 (5), 1352–1363 (2016). https://doi.org/10.1109/TMI.2016.2521800

de Vos, B.D., Berendsen, F. F., Viergever, M. A., Staring, M., Išgum, I.: End-to-end unsupervised deformable image registration with a convolutional neural network. In: Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 10553 LNCS, pp. 204–212 (2017). doi: https://doi.org/10.1007/978-3-319-67558-9_24

Sun, L., Zhang, S.: Deformable MRI-ultrasound registration using 3D convolutional neural network. In: LNCS, vol. 11042. Springer International Publishing (2018)

Chen, Y., He, F., Li, H., Zhang, D., Wu, Y.: A full migration BBO algorithm with enhanced population quality bounds for multimodal biomedical image registration. Appl. Soft Comput. J. 93 , 106335 (2020). https://doi.org/10.1016/j.asoc.2020.106335

Niethammer, M., Kwitt, R., Vialard, F.X.: Metric learning for image registration. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 2019 , 8455–8464 (2019). https://doi.org/10.1109/CVPR.2019.00866

Wang, S., Kim, M., Wu, G., Shen, D.: Scalable high performance image registration framework by unsupervised deep feature representations learning. Deep Learn. Med. Image Anal. 63 (7), 245–269 (2017). https://doi.org/10.1016/B978-0-12-810408-8.00015-8

Kang, E., Min, J., Ye, J.C.: A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med. Phys. 44 (10), e360–e375 (2017). https://doi.org/10.1002/mp.12344

Abanoviè, E., Stankevièius, G. and Matuzevièius, D.: Deep Neural Network-based feature descriptor for retinal image registration. In: 2018 IEEE 6th Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE), pp. 1–4 (2018) https://doi.org/10.1109/AIEEE.2018.8592033

Haskins, G., et al.: Learning deep similarity metric for 3D MR–TRUS image registration. Int. J. Comput. Assist. Radiol. Surg. 14 (3), 417–425 (2019). https://doi.org/10.1007/s11548-018-1875-7

Giger, M.L., Chan, H.-P., Boone, J.: Anniversary paper: history and status of CAD and quantitative image analysis: the role of medical physics and AAPM. Med. Phys. 35 (12), 5799–5820 (2008). https://doi.org/10.1118/1.3013555

Giger, M.L., Karssemeijer, N., Schnabel, J.A.: Breast image analysis for risk assessment, detection, diagnosis, and treatment of cancer. Annu. Rev. Biomed. Eng. 15 (1), 327–357 (2013). https://doi.org/10.1146/annurev-bioeng-071812-152416

Li, H., et al.: Quantitative MRI radiomics in the prediction of molecular classifications of breast cancer subtypes in the TCGA/TCIA data set. NPJ Breast Cancer 2 (1), 16012 (2016). https://doi.org/10.1038/npjbcancer.2016.12

Guo, W., et al.: Prediction of clinical phenotypes in invasive breast carcinomas from the integration of radiomics and genomics data. J. Med. Imaging 2 (4), 1–12 (2015). https://doi.org/10.1117/1.JMI.2.4.041007

Li, H., et al.: MR imaging radiomics signatures for predicting the risk of breast cancer recurrence as given by research versions of MammaPrint, Oncotype DX, and PAM50 Gene assays. Radiology 281 (2), 382–391 (2016). https://doi.org/10.1148/radiol.2016152110

Katsuragawa, S., Doi, K., MacMahon, H., Monnier-Cholley, L., Ishida, T., Kobayashi, T.: Classification of normal and abnormal lungs with interstitial diseases by rule-based method and artificial neural networks. J. Digit. Imaging 10 (3), 108–114 (1997). https://doi.org/10.1007/BF03168597

Kim, G.B., et al.: Comparison of shallow and deep learning methods on classifying the regional pattern of diffuse lung disease. J. Digit. Imaging 31 (4), 415–424 (2018). https://doi.org/10.1007/s10278-017-0028-9

Antropova, N.O., Abe, H., Giger, M.L.: Use of clinical MRI maximum intensity projections for improved breast lesion classification with deep convolutional neural networks. J. Med. Imaging 5 (1), 1–6 (2018). https://doi.org/10.1117/1.JMI.5.1.014503

Mohamed, A.A., Berg, W.A., Peng, H., Luo, Y., Jankowitz, R.C., Wu, S.: A deep learning method for classifying mammographic breast density categories. Med. Phys. 45 (1), 314–321 (2018). https://doi.org/10.1002/mp.12683

Lee, J., Nishikawa, R.M.: Automated mammographic breast density estimation using a fully convolutional network. Med. Phys. 45 (3), 1178–1190 (2018). https://doi.org/10.1002/mp.12763

Antropova, N., Huynh, B.Q., Giger, M.L.: A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med. Phys. 44 (10), 5162–5171 (2017). https://doi.org/10.1002/mp.12453

Samala, R.K., Chan, H.-P., Hadjiiski, L.M., Helvie, M.A., Cha, K.H., Richter, C.D.: Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms. Phys. Med. Biol. 62 (23), 8894–8908 (2017). https://doi.org/10.1088/1361-6560/aa93d4

Kooi, T., van Ginneken, B., Karssemeijer, N., den Heeten, A.: Discriminating solitary cysts from soft tissue lesions in mammography using a pretrained deep convolutional neural network. Med. Phys. 44 (3), 1017–1027 (2017). https://doi.org/10.1002/mp.12110

Masood, A., et al.: Computer-Assisted Decision Support System in Pulmonary Cancer detection and stage classification on CT images. J. Biomed. Inform. 79 (January), 117–128 (2018). https://doi.org/10.1016/j.jbi.2018.01.005

Gonzalez, G., et al.: Disease staging and prognosis in smokers using deep learning in chest computed tomography. Am. J. Respir. Crit. Care Med. 197 (2), 193–203 (2018). https://doi.org/10.1164/rccm.201705-0860OC

Lao, J., et al.: A deep learning-based radiomics model for prediction of survival in Glioblastoma Multiforme. Sci. Rep. 7 (1), 1–8 (2017). https://doi.org/10.1038/s41598-017-10649-8

Garapati, S.S., et al.: Urinary bladder cancer staging in CT urography using machine learning. Med. Phys. 44 (11), 5814–5823 (2017). https://doi.org/10.1002/mp.12510

Takahashi, H., Tampo, H., Arai, Y., Inoue, Y., Kawashima, H.: Applying artificial intelligence to disease staging: deep learning for improved staging of diabetic retinopathy. PLoS One 12 (6), 1–11 (2017). https://doi.org/10.1371/journal.pone.0179790

Skrede, O.-J., et al.: Deep learning for prediction of colorectal cancer outcome: a discovery and validation study. Lancet 395 (10221), 350–360 (2020). https://doi.org/10.1016/S0140-6736(19)32998-8

Saillard, C., et al.: Predicting survival after hepatocellular carcinoma resection using deep learning on histological slides. Hepatology 72 (6), 2000–2013 (2020). https://doi.org/10.1002/hep.31207

World Health Organization: Fact sheets (2019). Available: https://www.who.int/news-room/fact-sheets/detail

Simpson, S., et al.: Radiological society of North America expert consensus document on reporting chest CT findings related to COVID-19: endorsed by the society of thoracic radiology, the American College of Radiology, and RSNA. Radiol. Cardiothorac. Imaging 2 (2), e200152 (2020). https://doi.org/10.1148/ryct.2020200152

Mahmood, A., Gajula, C., Gajula, P.: COVID 19 diagnostic tests: a study of 12,270 patients to determine which test offers the most beneficial results. Surg. Sci. 11 (04), 82–88 (2020). https://doi.org/10.4236/ss.2020.114011

Soldati, G., et al.: Is there a role for lung ultrasound during the COVID-19 pandemic? J. Ultrasound Med. 39 (7), 1459–1462 (2020). https://doi.org/10.1002/jum.15284

Butt, C., Gill, J., Chun, D. et al.: RETRACTED ARTICLE: Deep learning system to screen coronavirus disease 2019 pneumonia. Appl Intell (2020). https://doi.org/10.1007/s10489-020-01714-3

Wang, B., Wu, Z., Khan, Z. U., Liu, C. and Zhu, M.: Deep convolutional neural network with segmentation techniques for chest x-ray analysis. In: 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA). pp. 1212–1216 (2019). https://doi.org/10.1109/ICIEA.2019.8834117

Ardakani, A.A., Kanafi, A.R., Acharya, U.R., Khadem, N., Mohammadi, A.: Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: results of 10 convolutional neural networks. Comput. Biol. Med. 121 , 103795 (2020). https://doi.org/10.1016/j.compbiomed.2020.103795

Li, H., Giger, M.L., Huynh, B.Q., Antropova, N.O.: Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms. J. Med. Imaging 4 (4), 1–6 (2017). https://doi.org/10.1117/1.JMI.4.4.041304

Roth, H.R., et al.: DeepOrgan: multi-level deep convolutional networks for automated pancreas segmentation. In: Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015. pp. 556–564 (2015)

Asperti, A., Mastronardo, C.: The effectiveness of data augmentation for detection of gastrointestinal diseases from endoscopical images. CoRR abs/1712.03689 (2017). Available: http://arxiv.org/abs/1712.03689

Pezeshk, A., Petrick, N., Chen, W., Sahiner, B.: Seamless lesion insertion for data augmentation in CAD training. IEEE Trans. Med. Imaging 36 (4), 1005–1015 (2017). https://doi.org/10.1109/TMI.2016.2640180

Zhang, C., Tavanapong, W., Wong, J., de Groen, P.C. and Oh, J.: Real data augmentation for medical image classification. In: Intravascular Imaging and Computer Assisted Stenting, and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis. pp. 67–76 (2017)

Yang, X., et al.: Low-dose x-ray tomography through a deep convolutional neural network. Sci. Rep. 8 (1), 2575 (2018). https://doi.org/10.1038/s41598-018-19426-7

Chen, H., et al.: Low-dose CT via convolutional neural network. Biomed. Opt. Express 8 (2), 679–694 (2017). https://doi.org/10.1364/BOE.8.000679

Cui, J., Liu, X., Wang, Y., Liu, H.: Deep reconstruction model for dynamic PET images. PLoS One 12 (9), 1–21 (2017). https://doi.org/10.1371/journal.pone.0184667

Kohli, M.D., Summers, R.M., Geis, J.R.: Medical image data and datasets in the era of machine learning—whitepaper from the 2016 C-MIMI meeting dataset session. J. Digit. Imaging 30 (4), 392–399 (2017). https://doi.org/10.1007/s10278-017-9976-3

Willemink, M.J., et al.: Preparing medical imaging data for machine learning. Radiology 295 (1), 4–15 (2020). https://doi.org/10.1148/radiol.2020192224

Altaf, F., Islam, S.M.S., Akhtar, N., Janjua, N.K.: Going deep in medical image analysis: concepts, methods, challenges, and future directions. IEEE Access 7 (3), 99540–99572 (2019). https://doi.org/10.1109/ACCESS.2019.2929365

Menze, B.H., et al.: The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 34 (10), 1993–2024 (2015). https://doi.org/10.1109/TMI.2014.2377694

Petersen, R.C., et al.: Alzheimer’s disease neuroimaging initiative (ADNI). Neurology 74 (3), 201–209 (2010). https://doi.org/10.1212/WNL.0b013e3181cb3e25

Armato, S.G., et al.: Lung image database consortium: developing a resource for the medical imaging research community. Radiology 232 (3), 739–748 (2004). https://doi.org/10.1148/radiol.2323032035

Depeursinge, A., Vargas, A., Platon, A., Geissbuhler, A., Poletti, P.-A., Müller, H.: Building a reference multimedia database for interstitial lung diseases. Comput. Med. Imaging Graph. 36 (3), 227–238 (2012). https://doi.org/10.1016/j.compmedimag.2011.07.003

Staal, J., Abràmoff, M.D., Niemeijer, M., Viergever, M.A., Van Ginneken, B.: Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 23 (4), 501–509 (2004). https://doi.org/10.1109/TMI.2004.825627

Hoover, A.D., Kouznetsova, V., Goldbaum, M.: Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 19 (3), 203–210 (2000). https://doi.org/10.1109/42.845178

Haralick, R.M., Shanmugam, K., Dinstein, I.: Textural features for image classification. IEEE Trans. Syst. Man. Cybern. SMC-3 (6), 610–621 (1973). https://doi.org/10.1109/TSMC.1973.4309314

Çamlica, Z., Tizhoosh, H. R. and Khalvati, F.: Medical image classification via SVM using LBP features from saliency-based folded data. In: 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA). pp. 128–132 (2015). https://doi.org/10.1109/ICMLA.2015.131

Raj, R.J.S., Shobana, S.J., Pustokhina, I.V., Pustokhin, D.A., Gupta, D., Shankar, K.: Optimal feature selection-based medical image classification using deep learning model in internet of medical things. IEEE Access 8 , 58006–58017 (2020). https://doi.org/10.1109/ACCESS.2020.2981337

Miller, R.G., Jr.: Beyond ANOVA: Basics of Applied Statistics. CRC Press (1997)

Surendiran, B., Vadivel, A.: Feature selection using stepwise ANOVA discriminant analysis for mammogram mass classification. Int. J. Signal Image Process. 2 (1), 17 (2011)

Theodoridis, S., Pikrakis, A., Koutroumbas, K., Cavouras, D.: Introduction to Pattern Recognition: A MATLAB Approach. Academic Press (2010)

Wu, A., Xu, Z., Gao, M., Buty, M., Mollura, D.J.: Deep vessel tracking: a generalized probabilistic approach via deep learning. In: 2016 IEEE International Symposium on Biomedical Imaging (ISBI), pp. 1363–1367 (2016)

Hussain, Z., Gimenez, F., Yi, D., Rubin, D.: Differential data augmentation techniques for medical imaging classification tasks. AMIA Annual Symposium Proceedings 2017 , 979–984 (2017)

Zou, K.H., et al.: Statistical validation of image segmentation quality based on a spatial overlap index1: scientific reports. Acad. Radiol. 11 (2), 178–189 (2004). https://doi.org/10.1016/S1076-6332(03)00671-8

Cho, J., Lee, K., Shin, E., Choy, G., Do, S.: Medical image deep learning with hospital PACS dataset. CoRR abs/1511.06348 (2015). Available: http://arxiv.org/abs/1511.06348

Guibas, J.T., Virdi, T.S., Li, P.S.: Synthetic medical images from dual generative adversarial networks. CoRR abs/1709.01872 (2017). Available: http://arxiv.org/abs/1709.01872

Moeskops, P., Veta, M., Lafarge, M. W., Eppenhof, K. A. J. and Pluim, J. P. W.: Adversarial training and dilated convolutions for brain MRI segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. pp. 56–64 (2017)

Mazurowski, M.A., Habas, P.A., Zurada, J.M., Lo, J.Y., Baker, J.A., Tourassi, G.D.: Training neural network classifiers for medical decision making: the effects of imbalanced datasets on classification performance. Neural Netw. 21 (2), 427–436 (2008). https://doi.org/10.1016/j.neunet.2007.12.031

Galar, M., Fernandez, A., Barrenechea, E., Bustince, H. and Herrera, F.: A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 42 (4), 463–484 (2012). https://doi.org/10.1109/TSMCC.2011.2161285

de Bruijne, M.: Machine learning approaches in medical image analysis: from detection to diagnosis. Med. Image Anal. 33 , 94–97 (2016). https://doi.org/10.1016/j.media.2016.06.032

Lundervold, A.S., Lundervold, A.: An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 29 (2), 102–127 (2019). https://doi.org/10.1016/j.zemedi.2018.11.002

Nie, D., Cao, X., Gao, Y., Wang, L. and Shen, D.: Estimating CT image from MRI data using 3D fully convolutional networks. In: Deep Learning and Data Labeling for Medical Applications, pp. 170–178 (2016)

Ledig, C., et al.: “\href{ https://ieeexplore.ieee.org/abstract/document/8099502 }{Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network},” Cvpr , vol. 2, no. 3, p. 4, 2017. Available: http://openaccess.thecvf.com/content_cvpr_2017/papers/Ledig_Photo-Realistic_Single_Image_CVPR_2017_paper.pdf


Author information

Authors and Affiliations

Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India

Rammah Yousef & Gaurav Gupta

Electronics and Communication Engineering, Marwadi University, Rajkot, Gujrat, India

Nabhan Yousef

Jawaharlal Nehru University, New Delhi, India

Manju Khari


Corresponding author

Correspondence to Manju Khari.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Communicated by B. Xiao.



About this article

Yousef, R., Gupta, G., Yousef, N. et al. A holistic overview of deep learning approach in medical imaging. Multimedia Systems 28, 881–914 (2022). https://doi.org/10.1007/s00530-021-00884-5


Received: 09 August 2021

Accepted: 23 December 2021

Published: 21 January 2022

Issue Date: June 2022


Keywords

  • Medical imaging
  • Deep learning (DL)
  • Medical data augmentation
  • Transfer learning


Medical image analysis based on deep learning approach

Muralikrishna Puttagunta & S. Ravi

Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India

Medical imaging plays a significant role in clinical applications such as early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. A basic understanding of the principles and implementations of artificial neural networks and deep learning is essential for understanding medical image analysis in computer vision. The Deep Learning Approach (DLA) in medical image analysis is a fast-growing research field, and DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA, which delivers promising medical imaging applications. Most DLA implementations concentrate on X-ray images, computerized tomography, mammography images, and digital histopathology images. The paper provides a systematic review of articles on the classification, detection, and segmentation of medical images based on DLA, and it guides researchers toward appropriate developments in DLA-based medical image analysis.

Introduction

In the health care system, there has been a dramatic increase in demand for medical image services, e.g., radiography, endoscopy, Computed Tomography (CT), Mammography Images (MG), ultrasound images, Magnetic Resonance Imaging (MRI), Magnetic Resonance Angiography (MRA), nuclear medicine imaging, Positron Emission Tomography (PET), and pathological tests. Moreover, medical images can be challenging and time-consuming to analyze, a problem compounded by the shortage of radiologists.

Artificial Intelligence (AI) can address these problems. Machine Learning (ML) is an application of AI that can function without being explicitly programmed, learning from data to make predictions or decisions. ML uses three learning approaches: supervised learning, unsupervised learning, and semi-supervised learning. Classical ML techniques involve the extraction of features, and selecting suitable features for a specific problem requires a domain expert. Deep learning (DL) techniques solve this feature selection problem. DL is a subset of ML that can automatically extract essential features from raw input data [ 88 ]. The concept of DL algorithms was introduced from cognitive and information theories. In general, DL has two properties: (1) multiple processing layers that can learn distinct features of data through multiple levels of abstraction, and (2) unsupervised or supervised learning of feature representations on each layer. A large number of recent review papers have highlighted the capabilities of advanced DLA in the medical fields of MRI [ 8 ], radiology [ 96 ], cardiology [ 11 ], and neurology [ 155 ].

Different forms of DLA were borrowed from the field of computer vision and applied to specific medical image analysis tasks. Recurrent Neural Networks (RNNs) and convolutional neural networks are examples of supervised DL algorithms. Unsupervised learning algorithms have also been studied in medical image analysis; these include Deep Belief Networks (DBNs), Restricted Boltzmann Machines (RBMs), autoencoders, and Generative Adversarial Networks (GANs) [ 84 ]. DLA is generally applied to detect abnormalities and to classify specific types of disease. When DLA is applied to medical images, Convolutional Neural Networks (CNNs) are ideally suited for classification, segmentation, object detection, registration, and other tasks [ 29 , 44 ]. The CNN is an artificial visual neural network structure used for medical image pattern recognition based on the convolution operation. Deep learning (DL) applications in medical images are visualized in Fig. 1.

Fig. 1 a X-ray image with pulmonary masses [ 121 ]; b CT image with lung nodule [ 82 ]; c digitized histopathological tissue image [ 132 ]

Neural networks

History of neural networks

The study of artificial neural networks and deep learning derives from the ambition to create computer systems that simulate the human brain [ 33 ]. In the early 1940s, the neurophysiologist Warren McCulloch and the mathematician Walter Pitts [ 97 ] developed a primitive neural network based on what was then known of biological neural structure. In 1949, the book "Organization of Behavior" [ 100 ] was the first to describe the process of updating synaptic weights, now referred to as the Hebbian learning rule. In 1958, Frank Rosenblatt's landmark paper [ 127 ] defined the structure of the neural network called the perceptron for the binary classification task.

In 1962, Widrow [ 172 ] introduced a device called the Adaptive Linear Neuron (ADALINE), implementing the design in hardware. The limitations of perceptrons were emphasized by Minsky and Papert (1969) [ 98 ]. The concept of backward propagation of errors for training purposes was discussed by Werbos (1974) [ 171 ]. In 1979, Fukushima [ 38 ] designed artificial neural networks called the Neocognitron, with multiple pooling and convolution layers. In 1989, Yann LeCun [ 71 ] combined CNNs with backpropagation to perform automated recognition of handwritten digits. One of the most important breakthroughs in deep learning occurred in 2006, when Hinton et al. [ 9 ] implemented the Deep Belief Network with several layers of Restricted Boltzmann Machines, greedily training one layer at a time in an unsupervised fashion. Figure 2 shows important advancements in the history of neural networks that led to the deep learning era.

Fig. 2 Significant developments in the history of neural networks [ 33 , 134 ]

Artificial neural networks

Artificial Neural Networks (ANN) form the basis for most DLA. An ANN is a computational model with performance characteristics similar to those of biological neural networks. An ANN comprises simple processing units, called neurons or nodes, interconnected by weighted links. A biological neuron can be described mathematically by Eq. (1):

y = f( Σ_{i=1..n} w_i x_i + b )   (1)

where x_i are the inputs, w_i the connection weights, b the bias, and f an activation function. Figure 3 shows the simplest artificial neural model, known as the perceptron.

Fig. 3 Perceptron [ 77 ]

Training a neural network with Backpropagation (BP)

In neural networks, learning is modeled as an iterative optimization of the weights to minimize a loss function. Based on network performance, the weights are modified on a set of examples belonging to the training set. The training procedure comprises a forward phase and a backward phase: an activation function is chosen for forward propagation, and BP training is used to adjust the weights. The BP algorithm allows a multilayer feed-forward neural network (FFNN) to learn input-output mappings from training samples [ 16 ]. Forward propagation and backpropagation are illustrated for a network with one hidden layer in the following algorithm.

The backpropagation algorithm for a network with one hidden layer is as follows (a code sketch follows the list):

  • 1. Initialize all weights to small random values.
  • 2. While the stopping condition is false, do steps 3 through 10.
  • 3. For each training pair (x_1, y_1), …, (x_n, y_n), do steps 4 through 9.

Feed-forward propagation:

  • 4. Each input unit (X_i, i = 1, 2, …, n) receives the input signal x_i and sends it to all hidden units in the layer above.
  • 5. Each hidden unit (Z_j, j = 1, …, p) computes z_in,j = b_j + Σ_{i=1..n} w_ij x_i, applies the activation function Z_j = f(z_in,j), and transmits the result to the output units.
  • 6. Each output unit (Y_k, k = 1, …, m) computes y_in,k = b_k + Σ_{j=1..p} z_j w_jk and calculates the activation y_k = f(y_in,k).

Backpropagation:

  • 7. Each output-layer neuron computes its error term δ_k = (t_k − y_k) f′(y_in,k), where t_k is the target output.
  • 8. Each hidden-layer neuron computes δ_j = f′(z_in,j) Σ_{k=1..m} δ_k w_jk.
  • 9. Update weights and biases, where η is the learning rate:

Each output unit (Y_k, k = 1, 2, …, m) updates its weights (j = 0, 1, …, p) and bias:

w_jk(new) = w_jk(old) + η δ_k z_j;  b_k(new) = b_k(old) + η δ_k

Each hidden unit (Z_j, j = 1, 2, …, p) updates its weights (i = 0, 1, …, n) and bias:

w_ij(new) = w_ij(old) + η δ_j x_i;  b_j(new) = b_j(old) + η δ_j

  • 10. Test the stopping condition.
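To make these steps concrete, here is a minimal NumPy sketch that trains a one-hidden-layer network on the XOR problem with sigmoid activations, for which f′(x) = f(x)(1 − f(x)). The layer sizes, learning rate, and epoch count are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy training pairs (step 3): XOR inputs X and targets T
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)  # input -> hidden (w_ij, b_j)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)  # hidden -> output (w_jk, b_k)
eta = 0.5                                            # learning rate

for epoch in range(5000):
    # Feed-forward (steps 4-6)
    Z = sigmoid(X @ W1 + b1)          # hidden activations Z_j
    Y = sigmoid(Z @ W2 + b2)          # output activations y_k
    # Backpropagation (steps 7-8); the sigmoid derivative is f(x)(1 - f(x))
    delta_k = (T - Y) * Y * (1 - Y)
    delta_j = (delta_k @ W2.T) * Z * (1 - Z)
    # Weight and bias updates (step 9)
    W2 += eta * Z.T @ delta_k; b2 += eta * delta_k.sum(axis=0)
    W1 += eta * X.T @ delta_j; b1 += eta * delta_j.sum(axis=0)

print(Y.round(3))  # approaches [[0], [1], [1], [0]]
```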

Activation function

The activation function is the mechanism by which artificial neurons process and transfer information [ 42 ]. Various types of activation functions can be used in neural networks, depending on the characteristics of the application. Activation functions are non-linear and continuously differentiable; differentiability matters mainly when training a neural network with the gradient descent method. Some widely used activation functions are listed in Table 1.

Activation functions
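For reference, the most widely used of these functions can be written compactly in code. The NumPy definitions below follow the standard formulas and are illustrative, not reproduced from Table 1.

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs to (0, 1); saturates (gradient -> 0) for large |x|
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centred counterpart of the sigmoid, with range (-1, 1)
    return np.tanh(x)

def relu(x):
    # Rectified linear unit: cheap to compute, no saturation for x > 0
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # A small negative slope keeps gradients alive for x < 0
    return np.where(x > 0, x, alpha * x)
```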

Deep learning

Deep learning is a subset of the machine learning field that deals with the development of deep neural networks inspired by the biological neural networks of the human brain.

Autoencoder

The autoencoder (AE) [ 128 ] is a deep learning model that exemplifies the principle of unsupervised representation learning, as depicted in Fig. 4a. An AE is useful when the dataset contains many more unlabelled samples than labeled ones. The AE encodes the input x into a lower-dimensional space z; the encoded representation is then decoded to an approximation x′ of the input x through one hidden layer z.

Fig. 4 a Autoencoder [ 187 ]; b Restricted Boltzmann Machine with n hidden and m visible units [ 88 ]; c Deep Belief Networks [ 88 ]

Basic AE consists of three main steps:

Encode: convert the input vector x ∈ R^m into h ∈ R^n, the hidden layer, by h = f(wx + b), where w ∈ R^{m×n} and b ∈ R^n. Here m and n are the dimensions of the input vector and of the hidden state, respectively; the dimension of the hidden layer h is smaller than that of x, and f is an activation function.

Decode: from h, reconstruct the vector z by z = f′(w′h + b′), where w′ ∈ R^{n×m} and b′ ∈ R^m. f′ is an activation function as above.

Calculate the squared error: L_recons(x, z) = ‖x − z‖², the reconstruction error cost function. The reconstruction error is minimized by optimizing this cost function (2).

Another unsupervised representation learning algorithm is the Stacked Autoencoder (SAE). An SAE comprises a stack of autoencoder layers mounted on top of each other, where the output of each layer is wired to the input of the next. The Denoising Autoencoder (DAE), introduced by Vincent et al. [ 159 ], is trained to reconstruct the input from a copy corrupted by random noise. The Variational Autoencoder (VAE) [ 66 ] modifies the encoder so that the latent vector space representing the images follows a unit Gaussian distribution. This model has two losses: a mean squared error and a Kullback-Leibler divergence loss that measures how closely the latent variables match a unit Gaussian. Sparse autoencoders [ 106 ] and variational autoencoders have applications in unsupervised learning, semi-supervised learning, and segmentation.
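The encode/decode steps above translate into a few lines of PyTorch. The fully connected model below is a generic sketch with arbitrary dimensions, not the specific architecture of [ 128 ].

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal fully connected AE: x -> z (bottleneck) -> x'."""
    def __init__(self, in_dim=784, hid_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)      # h = f(wx + b), with dim(z) < dim(x)
        return self.decoder(z)   # reconstruction x' = f'(w'h + b')

# Training minimizes the reconstruction error ||x - x'||^2, e.g. with nn.MSELoss().
```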

Restricted Boltzmann machine

A Restricted Boltzmann Machine (RBM) is a Markov Random Field (MRF) associated with a two-layer undirected probabilistic generative model, as shown in Fig. 4b. An RBM contains visible (input) units v and hidden (output) units h. A significant feature of this model is that there are no direct connections between any two visible units or between any two hidden units. In binary RBMs, the random variables (v, h) take values (v, h) ∈ {0, 1}^{m+n}. Like the general Boltzmann machine [ 50 ], the RBM is an energy-based model. The energy of the state {v, h} is defined as (3)

E(v, h) = −Σ_{j=1..m} b_j v_j − Σ_{i=1..n} c_i h_i − Σ_{i=1..n} Σ_{j=1..m} v_j w_ij h_i

where v_j, h_i are the binary states of visible unit j ∈ {1, 2, …, m} and hidden unit i ∈ {1, 2, …, n}, b_j, c_i are the biases of the visible and hidden units, and w_ij is the symmetric interaction term between the units v_j and h_i. The joint probability of (v, h) is given by the Gibbs distribution in Eq. (4)

p(v, h) = (1/Z) e^{−E(v, h)}

where Z is a "partition function" obtained by summing over all possible pairs of visible v and hidden h (5):

Z = Σ_{v,h} e^{−E(v, h)}

Because of this restricted connectivity, the conditional distributions p(h | v) and p(v | h) factorize as (6)

p(h | v) = Π_{i=1..n} p(h_i | v),  p(v | h) = Π_{j=1..m} p(v_j | h)

For a binary RBM, the conditional distributions of hidden and visible units are given by (7) and (8)

p(h_i = 1 | v) = σ(c_i + Σ_{j=1..m} w_ij v_j),  p(v_j = 1 | h) = σ(b_j + Σ_{i=1..n} w_ij h_i)

where σ(·) is the sigmoid function.

The RBM parameters (w_ij, b_j, c_i) are efficiently estimated using the contrastive divergence learning method [ 150 ]. A batch version of k-step contrastive divergence learning (CD-k) is given in [ 36 ]; a sketch of the k = 1 case follows.

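As an illustration of contrastive divergence, here is a minimal NumPy sketch of a single CD-1 update for a binary RBM, sampling from the conditional distributions (7) and (8). Array shapes and the learning rate are illustrative assumptions, not the batch CD-k procedure of [ 36 ].

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.01):
    """One CD-1 step for a binary RBM: v0 has m visible units, W is (m, n)."""
    # Positive phase: p(h | v0) per Eq. (7), then sample h0
    ph0 = sigmoid(c + v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step down to v1 (Eq. 8) and back up
    pv1 = sigmoid(b + h0 @ W.T)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(c + v1 @ W)
    # Gradient approximation: <v h>_data - <v h>_model
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return W, b, c
```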

Deep belief networks

The Deep Belief Network (DBN) proposed by Hinton et al. [ 51 ] is a non-convolutional model that can extract features and learn a deep hierarchical representation of the training data. DBNs are generative models constructed by stacking multiple RBMs. A DBN is a hybrid model: the top two layers form an RBM, while the remaining layers form a directed generative model. A DBN has one visible layer v and a series of hidden layers h^(1), h^(2), …, h^(l), as shown in Fig. 4c. The DBN models the joint distribution between the observed units v and the l hidden layers h^(k) (k = 1, …, l) as (9)

P(v, h^(1), …, h^(l)) = ( Π_{k=0..l−2} P(h^(k) | h^(k+1)) ) P(h^(l−1), h^(l))

where v = h^(0) and P(h^(k) | h^(k+1)) is the conditional distribution (10) of the units of layer k given the units of layer k + 1.

A DBN has l weight matrices W^(1), …, W^(l) and l + 1 bias vectors b^(0), …, b^(l). P(h^(l), h^(l−1)) is the joint distribution of the top-level RBM (11), and the overall probability distribution of the DBN is given by Eq. (12).

Convolutional neural networks (CNN)

Within neural networks, the CNN is a unique family of deep learning models and a major artificial visual network for the identification of medical image patterns. The CNN family primarily draws on what is known about the animal visual cortex [ 55 , 116 ]. The major problem with a fully connected feed-forward neural network is that, even for shallow architectures, the number of neurons can be very high, which makes it impractical for image applications. The CNN reduces the number of parameters, allowing a network to be deeper with fewer parameters.

CNNs are designed around three architectural ideas: shared weights, local receptive fields, and spatial sub-sampling [ 70 ]. The essential element of the CNN is the handling of unstructured data through the convolution operation. Convolution of the input signal x(t) with a filter signal h(t) creates an output signal y(t) that may reveal more information than the input signal itself. The 1D convolution of discrete signals x(t) and h(t) is (13)

y(t) = Σ_τ x(τ) h(t − τ)

A digital image x(n_1, n_2) is a 2-D discrete signal. The convolution of an image x(n_1, n_2) with a kernel h(n_1, n_2) is (14)

y(n_1, n_2) = Σ_{k_1} Σ_{k_2} x(k_1, k_2) h(n_1 − k_1, n_2 − k_2)

where 0 ≤ n_1 ≤ M − 1 and 0 ≤ n_2 ≤ N − 1.

The function of the convolution layer is to detect local features x^l from the input feature maps x^{l−1} using kernels k^l via the convolution operation (∗), i.e. x^{l−1} ∗ k^l. This convolution operation is repeated for every convolutional layer, subject to a non-linear transform (15)

x_n^l = f( Σ_{m ∈ M_{l−1}} x_m^{l−1} ∗ k_mn^l + b_n^l )

where k_mn^l represents the weights between feature map m at layer l − 1 and feature map n at layer l, x_m^{l−1} is the m-th feature map of layer l − 1, x_n^l is the n-th feature map of layer l, b_n^l is the bias parameter, f(·) is the non-linear activation function, and M_{l−1} denotes the set of feature maps. The CNN significantly reduces the number of parameters compared with a fully connected neural network because of local connectivity and weight sharing. Depth, zero-padding, and stride are the three hyperparameters controlling the volume of the convolution layer output.
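Equation (14) can be written directly in a few lines of NumPy. The sketch below computes only the "valid" part of the output and flips the kernel, which is what distinguishes true convolution from the cross-correlation most DL libraries actually implement; it is illustrative rather than an efficient implementation.

```python
import numpy as np

def conv2d_valid(x, h):
    """Direct 2-D convolution of image x with kernel h ('valid' output size)."""
    kh, kw = h.shape
    out_h = x.shape[0] - kh + 1
    out_w = x.shape[1] - kw + 1
    h_flip = h[::-1, ::-1]          # convolution flips the kernel; correlation does not
    y = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y[i, j] = np.sum(x[i:i + kh, j:j + kw] * h_flip)
    return y

edges = conv2d_valid(np.eye(8), np.array([[1.0, -1.0]]))  # tiny example kernel
```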

A pooling layer follows the convolutional layer to subsample the feature maps. The goal of the pooling layer is to achieve spatial invariance by reducing the spatial dimension of the feature maps passed to the next convolution layer. Max pooling and average pooling are the two commonly used pooling operations for downsampling. Let M × M be the size of the pooling region, with elements x_j = (x_1, x_2, …, x_{M×M}); the output after pooling is x_i. Max pooling and average pooling are described by Eqs. (16) and (17):

x_i = max_{1 ≤ j ≤ M×M} x_j   (16)

x_i = (1 / (M × M)) Σ_{j=1..M×M} x_j   (17)

The max-pooling method selects the strongest invariant feature in a pooling region, whereas the average-pooling method takes the average of all features in the region. Max pooling thus retains texture information and can lead to faster convergence, while average pooling retains background information [ 133 ]. Spatial pyramid pooling [ 48 ], stochastic pooling [ 175 ], Def-pooling [ 109 ], multi-activation pooling [ 189 ], and detail-preserving pooling [ 130 ] are other pooling techniques in the literature. A fully connected layer is used at the end of the CNN model. Fully connected layers perform like a traditional neural network [ 174 ]: the input to this layer is a vector of numbers (the output of the pooling layer), and the output is an N-dimensional vector (N being the number of classes). After the pooling layers, the feature maps of the previous layer are flattened and connected to the fully connected layers.
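To show how these building blocks fit together, here is a minimal PyTorch sketch stacking convolution, ReLU, max pooling, and a fully connected classifier. The channel counts, the two output classes, and the assumed 28 × 28 single-channel input are arbitrary choices for illustration.

```python
import torch.nn as nn

class TinyCNN(nn.Module):
    """Convolution -> ReLU -> max pooling -> fully connected, as described above."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # local receptive fields, shared weights
            nn.ReLU(),
            nn.MaxPool2d(2),                            # spatial sub-sampling, Eq. (16)
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)  # assumes 28x28 inputs

    def forward(self, x):
        x = self.features(x)                   # feature maps
        return self.classifier(x.flatten(1))   # flatten, then fully connected layer
```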

The first successful seven-layer CNN, LeNet-5, was developed by Yann LeCun in 1990 for handwritten digit recognition. Krizhevsky et al. [ 68 ] proposed AlexNet, a deep convolutional neural network composed of 5 convolutional and 3 fully connected layers. AlexNet replaced the sigmoid activation function with the ReLU activation function to make model training easier.

K. Simonyan and A. Zisserman introduced VGG-16 [ 143 ], which has 13 convolutional and 3 fully connected layers. The Visual Geometry Group (VGG) released a series of CNNs: VGG-11, VGG-13, VGG-16, and VGG-19. The group's main intention was to understand how the depth of convolutional networks affects the accuracy of image classification and recognition models. The largest, VGG-19, has 16 convolutional layers and 3 fully connected layers, while the smallest, VGG-11, has 8 convolutional layers and 3 fully connected layers. The last three fully connected layers are the same across the VGG variants.

Szegedy et al. [ 151 ] proposed GoogLeNet, an image classification network consisting of 22 layers. The main idea behind GoogLeNet is the introduction of inception modules, each of which convolves the input with several different filter sizes in parallel. Kaiming He et al. [ 49 ] proposed the ResNet architecture, which has 33 convolutional layers and one fully connected layer. Many models adopted the principle of using many hidden layers and extremely deep networks, but it was then realized that such models suffer from the vanishing or exploding gradients problem. To eliminate the vanishing-gradient problem, skip layers (shortcut connections) were introduced. DenseNet, developed by Gao et al. [ 54 ], consists of several dense blocks with transition blocks placed between adjacent dense blocks. A dense block consists of batch normalization followed by a ReLU and a 3 × 3 convolution operation; the transition blocks are made of batch normalization, a 1 × 1 convolution, and average pooling.

Compared to state-of-the-art handcrafted feature detectors, CNNs are an efficient technique for detecting object features and achieving good classification performance. CNNs have drawbacks, however: the relationships, size, perspective, and orientation of features are not taken into account. To overcome the loss of information caused by the pooling operation, Capsule Networks (CapsNets) are used to preserve spatial information and the most significant features [ 129 ]. A special type of neuron, called a capsule, can efficiently detect distinct information. The capsule network consists of four main components: matrix multiplication, scalar weighting of the input, a dynamic routing algorithm, and a squashing function.

Recurrent neural networks (RNN)

RNN is a class of neural networks used for processing sequential information. The structure of the RNN, shown in Fig. 5a, resembles that of an FFNN, the difference being that recurrent connections are introduced among the hidden nodes. In a generic RNN model at time t, the recurrent hidden unit h_t receives input activation from the present data x_t and the previous hidden state h_{t−1}, and the output y_t is calculated from the hidden state h_t. This can be represented by Eqs. (18) and (19) as

h_t = f(w_hx x_t + w_hh h_{t−1} + b_h)   (18)

y_t = f(w_yh h_t + b_y)   (19)

Fig. 5 a Recurrent Neural Network [ 163 ]; b Long Short-Term Memory [ 163 ]; c Generative Adversarial Network [ 64 ]

Here f is a non-linear activation function, w_hx is the weight matrix between the input and hidden layers, w_hh is the matrix of recurrent weights between the hidden layer and itself, w_yh is the weight matrix between the hidden and output layers, and b_h and b_y are biases that allow each node to learn an offset. While the RNN is a simple and efficient model, in practice it is unfortunately difficult to train properly. The Real-Time Recurrent Learning (RTRL) algorithm [ 173 ] and Back Propagation Through Time (BPTT) [ 170 ] are used to train RNNs, but training with these methods frequently fails because of the vanishing (a product of many small values) or exploding (a product of many large values) gradient problem [ 10 , 112 ]. Hochreiter and Schmidhuber (1997) designed a new RNN model, the Long Short-Term Memory (LSTM), that overcomes error-backflow problems with the aid of a specially designed memory cell [ 52 ]. Figure 5b shows an LSTM cell, which is typically configured by three gates: an input gate g_t, a forget gate f_t, and an output gate o_t; these gates add or remove information from the cell.

An LSTM can be represented by the following Eqs. (20) to (25), where σ is the sigmoid function, c_t is the cell state, and ⊙ denotes element-wise multiplication:

g_t = σ(w_gx x_t + w_gh h_{t−1} + b_g)   (20)

f_t = σ(w_fx x_t + w_fh h_{t−1} + b_f)   (21)

o_t = σ(w_ox x_t + w_oh h_{t−1} + b_o)   (22)

c̃_t = tanh(w_cx x_t + w_ch h_{t−1} + b_c)   (23)

c_t = f_t ⊙ c_{t−1} + g_t ⊙ c̃_t   (24)

h_t = o_t ⊙ tanh(c_t)   (25)
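These gate equations map directly onto code. The NumPy sketch below performs a single LSTM step; the parameter dictionary P is an assumed container (not notation from the text) holding the input weights w_*x, recurrent weights w_*h, and biases b_* for each gate.

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, P):
    """One LSTM step following Eqs. (20)-(25)."""
    g_t = sigmoid(P["Wgx"] @ x_t + P["Wgh"] @ h_prev + P["bg"])      # input gate (20)
    f_t = sigmoid(P["Wfx"] @ x_t + P["Wfh"] @ h_prev + P["bf"])      # forget gate (21)
    o_t = sigmoid(P["Wox"] @ x_t + P["Woh"] @ h_prev + P["bo"])      # output gate (22)
    c_tilde = np.tanh(P["Wcx"] @ x_t + P["Wch"] @ h_prev + P["bc"])  # candidate (23)
    c_t = f_t * c_prev + g_t * c_tilde                               # cell state (24)
    h_t = o_t * np.tanh(c_t)                                         # hidden state (25)
    return h_t, c_t
```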

Generative adversarial networks (GAN)

In the field of deep learning, one family of deep generative models is the Generative Adversarial Networks (GANs) introduced by Goodfellow et al. [ 43 ]. GANs are neural networks that can generate synthetic images closely imitating the original images. In a GAN, shown in Fig. 5c, two neural networks, a generator and a discriminator, are trained simultaneously. The generator G generates counterfeit data samples that aim to "fool" the discriminator D, while the discriminator attempts to correctly distinguish true from false samples. In mathematical terms, D and G play a two-player minimax game with the cost function (26) [ 64 ]:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]

where x represents the original image and z is a noise vector of random numbers. p_data(x) and p_z(z) are the probability distributions of x and z, respectively. D(x) represents the probability that x comes from the actual data p_data(x) rather than from the generated data, and 1 − D(G(z)) is the probability that the sample was generated from p_z(z). E_{x∼p_data(x)} denotes the expectation of x drawn from the real data distribution, and E_{z∼p_z(z)} the expectation of z sampled from the noise distribution. The goal of training is for the discriminator to maximize the loss function, while the generator seeks to minimize the term log(1 − D(G(z))). In medical image analysis, GANs are used mainly for data augmentation (generating new data) and image-to-image translation [ 107 ]. Trustworthiness of the generated data, unstable training, and evaluation of the generated data are three major drawbacks of GANs that might hinder their acceptance in the medical community [ 183 ].
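The minimax objective of Eq. (26) is realized in practice as alternating updates of D and G. The PyTorch sketch below shows one training step with minimal fully connected networks; all layer sizes and learning rates are arbitrary illustrative choices, and the generator uses the common non-saturating loss rather than minimizing log(1 − D(G(z))) directly.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784)   # stand-in for a batch of real (flattened) images
z = torch.randn(32, 64)      # noise vector z ~ p_z(z)

# Discriminator step: push D(x) toward 1 and D(G(z)) toward 0
fake = G(z).detach()         # detach so this step does not update G
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step (non-saturating): push D(G(z)) toward 1
loss_g = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```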

Ronneberger et al. [ 126 ] proposed the CNN-based U-Net architecture for segmentation of biomedical image data. The architecture consists of a contracting path (left side) to capture context and a symmetric expansive path (right side) that enables precise localization. U-Net is a generalized DLA used for quantification tasks such as cell detection and shape measurement in medical image data [ 34 ].

Software frameworks

Several software frameworks are available for implementing DLA and are regularly updated as new approaches and ideas emerge. DLA encapsulates many levels of mathematical principles based on probability, linear algebra, calculus, and numerical computation. Deep learning frameworks include Theano, TensorFlow, Caffe, CNTK, Torch, Neon, pylearn, etc. [ 138 ]. Globally, Python is probably the most commonly used programming language for DL, and PyTorch and TensorFlow were the libraries most widely used in research in 2019. Table 2 compares various deep learning frameworks in terms of core language and supported interface languages.

Comparison of various Deep Learning Frameworks

Use of deep learning in medical imaging

X-ray image

Chest radiography is widely used to detect heart pathologies and lung diseases such as tuberculosis, atelectasis, consolidation, pleural effusion, pneumothorax, and hyperinflation. X-ray imaging is accessible and affordable, and it delivers a lower radiation dose than other imaging methods, making it a powerful tool for mass screening [ 14 ]. Table 3 presents a description of the DL methods used for X-ray image analysis.

An overview of the DLA for the study of X-ray images

S. Hwang et al. [ 57 ] proposed the first deep CNN-based tuberculosis screening system with a transfer learning technique. Rajaraman et al. [ 119 ] proposed modality-specific ensemble learning for the detection of abnormalities in chest X-rays (CXRs); the model predictions are combined using various ensemble techniques to minimize prediction variance, and class-selective relevance mapping (CRM) is used to visualize the abnormal regions in the CXR images. Loey et al. [ 90 ] proposed a GAN with deep transfer learning for COVID-19 detection in CXR images; the GAN was used to generate more CXR images given the scarcity of COVID-19 data. Waheed et al. [ 160 ] proposed the CovidGAN model, based on the Auxiliary Classifier Generative Adversarial Network (ACGAN), to produce synthetic CXR images for COVID-19 detection. S. Rajaraman and S. Antani [ 120 ] introduced weakly labeled data augmentation to enlarge the training dataset and improve COVID-19 detection performance in CXR images.

Computerized tomography (CT)

CT uses computers and rotating X-ray equipment to create cross-sectional images of the body. CT scans show the soft tissues, blood vessels, and bones in different parts of the body. CT has high detection ability, reveals small lesions, and provides a more detailed assessment. CT examinations are frequently used for pulmonary nodule identification [ 93 ], and the detection of malignant pulmonary nodules is fundamental to the early diagnosis of lung cancer [ 102 , 142 ]. Table 4 summarizes the latest deep learning developments in CT image analysis.

A review of articles that use DL techniques for the analysis of CT images

AUC: area under the ROC curve; FROC: area under the free-response ROC curve; SN: sensitivity; SP: specificity; MAE: mean absolute error; LIDC: Lung Image Database Consortium; LIDC-IDRI: Lung Image Database Consortium-Image Database Resource Initiative.

Li et al. (2016) [ 74 ] proposed a deep CNN for the detection of three types of nodules: semisolid, solid, and ground-glass opacity. Balagourouchetty et al. [ 5 ] proposed a GoogLeNet-based ensemble FCNet classifier for liver lesion classification, in which the basic GoogLeNet architecture is modified with three changes for feature extraction. Masood et al. [ 95 ] proposed the multidimensional Region-based Fully Convolutional Network (mRFCN) for lung nodule detection/classification and achieved a classification accuracy of 97.91%. In lung nodule detection, the future work is the detection of micronodules (less than 3 mm) without loss of sensitivity and accuracy. Zhao and Zeng (2019) [ 190 ] proposed DLA based on supervised MSS U-Net and 3D U-Net to automatically segment kidneys and kidney tumors from CT images. In the present pandemic situation, Fan et al. [ 35 ] and Li et al. [ 79 ] used deep learning-based techniques for COVID-19 detection from CT images.

Mammography (MG)

Breast cancer is one of the leading causes of cancer death among women worldwide. MG is a reliable tool and the most common modality for early detection of breast cancer: a low-dose X-ray imaging method used to visualize the breast structure for the detection of breast diseases [ 40 ]. Detecting breast cancer on mammography screening is a difficult image classification task because tumors constitute only a small part of the breast image. Analyzing breast lesions from MG involves three steps: detection, segmentation, and classification [ 139 ].

The automatic classification and detection of masses at an early stage in MG is still a hot research topic. Over the past decade, DLA has made significant advances in breast cancer detection and classification. Table 5 summarizes the latest DLA developments in mammogram image analysis.

Summary of DLA for MG image analysis

MIAS: Mammographic Image Analysis Society dataset; DDSM: Digital Database for Screening Mammography; BI-RADS: Breast Imaging Reporting and Data System; WBCD: Wisconsin Breast Cancer Dataset; DIB-MG: data-driven imaging biomarker in mammography; FFDMs: full-field digital mammograms; MAMMO: Man and Machine Mammography Oracle; FROC: free-response receiver operating characteristic analysis; SN: sensitivity; SP: specificity.

Fonseca et al. [ 37 ] proposed a breast composition classification according to the ACR standard based on CNN feature extraction. Wang et al. [ 161 ] proposed a twelve-layer CNN to detect breast arterial calcifications (BACs) in mammogram images for risk assessment of coronary artery disease. Ribli et al. [ 124 ] developed a CAD system based on Faster R-CNN for the detection and classification of benign and malignant lesions on mammogram images without any human involvement. Wu et al. [ 176 ] present a deep CNN trained and evaluated on over 1,000,000 mammogram images for breast cancer screening exam classification. Conant et al. [ 26 ] developed a deep CNN-based AI system to detect calcified and soft-tissue lesions in digital breast tomosynthesis (DBT) images. Kang et al. [ 62 ] introduced the fuzzy fully connected layer (FFCL) architecture, which fuses fuzzy rules with a traditional CNN for semantic BI-RADS scoring; the proposed FFCL framework achieved superior results in both triple and multi-class BI-RADS scoring.

Histopathology

Histopathology is the study of human tissue on glass slides under a microscope to identify diseases such as kidney cancer, lung cancer, and breast cancer. Staining is used in histopathology to visualize and highlight specific parts of the tissue [ 45 ]. For example, Hematoxylin and Eosin (H&E) staining gives the nucleus a dark purple color and other structures a pink color; the H&E stain has played a key role in the diagnosis and grading of different pathologies and cancers over the last century. The most recent imaging modality in this field is digital pathology.

Deep learning is emerging as an effective method for the analysis of histopathology images, including nucleus detection, image classification, cell segmentation, and tissue segmentation [ 178 ]. Tables 6 and 7 summarize the latest deep learning developments in pathology. The most recent development in digital pathology image analysis is the introduction of whole slide imaging (WSI), which allows glass slides with stained tissue sections to be digitized at high resolution. Dimitriou et al. [ 30 ] reviewed the challenges of analyzing multi-gigabyte WSI images for building deep learning models, and A. Serag et al. [ 135 ] discuss the public "Grand Challenges" that have driven innovations using DLA in computational pathology.

Summary of articles using DLA for digital pathology image - Organ segmentation

Summary of articles using DLA for digital pathology image - Detection and classification of disease

NODE: Neural Ordinary Differential Equations; IoU: mean Intersection over Union coefficient

Other images

Endoscopy is the insertion of a long non-surgical tube directly into the body for the detailed visual examination of an internal organ or tissue. Endoscopy is beneficial for studying several systems inside the human body, such as the gastrointestinal tract, the respiratory tract, the urinary tract, and the female reproductive tract [ 60 , 101 ]. Du et al. [ 31 ] reviewed applications of deep learning in the analysis of gastrointestinal endoscopy images. Wireless capsule endoscopy (WCE) is a revolutionary device for direct, painless, and non-invasive inspection of the gastrointestinal (GI) tract to detect and diagnose GI diseases (ulcers, bleeding). Soffer et al. [ 145 ] performed a systematic analysis of the existing literature on the implementation of deep learning in WCE. The first deep learning-based framework for detecting hookworm in WCE images was proposed by He et al. [ 46 ]: two integrated CNNs, one for edge extraction and one for classification, detect hookworm, the edge extraction network being used for tubular region detection since tubular structures are crucial elements for hookworm detection. Yoon et al. [ 185 ] developed a CNN model for early gastric cancer (EGC) identification and prediction of invasion depth, the depth of tumor invasion in EGC being a significant factor in deciding the method of treatment; for classifying endoscopic images as EGC or non-EGC, the authors employed a VGG-16 model. Nakagawa et al. [ 105 ] applied a CNN-based DL technique to enhance the diagnostic assessment of oesophageal wall invasion using endoscopy. J. Choi et al. [ 22 ] discuss future aspects of DL in endoscopy.

Positron Emission Tomography (PET) is a nuclear imaging tool that generally uses the injection of specific radioactive tracers to visualize molecular-level activity within tissues. T. Wang et al. [ 168 ] reviewed applications of machine learning in PET attenuation correction (PET AC) and low-count PET reconstruction, and discussed the advantages of deep learning over machine learning in PET imaging applications. A. J. Reader et al. [ 123 ] reviewed PET image reconstruction, in which deep learning can be used either directly or as part of traditional reconstruction methods.

The primary purpose of this paper is to review numerous publications in the field of deep learning applications in medical images. Classification, detection, and segmentation are essential tasks in medical image processing [ 144 ]. For specific deep learning tasks in medical applications, training deep neural networks requires a lot of labeled data, but in the medical field even thousands of labeled examples are often unavailable. This issue is alleviated by a technique called transfer learning, of which two approaches are popular and widely applied: using the pre-trained network as a fixed feature extractor, and fine-tuning a pre-trained network (a sketch of both follows). In the classification task, deep learning models are used to classify images into two or more classes; in the detection task, they identify tumors and organs in medical images; and in the segmentation task, they segment the region of interest in medical images for processing.
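The two transfer-learning approaches can be contrasted in a few lines of PyTorch. The ResNet-18 backbone and the two-class head below are illustrative assumptions, not choices made in the reviewed papers (older torchvision versions use pretrained=True instead of the weights argument).

```python
import torch.nn as nn
from torchvision import models

# Approach 1 - fixed feature extractor: freeze the pretrained backbone,
# then train only a new classification head on the medical dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # keep ImageNet features fixed
model.fc = nn.Linear(model.fc.in_features, 2)     # e.g. normal vs. abnormal

# Approach 2 - fine-tuning: leave requires_grad=True for all layers
# and train the whole network with a small learning rate.
```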

Segmentation

For medical image segmentation, deep learning has been widely used, and several articles document its progress in the area. Segmentation of breast tissue using deep learning alone has been successfully implemented [ 104 ]. Xing et al. [ 179 ] used a CNN to acquire the initial shape of the nucleus and then isolated the actual nucleus using a deformable model. Qu et al. [ 118 ] suggested a deep learning approach that can segment individual nuclei and classify them as tumor, lymphocyte, or stroma nuclei. Pinckaers and Litjens [ 115 ] show on a colon gland segmentation dataset (GlaS) that Neural Ordinary Differential Equations (NODEs) can be used within the U-Net framework to obtain better segmentation results. Sun (2019) [ 149 ] developed a deep learning architecture for gastric cancer segmentation that shows the advantage of using multi-scale modules and specific convolution operations together. Figure 6 shows U-Net, the most commonly used network for segmentation.

Fig. 6 U-Net architecture for segmentation, comprising encoder (downsampling) and decoder (upsampling) sections [ 135 ]
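Segmentation quality is usually scored with spatial overlap measures such as the Dice coefficient and the Intersection over Union (IoU) mentioned earlier. A minimal NumPy sketch for binary masks:

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A|+|B|); IoU = |A∩B| / |A∪B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)  # eps avoids 0/0
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```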

Detection

The main challenge posed by lesion detection methods is that they can produce multiple false positives while still missing a good proportion of true positives. Deep learning methods for tuberculosis detection have been applied in [ 53 , 57 , 58 , 91 , 119 ], and pulmonary nodule detection using deep learning has been successfully applied in [ 82 , 108 , 136 , 157 ].

Shin et al. [ 141 ] discussed the effect of pre-trained CNN architectures and transfer learning on the identification of enlarged thoracoabdominal lymph nodes and the diagnosis of interstitial lung disease on CT scans, and considered transfer learning helpful given that natural images differ from medical images. Litjens et al. [ 85 ] introduced a CNN for the identification of prostate cancer in biopsy specimens and of breast cancer metastases in sentinel lymph nodes; the CNN has four convolution layers for feature extraction and three classification layers. Ribli et al. [ 124 ] proposed a Faster R-CNN model for the detection of mammography lesions and classified these lesions as benign or malignant, finishing second in the Digital Mammography DREAM Challenge. Figure 7 shows a CNN architecture for detection.

Fig. 7 CNN architecture for detection [ 144 ]

An object detection framework named Clustering CNN (CLU-CNNs) was proposed by Z. Li et al. [ 76 ] for medical images. CLU-CNNs use Agglomerative Nesting Clustering Filtering (ANCF) and BN-IN Net to avoid the heavy computational cost of processing medical images. Image saliency detection aims at locating the most eye-catching regions in a given scene [ 21 , 78 ]. It also acts as a pre-processing tool in applications such as video saliency detection [ 17 , 18 ], object recognition, and object tracking [ 20 ]. Saliency maps are a commonly used tool for determining which areas of the input image are most important to the prediction of a trained CNN [ 92 ]. N. T. Arun et al. [ 4 ] evaluated the performance of several popular saliency methods on the RSNA Pneumonia Detection dataset and found that Grad-CAM was sensitive to the model parameters and model architecture.
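As a concrete illustration, the simplest saliency map is the gradient of the class score with respect to the input pixels. The PyTorch sketch below implements this vanilla-gradient variant (Grad-CAM itself additionally requires access to intermediate feature maps and is not shown); the model is assumed to be any classifier returning logits.

```python
import torch

def gradient_saliency(model, image):
    """Vanilla gradient saliency: |d(top-class score)/d(pixel)|."""
    model.eval()
    image = image.clone().requires_grad_(True)    # shape (1, C, H, W)
    score = model(image).max(dim=1).values.sum()  # top-class logit
    score.backward()                              # gradients w.r.t. the input
    return image.grad.abs().max(dim=1).values     # (1, H, W) saliency map
```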

Classification

In classification tasks, deep learning techniques based on CNN have seen several advancements. The success of CNN in image classification has led researchers to investigate its usefulness as a diagnostic method for identifying and characterizing pulmonary nodules in CT images. The classification of lung nodules using deep learning [ 74 , 108 , 117 , 141 ] has also been successfully implemented.

Breast parenchymal density is an important indicator of the risk of breast cancer. DL algorithms used for density assessment can significantly reduce the radiologist's burden, and breast density classification using DL has been successfully implemented [ 37 , 59 , 72 , 177 ]. Ionescu et al. [ 59 ] introduced a CNN-based method to predict the Visual Analog Score (VAS) for breast density estimation. Figure 8 shows a CNN architecture for classification.

Fig. 8 CNN architecture for classification [ 144 ]

Alcoholism, or alcohol use disorder (AUD), affects the brain, and its structural effects can be observed with neuroimaging. S. H. Wang et al. [ 162 ] proposed a 10-layer CNN for the AUD problem using dropout, batch normalization, and PReLU techniques; the model obtained a sensitivity of 97.73%, a specificity of 97.69%, and an accuracy of 97.71%. Cerebral microbleeds (CMBs) are small chronic brain hemorrhages that can result in cognitive impairment, long-term disability, and neurologic dysfunction, so early identification of CMBs for prompt treatment is essential. S. Wang et al. [ 164 ] proposed a transfer-learning-based DenseNet to detect CMBs; the DenseNet-based model attained an accuracy of 97.71%.

Limitations and challenges

The application of deep learning algorithms to medical imaging is fascinating, but many challenges are slowing progress. One limitation to the adoption of DL in medical image analysis is the inconsistency of the data itself (resolution, contrast, signal-to-noise), typically caused by clinical practice procedures [ 113 ]; the non-standardized acquisition of medical images is another. The need for comprehensive medical image annotations also limits the applicability of deep learning. The major challenge is limited data: compared with other domains, the sharing of medical data is incredibly complicated, and medical data privacy is both a sociological and a technological issue that needs to be discussed from both viewpoints. Building DLA requires a large amount of annotated data, and annotating medical images is itself a major challenge, since labeling medical images requires radiologists' domain knowledge and is therefore time-consuming. Semi-supervised learning could be implemented to make combined use of the existing labeled data and the vast unlabelled data to alleviate the issue of limited labeled data. Another way to resolve data scarcity is to develop few-shot learning algorithms that use a considerably smaller amount of data. Despite the successes of DL technology, there are many restrictions and obstacles in the medical field. Whether DL can reduce medical costs, increase medical efficiency, and improve patient satisfaction has not yet been adequately verified; it remains necessary to demonstrate the efficacy of deep learning methods in clinical trials and to develop guidelines for medical image analysis applications of deep learning.

Conclusion and future directions

Medical imaging is a principal source of the information necessary for clinical decisions. This paper discusses the new algorithms and strategies in the area of deep learning. This brief introduction to DLA in medical image analysis has two objectives: the first is an introduction to the field of deep learning and its associated theory; the second is a general overview of medical image analysis using DLA. The review began with the history of neural networks since the 1940s and ended with breakthroughs of recent DL algorithms in medical applications. Several supervised and unsupervised DL algorithms were discussed, including autoencoders, recurrent neural networks, CNNs, and restricted Boltzmann machines, along with frameworks in this area such as Caffe, TensorFlow, Theano, and PyTorch. The most successful DL methods were then reviewed across medical image applications, including classification, detection, and segmentation. Applications of the RBM network are rarely published in the medical image analysis literature, whereas in classification and detection CNN-based models have achieved good results and are the most commonly used. Several existing solutions to medical challenges are available; however, there are still issues in medical image processing that need to be addressed with deep learning. Many current DL implementations are supervised algorithms, while deep learning is slowly moving toward unsupervised and semi-supervised learning to manage real-world data without manual human labels.

DLA can support clinical decision-making for next-generation radiologists. DLA can automate the radiologist workflow and facilitate decision-making for inexperienced radiologists; it is intended to aid physicians by automatically identifying and classifying lesions to provide a more precise diagnosis, helping to minimize medical errors and increase efficiency in medical image analysis. DL-based automated diagnosis from medical images is expected to be widely used for patient treatment in the next few decades; therefore, physicians and scientists should seek the best ways to provide better care to patients with the help of DLA. A potential future research direction for medical image analysis is the design of deep neural network architectures, since enhancements to network structures have a direct impact on medical image analysis. Manual design of DL model structures requires rich knowledge, so neural architecture search will probably replace manual design [ 73 ]. The design of new activation functions is another meaningful research direction. Radiation therapy is crucial for cancer treatment, and different medical imaging modalities play a critical role in treatment planning. Radiomics is defined as the extraction of high-throughput features from medical images [ 28 ]; in the future, deep-learning analysis of radiomics will be a promising tool in clinical research for diagnosis, drug development, and treatment selection for cancer patients. Owing to limited annotated medical data, unsupervised, weakly supervised, and reinforcement learning methods are emerging research areas in DL for medical image analysis. Overall, deep learning, a new and fast-growing field, offers various obstacles as well as opportunities and solutions for a range of medical image applications.


Contributor Information

Muralikrishna Puttagunta, Email: murali93940@gmail.com

S. Ravi, Email: sravicite@gmail.com


  • Review Article
  • Open access
  • Published: 16 October 2020

Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines

  • Shih-Cheng Huang, ORCID: orcid.org/0000-0001-9882-833X
  • Anuj Pareek, ORCID: orcid.org/0000-0002-1526-3685
  • Saeed Seyyedi
  • Imon Banerjee, ORCID: orcid.org/0000-0002-3327-8004
  • Matthew P. Lungren

npj Digital Medicine volume 3, Article number: 136 (2020)


Subjects

  • Data integration
  • Machine learning
  • Medical imaging

Advancements in deep learning techniques carry the potential to make significant contributions to healthcare, particularly in fields that utilize medical imaging for diagnosis, prognosis, and treatment decisions. The current state-of-the-art deep learning models for radiology applications consider only pixel-value information without data informing clinical context. Yet in practice, pertinent and accurate non-imaging data based on the clinical history and laboratory data enable physicians to interpret imaging findings in the appropriate clinical context, leading to a higher diagnostic accuracy, informative clinical decision making, and improved patient outcomes. To achieve a similar goal using deep learning, medical imaging pixel-based models must also achieve the capability to process contextual data from electronic health records (EHR) in addition to pixel data. In this paper, we describe different data fusion techniques that can be applied to combine medical imaging with EHR, and systematically review medical data fusion literature published between 2012 and 2020. We conducted a systematic search on PubMed and Scopus for original research articles leveraging deep learning for fusion of multimodality data. In total, we screened 985 studies and extracted data from 17 papers. By means of this systematic review, we present current knowledge, summarize important results and provide implementation guidelines to serve as a reference for researchers interested in the application of multimodal fusion in medical imaging.


Introduction

The practice of modern medicine relies heavily on the synthesis of information and data from multiple sources, including imaging pixel data, structured laboratory data, unstructured narrative data, and, in some cases, audio or observational data. This is particularly true in medical image interpretation, where substantial clinical context is often essential to providing diagnostic decisions. For example, it has repeatedly been shown that a lack of access to clinical and laboratory data during image interpretation results in lower performance and decreased clinical utility for the referring provider 1,2. In a survey of radiologists, the majority (87%) stated that clinical information had a significant impact on interpretation 3. The importance of clinical context for accurate interpretation of imaging data is not limited to radiology; many other imaging-based medical specialties, such as pathology, ophthalmology, and dermatology, also rely on clinical data to guide image interpretation in practice 4,5,6. Pertinent and accurate information regarding current symptoms and past medical history enables physicians to interpret imaging findings in the appropriate clinical context, leading to a more relevant differential diagnosis, a more useful report for the physicians, and optimal outcomes for the patient.

In the current digital era, the volume of radiological imaging exams is growing. To meet this increased workload demand, an average radiologist may have to interpret an image every 3–4 s over an 8-h workday, which contributes to fatigue, burnout, and an increased error rate 7. Deep learning in healthcare is proliferating due to the potential for successful automated systems to either augment or offload cognitive work from busy physicians 8,9,10. One class of deep learning, the convolutional neural network (CNN), has proven very effective for image recognition and classification tasks and is therefore often applied to medical images. Early applications of CNNs for image analysis in medicine include diabetic retinopathy, skin cancer, and chest X-rays 11,12,13,14,15,16,17,18. Yet these models consider only the pixel data as a single input modality and cannot contextualize other clinical information as would be done in medical practice, which may ultimately limit clinical translation.

As an example, consider the seemingly simple task of identifying pneumonia on a chest radiograph, something many investigators have achieved by training deep learning models for automated detection and classification of pathologies on chest X-rays19,20. Yet without clinical context such as patient history, chief complaint, prior diagnoses, and laboratory values, such applications may ultimately have limited impact on clinical practice. The chest X-ray findings consistent with pneumonia are nonspecific, and accurate diagnosis requires the context of clinical and laboratory data. In other words, findings that suggest pneumonia would be accurate in a patient with fever and an elevated white blood cell count, but in another patient without those supporting clinical characteristics and laboratory values, similar imaging findings may instead represent other etiologies such as atelectasis, pulmonary edema, or even lung cancer. There are countless examples across medical fields in which clinical context, typically in the form of structured and unstructured clinical data from the electronic health record (EHR), is critical for accurate and clinically relevant medical imaging interpretation. As with human physicians, automated detection and classification systems that can successfully utilize medical imaging data together with clinical data from the EHR, such as patient demographics, previous diagnoses, and laboratory values, may lead to better performing and more clinically relevant models.

Multimodal deep learning models that can ingest pixel data along with other data types (fusion) have been successful in applications outside of medicine, such as autonomous driving and video classification. For example, a multimodal fusion detection system for autonomous vehicles that combines visual features from cameras with data from Light Detection and Ranging (LiDAR) sensors achieves significantly higher accuracy (a 3.7% improvement) than a single-modal CNN detection model21. Similarly, a multimodal social media video classification pipeline leveraging both visual and textual features increased classification accuracy to 88.0%, well above single-modality neural networks such as Google's InceptionV3, which reached 76.4% on the same task22. These performance gains echo the motivation in medicine: leveraging fusion strategies for medical imaging is driven primarily by the desire to integrate complementary contextual information and overcome the limitations of image-only models.

The recent medical imaging literature shows a similar trend, where both EHR and pixel data are leveraged in a "fusion paradigm" for solving complex tasks that cannot readily be tackled by a single modality (Fig. 1). This fusion paradigm covers a wide range of methodologies and techniques, with varying terms and model architectures, that have not been studied systematically. The purpose of this review is to present a comprehensive analysis of deep learning models that leverage multiple modalities for medical imaging tasks, define and consolidate relevant terminology, and summarize the results of state-of-the-art models in the current literature. We hope this review can inform future modeling frameworks and serve as a reference for researchers interested in the application of multimodal fusion in medical imaging.

Figure 1: Timeline showing growth in publications on deep learning for medical imaging, found by using the same search criteria on PubMed and Scopus. Fusion has so far constituted a small, but growing, subset of the medical deep learning literature.

Terminology and strategies in fusion

Data fusion refers to the process of joining data from multiple modalities to extract complementary, more complete information, yielding machine learning models that perform better than those trained on a single data modality.

Figure 2 illustrates the three main fusion strategies, namely early, joint, and late fusion. Here we define and describe each fusion strategy in detail:

Figure 2: Model architecture for different fusion strategies. Early fusion (left) concatenates original or extracted features at the input level. Joint fusion (middle) also joins features at the input level, but the loss is propagated back to the feature-extracting model. Late fusion (right) aggregates predictions at the decision level.

Early fusion23, commonly known as feature-level fusion, refers to the process of joining multiple input modalities into a single feature vector before feeding it into one machine learning model for training (Fig. 2, Early Fusion). Input modalities can be joined in many different ways, including concatenation, pooling, or a gated unit23,24. Fusing the original features constitutes early fusion type I, while fusing extracted features, whether from manual extraction, imaging analysis software, or the learned representation of another neural network, constitutes early fusion type II. We consider predicted probabilities to be extracted features, so fusing features with predicted probabilities from different modalities is also early fusion type II.
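To make the definition concrete, the following minimal sketch shows early fusion of pre-extracted features (type II); the array shapes, dummy data, and choice of a scikit-learn classifier are illustrative assumptions, not code from any reviewed study:

```python
# A minimal early fusion (type II) sketch; shapes, data, and the choice of
# classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
image_features = rng.random((100, 512))    # e.g., CNN-extracted imaging features
clinical_features = rng.random((100, 20))  # e.g., tabular EHR features
labels = rng.integers(0, 2, size=100)      # dummy binary outcome

# Early fusion: concatenate the modalities into one feature vector and
# train a single model on the joined representation.
fused = np.concatenate([image_features, clinical_features], axis=1)
model = LogisticRegression(max_iter=1000).fit(fused, labels)
probabilities = model.predict_proba(fused)[:, 1]
```

Fusing raw pixel values directly (type I) follows the same concatenation pattern, but the dimensionality mismatch between images and tabular data makes extracted features the more common choice in practice.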

Joint fusion (or intermediate fusion) is the process of joining learned feature representations from intermediate layers of neural networks with features from other modalities as input to a final model. The key difference, compared to early fusion, is that the loss is propagated back to the feature-extracting neural networks during training, thus creating better feature representations for each training iteration (Fig. 2, Joint Fusion). Joint fusion is implemented with neural networks due to their ability to propagate loss from the prediction model to the feature extraction model(s). When feature representations are extracted from all modalities, we consider this joint fusion type I. However, not all input features require the feature extraction step to be defined as joint fusion (Fig. 2, Joint Fusion, Type II).
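The back-propagation path that distinguishes joint fusion is easiest to see in code. Below is a minimal PyTorch sketch; the toy CNN, layer sizes, and input shapes are our own illustrative assumptions, not an architecture taken from the reviewed papers:

```python
# A minimal joint fusion (type I) sketch; the architecture is illustrative.
import torch
import torch.nn as nn

class JointFusionNet(nn.Module):
    def __init__(self, n_clinical: int, n_classes: int = 2):
        super().__init__()
        # Small CNN that learns an image representation end to end.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (batch, 16)
        )
        # Classifier over the concatenated image + clinical features.
        self.head = nn.Sequential(
            nn.Linear(16 + n_clinical, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, image, clinical):
        img_feat = self.cnn(image)                      # learned representation
        fused = torch.cat([img_feat, clinical], dim=1)  # feature-level join
        return self.head(fused)

model = JointFusionNet(n_clinical=20)
logits = model(torch.randn(8, 1, 64, 64), torch.randn(8, 20))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
# Because the CNN sits in the same computation graph as the classifier, this
# call propagates the loss back into the feature extractor -- the defining
# property of joint fusion.
loss.backward()
```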

Late fusion23 refers to the process of leveraging predictions from multiple models to make a final decision, which is why it is often known as decision-level fusion (Fig. 2, Late Fusion). Typically, different modalities are used to train separate models, and the final decision is made by an aggregation function that combines the predictions of the individual models. Examples of aggregation functions include averaging, majority voting, weighted voting, and a meta-classifier trained on the predictions of each model. The choice of aggregation function is usually empirical and varies with the application and input modalities.
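The aggregation step itself is typically only a few lines. The sketch below applies three of the aggregation functions named above to dummy probabilities from two hypothetical single-modality models; the threshold and weight are arbitrary choices for illustration:

```python
# A minimal late fusion sketch over dummy predicted probabilities.
import numpy as np

p_image = np.array([0.80, 0.30, 0.55])     # image-only model, 3 patients
p_clinical = np.array([0.60, 0.20, 0.70])  # EHR-only model, same patients

# Averaging: mean of the per-modality probabilities.
p_mean = (p_image + p_clinical) / 2

# Majority voting on thresholded predictions; with only two models a
# tie-breaking rule is needed (here, a tie counts as positive).
votes = np.stack([p_image >= 0.5, p_clinical >= 0.5])
vote_positive = votes.mean(axis=0) >= 0.5

# Weighted voting: trust one modality more than the other.
w = 0.7
p_weighted = w * p_image + (1 - w) * p_clinical
```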

Results

A total of 985 studies were identified through our systematic search. After removing duplicates and excluding studies based on title and abstract using our study selection criteria (see Methods), 44 studies remained for full-text screening. A total of 17 studies fulfilled our eligibility criteria and were included for systematic review and data extraction. All studies were in English except for a single paper in Chinese. Figure 3 presents a flowchart of the study screening and selection process, and Table 1 displays the included studies and extracted data.

Figure 3: Flowchart of study screening and selection. Two authors independently screened all records for eligibility; seventeen studies were included in the systematic review.

Early fusion

The majority of the studies that remained after full-text screening (11/17) used early fusion to join the multimodal input. Thung et al.25 conducted image-image fusion of PET and MRI images using a joint fusion approach, but since they concatenated clinical and imaging features into a single feature vector before feeding it into their neural network, we categorized their approach as early fusion. Six of the eleven early fusion studies extracted features from medical imaging using a CNN (Table 1). Four of these six simply concatenated the extracted imaging features with clinical features as their fusion strategy26,27,28,29; the remaining two, by Liu et al.30 and Nie et al.31, applied dimensionality reduction techniques before concatenating the features. Five studies used software-generated and/or manually extracted imaging features before fusing them with clinical data. Software-based feature extraction included radiomics features such as skewness and kurtosis32 or volume and thickness quantification of regions of interest25,33. Manually extracted features included radiological assessments such as the size, angle, and morphology of anatomical structures34. Of these five studies, two applied feature selection strategies to reduce the feature dimension and improve predictive performance; the employed strategies included a rank-based method using Gini coefficients32, a filter-based method based on the mutual information of the features35, and a genetic-algorithm-based method35. Seven of the early fusion studies compared the performance of their fusion models against single-modality models (Table 1). Six of these showed improved performance with fusion25,26,28,29,31,33, and the remaining one achieved the same performance with a reduced standard deviation27, suggesting better model stability.
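As an illustration of the filter-based approach mentioned above, the sketch below ranks fused features by mutual information with the label before the final classifier is trained; the dataset and the choice of k are assumptions for demonstration, not details from the cited study:

```python
# A minimal filter-based feature selection sketch before early fusion training.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(2)
X = rng.random((100, 60))         # fused imaging + clinical features
y = rng.integers(0, 2, size=100)  # dummy labels

# Keep the 20 features with the highest mutual information with the label,
# reducing dimensionality before the final model is fit.
selector = SelectKBest(mutual_info_classif, k=20).fit(X, y)
X_reduced = selector.transform(X)
```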

Joint fusion

Joint fusion was used in four of the seventeen studies. Spasov et al.36, Yala et al.37, and Yoo et al.38 implemented CNNs to learn image features and fused these feature representations with clinical features before feeding them into a feed-forward neural network. Spasov et al. and Yala et al. both used simple concatenation to fuse the learned imaging and clinical features. To account for the differences in dimensionality and dynamic range between the imaging and clinical features, Yoo et al. replicated and scaled their clinical features before fusion and observed improved performance. Kawahara et al.39 also used CNNs as feature extractors for imaging modalities but experimented with a unique multimodal, multi-task loss function that considers multiple combinations of the input modalities. The predicted probabilities of these multi-task outputs were aggregated for prediction, but we do not consider this late fusion since the probabilities did not come from separate models. Kawahara et al., Yala et al., and Yoo et al. reported improved performance using fusion compared to image-only models (Table 1). Yoo et al. further compared their joint fusion model to a late fusion model and achieved a 0.02 increase in the area under the receiver operating characteristic curve (AUROC).

Late fusion

Late fusion was used in three of the seventeen included studies (Table 1). Each of the three applied a different aggregation strategy. Yoo et al.38 took the mean of the predicted probabilities from two single-modality models as the final prediction. Reda et al.40 built another classifier using the single-modality models' prediction probabilities as inputs. Qiu et al.41 trained three independent imaging models, each taking as input a single MRI slice from a specific anatomical location; max, mean, and majority voting were applied to aggregate the predictions of the three imaging models, the results of the three aggregation methods were combined again by majority voting, and a final round of late fusion incorporated the clinical models. All late fusion models showed improved performance compared to single-modality models.

Discussion

The purpose of this review is to aggregate the collective knowledge of prior work applying multimodal deep learning fusion techniques that combine medical imaging with clinical data. We propose consistent terminology for multimodal fusion techniques and categorize prior work by fusion strategy. Overall, we found that multimodal fusion models generally led to increased accuracy (1.2–27.7%) and AUROC (0.02–0.16) over single-modality models for the same task. However, no single fusion strategy consistently led to optimal performance across all domains. Since our literature review shows that additional patient information and clinical context can improve model performance, and fusion methods better replicate the human expert interpretation workflow, we recommend experimenting with fusion strategies whenever multimodal data are available.

The reviewed deep learning fusion models represent a spectrum of medical applications ranging from radiology31 to hematology29. For example, fusion strategies were often applied to the diagnosis and prediction of Alzheimer's disease25,28,33,36,41. In clinical practice, neither imaging nor clinical data alone are sufficient to diagnose Alzheimer's disease. Leveraging deep learning fusion techniques consistently improved performance for Alzheimer's disease diagnosis, whereas physicians struggle to diagnose it accurately and reliably even when multimodal data are available, as demonstrated by histopathological correlation42. This highlights the importance and utility of multimodal fusion techniques in clinical applications.

Fusion approaches in other, less complex clinical applications also improved performance over single-modality models, even in settings where single-modality models have been widely reported to achieve high performance, such as pixel-based models for automated skin cancer detection43. While the fusion approaches varied widely, the consistent improvement in reported performance across a wide variety of clinical use cases suggests that the performance of single-modality models may not represent the state of the art for a given application when available multimodal data are left unused.

The complexity of the non-imaging data in the reviewed multimodal fusion work was limited, particularly relative to the feature-rich and time-series data available in the EHR. Most studies focused primarily on basic demographic information such as age and gender25,27,39, a limited range of categorical clinical history such as hypertension or smoking status32,34, or disease-specific clinical features known to be strongly associated with the disease of interest, such as APOE4 for Alzheimer's disease25,28,33,36 or the PSA blood test for prediction of prostate cancer40. While selecting features known to be associated with disease is meaningful, future work may benefit from utilizing larger volumes of feature-rich data, as seen in fields outside medicine such as autonomous driving44,45.

Implementation guidelines for fusion models

In most applications, early fusion was used as the first attempt at multimodal learning, a straightforward approach that does not necessarily require training multiple models. However, when the input modalities do not share the same dimensionality, which is typical when combining 1D clinical data with 2D or 3D imaging data, high-level imaging features must first be extracted as a 1D vector before fusing with the clinical data. The reviewed studies accomplished this in a variety of ways, including manually extracted imaging features and software-generated features25,32,33,34,35. It is worth noting that, unless there is a compelling reason for such an approach, outputs from the linear layers of a CNN are usually effective feature representations of the original image28,29,31, because learned feature representations often yield much better task-specific performance than manually or software-extracted features46. Based on the reviewed papers, early fusion consistently improved performance over single-modality models, and this review supports it as an initial strategy for fusing multimodal data.

When CNNs are used to extract features from imaging modalities, the same CNNs can also be used for joint fusion. However, joint fusion must be implemented with neural networks, which can be a limitation, especially with smaller datasets better suited to traditional machine learning models. For example, if there are disproportionately few samples relative to the number of features, or if some input features are sparsely represented, early or late fusion is preferable because they can be implemented with traditional machine learning algorithms (e.g., Lasso and ElasticNet47) that are better suited to such data48. Nevertheless, joint and early fusion neural networks can both learn shared representations, making it easier for the model to learn correlations across modalities and thereby improving performance49. Studies have also shown that fusing highly correlated features in earlier layers and less correlated features in deeper layers improves model performance50,51. In addition, we suspect that joint fusion models have the potential to outperform other fusion strategies, since the technique iteratively updates its feature representations to better complement each modality by simultaneously propagating the loss to all feature-extracting models. To date, however, there is insufficient evidence to systematically assess this effect in fusion for medical imaging, and it remains an important area for future exploration.
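As a sketch of what such a traditional-model fallback can look like (an illustration with assumed data shapes and hyperparameters, not a prescription), early-fused features from a small cohort can be fit with an ElasticNet-regularized linear classifier:

```python
# A minimal sketch: early fusion + an ElasticNet-regularized linear model,
# suited to small or sparse datasets; shapes and hyperparameters are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
imaging = rng.random((40, 30))    # few samples, pre-extracted imaging features
clinical = rng.random((40, 10))   # tabular EHR features
X_small = np.concatenate([imaging, clinical], axis=1)
y_small = rng.integers(0, 2, size=40)

# l1_ratio blends L1 (Lasso-like, sparsity-inducing) and L2 penalties.
clf = LogisticRegression(
    penalty="elasticnet", solver="saga", l1_ratio=0.5, max_iter=5000
).fit(X_small, y_small)
```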

When signals from different modalities do not complement each other, that is, when the input modalities inform the final prediction separately and have no inherent interdependency, a late fusion approach is preferred. This is chiefly because concatenating feature vectors from multiple modalities, as in early and joint fusion, generates high-dimensional vectors that machine learning models find difficult to learn without overfitting unless a large number of input samples are available, the so-called "curse of dimensionality" in machine learning52,53. Late fusion mitigates this problem by using multiple models that each specialize in a single modality, thus limiting the size of the input feature vector for each model. For example, the quantitative result of a Mini-Mental State Examination and the pixel data of a brain MRI (e.g., Qiu et al.41) are largely independent data and would therefore be suitable candidates for late fusion.

Furthermore, in the common real-world scenario of missing or incomplete data, i.e., when some patients have only clinical data and no imaging data or vice versa, late fusion retains the ability to make predictions. This is because late fusion employs separate models for separate modalities, and aggregation functions such as majority voting and averaging can be applied even when the predictions from one modality are missing. When the input modalities have very different numbers of features, predictions may be overly influenced by the most feature-rich modality (e.g., Reda et al.40). Late fusion is favorable in this scenario because it considers each modality separately. Yoo et al.38 also showed that repeating or scaling the modality with fewer features before fusion boosted model performance. Joint fusion can likewise be tuned to mitigate the difference in feature counts by setting the feature-producing linear layers of the feature extraction model to output a number of features similar to that of the other modalities. Our recommendations are summarized in Table 2.
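The two balancing tricks just described, replicating the low-dimensional modality (following Yoo et al.) and projecting the feature-rich modality down, fit in a few lines; the feature counts and tiling factor below are illustrative assumptions:

```python
# A minimal sketch of balancing feature counts before fusion.
import torch
import torch.nn as nn

img_feat = torch.randn(8, 512)   # feature-rich imaging representation
clin_feat = torch.randn(8, 4)    # only a handful of clinical features

# Option 1: tile the low-dimensional modality so it is not drowned out
# by the imaging features after concatenation.
clin_tiled = clin_feat.repeat(1, 128)     # -> (8, 512)

# Option 2: project the imaging features down to a comparable size with a
# linear layer inside a joint fusion model.
project = nn.Linear(512, 4)
img_small = project(img_feat)             # -> (8, 4)

fused = torch.cat([img_small, clin_feat], dim=1)
```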

Ideally, researchers should first build and optimize single-modality models, which serve both as baselines and as providers of inputs to fusion models. Multiple fusion strategies can then be implemented to compare model performance and guide subsequent fusion experiments. Since better performance is consistently achieved with multimodal fusion techniques, routine best practice should include reporting a systematic investigation of various fusion strategies in addition to deep learning architectures and hyperparameters.

Limitations

We devised our search string to consider only papers published after 2012. This constitutes a limitation, as we excluded earlier papers that applied fusion using traditional machine learning techniques or simple feed-forward neural networks. Publication bias is another important limitation, since positive results can be disproportionately reported in the published literature, which may have the aggregate effect of overrepresenting the advantages of fusion techniques. Furthermore, under our study selection criteria we only examined fusion techniques applied to clinical prediction and diagnosis, although we recognize that fusion can be applied to other important medical tasks such as segmentation and registration.

Because the included studies investigate different objectives, use different input modalities, report different performance metrics, and do not all provide confidence bounds, we were not able to aggregate or statistically compare the performance gains in a meta-analysis. In addition, the reported metrics cannot always be considered valid, since some studies did not use an independent test set for an unbiased performance estimate29,40. The limited number of studies per medical field and the heterogeneity of the studies also make qualitative comparison difficult. A few studies implemented fusion in unconventional ways, which may introduce subjectivity in our classification of each study as early, late, or joint fusion.

Future research

This systematic review found that multimodal fusion in medicine is a promising yet nascent field that complements the clinical practice of medical imaging interpretation across disciplines. We have defined and summarized key terminology and techniques and evaluated the state of the art for multimodal fusion in medical imaging, homing in on key insights and unexplored questions to guide task- and modality-specific strategies. The field of multimodal fusion for deep learning in medical imaging is expanding, and novel fusion methods are expected to be developed. Future work should focus on shared terminology and metrics, including direct evaluation of different multimodal fusion approaches where applicable. We found that multimodal fusion for automated medical imaging tasks broadly improves performance over single-modality models, and further work may uncover additional insights to inform optimal approaches.

Methods

This systematic review was conducted based on the PRISMA guidelines54.

Search strategy

A systematic literature search was implemented in PubMed and Scopus under the supervision of a licensed librarian. The key search terms combined three major themes: 'deep learning', 'multimodality fusion', and 'medical imaging'. Terms for segmentation, registration, and reconstruction were used as exclusion criteria in the search. The search encompassed papers published between 2012 and 2020, a range considered appropriate given the rise in popularity of applying CNNs to medical images since the 2012 ImageNet challenge. The complete search string for both databases is provided in the Supplementary Methods. For potentially eligible studies cited by articles already included in this review, additional targeted free-text searches were conducted on Google Scholar if they did not appear in Scopus or PubMed.

We included research articles in all languages that applied deep learning models for clinical outcome prediction or diagnosis using a combination of medical imaging modalities and EHR data. We restricted inclusion to deep learning rather than the broader field of machine learning because deep learning has consistently shown superior performance in image-related tasks. We selected only studies that fused medical imaging with EHR data since, unlike image-image fusion, this technique merges heterogeneous data types and adds complementary rather than overlapping information to inform prediction and diagnosis. We defined medical imaging modalities as any type of medical image used in clinical care. Studies that used deep learning only for feature extraction were also included. We excluded any study that combined extracted imaging features with the original imaging modality, as we still consider this a single modality. Articles that fused multimodal data for segmentation, registration, or reconstruction were also excluded under our criteria of outcome prediction and diagnosis. Articles from electronic preprint archives such as arXiv were excluded to ensure that only peer-reviewed papers were included. Lastly, papers whose quality was too poor to allow meaningful data extraction were also excluded.

Study selection

The Covidence software (www.covidence.org) was used for screening and study selection. After removal of duplicates, studies were screened based on title and abstract, and then full texts were obtained and assessed for inclusion. Study selection was performed by two independent researchers (S.-C.H. and A.P.), and disagreements were resolved through discussion. In cases where consensus could not be achieved, a third researcher was consulted (I.B.).

Data extraction

To benchmark the existing approaches, we extracted the following data from each selected article: (a) fusion strategy, (b) year of publication, (c) authors, (d) clinical domain, (e) target outcome, (f) fusion details, (g) imaging modality, (h) non-imaging modality, (i) number of samples, and (j) model performance (Table 1). We classified the specific fusion strategy according to the definitions in the section "Terminology and strategies in fusion". The number of samples reported is the full dataset size, including training, validation, and testing data. For classification tasks we extracted AUROC whenever it was reported; otherwise we extracted accuracy. When an article contained several experiments, metrics from the experiment with the best-performing fusion model were extracted. These items were extracted to enable researchers to find and compare current fusion studies in their medical field or input modalities of interest.

Data availability

The authors declare that all data supporting the findings of this study are available within the paper and its Supplementary information files.

References

1. Leslie, A., Jones, A. J. & Goddard, P. R. The influence of clinical information on the reporting of CT by radiologists. Br. J. Radiol. 73, 1052–1055 (2000).

2. Cohen, M. D. Accuracy of information on imaging requisitions: does it matter? J. Am. Coll. Radiol. 4, 617–621 (2007).

3. Boonn, W. W. & Langlotz, C. P. Radiologist use of and perceived need for patient data access. J. Digit. Imaging 22, 357–362 (2009).

4. Comfere, N. I. et al. Provider-to-provider communication in dermatology and implications of missing clinical information in skin biopsy requisition forms: a systematic review. Int. J. Dermatol. 53, 549–557 (2014).

5. Comfere, N. I. et al. Dermatopathologists' concerns and challenges with clinical information in the skin biopsy requisition form: a mixed-methods study. J. Cutan. Pathol. 42, 333–345 (2015).

6. Jonas, J. B. et al. Glaucoma. Lancet 390, 2183–2193 (2017).

7. McDonald, R. J. et al. The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Acad. Radiol. 22, 1191–1198 (2015).

8. Dean, N. C. et al. Impact of an electronic clinical decision support tool for emergency department patients with pneumonia. Ann. Emerg. Med. 66, 511–520 (2015).

9. Banerjee, I. et al. Development and performance of the Pulmonary Embolism Result Forecast Model (PERFORM) for computed tomography clinical decision support. JAMA Netw. Open 2, e198719 (2019).

10. Sandeep Kumar, E. & Jayadev, P. S. Deep learning for clinical decision support systems: a review from the panorama of smart healthcare. In Deep Learning Techniques for Biomedical and Health Informatics, Studies in Big Data (eds Dash, S. et al.) vol. 68, 79–99 (Springer, Cham, 2020).

11. Hinton, G. Deep learning—a technology with the potential to transform health care. JAMA 320, 1101 (2018).

12. Stead, W. W. Clinical implications and challenges of artificial intelligence and deep learning. JAMA 320, 1107 (2018).

13. Dunnmon, J. A. et al. Assessment of convolutional neural networks for automated classification of chest radiographs. Radiology 290, 537–544 (2019).

14. Irvin, J. et al. CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison. Proc. AAAI Conf. Artif. Intell. 33, 590–597 (2019).

15. Jaeger, S. et al. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quant. Imaging Med. Surg. 4, 475–477 (2014).

16. Johnson, A. E. W. et al. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data 6, 317 (2019).

17. Kallianos, K. et al. How far have we come? Artificial intelligence for chest radiograph interpretation. Clin. Radiol. 74, 338–345 (2019).

18. Gulshan, V. et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316, 2402 (2016).

19. Rajpurkar, P. et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 15, e1002686 (2018).

20. Majkowska, A. et al. Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology 294, 421–431 (2020).

21. Person, M., Jensen, M., Smith, A. O. & Gutierrez, H. Multimodal fusion object detection system for autonomous vehicles. J. Dyn. Syst. Meas. Control 141, 071017 (2019).

22. Trzcinski, T. Multimodal social media video classification with deep neural networks. In Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2018 (eds Romaniuk, R. S. & Linczuk, M.) (SPIE, 2018).

23. Ramachandram, D. & Taylor, G. W. Deep multimodal learning: a survey on recent advances and trends. IEEE Signal Process. Mag. 34, 96–108 (2017).

24. Kiela, D., Grave, E., Joulin, A. & Mikolov, T. Efficient large-scale multi-modal classification. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) (2018).

25. Thung, K.-H., Yap, P.-T. & Shen, D. Multi-stage diagnosis of Alzheimer's disease with incomplete multimodal data via multi-task deep learning. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (eds Cardoso, M. J. et al.) vol. 10553, 160–168 (Springer International Publishing, 2017).

26. Kharazmi, P., Kalia, S., Lui, H., Wang, Z. J. & Lee, T. K. A feature fusion system for basal cell carcinoma detection through data-driven feature learning and patient profile. Skin Res. Technol. 24, 256–264 (2018).

27. Yap, J., Yolland, W. & Tschandl, P. Multimodal skin lesion classification using deep learning. Exp. Dermatol. 27, 1261–1267 (2018).

28. Li, H. & Fan, Y. Early prediction of Alzheimer's disease dementia based on baseline hippocampal MRI and 1-year follow-up cognitive measures using deep recurrent neural networks. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) 368–371 (IEEE, 2019).

29. Purwar, S., Tripathi, R. K., Ranjan, R. & Saxena, R. Detection of microcytic hypochromia using CBC and blood film features extracted from convolution neural network by different classifiers. Multimed. Tools Appl. 79, 4573–4595 (2020).

30. Liu, M., Lan, J., Chen, X., Yu, G. & Yang, X. Bone age assessment model based on multi-dimensional feature fusion using deep learning. Acad. J. Second Mil. Med. Univ. 39, 909–916 (2018).

31. Nie, D. et al. Multi-channel 3D deep feature learning for survival time prediction of brain tumor patients using multi-modal neuroimages. Sci. Rep. 9, 1103 (2019).

32. Hyun, S. H., Ahn, M. S., Koh, Y. W. & Lee, S. J. A machine-learning approach using PET-based radiomics to predict the histological subtypes of lung cancer. Clin. Nucl. Med. 44, 956–960 (2019).

33. Bhagwat, N., Viviano, J. D., Voineskos, A. N., Chakravarty, M. M. & Alzheimer's Disease Neuroimaging Initiative. Modeling and prediction of clinical symptom trajectories in Alzheimer's disease using longitudinal data. PLoS Comput. Biol. 14, e1006376 (2018).

34. Liu, J. et al. Prediction of rupture risk in anterior communicating artery aneurysms with a feed-forward artificial neural network. Eur. Radiol. 28, 3268–3275 (2018).

35. An, G. et al. Comparison of machine-learning classification models for glaucoma management. J. Healthc. Eng. 2018, 1–8 (2018).

36. Spasov, S. E., Passamonti, L., Duggento, A., Lio, P. & Toschi, N. A multi-modal convolutional neural network framework for the prediction of Alzheimer's disease. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 1271–1274 (IEEE, 2018).

37. Yala, A., Lehman, C., Schuster, T., Portnoi, T. & Barzilay, R. A deep learning mammography-based model for improved breast cancer risk prediction. Radiology 292, 60–66 (2019).

38. Yoo, Y. et al. Deep learning of brain lesion patterns and user-defined clinical and MRI features for predicting conversion to multiple sclerosis from clinically isolated syndrome. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 7, 250–259 (2019).

39. Kawahara, J., Daneshvar, S., Argenziano, G. & Hamarneh, G. Seven-point checklist and skin lesion classification using multitask multimodal neural nets. IEEE J. Biomed. Health Inform. 23, 538–546 (2019).

40. Reda, I. et al. Deep learning role in early diagnosis of prostate cancer. Technol. Cancer Res. Treat. 17, 153303461877553 (2018).

41. Qiu, S. et al. Fusion of deep learning models of MRI scans, Mini-Mental State Examination, and logical memory test enhances diagnosis of mild cognitive impairment. Alzheimers Dement. Diagn. Assess. Dis. Monit. 10, 737–749 (2018).

42. Beach, T. G., Monsell, S. E., Phillips, L. E. & Kukull, W. Accuracy of the clinical diagnosis of Alzheimer disease at National Institute on Aging Alzheimer Disease Centers, 2005–2010. J. Neuropathol. Exp. Neurol. 71, 266–273 (2012).

43. Esteva, A. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115–118 (2017).

44. Hecker, S., Dai, D. & Van Gool, L. End-to-end learning of driving models with surround-view cameras and route planners. In Computer Vision – ECCV 2018 (eds Ferrari, V. et al.) vol. 11211, 449–468 (Springer International Publishing, 2018).

45. Jain, A., Singh, A., Koppula, H. S., Soh, S. & Saxena, A. Recurrent neural networks for driver activity anticipation via sensory-fusion architecture. In 2016 IEEE International Conference on Robotics and Automation (ICRA) 3118–3125 (IEEE, 2016).

46. Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).

47. Zou, H. & Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 67, 301–320 (2005).

48. Subramanian, V., Do, M. N. & Syeda-Mahmood, T. Multimodal fusion of imaging and genomics for lung cancer recurrence prediction. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI) 804–808 (IEEE, 2020).

49. Ngiam, J. et al. Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML) 689–696 (2011).

50. Karpathy, A. et al. Large-scale video classification with convolutional neural networks. In 2014 IEEE Conference on Computer Vision and Pattern Recognition 1725–1732 (IEEE, 2014).

51. Neverova, N., Wolf, C., Taylor, G. & Nebout, F. ModDrop: adaptive multi-modal gesture recognition. IEEE Trans. Pattern Anal. Mach. Intell. 38, 1692–1706 (2016).

52. Bach, F. Breaking the curse of dimensionality with convex neural networks. J. Mach. Learn. Res. 18, 1–53 (2017).

53. Mwangi, B., Tian, T. S. & Soares, J. C. A review of feature reduction techniques in neuroimaging. Neuroinformatics 12, 229–244 (2014).

54. Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G. & The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 6, e1000097 (2009).

Acknowledgements

The authors wish to thank John Alexander Borghi from Stanford Lane Medical Library for his help with creating the systematic search. The research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under Award Number R01LM012966. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Author information

These authors contributed equally: Shih-Cheng Huang, Anuj Pareek.

Authors and Affiliations

Department of Biomedical Data Science, Stanford University, Stanford, USA

Shih-Cheng Huang & Matthew P. Lungren

Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, USA

Shih-Cheng Huang, Anuj Pareek, Saeed Seyyedi, Imon Banerjee & Matthew P. Lungren

Department of Radiology, Stanford University, Stanford, USA

Anuj Pareek, Saeed Seyyedi & Matthew P. Lungren

Department of Biomedical Informatics, Emory University, Atlanta, USA

Imon Banerjee

Department of Radiology, Emory University, Atlanta, USA

Contributions

S.-C.H. and A.P. are co-first authors who contributed equally to this study. Concept and design: S.-C.H., A.P., M.P.L., and I.B. Study selection: S.-C.H. and A.P. Data extraction: S.-C.H., A.P., and S.S. Drafting of the manuscript: S.-C.H., A.P., I.B., and M.P.L. Critical revision of the manuscript for important intellectual content: S.-C.H., A.P., I.B., and M.P.L. Supervision: I.B. and M.P.L.

Corresponding author

Correspondence to Shih-Cheng Huang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Huang, S.-C., Pareek, A., Seyyedi, S. et al. Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. npj Digit. Med. 3, 136 (2020). https://doi.org/10.1038/s41746-020-00341-z

Received: 22 April 2020

Accepted: 17 September 2020

Published: 16 October 2020

DOI: https://doi.org/10.1038/s41746-020-00341-z




Yale Medicine Thesis Digital Library

Starting with the Yale School of Medicine (YSM) graduating class of 2002, the Cushing/Whitney Medical Library and the YSM Office of Student Research have collaborated on the Yale Medicine Thesis Digital Library (YMTDL) project, publishing the digitized full text of medical student theses on the web as a valuable byproduct of Yale student research efforts. The digital thesis deposit has been a graduation requirement since 2006. Starting in 2012, alumni of the Yale School of Medicine were invited to participate in the YMTDL project by granting scanning and hosting permission to the Cushing/Whitney Medical Library, which digitized the Library's print copy of their thesis or dissertation. A grant from the Arcadia Fund in 2017 provided the means for digitizing over 1,000 additional theses. If you are a member of the Yale community and need access to a thesis restricted to the Yale network, please make sure your VPN (virtual private network) is on.

Theses/Dissertations from 2024

Refractory Neurogenic Cough Management: The Non-Inferiority Of Soluble Steroids To Particulate Suspensions For Superior Laryngeal Nerve Blocks , Hisham Abdou

Percutaneous Management Of Pelvic Fluid Collections: A 10-Year Series , Chidumebi Alim

Behavioral Outcomes In Patients With Metopic Craniosynostosis: Relationship With Radiographic Severity , Mariana Almeida

Ventilator Weaning Parameters Revisited: A Traditional Analysis And A Test Of Artificial Intelligence To Predict Successful Extubation , John James Andrews

Developing Precision Genome Editors: Peptide Nucleic Acids Modulate Crispr Cas9 To Treat Autosomal Dominant Disease , Jem Atillasoy

Radiology Education For U.S. Medical Students In 2024: A State-Of-The-Art Analysis , Ryan Bahar

Out-Of-Pocket Spending On Medications For Diabetes In The United States , Baylee Bakkila

Imaging Markers Of Microstructural Development In Neonatal Brains And The Impact Of Postnatal Pathologies , Pratheek Sai Bobba

A Needs Assessment For Rural Health Education In United States Medical Schools , Kailey Carlson

Racial Disparities In Behavioral Crisis Care: Investigating Restraint Patterns In Emergency Departments , Erika Chang-Sing

Social Determinants Of Health & Barriers To Care In Diabetic Retinopathy Patients Lost To Follow-Up , Thomas Chang

Association Between Fine Particulate Matter And Eczema: A Cross-Sectional Study Of The All Of Us Research Program And The Center For Air, Climate, And Energy Solutions , Gloria Chen

Predictors Of Adverse Outcomes Following Surgical Intervention For Cervical Spondylotic Myelopathy , Samuel Craft

Genetic Contributions To Thoracic Aortic Disease , Ellelan Arega Degife

Actigraphy And Symptom Changes With A Social Rhythm Intervention In Young Persons With Mood Disorders , Gabriela De Queiroz Campos

Incidence Of Pathologic Nodal Disease In Clinically Node Negative, Microinvasive/t1a Breast Cancers , Pranammya Dey

Spinal Infections: Pathophysiology, Diagnosis, Prevention, And Management , Meera Madhav Dhodapkar

Children's Reentry To School After Psychiatric Hospitalization: A Qualitative Study , Madeline Digiovanni

Bringing Large Language Models To Ophthalmology: Domain-Specific Ontologies And Evidence Attribution , Aidan Gilson

Surgical Personalities: A Cultural History Of Early 20th Century American Plastic Surgery , Joshua Zev Glahn

Implications Of Acute Brain Injury Following Transcatheter Aortic Valve Replacement , Daniel Grubman

Latent Health Status Trajectory Modelling In Patients With Symptomatic Peripheral Artery Disease , Scott Grubman

The Human Claustrum Tracks Slow Waves During Sleep , Brett Gu

Patient Perceptions Of Machine Learning-Enabled Digital Mental Health , Clara Zhang Guo

Variables Affecting The 90-Day Overall Reimbursement Of Four Common Orthopaedic Procedures , Scott Joseph Halperin

The Evolving Landscape Of Academic Plastic Surgery: Understanding And Shaping Future Directions In Diversity, Equity, And Inclusion , Sacha C. Hauc

Association Of Vigorous Physical Activity With Psychiatric Disorders And Participation In Treatment , John L. Havlik

Long-Term Natural History Of Ush2a-Retinopathy , Michael Heyang

Clinical Decision Support For Emergency Department-Initiated Buprenorphine For Opioid Use Disorder , Wesley Holland

Applying Deep Learning To Derive Noninvasive Imaging Biomarkers For High-Risk Phenotypes Of Prostate Cancer , Sajid Hossain

The Hardships Of Healthcare Among People With Lived Experiences Of Homelessness In New Haven, Ct , Brandon James Hudik

Outcomes Of Peripheral Vascular Interventions In Patients Treated With Factor Xa Inhibitors , Joshua Joseph Huttler

Janus Kinase Inhibition In Granuloma Annulare: Two Single-Arm, Open-Label Clinical Trials , Erica Hwang

Medicaid Coverage For Undocumented Children In Connecticut: A Political History , Chinye Ijeli

Population Attributable Fraction Of Reproductive Factors In Triple Negative Breast Cancer By Race , Rachel Jaber Chehayeb

Evaluation Of Gastroesophageal Reflux And Hiatal Hernia As Risk Factors For Lobectomy Complications , Michael Kaminski

Health-Related Social Needs Before And After Critical Illness Among Medicare Beneficiaries , Tamar A. Kaminski

Effects Of Thoracic Endovascular Aortic Repair On Cardiac Function At Rest , Nabeel Kassam

Conditioned Hallucinations By Illness Stage In Individuals With First Episode Schizophrenia, Chronic Schizophrenia, And Clinical High Risk For Psychosis , Adam King

The Choroid Plexus Links Innate Immunity To Dysregulation Of Csf Homeostasis In Diverse Forms Of Hydrocephalus , Emre Kiziltug

Health Status Changes After Stenting For Stroke Prevention In Carotid Artery Stenosis , Jonathan Kluger

Rare And Undiagnosed Liver Diseases: New Insights From Genomic And Single Cell Transcriptomic Analyses , Chigoziri Konkwo

“Teen Health” Empowers Informed Contraception Decision-Making In Adolescents And Young Adults , Christina Lepore

Barriers To Mental Health Care In US Military Veterans , Connor Lewis

Barriers To Methadone For Hiv Prevention Among People Who Inject Drugs In Kazakhstan , Amanda Rachel Liberman

Unheard Voices: The Burden Of Ischemia With No Obstructive Coronary Artery Disease In Women , Marah Maayah

Partial And Total Tonsillectomy For Pediatric Sleep-Disordered Breathing: The Role Of The Cas-15 , Jacob Garn Mabey

Association Between Insurance, Access To Care, And Outcomes For Patients With Uveal Melanoma In The United States , Victoria Anne Marks

Urinary Vegf And Cell-Free Dna As Non-Invasive Biomarkers For Diabetic Retinopathy Screening , Mitchelle Matesva

Pain Management In Facial Trauma: A Narrative Review , Hunter Mccurdy

Meningioma Relational Database Curation Using A Pacs-Integrated Tool For Collection Of Clinical And Imaging Features , Ryan Mclean

Colonoscopy Withdrawal Time And Dysplasia Detection In Patients With Inflammatory Bowel Disease , Chandler Julianne Mcmillan

Cerebral Arachnoid Cysts Are Radiographic Harbingers Of Epigenetics Defects In Neurodevelopment , Kedous Mekbib

Regulation And Payment Of New Medical Technologies , Osman Waseem Moneer

Permanent Pacemaker Implantation After Tricuspid Valve Repair Surgery , Alyssa Morrison

Non-Invasive Epidermal Proteome-Based Subclassification Of Psoriasis And Eczema And Identification Of Treatment Relevant Biomarkers , Michael Murphy

Ballistic And Explosive Orthopaedic Trauma Epidemiology And Outcomes In A Global Population , Jamieson M. O'marr

Dermatologic Infectious Complications And Mimickers In Cancer Patients On Oncologic Therapy , Jolanta Pach

Distressed Community Index In Patients Undergoing Carotid Endarterectomy In Medicare-Linked Vqi Registry , Carmen Pajarillo

Preoperative Psychosocial Risk Burden Among Patients Undergoing Major Thoracic And Abdominal Surgery , Emily Park

Volumetric Assessment Of Imaging Response In The Pnoc Pediatric Glioma Clinical Trials , Divya Ramakrishnan

Racial And Sex Disparities In Adult Reconstructive Airway Surgery Outcomes: An Acs Nsqip Analysis , Tagan Rohrbaugh

A School-Based Study Of The Prevalence Of Rheumatic Heart Disease In Bali, Indonesia , Alysha Rose

Outcomes Following Hypofractionated Radiotherapy For Patients With Thoracic Tumors In Predominantly Central Locations , Alexander Sasse

Healthcare Expenditure On Atrial Fibrillation In The United States: The Medical Expenditure Panel Survey 2016-2021 , Claudia See

A Cost-Effectiveness Analysis Of Oropharyngeal Cancer Post-Treatment Surveillance Practices , Rema Shah

Machine Learning And Risk Prediction Tools In Neurosurgery: A Rapid Review , Josiah Sherman

Maternal And Donor Human Milk Support Robust Intestinal Epithelial Growth And Differentiation In A Fetal Intestinal Organoid Model , Lauren Smith

Constructing A Fetal Human Liver Atlas: Insights Into Liver Development , Zihan Su

Somatic Mutations In Aging, Paroxysmal Nocturnal Hemoglobinuria, And Myeloid Neoplasms , Tho Tran

Illness Perception And The Impact Of A Definitive Diagnosis On Women With Ischemia And No Obstructive Coronary Artery Disease: A Qualitative Study , Leslie Yingzhijie Tseng

Advances In Keratin 17 As A Cancer Biomarker: A Systematic Review , Robert Tseng

Regionalization Strategy To Optimize Inpatient Bed Utilization And Reduce Emergency Department Crowding , Ragini Luthra Vaidya

Survival Outcomes In T3 Laryngeal Cancer Based On Staging Features At Diagnosis , Vickie Jiaying Wang

Analysis Of Revertant Mosaicism And Cellular Competition In Ichthyosis With Confetti , Diana Yanez

A Hero's Journey: Experiences Using A Therapeutic Comicbook In A Children’s Psychiatric Inpatient Unit , Idil Yazgan

Prevalence Of Metabolic Comorbidities And Viral Infections In Monoclonal Gammopathy , Mansen Yu

Automated Detection Of Recurrent Gastrointestinal Bleeding Using Large Language Models , Neil Zheng

Vascular Risk Factor Treatment And Control For Stroke Prevention , Tianna Zhou

Theses/Dissertations from 2023

Radiomics: A Methodological Guide And Its Applications To Acute Ischemic Stroke , Emily Avery

Characterization Of Cutaneous Immune-Related Adverse Events Due To Immune Checkpoint Inhibitors , Annika Belzer

An Investigation Of Novel Point Of Care 1-Tesla Mri Of Infants’ Brains In The Neonatal Icu , Elisa Rachel Berson

Understanding Perceptions Of New-Onset Type 1 Diabetes Education In A Pediatric Tertiary Care Center , Gabriel BetancurVelez

Effectiveness Of Acitretin For Skin Cancer Prevention In Immunosuppressed And Non-Immunosuppressed Patients , Shaman Bhullar

Adherence To Tumor Board Recommendations In Patients With Hepatocellular Carcinoma , Yueming Cao

Clinical Trials Related To The Spine & Shoulder/elbow: Rates, Predictors, & Reasons For Termination , Dennis Louis Caruana

Improving Delivery Of Immunomodulator Mpla With Biodegradable Nanoparticles , Jungsoo Chang

Sex Differences In Patients With Deep Vein Thrombosis , Shin Mei Chan

Incorporating Genomic Analysis In The Clinical Practice Of Hepatology , David Hun Chung

Emergency Medicine Resident Perceptions Of A Medical Wilderness Adventure Race (medwar) , Lake Crawford

Surgical Outcomes Following Posterior Spinal Fusion For Adolescent Idiopathic Scoliosis , Wyatt Benajmin David

Representing Cells As Sentences Enables Natural Language Processing For Single Cell Transcriptomics , Rahul M. Dhodapkar

Life Vs. Liberty And The Pursuit Of Happiness: Short-Term Involuntary Commitment Laws In All 50 US States , Sofia Dibich

Healthcare Disparities In Preoperative Risk Management For Total Joint Arthroplasty , Chloe Connolly Dlott

Toll-Like Receptors 2/4 Directly Co-Stimulate Arginase-1 Induction Critical For Macrophage-Mediated Renal Tubule Regeneration , Natnael Beyene Doilicho

Associations Of Atopic Dermatitis With Neuropsychiatric Comorbidities , Ryan Fan

International Academic Partnerships In Orthopaedic Surgery , Michael Jesse Flores

Young Adults With Adhd And Their Involvement In Online Communities: A Qualitative Study , Callie Marie Ginapp

Becoming A Doctor, Becoming A Monster: Medical Socialization And Desensitization In Nazi Germany And 21st Century USA , SimoneElise Stern Hasselmo

Comparative Efficacy Of Pharmacological Interventions For Borderline Personality Disorder: A Network Meta-Analysis , Olivia Dixon Herrington


Radiology Research Paper Topics

Radiology research paper topics encompass a wide range of fascinating areas within the field of medical imaging. This page aims to provide students studying health sciences with a comprehensive collection of radiology research paper topics to inspire and guide their research endeavors. By delving into various categories and exploring ten thought-provoking topics within each, students can gain insights into the diverse research possibilities in radiology. From advancements in imaging technology to the evaluation of diagnostic accuracy and the impact of radiological interventions, these topics offer a glimpse into the exciting world of radiology research. Additionally, expert advice is provided to help students choose the most suitable research topics and navigate the process of writing a research paper in radiology. By leveraging iResearchNet’s writing services, students can further enhance their research papers with professional assistance, ensuring the highest quality and adherence to academic standards. Explore the realm of radiology research paper topics and unleash your potential to contribute to the advancement of medical imaging and patient care.

100 Radiology Research Paper Topics

Radiology encompasses a broad spectrum of imaging techniques used to diagnose diseases, monitor treatment progress, and guide interventions. This comprehensive list of radiology research paper topics serves as a valuable resource for students in the field of health sciences who are seeking inspiration and guidance for their research endeavors. The following ten categories highlight different areas within radiology, each containing ten thought-provoking topics. Exploring these topics will provide students with a deeper understanding of the diverse research possibilities and current trends within the field of radiology.

Diagnostic Imaging Techniques

  • Comparative analysis of imaging modalities: CT, MRI, and PET-CT.
  • The role of artificial intelligence in radiological image interpretation.
  • Advancements in digital mammography for breast cancer screening.
  • Emerging techniques in nuclear medicine imaging.
  • Image-guided biopsy: Enhancing accuracy and safety.
  • Application of radiomics in predicting treatment response.
  • Dual-energy CT: Expanding diagnostic capabilities.
  • Radiological evaluation of traumatic brain injuries.
  • Imaging techniques for evaluating cardiovascular diseases.
  • Radiographic evaluation of pulmonary nodules: Challenges and advancements.

Interventional Radiology

  • Minimally invasive treatments for liver tumors: Embolization techniques.
  • Radiofrequency ablation in the management of renal cell carcinoma.
  • Role of interventional radiology in the treatment of peripheral artery disease.
  • Transarterial chemoembolization in hepatocellular carcinoma.
  • Evaluation of uterine artery embolization for the treatment of fibroids.
  • Percutaneous vertebroplasty and kyphoplasty: Efficacy and complications.
  • Endovascular repair of abdominal aortic aneurysms: Long-term outcomes.
  • Interventional radiology in the management of deep vein thrombosis.
  • Transcatheter aortic valve replacement: Imaging considerations.
  • Emerging techniques in interventional oncology.

Radiation Safety and Dose Optimization

  • Strategies for reducing radiation dose in pediatric imaging.
  • Imaging modalities with low radiation exposure: Current advancements.
  • Effective use of dose monitoring systems in radiology departments.
  • The impact of artificial intelligence on radiation dose optimization.
  • Optimization of radiation therapy treatment plans: Balancing efficacy and safety.
  • Radioprotective measures for patients and healthcare professionals.
  • The role of radiology in addressing radiation-induced risks.
  • Evaluating the long-term effects of radiation exposure in diagnostic imaging.
  • Radiation dose tracking and reporting: Implementing best practices.
  • Patient education and communication regarding radiation risks.

Radiology in Oncology

  • Imaging techniques for early detection and staging of lung cancer.
  • Quantitative imaging biomarkers for predicting treatment response in solid tumors.
  • Radiogenomics: Linking imaging features to genetic profiles in cancer.
  • The role of imaging in assessing tumor angiogenesis.
  • Radiological evaluation of lymphoma: Challenges and advancements.
  • Imaging-guided interventions in the treatment of hepatocellular carcinoma.
  • Assessment of tumor heterogeneity using functional imaging techniques.
  • Radiomics and machine learning in predicting treatment outcomes in cancer.
  • Multimodal imaging in the evaluation of brain tumors.
  • Imaging surveillance after cancer treatment: Optimizing follow-up protocols.

Radiology in Musculoskeletal Disorders

  • Imaging modalities in the evaluation of sports-related injuries.
  • The role of imaging in diagnosing and monitoring rheumatoid arthritis.
  • Assessment of bone health using dual-energy X-ray absorptiometry (DXA).
  • Imaging techniques for evaluating osteoarthritis progression.
  • Imaging-guided interventions in the management of musculoskeletal tumors.
  • Role of imaging in diagnosing and managing spinal disorders.
  • Evaluation of traumatic injuries using radiography, CT, and MRI.
  • Imaging of joint prostheses: Complications and assessment techniques.
  • Imaging features and classifications of bone fractures.
  • Musculoskeletal ultrasound in the diagnosis of soft tissue injuries.

Neuroradiology

  • Advanced neuroimaging techniques for early detection of neurodegenerative diseases.
  • Imaging evaluation of acute stroke: Current guidelines and advancements.
  • Role of functional MRI in mapping brain functions.
  • Imaging of brain tumors: Classification and treatment planning.
  • Diffusion tensor imaging in assessing white matter integrity.
  • Neuroimaging in the evaluation of multiple sclerosis.
  • Imaging techniques for the assessment of epilepsy.
  • Radiological evaluation of neurovascular diseases.
  • Imaging of cranial nerve disorders: Diagnosis and management.
  • Radiological assessment of developmental brain abnormalities.

Pediatric Radiology

  • Radiation dose reduction strategies in pediatric imaging.
  • Imaging evaluation of congenital heart diseases in children.
  • Role of imaging in the diagnosis and management of pediatric oncology.
  • Imaging of pediatric gastrointestinal disorders.
  • Evaluation of developmental hip dysplasia using ultrasound and radiography.
  • Imaging features and management of pediatric musculoskeletal infections.
  • Neuroimaging in the assessment of pediatric neurodevelopmental disorders.
  • Radiological evaluation of pediatric respiratory conditions.
  • Imaging techniques for the evaluation of pediatric abdominal emergencies.
  • Imaging-guided interventions in pediatric patients.

Breast Imaging

  • Advances in digital mammography for early breast cancer detection.
  • The role of tomosynthesis in breast imaging.
  • Imaging evaluation of breast implants: Complications and assessment.
  • Radiogenomic analysis of breast cancer subtypes.
  • Contrast-enhanced mammography: Diagnostic benefits and challenges.
  • Emerging techniques in breast MRI for high-risk populations.
  • Evaluation of breast density and its implications for cancer risk.
  • Role of molecular breast imaging in dense breast tissue evaluation.
  • Radiological evaluation of male breast disorders.
  • The impact of artificial intelligence on breast cancer screening.

Cardiac Imaging

  • Imaging evaluation of coronary artery disease: Current techniques and challenges.
  • Role of cardiac CT angiography in the assessment of structural heart diseases.
  • Imaging of cardiac tumors: Diagnosis and treatment considerations.
  • Advanced imaging techniques for assessing myocardial viability.
  • Evaluation of valvular heart diseases using echocardiography and MRI.
  • Cardiac magnetic resonance imaging in the evaluation of cardiomyopathies.
  • Role of nuclear cardiology in the assessment of cardiac function.
  • Imaging evaluation of congenital heart diseases in adults.
  • Radiological assessment of cardiac arrhythmias.
  • Imaging-guided interventions in structural heart diseases.

Abdominal and Pelvic Imaging

  • Evaluation of hepatobiliary diseases using imaging techniques.
  • Imaging features and classification of renal masses.
  • Radiological assessment of gastrointestinal bleeding.
  • Imaging evaluation of pancreatic diseases: Challenges and advancements.
  • Evaluation of pelvic floor disorders using MRI and ultrasound.
  • Role of imaging in diagnosing and staging gynecological cancers.
  • Imaging of abdominal and pelvic trauma: Current guidelines and techniques.
  • Radiological evaluation of genitourinary disorders.
  • Imaging features of abdominal and pelvic infections.
  • Assessment of abdominal and pelvic vascular diseases using imaging techniques.

This comprehensive list of radiology research paper topics highlights the vast range of research possibilities within the field of medical imaging. Each category offers unique insights and avenues for exploration, enabling students to delve into various aspects of radiology. By choosing a topic of interest and relevance, students can contribute to the advancement of medical imaging and patient care. The provided topics serve as a starting point for students to engage in in-depth research and produce high-quality research papers.

Radiology: Exploring the Range of Research Paper Topics

Introduction: Radiology plays a crucial role in modern healthcare, providing valuable insights into the diagnosis, treatment, and monitoring of various medical conditions. As a dynamic and rapidly evolving field, radiology offers a wide range of research opportunities for students in the health sciences. This article aims to explore the diverse spectrum of research paper topics within radiology, shedding light on the current trends, innovations, and challenges in the field.

Radiology in Diagnostic Imaging: Diagnostic imaging is one of the core areas of radiology, encompassing various modalities such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and nuclear medicine. Research topics in this domain may include advancements in imaging techniques, comparative analysis of modalities, radiomics, and the integration of artificial intelligence in image interpretation. Students can explore how these technological advancements enhance diagnostic accuracy, improve patient outcomes, and optimize radiation exposure.
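
To make the radiomics idea concrete, here is a minimal, hypothetical sketch (Python with NumPy; not a validated pipeline such as PyRadiomics) of the kind of first-order intensity features such studies extract from a region of interest:

    import numpy as np

    def first_order_features(region: np.ndarray, n_bins: int = 64) -> dict:
        """Simple first-order radiomics statistics for a region of interest."""
        voxels = region.astype(np.float64).ravel()
        mean, std = voxels.mean(), voxels.std()
        hist, _ = np.histogram(voxels, bins=n_bins)
        p = hist / hist.sum()                    # discrete intensity distribution
        p = p[p > 0]                             # drop empty bins before the log
        return {
            "mean": mean,
            "std": std,
            "skewness": ((voxels - mean) ** 3).mean() / (std ** 3 + 1e-12),
            "entropy": float(-(p * np.log2(p)).sum()),  # Shannon entropy
        }

    # Hypothetical use on a synthetic 32x32 lesion patch:
    patch = np.random.default_rng(0).normal(100, 15, size=(32, 32))
    print(first_order_features(patch))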

Interventional Radiology: Interventional radiology focuses on minimally invasive procedures performed under image guidance. Research topics in this area can cover a wide range of interventions, such as angioplasty, embolization, radiofrequency ablation, and image-guided biopsies. Students can delve into the latest techniques, outcomes, and complications associated with interventional procedures, as well as explore the emerging role of interventional radiology in managing various conditions, including vascular diseases, cancer, and pain management.

Radiation Safety and Dose Optimization: Radiation safety is a critical aspect of radiology practice. Research in this field aims to minimize radiation exposure to patients and healthcare professionals while maintaining optimal diagnostic image quality. Topics may include strategies for reducing radiation dose in pediatric imaging, dose monitoring systems, the impact of artificial intelligence on radiation dose optimization, and radioprotective measures. Students can investigate how to strike a balance between effective imaging and patient safety, exploring advancements in dose reduction techniques and the implementation of best practices.
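
As a concrete illustration of dose estimation, the sketch below (Python) converts a CT dose-length product (DLP) into an approximate effective dose using published adult k-factors; the coefficients shown are typical textbook values for adults, and current local or ICRP guidance should always be checked before any clinical use:

    # Typical adult DLP-to-effective-dose conversion coefficients (mSv per mGy*cm);
    # illustrative values only - verify against current published guidance.
    K_FACTORS = {
        "head": 0.0021,
        "chest": 0.014,
        "abdomen_pelvis": 0.015,
    }

    def effective_dose_msv(dlp_mgy_cm: float, region: str) -> float:
        """Estimate effective dose (mSv) as E = k * DLP for an adult CT exam."""
        return K_FACTORS[region] * dlp_mgy_cm

    # Example: an adult chest CT with DLP = 350 mGy*cm gives roughly 4.9 mSv.
    print(f"{effective_dose_msv(350, 'chest'):.1f} mSv")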

Radiology in Oncology: Radiology plays a vital role in the diagnosis, staging, and treatment response assessment in cancer patients. Research topics in this area can encompass the use of imaging techniques for early detection, tumor characterization, response prediction, and treatment planning. Students can explore the integration of radiomics, machine learning, and molecular imaging in oncology research, as well as advancements in functional imaging and image-guided interventions.

Radiology in Neuroimaging: Neuroimaging is a specialized field within radiology that focuses on imaging the brain and central nervous system. Research topics in neuroimaging can cover areas such as stroke imaging, neurodegenerative diseases, brain tumors, neurovascular disorders, and functional imaging for mapping brain functions. Students can explore the latest imaging techniques, image analysis tools, and their clinical applications in understanding and diagnosing various neurological conditions.

Radiology in Musculoskeletal Imaging: Musculoskeletal imaging involves the evaluation of bone, joint, and soft tissue disorders. Research topics in this area can encompass imaging techniques for sports-related injuries, arthritis, musculoskeletal tumors, spinal disorders, and trauma. Students can explore the role of advanced imaging modalities such as MRI and ultrasound in diagnosing and managing musculoskeletal conditions, as well as the use of imaging-guided interventions for treatment.

Pediatric Radiology: Pediatric radiology focuses on imaging children, who have unique anatomical and physiological considerations. Research topics in this field may include radiation dose reduction strategies in pediatric imaging, imaging evaluation of congenital anomalies, pediatric oncology imaging, and imaging assessment of developmental disorders. Students can explore how to tailor imaging protocols for children, minimize radiation exposure, and improve diagnostic accuracy in pediatric patients.

Breast Imaging: Breast imaging is essential for the early detection and diagnosis of breast cancer. Research topics in this area can cover advancements in mammography, tomosynthesis, breast MRI, and molecular imaging. Students can explore topics related to breast density, imaging-guided biopsies, breast cancer screening, and the impact of artificial intelligence in breast imaging. Additionally, they can investigate the use of imaging techniques for evaluating breast implants and assessing high-risk populations.

Cardiac Imaging: Cardiac imaging focuses on the evaluation of heart structure and function. Research topics in this field may include imaging techniques for coronary artery disease, valvular heart diseases, cardiomyopathies, and cardiac tumors. Students can explore the role of cardiac CT, MRI, nuclear cardiology, and echocardiography in diagnosing and managing various cardiac conditions. Additionally, they can investigate the use of imaging in guiding interventional procedures and assessing treatment outcomes.

Abdominal and Pelvic Imaging: Abdominal and pelvic imaging involves the evaluation of organs and structures within the abdominal and pelvic cavities. Research topics in this area can encompass imaging of the liver, kidneys, gastrointestinal tract, pancreas, genitourinary system, and pelvic floor. Students can explore topics related to imaging techniques, evaluation of specific diseases or conditions, and the role of imaging in guiding interventions. Additionally, they can investigate emerging modalities such as elastography and diffusion-weighted imaging in abdominal and pelvic imaging.

Radiology offers a vast array of research opportunities for students in the field of health sciences. The topics discussed in this article provide a glimpse into the breadth and depth of research possibilities within radiology. By exploring these research areas, students can contribute to advancements in diagnostic accuracy, treatment planning, and patient care. With the rapid evolution of imaging technologies and the integration of artificial intelligence, the future of radiology research holds immense potential for improving healthcare outcomes.

Choosing Radiology Research Paper Topics

Introduction: Selecting a research topic is a crucial step in the journey of writing a radiology research paper. It determines the focus of your study and influences the impact your research can have in the field. To help you make an informed choice, we have compiled expert advice on selecting radiology research paper topics. By following these tips, you can identify a relevant and engaging research topic that aligns with your interests and contributes to the advancement of radiology knowledge.

  • Identify Your Interests: Start by reflecting on your own interests within the field of radiology. Consider which subspecialties or areas of radiology intrigue you the most. Are you interested in diagnostic imaging, interventional radiology, radiation safety, oncology imaging, or any other specific area? Identifying your interests will guide you in selecting a topic that excites you and keeps you motivated throughout the research process.
  • Stay Updated on Current Trends: Keep yourself updated on the latest advancements, breakthroughs, and emerging trends in radiology. Read scientific journals, attend conferences, and engage in discussions with experts in the field. By staying informed, you can identify gaps in knowledge or areas that require further investigation, providing you with potential research topics that are timely and relevant.
  • Consult with Faculty or Mentors: Seek guidance from your faculty members or mentors who are experienced in the field of radiology. They can provide valuable insights into potential research areas, ongoing projects, and research gaps. Discuss your research interests with them and ask for their suggestions and recommendations. Their expertise and guidance can help you narrow down your research topic and refine your research question.
  • Conduct a Literature Review: Conducting a thorough literature review is an essential step in choosing a research topic. It allows you to familiarize yourself with the existing body of knowledge, identify research gaps, and build a strong foundation for your study. Analyze recent research papers, systematic reviews, and meta-analyses related to radiology to identify areas that need further investigation or where controversies exist. (A small automation sketch follows this list.)
  • Brainstorm Research Questions: Once you have gained an understanding of the current state of research in radiology, brainstorm potential research questions. Consider the gaps or controversies you identified during your literature review. Develop research questions that address these gaps and contribute to the existing knowledge. Ensure that your research questions are clear, focused, and answerable within the scope of your study.
  • Consider the Practicality and Feasibility: When selecting a research topic, consider the practicality and feasibility of conducting the study. Evaluate the availability of resources, access to data, research facilities, and ethical considerations. Assess the time frame and potential constraints that may impact your research. Choosing a topic that is feasible within your given resources and time frame will ensure a successful and manageable research experience.
  • Collaborate with Peers: Consider collaborating with your peers or forming a research group to enhance your research experience. Collaborative research allows for the sharing of ideas, resources, and expertise, fostering a supportive environment. By working together, you can explore more complex research topics, conduct multicenter studies, and generate more impactful findings.
  • Seek Multidisciplinary Perspectives: Radiology intersects with various other medical disciplines. Consider exploring interdisciplinary research topics that integrate radiology with fields such as oncology, cardiology, neurology, or orthopedics. By incorporating multidisciplinary perspectives, you can address complex healthcare challenges and contribute to a broader understanding of patient care.
  • Choose a Topic with Clinical Relevance: Select a research topic that has direct clinical relevance. Focus on topics that can potentially influence patient outcomes, improve diagnostic accuracy, optimize treatment strategies, or enhance patient safety. By choosing a clinically relevant topic, you can contribute to the advancement of radiology practice and have a positive impact on patient care.
  • Address Ethical Considerations: Ensure that your research topic adheres to ethical considerations in radiology research. Patient privacy, confidentiality, and informed consent should be prioritized when conducting studies involving human subjects. Familiarize yourself with the ethical guidelines and regulations specific to radiology research and ensure that your study design and data collection methods are in line with these principles.
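
As one example of the literature-scan step mentioned above, the hedged sketch below (Python, standard library only) queries NCBI's public E-utilities esearch endpoint to count PubMed records for candidate topics; the endpoint and parameters follow NCBI's documented interface, while the queries themselves are purely illustrative:

    import json
    import urllib.parse
    import urllib.request

    def pubmed_count(query: str) -> int:
        """Return the number of PubMed records matching a query via NCBI esearch."""
        params = urllib.parse.urlencode(
            {"db": "pubmed", "term": query, "retmode": "json"})
        url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
               + params)
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return int(data["esearchresult"]["count"])

    # Compare how crowded two hypothetical thesis topics are:
    for topic in ('"dual-energy CT" AND "pulmonary embolism"',
                  'radiomics AND "treatment response"'):
        print(topic, "->", pubmed_count(topic), "records")

A quick count like this can signal whether a candidate topic is saturated or under-explored before you commit to a full review.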

Choosing a radiology research paper topic requires careful consideration and alignment with your interests, expertise, and the current trends in the field. By following the expert advice provided in this section, you can select a research topic that is engaging, relevant, and contributes to the advancement of radiology knowledge. Remember to consult with mentors, conduct a thorough literature review, and consider practicality and feasibility. With a well-chosen research topic, you can embark on an exciting journey of exploration, innovation, and contribution to the field of radiology.

How to Write a Radiology Research Paper

Introduction: Writing a radiology research paper requires a systematic approach and attention to detail. It is essential to effectively communicate your research findings, methodology, and conclusions to contribute to the body of knowledge in the field. In this section, we will provide you with valuable tips on how to write a successful radiology research paper. By following these guidelines, you can ensure that your paper is well-structured, informative, and impactful.

  • Define the Research Question: Start by clearly defining your research question or objective. It serves as the foundation of your research paper and guides your entire study. Ensure that your research question is specific, focused, and relevant to the field of radiology. Clearly articulate the purpose of your study and its potential implications.
  • Conduct a Thorough Literature Review: Before diving into writing, conduct a comprehensive literature review to familiarize yourself with the existing body of knowledge in your research area. Identify key studies, seminal papers, and relevant research articles that will support your research. Analyze and synthesize the literature to identify gaps, controversies, or areas for further investigation.
  • Develop a Well-Structured Outline: Create a clear and well-structured outline for your research paper. An outline serves as a roadmap and helps you organize your thoughts, arguments, and evidence. Divide your paper into logical sections such as introduction, literature review, methodology, results, discussion, and conclusion. Ensure a logical flow of ideas and information throughout the paper.
  • Write an Engaging Introduction: The introduction is the opening section of your research paper and should capture the reader’s attention. Start with a compelling hook that introduces the importance of the research topic. Provide background information, context, and the rationale for your study. Clearly state the research question or objective and outline the structure of your paper.
  • Describe a Rigorous Methodology: Describe your research methodology in detail, ensuring transparency and reproducibility. Explain your study design, data collection methods, sample size, inclusion/exclusion criteria, and statistical analyses. Clearly outline the steps you took to ensure scientific rigor and address potential biases. Include any ethical considerations and institutional review board approvals, if applicable.
  • Present Clear and Concise Results: Present your research findings in a clear, concise, and organized manner. Use tables, figures, and charts to visually represent your data. Provide accurate and relevant statistical analyses to support your results (see the sketch after this list). Explain the significance and implications of your findings and their alignment with your research question.
  • Analyze and Interpret Results: In the discussion section, analyze and interpret your research results in the context of existing literature. Compare and contrast your findings with previous studies, highlighting similarities, differences, and potential explanations. Discuss any limitations or challenges encountered during the study and propose areas for future research.
  • Ensure Clear and Coherent Writing: Maintain clarity, coherence, and precision in your writing. Use concise and straightforward language to convey your ideas effectively. Avoid jargon or excessive technical terms that may hinder understanding. Clearly define any acronyms or abbreviations used in your paper. Ensure that each paragraph has a clear topic sentence and flows smoothly into the next.
  • Citations and References: Properly cite all the sources used in your research paper. Follow the citation style recommended by your institution or the journal you intend to submit to (e.g., APA, MLA, or Chicago). Include in-text citations for direct quotes, paraphrased information, or any borrowed ideas. Create a comprehensive reference list at the end of your paper, following the formatting guidelines.
  • Revise and Edit: Take the time to revise and edit your research paper before final submission. Review the content, structure, and organization of your paper. Check for grammatical errors, spelling mistakes, and typos. Ensure that your paper adheres to the specified word count and formatting guidelines. Seek feedback from colleagues or mentors to gain valuable insights and suggestions for improvement.
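
To illustrate the results-reporting advice above, here is a minimal sketch (Python, with purely illustrative counts) that turns a 2x2 diagnostic contingency table into sensitivity and specificity with approximate 95% Wald confidence intervals; exact (Clopper-Pearson) intervals are generally preferred for small samples:

    import math

    def proportion_with_ci(successes: int, total: int):
        """A proportion with an approximate 95% Wald confidence interval."""
        p = successes / total
        se = math.sqrt(p * (1 - p) / total)      # standard error of a proportion
        return p, (p - 1.96 * se, p + 1.96 * se)

    tp, fn, tn, fp = 45, 5, 90, 10               # hypothetical reader-study counts
    sens, sens_ci = proportion_with_ci(tp, tp + fn)
    spec, spec_ci = proportion_with_ci(tn, tn + fp)
    print(f"Sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
    print(f"Specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")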

Conclusion: Writing a radiology research paper requires careful planning, attention to detail, and effective communication. By following the tips provided in this section, you can write a well-structured and impactful research paper in the field of radiology. Define a clear research question, conduct a thorough literature review, develop a strong outline, and present your findings with clarity. Remember to adhere to proper citation guidelines and revise your paper before submission. With these guidelines in mind, you can contribute to the advancement of radiology knowledge and make a meaningful impact in the field.

iResearchNet’s Writing Services

Introduction: At iResearchNet, we understand the challenges faced by students in the field of health sciences when it comes to writing research papers, including those in radiology. Our writing services are designed to provide you with expert assistance and support throughout your research paper journey. With our team of experienced writers, in-depth research capabilities, and commitment to excellence, we offer a range of services that will help you achieve your academic goals and ensure the success of your radiology research papers.

  • Expert Degree-Holding Writers: Our team consists of expert writers who hold advanced degrees in various fields, including radiology and health sciences. They possess extensive knowledge and expertise in their respective areas, allowing them to deliver high-quality and well-researched papers.
  • Custom Written Works: We understand that each research paper is unique, and we tailor our services to meet your specific requirements. Our writers craft custom-written research papers that align with your research objectives, ensuring originality and authenticity in every piece.
  • In-Depth Research: Research is at the core of any high-quality paper. Our writers conduct comprehensive and in-depth research to gather relevant literature, scientific articles, and other credible sources to support your research paper. They have access to reputable databases and libraries to ensure that your paper is backed by the latest and most reliable information.
  • Custom Formatting: Formatting your research paper according to the specified guidelines can be a challenging task. Our writers are well-versed in various formatting styles, including APA, MLA, Chicago/Turabian, and Harvard. They ensure that your paper adheres to the required formatting standards, including citations, references, and overall document structure.
  • Top Quality: We prioritize delivering top-quality research papers that meet the highest academic standards. Our writers pay attention to detail, ensuring accurate information, logical flow, and coherence in your paper. We conduct thorough editing and proofreading to eliminate any errors and improve the overall quality of your work.
  • Customized Solutions: We understand that every student has unique research requirements. Our services are tailored to provide customized solutions that address your specific needs. Whether you need assistance with topic selection, literature review, methodology, data analysis, or any other aspect of your research paper, we are here to support you at every step.
  • Flexible Pricing: We strive to make our services affordable and accessible to students. Our pricing structure is flexible, allowing you to choose the package that suits your budget and requirements. We offer competitive rates without compromising on the quality of our work.
  • Short Deadlines: We recognize the importance of meeting deadlines. Our team is equipped to handle urgent orders with short turnaround times. Whether you have a tight deadline or need assistance in a time-sensitive situation, we can deliver high-quality research papers within as little as three hours.
  • Timely Delivery: Punctuality is a priority for us. We understand the significance of submitting your research papers on time. Our writers work diligently to ensure that your paper is delivered within the agreed-upon timeframe, allowing you ample time for review and submission.
  • 24/7 Support: We provide round-the-clock support to address any queries or concerns you may have. Our customer support team is available 24/7 to assist you with any questions related to our services, order status, or any other inquiries you may have.
  • Absolute Privacy: We prioritize your privacy and confidentiality. Rest assured that all your personal information and research paper details are handled with the utmost discretion. We adhere to strict privacy policies to protect your identity and ensure confidentiality throughout the process.
  • Easy Order Tracking: We provide a user-friendly platform that allows you to easily track the progress of your order. You can stay updated on the status of your research paper, communicate with your assigned writer, and receive notifications regarding the completion and delivery of your paper.
  • Money Back Guarantee: We are committed to your satisfaction. In the rare event that you are not satisfied with the delivered research paper, we offer a money back guarantee. Our aim is to ensure that you are fully content with the final product and receive the value you expect.

At iResearchNet, we understand the challenges students face when it comes to writing research papers in radiology and other health sciences. Our comprehensive range of writing services is designed to provide you with expert assistance, customized solutions, and top-quality research papers. With our team of experienced writers, in-depth research capabilities, and commitment to excellence, we are dedicated to helping you succeed in your academic endeavors. Place your order with iResearchNet and experience the benefits of our professional writing services for your radiology research papers.

Unlock Your Research Potential with iResearchNet

Are you ready to take your radiology research papers to the next level? Look no further than iResearchNet. Our team of expert writers, in-depth research capabilities, and commitment to excellence make us the perfect partner for your academic success. With our range of comprehensive writing services, you can unlock your research potential and achieve outstanding results in your radiology studies.

Why settle for average when you can have exceptional? Our team of expert degree-holding writers is ready to work with you, providing custom-written research papers that meet your specific requirements. We delve deep into the world of radiology, conducting in-depth research and crafting well-structured papers that showcase your knowledge and expertise.

Don’t let the complexities of choosing a research topic hold you back. Our expert advice on selecting radiology research paper topics will guide you through the process, ensuring that you choose a topic that aligns with your interests and has the potential to make a meaningful contribution to the field of radiology.

It’s time to unleash your potential and achieve academic excellence in your radiology studies. Place your trust in iResearchNet and experience the exceptional quality and support that our writing services offer. Let us be your partner in success as you embark on your journey of writing remarkable radiology research papers.

Take the first step towards elevating your radiology research papers by contacting us today. Our dedicated support team is available 24/7 to assist you with any inquiries and guide you through the ordering process. Don’t settle for mediocrity when you can achieve greatness with iResearchNet. Unlock your research potential and exceed your academic expectations.



Abstract: Deep learning has been widely adopted in data-intensive clinical applications for diverse medical imaging datasets - from 2D radiographs to 3D magnetic resonance imaging (MRI), computerized tomography (CT) scans, and digital histopathology. However, model performance is often affected in low-resource scenarios such as when dealing with an insufficient number of samples, inadequate access to privileged information during inference, missing modalities during model training and inference, and limited annotated exemplars. Prior approaches are prone to overfitting, struggle to leverage privileged information effectively, fail to train with heterogeneous modality combinations, and are unable to adapt to novel domains without considerable retraining.

This thesis proposal presents several algorithmic innovations to address the aforementioned scenarios: (1) We develop dense feature extraction techniques such as spatio-temporal learning and contrastive learning, coupled with clinically-inspired domain information, to discover meaningful and robust patterns even in small sequential datasets and improve generalizability. (2) We explore recalibration methods that incorporate feature or distribution-matching losses to generate enhanced snapshot representations leveraging the limited temporal representations accessible in training. (3) We employ meta-adversarial learning in latent space to generate modality-agnostic representations that can train with heterogeneous imaging modalities and yet achieve segmentation performance on par with full-modal trained frameworks. (4) We embrace domain generalization via episodic meta-learning, integrating triplet and consistency-preserving losses, to design a domain-agnostic segmentation model with limited exemplars. Using these methods, we addressed several unmet clinical needs in radiology (using X-ray, MRI, CT) and digital pathology.
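
For readers unfamiliar with the contrastive learning mentioned in point (1), the sketch below (PyTorch) shows a standard InfoNCE-style loss in its generic textbook form; it is an illustration of the general technique, not the thesis's actual implementation:

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
        """Generic InfoNCE loss; z1, z2 are (N, D) embeddings of two views."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature   # (N, N) cosine similarities
        targets = torch.arange(z1.size(0))   # matching pairs lie on the diagonal
        return F.cross_entropy(logits, targets)

    # Hypothetical usage with random embeddings:
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce_loss(z1, z2).item())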

To conclude this thesis, we propose to further investigate the utility of privileged information such as radiologists' eye gaze towards superior medical report generation. The results of this work have been published at MICCAI, CVPR, ICCV, MIDL, etc.


Department of Circulation and Medical Imaging


Master's Thesis and Projects: Ultrasound Technology

The Department of Circulation and Medical Imaging offers projects and master's thesis topics for technology students from most of the technical study programmes at NTNU. There is a separate page for the supplementary specialisation courses.

List of topics

Topics for theses and projects are given below. Most of the topics can be adjusted to the student's qualifications and wishes.

Don't hesitate to contact the corresponding supervisor; we look forward to discussing the topics with you!


  • Blood flow imaging projects
  • Estimation of true flow velocity using ultrasound
  • Fusion of multi-modal cardiac data
  • Pocket-size ultrasound technology
  • Pulse-echo based method for estimation of speed of sound
  • Ultrasonic imaging through solids
  • SURF imaging topics
  • Ultrasound-mediated drug delivery
  • Real-time monitoring of left ventricular function under interventional procedures
  • Fighting cancer with CW shear-wave elastography
  • Adaptive clutter filtering for coronary heart disease
  • Patient-adaptive imaging in echocardiography
  • How to write a good abstract
  • How to write a good introduction


Lasse Løvstakken, Professor


Medical Imaging and Applications

The MAIA (Medical Imaging and Applications) Erasmus Mundus Joint Master Degree is coordinated by the University of Girona (UdG, Spain), with the University of Bourgogne (UB, France) and the Università degli Studi di Cassino e del Lazio Meridionale (UNICAS, Italy) as partners.

Why join MAIA?

Medical Image Analysis and Computer Aided Diagnosis (CAD) systems, in close development with novel imaging techniques, have revolutionised healthcare in recent years. These developments have allowed doctors to achieve a much more accurate diagnosis, at an early stage, of the most important diseases. The technology behind CAD systems stems from various research areas in computer science, such as artificial intelligence, machine learning, pattern recognition, computer vision, image processing, and sensors and acquisition. There is a clear lack of MSc studies covering these areas with a specific application to the analysis of medical images and the development of CAD systems within an integrated medical imaging background. Moreover, the medical technology industry has detected a growing need for expert graduates in this field. Join MAIA to be part of this revolution and impact your career!

This master’s degree is the right choice not just for holders of a bachelor’s degree in Informatics Engineering but also for graduates in closely related fields in either Engineering (e.g., Electrical, Industrial and Telecommunications Engineering, …) or Science (e.g., Mathematics and Physics, …) who pursue a deeper knowledge of information technology and its applications.

Two-year joint master's degree (120 ECTS)

STUDENT TESTIMONIAL

Why did you apply for the MAIA master's?

Interview Sheikh Adilina, 5th MAIA Promotion

I am a MAIA graduate and it has been one of the most amazing experiences of my life. During the MAIA master program, I experienced several academic and research institutions in France, Italy, Spain, and Switzerland. For me, MAIA not only built the research skills that prepared me for future opportunities in the domain of medical imaging and applications but also improved my interpersonal and management skills through extensive exposure to a rich multicultural environment. During the two years of the MAIA master's program, I became well-equipped academically and socially with the help of very cooperative colleagues and faculty members in all the participating universities. While working on several medical imaging modalities for diagnosis, I found my interest in histopathology image analysis. Therefore, I completed my thesis mainly focused on the stain heterogeneity in histopathology images for machine learning-based diagnosis. Currently, as a doctoral student, I am very glad that MAIA prepared me well enough to address the challenges in digital pathology at the Institute of Pathology, University of Bern, Switzerland.

The MAIA master has introduced me to practical applications of machine learning and image analysis to solve existing difficulties when dealing with analyzing medical images. In addition to boosting my professional career, MAIA has exposed me to an international community of students and researchers, and helped me to discover and learn about different cultures. Traveling to multiple cities has positively influenced the way I see and relate with my environment. For my thesis, I worked on soft tissue lesion detection in digital breast tomosynthesis using domain adaptation from mammograms in Screenpoint Medical (Nijmegen, The Netherlands).

MAIA has been one of the greatest experiences in my life. It gave me the opportunity to deepen into imaging analysis techniques and to collaborate with experts in the field all across Europe. It was also amazing to share my life with fellow MAIAns, professors and staff, learning from other cultures and ways to confront the world. Without any doubt a time I will always cherish.

I'm a graduate of the MAIA master's program. During my master's thesis, I worked on the development of a quality assessment framework for online brain MRI processing at the ViCOROB lab at the University of Girona. It was a wonderful experience to work with the three universities involved in this master's program. The courses taught and the life experience acquired during these two years have proved essential for my career. I am really satisfied with how this master's helped me in boosting my professional career. Currently, I'm enrolled as a PhD student in a Marie Curie project (B-Q MINDED).

Joining MAIA was definitely a life-changing experience. The program provided me with skills in the Medical Imaging field and allowed me to actively interact with leading researchers around Europe. During my thesis, I worked on cardiac MRI by proposing cutting-edge methodologies to assist radiologists. Currently, as a doctoral student, I am very pleased with how MAIA boosted my career and helped me reach my goals.

The MAIA master provided me with a stimulating environment to learn about different imaging techniques and image processing algorithms applicable in the medical field. The knowledge I obtained enabled me to develop frameworks for computer-aided diagnosis that I could put into practice during my internship. I was able to grow both professionally and personally, acquiring skills for my future career while visiting new countries and making lifelong bonds with my multicultural classmates. I am truly grateful for the new opportunities that have resulted from the completion of this challenging and rewarding master program.

I completed my Master's in Medical Imaging under MAIA. With the mobility between the three universities in France, Italy, and Spain, I had the opportunity to meet, work, and study under a diverse group of specialized researchers. This program is designed with specialized courses in the domain of medical imaging, with intense projects and lab work centered around learning how to replicate and develop state-of-the-art techniques. This program allowed me to work jointly with some of the biggest research groups working in medical imaging in Europe. Currently, I have started my PhD under a full fellowship at the University of British Columbia.

I did my PhD at UNICLAM (Cassino, Italy) where, in collaboration with RadboudUMC (Nijmegen, the Netherlands), I worked on the development of Computer Aided Diagnosis systems for digital mammography. As a side-project in collaboration with Allen Institute for Brain Science (Seattle, U.S.A.) and University Campus Bio-Medico of Rome (Rome, Italy), I also worked on ultra-Terabyte whole mouse brain image visualisation and assisted analysis. Now, I am happy to continue my work at UNICLAM as post-doctoral fellow and am extremely grateful to collaborate with world-leading researchers in their respective fields.

I did my PhD on medical imaging, particularly in breast ultrasound imaging, and I am currently working in a private R&D company. My PhD provided me with the necessary skills to start a new challenge in a completely different research topic.

I did my PhD on registration of multimodal prostate images from the University of Girona, Spain, and the University of Burgundy, France. It was a wonderful experience to work with the clinicians in Girona, and the professors from both universities were extremely helpful. Since completing my PhD in 2012, I have been working at CSIRO, Australia, as a post-doctoral fellow on brain image analysis. All these experiences in prostate and brain image analysis have helped me secure a new role as a senior research associate at Case Western Reserve University, Cleveland, Ohio, USA, which I look forward to starting towards the end of this year.

During my doctorate studies at the UdG, I’ve had an opportunity to work on a novel skin scanning system together with the world’s top dermatologists. It’s a great honor for me to continue this work as a post-doc researcher.

I did my PhD on medical imaging at the UdG and I currently work at the RadboudUMC (Nijmegen, the Netherlands) developing image analysis techniques to make breast cancer screening more effective. I am really satisfied with how my PhD research helped me in boosting my professional career.


Chemical Society Reviews

Immunological nanomaterials to combat cancer metastasis.


* Corresponding authors

a Department of Neurosurgery, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China E-mail: [email protected]

b Key Laboratory of Precise Treatment and Clinical Translational Research of Neurological Diseases, Hangzhou, Zhejiang, China

c Clinical Research Center for Neurological Diseases of Zhejiang Province, Hangzhou, China

d Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing, China

e Department of Neurosurgery, Neurosurgery Research Institute, The First Affiliated Hospital, Fujian Medical University, Fuzhou 350005, Fujian, China E-mail: [email protected]

f State Key Laboratory of Natural Medicines and Jiangsu Key Laboratory of Drug Discovery for Metabolic Diseases, Center of Advanced Pharmaceuticals and Biomaterials, China Pharmaceutical University, Nanjing, China E-mail: [email protected]

g Departments of Diagnostic Radiology, Surgery, Chemical and Biomolecular Engineering, and Biomedical Engineering, Yong Loo Lin School of Medicine and College of Design and Engineering, National University of Singapore, Singapore 119074, Singapore E-mail: [email protected]

h Clinical Imaging Research Centre, Centre for Translational Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117599, Singapore

i Nanomedicine Translational Research Program, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore

j Institute of Molecular and Cell Biology, Agency for Science, Technology, and Research (A*STAR), 61 Biopolis Drive, Proteos, Singapore, Singapore

k Theranostics Center of Excellence (TCE), Yong Loo Lin School of Medicine, National University of Singapore, 11 Biopolis Way, Helios, Singapore 138667, Singapore

Metastasis causes greater than 90% of cancer-associated deaths, presenting huge challenges for detection and efficient treatment of cancer due to its high heterogeneity and widespread dissemination to various organs. Therefore, it is imperative to combat cancer metastasis, which is the key to achieving complete cancer eradication. Immunotherapy as a systemic approach has shown promising potential to combat metastasis. However, current clinical immunotherapies are not effective for all patients or all types of cancer metastases owing to insufficient immune responses. In recent years, immunological nanomaterials with intrinsic immunogenicity or immunomodulatory agents with efficient loading have been shown to enhance immune responses to eliminate metastasis. In this review, we would like to summarize various types of immunological nanomaterials against metastasis. Moreover, this review will summarize a series of immunological nanomaterial-mediated immunotherapy strategies to combat metastasis, including immunogenic cell death, regulation of chemokines and cytokines, improving the immunosuppressive tumour microenvironment, activation of the STING pathway, enhancing cytotoxic natural killer cell activity, enhancing antigen presentation of dendritic cells, and enhancing chimeric antigen receptor T cell therapy. Furthermore, the synergistic anti-metastasis strategies based on the combinational use of immunotherapy and other therapeutic modalities will also be introduced. In addition, the nanomaterial-mediated imaging techniques (e.g., optical imaging, magnetic resonance imaging, computed tomography, photoacoustic imaging, surface-enhanced Raman scattering, radionuclide imaging, etc.) for detecting metastasis and monitoring anti-metastasis efficacy are also summarized. Finally, the current challenges and future prospects of immunological nanomaterial-based anti-metastasis are also elucidated with the intention to accelerate its clinical translation.

Graphical abstract: Immunological nanomaterials to combat cancer metastasis

Article information


Y. Pan, J. Cheng, Y. Zhu, J. Zhang, W. Fan and X. Chen, Chem. Soc. Rev., 2024, Advance Article, DOI: 10.1039/D2CS00968D



May 14, 2024


Ultrasound can help patients with a type of rheumatic disease lead longer and healthier lives

by Claes Björnberg, Umeå University


A dissertation at Umeå University shows that ultrasound can help patients with a type of rheumatic disease to live longer and healthier lives. These patients have so far had an increased risk of cardiovascular disease, which contributes to premature death.

Patients with a type of rheumatic disease called radiographic axial spondyloarthritis, a disease that causes changes in the bones of the pelvis and spine, have poorer health-related quality of life than control subjects.

"However, our investigations have found factors in these patients that, if changed, can help improve their health-related quality of life," says thesis author Lucy Law.

The thesis consists of 4 articles. In paper 1, patients had significantly lower overall health-related quality of life compared to the controls, especially in physical aspects. Factors such as longer duration of disease, poor physical function, high disease activity, and living alone were linked to lower physical health-related quality of life. Mental health-related quality of life was affected by fatigue, high disease activity, and living alone.

"We also saw some differences between the sexes in the patient group."

In paper 2, patients showed stiffness and less elasticity in a major artery in the neck compared to the control group. Disease-related factors and age were associated with these changes.

In addition, paper 3 revealed that patients, especially men, had thicker walls in this artery compared to controls. Blood tests showed that certain inflammatory markers were associated with this thickening, especially in men.

In paper 4, patients had thicker fat around the heart compared to the control group. In male patients, fat thickness was associated with cholesterol levels.

"The knowledge gained from this thesis can help to optimize and individualize patient care and management, as well as reduce the effect that the disease has on the patient and society. This is important because there is currently no cure," says Lucy Law.


Moran Introduces Oculoplastics and Medical Retina Fellowships

Robert Kersten, MD, FACS, FASOPRS, left, and Sudarshan “Sudi” Srivatsan, MD, the first oculoplastic fellow at the Moran Eye Center.

While John A. Moran Eye Center oculoplastic surgeon Robert Kersten, MD, FACS, FASOPRS, is renowned for his surgical skills, he’s also known as a top educator who has authored books and received awards for his work teaching and training the next generation of specialists.

At Moran, he directs an American Society of Oculofacial Plastic Surgery (ASOPRS)-approved fellowship, working with practicing fellow Sudarshan “Sudi” Srivatsan, MD.

One of only about 20 such fellowships offered each year in the United States, the two-year ASOPRS program is one of the nation’s most rigorous and came with Kersten when he joined the Moran faculty in 2023.

“The training has to meet strict criteria for approval based on the number and mix of surgical cases and the availability and cooperative work with surgeons in ancillary programs such as neurosurgery, otolaryngology, ocular pathology, and neuro-ophthalmology,” explains Kersten.

The ASOPRS requires fellows to author a designated number of articles, make national and international presentations, and undergo demanding testing each year. They must publish at least one original thesis and pass yearly written and oral board exams.

“Moran presents an excellent environment for exposure to a vast range of oculoplastic cases given the extensive referral area and additional hands-on training at the Salt Lake City Veterans Affairs Medical Center and Primary Children’s Hospital,” says Kersten.

“Fellows also have the advantage of training with my partner, Joon Kim, MD. We see challenging and reportable cases on an almost weekly basis.”

Srivatsan says, "It's been gratifying to work as a team with two of the pre-eminent oculoplastic surgeons in the country. I gain new insights into diagnosis and treatment every day. I'm nearly halfway through my fellowship and excited about what lies ahead."

Kersten has mentored hundreds of residents and over 40 fellows in his field and describes teaching as a calling.

“Sudi has been a dream to work with,” says Kersten. “He never fails to run important clinical findings to ground and frequently enlightens us with reviews of the recent literature as it pertains to challenging management cases. And he has a good sense of humor.”

Medical Retina Fellowship Addresses Growing Need

Steffen Schmitz-Valckenberg, MD, left, and Brian Solinsky, MD, the first medical retina fellow at the Moran Eye Center.

Retinal specialist and internationally recognized expert in age-related macular degeneration, Steffen Schmitz-Valckenberg, MD, oversees Moran’s new medical retina fellowship, a position being offered at a growing number of institutions nationwide.

“The medical management of several vitreous, retina, and choroidal diseases has seen major breakthroughs and important developments in recent years, along with an increasing number of patients with these conditions,” says Schmitz-Valckenberg. “With the medical retina fellowship, we can address the increasing need for well-trained specialists in the field. Moran offers a diverse and in-depth learning environment with a renowned retina faculty that shares a special interest in teaching and training young colleagues.”

The fellowship offers ample opportunities to intensively interact with the faculty, ophthalmology residents, and medical students. It includes conferences, journal clubs, and rounds. At Moran, trainees actively participate in imaging analysis at the Utah Retinal Reading Center, directed by Schmitz-Valckenberg. Fellows also have the opportunity to participate in local and international outreach with Moran's Global Outreach Division.

As Moran’s first medical retina fellow, Brian Solinsky, MD, has helped pave the way for future fellows by giving feedback to a faculty willing to listen and to shape the best clinical and research experience possible.

“The opportunities at Moran are unparalleled,” says Solinsky. “I don’t know of another program in the country where you get elite medical training, including time with oncology and uveitis, time for cataract surgery, and world-class research facilities. I’ve also been able to work with the global outreach team and had a chance to travel to Nepal, where I was able to teach, learn, and make some valuable connections.”

Schmitz-Valckenberg says, “We are happy to have Brian here at the Moran. He comes from a strong residency program and has quickly integrated into our team. The patients like him a lot, and it’s wonderful to see how he has progressed. I’m confident he will have a great future in the field.”


Related Theses and Resources on Medical Imaging

The following external theses, dissertations, and articles may serve as starting points for a literature review:


  1. Generalizable and Explainable Deep Learning in Medical Imaging with ...

    This dissertation investigates how to address these challenges, building generalizable and explainable deep learning models from small datasets. The thesis studies the impact on model performance of transferring prior knowledge learned from a non-medical source (ImageNet) to medical applications, especially when the dataset size is not ... (a minimal transfer-learning sketch appears after this list).

  2. AI in Medical Imaging Informatics: Current Challenges and Future ...

    Typical medical imaging examples (figure caption): (a) cine angiography X-ray image after injection of iodinated contrast; (b) an axial slice of a 4D, gated planning CT image taken before radiation therapy for lung cancer; (c) echocardiogram, four-chamber view, showing the four cardiac chambers (ventricular apex at the top); (d) first row: axial MRI slices in diastole (left), mid-systole (middle) ...

  3. Medical image analysis based on deep learning approach

    Medical imaging plays a significant role in different clinical applications, such as procedures used for early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. Basics of the principles and implementations of artificial neural networks and deep learning are essential for understanding medical image analysis in computer vision. Deep Learning ...

  4. Medical Image Segmentation with Deep Learning

    Wang, Chuanbo, "Medical Image Segmentation with Deep Learning" (2020). Theses and Dissertations, 2434. https://dc.uwm.edu/etd/2434. The thesis is available free and open access from UWM Digital Commons (a small Dice-loss sketch, a staple of segmentation work, appears after this list).

  5. Medical imaging

    Medical imaging comprises different imaging modalities and processes used to image the human body for diagnostic and treatment purposes. It is also used to follow the course of a disease already diagnosed ...

  6. A holistic overview of deep learning approach in medical imaging

    Medical images are a rich source of invaluable information used by clinicians. Recent technologies have introduced many advancements for exploiting this information to generate better analyses. Deep learning (DL) techniques have been applied to medical image analysis in computer-assisted imaging contexts, producing many solutions and improvements ...

  7. Machine learning and deep learning approach for medical image analysis

    Machine Learning (ML), a subset of AI, has accelerated much research related to the medical field, while Deep Learning (DL), a subset of ML, uses layered neural networks to learn the exact features required for disease detection [34, 71, 94]. Existing studies from 2014 to the present discuss many applications and algorithms ...

  8. Medical image analysis based on deep learning approach

    Medical imaging is a primary source of the information needed for clinical decisions. This paper discusses new algorithms and strategies in the area of deep learning. This brief introduction to DLA in medical image analysis has two objectives: the first is an introduction to the field of deep learning and the associated ...

  9. Enhancing Medical Imaging Workflows with Deep Learning (PDF)

    Thesis by Ken Chang, submitted to the Department of Health Sciences and Technology on May 28, 2020, in partial fulfillment ...

  10. Medical Imaging and Applications Master Thesis, July 2023 (PDF)

    Schizophrenia (SZ) and bipolar disorder (BD) typically manifest in the late teenage years or early 20s, while children at familial high risk may exhibit symptoms even earlier, often before the age of 12 (Robinson and Bergen, 2021; Thorup et al., 2015). Having a family history of BD or SZ is the strongest risk factor for developing these disorders and, according to a meta-analysis ...

  11. Patient Awareness and Knowledge of Medically Induced Radiation Exposure

    Patients' exposure to radiation has increased as medical imaging has expanded and new radiation technologies have arisen (Ditkofsky et al., 2016; Gargani & Picano, 2015; Sahiner et al., 2018). These procedures are essential in the medical profession because they serve several purposes, including the depiction and ...

  12. Fusion of medical imaging and electronic health records using deep ...

    In this paper, we describe different data fusion techniques that can be applied to combine medical imaging with EHR, and systematically review the medical data fusion literature published between 2012 ... (a minimal fusion sketch appears after this list).

  13. Master Thesis: Medical Image Analysis using Deep Learning

    This Master Thesis provides a summary overview of the use of current deep learning-based object detection methods for the analysis of medical images, in particular from microscopic tissue sections, and aims at making the results reproducible.

  14. Yale Medicine Thesis Digital Library

    The digital thesis deposit has been a graduation requirement since 2006. Starting in 2012, alumni of the Yale School of Medicine were invited to participate in the YMTDL project by granting scanning and hosting permission to the Cushing/Whitney Medical Library, which digitized the Library's print copy of their thesis or dissertation.

  15. Diagnostic and therapeutic radiography MSc dissertations (PDF)

    Development of a novel, interactive e-learning package on pelvic RT late effects, built using Articulate 360 software with support from the hospital's blended learning team. To remain engaging for users, interactive material was incorporated, including sliding scales and click-to-reveal boxes. To demonstrate the impact of late ...

  16. Radiology Research Paper Topics

    Explore the realm of radiology research paper topics and unleash your potential to contribute to the advancement of medical imaging and patient care. Radiology encompasses a broad spectrum of imaging techniques used to diagnose diseases, monitor treatment progress, and guide interventions ...

  17. Ph.D. Thesis Defense: 'Advancing Representation Learning in Imperfect ...'

    Abstract: Deep learning has been widely adopted in data-intensive clinical applications for diverse medical imaging datasets, from 2D radiographs to 3D magnetic resonance imaging (MRI), computerized tomography (CT) scans, and digital histopathology. However, model performance is often affected in low-resource scenarios, such as when dealing with an insufficient number of samples, inadequate ...

  18. Imaging without barriers

    Recent portable MRI developments have produced machines dedicated solely to imaging the brain, extremities, or single organs, such as the prostate, with limited applications in screening across the body. Zhao et al. used a different approach to redesign a low-field machine for whole-body imaging by placing two permanent magnet plates, above and below the body, in an open configuration.

  19. A Comparative Study of Medical Imaging Techniques

    The interesting techniques in this paper are: X-ray radiography, X-ray Computed Tomography (CT), Magnetic Resonance Imaging (MRI), ultrasonography, elastography, and optical imaging ...

  20. Department of Circulation and Medical Imaging

    The Department of Circulation and Medical Imaging offers projects and master's thesis topics for technology students from most of the technical study programmes at NTNU. There is a separate page for the supplementary specialisation courses.

  21. MAIA

    While working on several medical imaging modalities for diagnosis, I found my interest in histopathology image analysis. Therefore, I completed my thesis mainly focused on stain heterogeneity in histopathology images for machine learning-based diagnosis. Currently, as a doctoral student, I am very glad that MAIA prepared me well enough ...

  22. Immunological nanomaterials to combat cancer metastasis

    In addition, the nanomaterial-mediated imaging techniques (e.g., optical imaging, magnetic resonance imaging, computed tomography, photoacoustic imaging, surface-enhanced Raman scattering, radionuclide imaging, etc.) for detecting metastasis and monitoring anti-metastasis efficacy are also summarized. Finally, the current challenges and future ...
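
Entry 1 above centers on transferring prior knowledge from ImageNet to small medical datasets. As a minimal sketch of that general pattern, and explicitly not the dissertation's own code, the following PyTorch snippet fine-tunes an ImageNet-pretrained ResNet-18 on a small two-class image dataset; the folder path `data/cxr/train`, the class count, and the training schedule are illustrative assumptions.

```python
# Minimal transfer-learning sketch: ImageNet backbone -> small medical dataset.
# The dataset path and two-class setup are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing, so inputs match the distribution
# the pretrained weights were trained on.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/cxr/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():                   # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # fresh head for two classes
model = model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                         # short schedule for a small set
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and training only the new head is the usual first step when data are scarce; unfreezing the last residual block for a few further epochs is a common refinement.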

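Entry 4 is a dissertation on deep-learning segmentation. A staple objective in that literature is the soft Dice loss, which directly optimizes the overlap between the predicted and ground-truth masks; the sketch below is a generic illustration of that loss, not code from the cited thesis.

```python
# Soft Dice loss for binary segmentation (generic illustration).
import torch

def soft_dice_loss(logits: torch.Tensor,
                   target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """logits, target: (batch, H, W); target is a {0, 1} mask."""
    probs = torch.sigmoid(logits)
    dims = (1, 2)                                    # sum per image
    intersection = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)  # per-image Dice score
    return 1.0 - dice.mean()                         # minimize 1 - Dice

# Quick smoke test on random tensors:
logits = torch.randn(4, 128, 128)
mask = (torch.rand(4, 128, 128) > 0.5).float()
print(soft_dice_loss(logits, mask))
```

Unlike plain cross-entropy, the Dice objective is relatively insensitive to the severe class imbalance typical of medical masks, where the structure of interest occupies a small fraction of the image.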

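Entry 12 surveys techniques for fusing medical imaging with electronic health records. The simplest family it covers, feature-level fusion by concatenation, can be sketched in a few lines of PyTorch; the branch architectures and feature sizes below are arbitrary assumptions for illustration.

```python
# Feature-level (intermediate) fusion: an image branch and an EHR branch
# produce embeddings that are concatenated and classified jointly.
# All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ImageEHRFusion(nn.Module):
    def __init__(self, n_ehr_features: int = 20, n_classes: int = 2):
        super().__init__()
        self.image_branch = nn.Sequential(            # tiny CNN encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (batch, 32)
        )
        self.ehr_branch = nn.Sequential(              # MLP for tabular EHR
            nn.Linear(n_ehr_features, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, n_classes)     # joint classifier

    def forward(self, image: torch.Tensor, ehr: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_branch(image),
                           self.ehr_branch(ehr)], dim=1)
        return self.head(fused)

model = ImageEHRFusion()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 20))
print(logits.shape)  # torch.Size([4, 2])
```

Early fusion (concatenating raw inputs) and late fusion (averaging per-modality predictions) are the other two patterns such reviews typically compare against this intermediate approach.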


  27. Elektrostal

    Elektrostal. Elektrostal ( Russian: Электроста́ль) is a city in Moscow Oblast, Russia. It is 58 kilometers (36 mi) east of Moscow. As of 2010, 155,196 people lived there.