Role of artificial intelligence in head and neck surgery - A narrative review
Turk J Surg. Published online 16 April 2026.
1. Department of General Surgery, Shaukat Khanum Memorial Cancer Hospital & Research Centre, Lahore, Pakistan
2. Department of Urology, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
3. Department of Internal Medicine, NewCross Hospital Royal Wolverhampton Trust, West Midlands, United Kingdom
4. Department of Internal Medicine, Ipswich Hospital East Suffolk and North Essex NHS Trust, Suffolk, United Kingdom
5. Department of Otolaryngology, Mayo Hospital, Lahore, Pakistan
6. Department of Head & Neck Surgery, Shaukat Khanum Memorial Cancer Hospital & Research Centre, Lahore, Pakistan
7. Department of Head & Neck Surgery, Queen Elizabeth Hospital, Birmingham, United Kingdom
Received Date: 02.05.2025
Accepted Date: 12.03.2026
E-Pub Date: 16.04.2026

ABSTRACT

The advent of artificial intelligence (AI) in head and neck surgery has had a remarkable impact on various domains, including investigation, diagnosis, pre-operative decision-making, surgical planning, and rehabilitation. Head and neck surgery is practiced worldwide by otolaryngologists, maxillofacial surgeons, plastic surgeons, and general surgeons, and it presents a number of challenging aspects. This review highlights current advancements and future prospects in the field and its different applications in cancer detection, peri-operative care, and surgical planning. Machine learning (ML) and deep learning are the pillars of AI, enabling huge datasets to be analyzed with precision; this can enhance accuracy while minimizing the chance of error in clinical decision-making. On the other hand, numerous ethical concerns surround AI: patient privacy, data security, and the loss of empathy in complex decision-making are chief among them. As with any other advancement, the decision to adopt is taken after weighing risks and benefits. Therefore, instead of abandoning AI as a whole, we must establish regulatory authorities that can keep these issues in check, and this can only happen through a worldwide collaborative approach. We conducted this narrative review to summarize recent advancements in head and neck cancer surgery by searching PubMed, Scopus, and Google Scholar from January 2000 to July 2025 using predefined terms related to AI, ML, and head and neck surgery.

Keywords:
Cancer, endocrine surgery, excision, parathyroid, parathyroid surgery, robotic surgery, thyroid surgery

INTRODUCTION

Artificial intelligence (AI) has captivated medical researchers and practitioners since the advent of machine learning (ML) and deep learning (DL) around 1990 and 2010, respectively. The current abundance of health data and the desire to predict outcomes from these data have sparked significant interest in ML. It allows computers to learn from information and past experience, enabling them to make decisions without explicit programming for each task (Figure 1) (1). DL is a subset of ML in which neural networks learn successive layers of representations, ultimately providing non-linear predictions.

AI encompasses the capacity of machines to replicate human intelligence autonomously without direct programming. As such, AI is expected to revolutionize healthcare because of its ability to handle data on a massive scale. Currently, AI-based medical platforms support diagnosis, treatment, and prognostic assessments in head and neck surgery at many healthcare facilities worldwide (1). Since the early 1990s, AI has been increasingly used to analyze radiological and pathological images and predict clinical outcomes based on these images, histology, and other clinical variables (2).

Surgery is all about precision, and with the help of AI it is reaching new heights. Better computing power, the explosion of health data, and the development of massive datasets are driving AI's rapid advance in healthcare. It has the potential to significantly aid medical decision-making and to reduce cognitive biases and, ultimately, medical errors (3). This article examines the current applications of AI-based solutions in head and neck surgery. Given the breadth and rapid evolution of the field, we conducted a narrative review rather than a formal systematic review, emphasizing recent clinical applications and translational advances (Table 1). As a narrative review, it is susceptible to selection and publication bias. We mitigated these risks through multi-database searching, explicit inclusion and exclusion criteria, and prioritization of peer-reviewed primary studies; however, we did not undertake protocol registration or quantitative synthesis.

METHODOLOGY

A comprehensive search of the electronic databases PubMed, Scopus, and Google Scholar was performed, and relevant articles published from January 2000 to July 2025 on the applications of AI in head and neck surgery were identified. Combinations of the keywords “AI”, “ML”, “deep learning”, “neural networks”, “head and neck surgery”, “otolaryngology”, “oncology”, “robotic surgery”, and “diagnostic imaging” were used. We prioritized publications from the last 10 years together with landmark earlier work.

Studies were included if they reported the use of AI-based tools, algorithms, or platforms in the diagnosis, preoperative surgical planning, intraoperative assistance, or postoperative care of head and neck surgical patients. Only the articles published in English peer-reviewed journals that presented either original research, systematic reviews, meta-analyses, or significant case series with a focus on AI applications were included. Non-English publications, conference abstracts without full-text availability, and studies not specific to head and neck surgery or not involving AI-based technologies were excluded.

Two authors independently screened titles and abstracts for relevance, and full-text articles meeting the inclusion criteria were retrieved for detailed review. The findings from the included literature were synthesized qualitatively, with emphasis on identifying recurring themes, trends, and potential future directions for AI integration in head and neck surgical practice.

AI in Diagnosis of Head and Neck Cancers

The prognosis of head and neck cancers heavily depends on early detection and prompt treatment. Head and neck cancers are typically diagnosed at an advanced stage, contributing to a pronounced decrease in survival rates even with effective treatment. AI has notable potential for diagnostic and therapeutic management in head and neck oncology. Oral cancers, the 13th most common cancer worldwide, pose a significant disease burden that necessitates early detection (4). Oral cavity cancers are the most common cancers of the upper aerodigestive tract, with an increasing incidence rate. Detecting pre-malignant lesions can prevent their progression to cancer, but this is challenging for non-specialists.

Some ML tools enable detection through autofluorescence measurement or photography (5-7). Fu et al. (7) introduced a convolutional neural network (CNN) designed to identify oral cavity carcinomas in photographs. They trained it using 6176 images and achieved performance comparable to experts (Figure 2). A study conducted by Birur et al. (8) identified the potential of a CNN-enabled mobile health (mHealth) device as a powerful triaging tool for identifying suspicious oral lesions. This approach enhanced the screening proficiency of frontline workers in low-resource settings, leading to better disease outcomes (8). Deep learning-based algorithms have also been employed to detect and localize benign and malignant vocal cord lesions during live endoscopies, reducing the need for an invasive diagnostic approach (9).

In pathology, assessing prognosis and predicting treatment response often require additional tests such as immunohistochemistry or next-generation sequencing. AI, combined with digital pathology, could transform this process by identifying subvisual structural features beyond the human eye's ability to detect, creating new morphology-based biomarkers with prognostic and predictive value. The genotype-phenotype relationship suggests that genetic mutations in tumors often correlate with morphological changes. AI can predict molecular or clinical outcomes based on histomorphological data and is increasingly important in molecular pathology, for example using DL to classify DNA methylation profiles in lung squamous cell carcinomas and to differentiate metastasis from primary cancer. AI is also becoming crucial in multi-omics-based molecular tumor classification (10).

Identifying nodal metastasis (NM) and tumor extranodal extension (ENE) is a crucial aspect of head and neck cancer management. Currently, this can only be achieved using post-operative histopathology specimens. While there has been some success in the detection of NM on diagnostic radiological imaging, studies on the detection of radiologic ENE have shown less than satisfactory performance overall, with the area under the curve (AUC) of the receiver operating characteristic plot ranging between 0.65 and 0.69 (11). Kann et al. (12) designed a 3D CNN model and trained it on 2,875 computed tomography (CT)-segmented lymph nodes from 124 samples with corresponding pathology labels. It was later tested on a blinded set of 131 samples, where the model showed strong performance, achieving an AUC of 0.91 (95% confidence interval: 0.85-0.97) for predicting both ENE and NM (12). Furthermore, Seidler et al. (13) demonstrated that, using an AI-based approach, radiomic features of the primary tumor can be used for nodal status prediction. Several studies employed similar models, which proved highly effective in nodal identification (13, 14).
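The AUC values quoted above summarize discrimination as the probability that a model scores a truly positive node above a truly negative one (ties counting half). As an illustrative sketch only (the labels and scores below are toy values, not data from the cited studies), the empirical AUC can be computed directly from that definition:

```python
def roc_auc(labels, scores):
    """Empirical AUC: probability that a randomly chosen positive
    case receives a higher score than a randomly chosen negative
    case; tied scores count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: model scores for ENE-positive (1) vs ENE-negative (0) nodes
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # 11 of 12 positive-negative pairs ranked correctly
```

An AUC of 0.5 corresponds to chance-level ranking, while 1.0 means every positive node outranks every negative one; the published range of 0.65-0.69 for radiologic ENE sits only modestly above chance, which is why the 0.91 of the deep learning model is notable.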

Furthermore, the diagnostic utility of AI methods has been extensively explored across all imaging modalities, including ultrasound (US), CT, magnetic resonance imaging, and nuclear medicine (15). Hyperspectral imaging is a non-invasive tool used to detect cancers by analyzing the absorption, fluorescence emission, or reflectance spectrum of tissues (16). CNNs allow the detection and classification of features on panoramic radiographs (17, 18) and cone-beam images (19). Among these features, radiolucent bone lesions of the mandible and maxilla are the most studied (17, 18). While certain lesions can be treated with simple surgical procedures such as enucleation and curettage, others, such as ameloblastomas, may necessitate a more aggressive approach to minimize the chances of tissue damage, malignant change, and recurrence (20). Depending on their architecture, the algorithms used allow the automation of part or all of the process of detection, segmentation, feature extraction, and classification of lesions on imaging.

Human papillomavirus (HPV) significantly impacts the development of head and neck cancers, particularly in the oropharynx, leading to distinct clinical characteristics. Patients with HPV-positive disease generally have better outcomes than those who are HPV-negative, which influences their treatment approaches. Detecting HPV status typically involves invasive methods such as immunohistochemistry for p16 or direct testing for HPV DNA. However, non-invasive approaches combining head and neck imaging with ML algorithms have been explored to determine HPV status, and the results affirm the practicality of predicting HPV status in head and neck cancers (21-23).

Ultrasonography is the investigation of choice for diagnosing thyroid nodules, but its reliance on operator skill makes interpretation subjective. The surge in ultrasonography usage has overwhelmed radiologists, driving the need for automated data processing. ML has therefore been extensively trialled and proven effective for thyroid nodule detection and diagnosis (24, 25).

AI in Preoperative Planning

AI-enhanced preoperative planning marks a significant change in surgical preparation, providing surgeons with a data-driven method to make informed decisions and improve patient outcomes. Time constraints, decision fatigue, reliance on heuristic reasoning, case complexity, and bias all impair surgical decision-making, contributing to avoidable harm. Conventional decision support systems suffer from time-consuming manual data input and inadequate precision. Automated AI models powered by real-time electronic health record data can overcome these deficiencies. AI-powered algorithms can be helpful in surgical risk assessment because of their ability to compile and interpret large amounts of data. Furthermore, they could also be explored to formulate strategies to reduce patient mortality and morbidity (26, 27). Preoperative risk can be calculated from AI-augmented analysis of radiological imaging, which surgeons mainly use to gather more information about tumor morphology and to plan surgical approaches.

Bihorac et al. (28) developed an ML algorithm using electronic health record data that could predict the risk of certain complications and mortality at 1, 3, 6, 12, and 24 months after surgery (AUCs of 0.82-0.94). Researchers have also successfully pinpointed patients at higher risk of extended hospitalization and intricate therapeutic management after surgery. This outcome typically relies on factors such as operation duration, ischemia duration, transplantation, American Society of Anesthesiologists score, intensive care duration, and TNM stage, achieving accuracy levels of up to 97.92% (29).

Patients with dentofacial deformity (DFD) receive significant advantages throughout the orthodontic-surgical treatment process by integrating these new technologies, starting from the initial diagnosis and continuing through postoperative monitoring (30). Many studies have suggested using 3D imaging analysis tools to better understand complex DFD. By incorporating AI, these tools can quickly identify and interpret numerous bone and skin landmarks essential for comprehensive 3D analysis. This approach surpasses other computational techniques previously used (31). Most planning software now includes simulations of potential profile alteration to accommodate the increasing number of patients worried about their post-operative modification of appearance. However, the approximation models and bone-skin displacement ratios used by such software often have limitations when it comes to simulating changes in soft tissue (32).

A few other applications of AI in oral-maxillofacial surgery include computer-aided design and computer-aided manufacturing (CAD/CAM) (33-35). CAD/CAM uses digital imaging, three-dimensional (3D) photography, intraoral scans, and 3D printing to design and fabricate customized implants, prostheses, guides, and plates for reconstructing maxillofacial defects (36). Artificial neural networks (ANN), another domain of AI, can model complex non-linear relationships between inputs and outputs to perform tasks such as image recognition, natural language processing, speech synthesis, and generation. ANNs can be used to recognize facial features, emotions, expressions, and gestures, synthesize realistic speech and faces, generate captions and descriptions for images and videos, and create interactive avatars for maxillofacial surgery (33-35).

Early diagnosis of laryngeal carcinoma is crucial for preserving organ function. A study reviewed five databases for research on AI-based models assessing images of laryngeal lesions from endoscopy. The results showed AI's promising accuracy, sensitivity, and specificity, with values ranging from 0.806 to 0.997 overall and 0.91 for benign versus 0.94 for malignant lesions. AI performance improved with larger image databases and more pre-processing steps. To enhance AI's diagnostic quality, developing image evaluation standards and fostering multi-center collaboration for sharing image databases are essential to create high-performing models (37).

Elliott Range et al. (38) applied ML to thyroid fine needle aspiration biopsy (FNAB) whole-slide images (WSI) to identify regions of interest (ROI) and predict malignancy. Overall performance was comparable to expert assessments. However, the algorithm underperformed with indeterminate Bethesda categories, reinforcing a hybrid pathway where cytopathologist supervision remains crucial and AI assistance is restricted to clearly classifiable cases (Figure 3) (38).

AI Enhanced Surgical Techniques

A study conducted at the University of Pennsylvania, Philadelphia, aimed to develop a minimally invasive robotic surgery technique for treating neoplasms in the parapharyngeal space and infratemporal fossa. Using the da Vinci Surgical System, initial procedures were performed on cadavers and a dog, followed by a trial on a human patient. The robotic approach provided excellent visualization and successful tissue dissection with minimal complications. Similarly, studies have shown that the da Vinci Surgical Robot has been effectively used to excise tongue neoplasms and treat various mediastinal lesions (39-41). Modern robotic platforms increasingly incorporate AI-enabled features (e.g., augmented anatomy recognition, scene interpretation, and intraoperative decision support), highlighting a natural convergence between robotics and surgical AI.

Udelsman et al. (42) used preoperative and intraoperative parathyroid hormone (PTH) levels to predict cure rates in minimally invasive parathyroidectomy. This surgery targets a single adenoma identified through imaging. Sometimes, multiple adenomas go undetected, and PTH levels help ensure that all are removed. They transformed PTH data and used logistic regression to predict cures. Testing on 100 patients showed the model correctly predicted cures in 96.3% of single adenoma cases and 89.4% of multiple adenoma cases (42).
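At its core, the approach described above maps an intraoperative PTH measurement to a probability of biochemical cure via logistic regression. The sketch below fits such a model by gradient descent on entirely hypothetical data; the fractional-drop feature, the sample values, and the fitting hyperparameters are illustrative assumptions, not the published model of Udelsman et al.:

```python
import math

def fit_logistic(x, y, lr=0.5, epochs=5000):
    """Fit p(cure) = sigmoid(w * x + b) by batch gradient descent,
    where x is the fractional intraoperative PTH drop."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        gw = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            gw += (p - yi) * xi   # gradient of log-loss w.r.t. w
            gb += (p - yi)        # gradient of log-loss w.r.t. b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical training data: fractional PTH drop 10 minutes after
# excision vs. biochemical cure (1 = cured, 0 = persistent disease).
drops = [0.80, 0.72, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
cured = [1,    1,    1,    1,    0,    0,    0,    0]
w, b = fit_logistic(drops, cured)

def p_cure(drop):
    return 1.0 / (1.0 + math.exp(-(w * drop + b)))

print(round(p_cure(0.75), 2), round(p_cure(0.15), 2))
```

A large PTH drop yields a high predicted probability of cure, while a small drop flags the possibility of an unremoved second adenoma, mirroring the clinical logic of the cited study.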

Halicek et al. (43) developed a deep CNN classifier to distinguish benign from malignant head and neck tissue within excised cutaneous and thyroid specimens. Their model achieved excellent performance, with 97% sensitivity, 96% specificity, and 96% accuracy, and could be employed to refine resection margins accurately and, in turn, enhance prognostic outcomes (43).
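Headline figures such as the sensitivity, specificity, and accuracy quoted above all derive from the same confusion-matrix counts. A minimal sketch (toy labels, not the study's data) makes the definitions explicit:

```python
def confusion_metrics(y_true, y_pred):
    """Compute sensitivity, specificity, and accuracy from
    binary labels (1 = malignant, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of malignant tissue caught
    specificity = tn / (tn + fp)   # fraction of benign tissue cleared
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Toy example: ten specimens, one missed malignancy and one false alarm
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
print(confusion_metrics(y_true, y_pred))  # (0.8, 0.8, 0.8)
```

For margin assessment, sensitivity is the clinically critical number: a false negative leaves malignant tissue behind, whereas a false positive costs only extra resection.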

Quellec et al. (44) proposed that automated case retrieval could provide real-time warnings or recommendations during surgery by analyzing live surgical video. André et al. (45) focused on the emerging field of probe-based confocal laser endomicroscopy (pCLE). They noted that the classification of pathologies for pCLE is still being developed by doctors and suggested that accessing cases with existing annotations and matching histopathological diagnoses could assist surgeons in making immediate decisions, such as whether to biopsy tissue (45). They achieved 80.1% accuracy using a weighted k-NN on 1036 images from 54 patients and reported 94.2% accuracy with a similar method on 121 videos from 68 patients. André et al. (46) reached 96.7% AUC for the top concept and a 49.4 Kendall τ correlation on 118 videos from 66 patients. Tafresh et al. (49) obtained 89.9% accuracy and a 48.8% Spearman ρ correlation on 118 videos from 66 patients. Gu et al. (50) reported high accuracy (96.6% and 89.2%, respectively) on breast tissue datasets. Therefore, automated case retrieval and pCLE can assist head and neck surgical oncology by providing real-time recommendations and improving diagnostic accuracy during surgery.
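The weighted k-NN retrieval mentioned above can be sketched in a few lines: each stored, annotated case votes for its diagnosis with a weight inversely proportional to its distance from the query, so the most similar prior cases dominate the recommendation. The feature vectors and case library below are hypothetical placeholders, not real pCLE descriptors:

```python
import math
from collections import defaultdict

def weighted_knn(query, cases, k=3):
    """Weighted k-NN over feature vectors of stored cases.
    Each of the k nearest neighbours votes for its annotated
    diagnosis with weight 1 / distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbours = sorted(cases, key=lambda c: dist(query, c["features"]))[:k]
    votes = defaultdict(float)
    for c in neighbours:
        votes[c["diagnosis"]] += 1.0 / (dist(query, c["features"]) + 1e-9)
    return max(votes, key=votes.get), neighbours

# Hypothetical case library: 2D texture features -> annotated diagnosis
library = [
    {"features": (0.9, 0.1), "diagnosis": "malignant"},
    {"features": (0.8, 0.2), "diagnosis": "malignant"},
    {"features": (0.2, 0.9), "diagnosis": "benign"},
    {"features": (0.1, 0.8), "diagnosis": "benign"},
    {"features": (0.3, 0.7), "diagnosis": "benign"},
]
label, similar = weighted_knn((0.85, 0.15), library, k=3)
print(label)  # the two very close malignant cases outvote the distant benign one
```

Returning the matched neighbours alongside the label is what makes this retrieval rather than pure classification: the surgeon can inspect the similar annotated cases, not just a verdict.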

AI in Thyroid and Parathyroid Surgery

In endocrine head and neck surgery, AI has rapidly advanced diagnostic imaging, cytopathology, and intraoperative decision support. On US, multiple groups report automated or AI-augmented TI-RADS workflows that improve efficiency and reduce inter-reader variability, and DL models have demonstrated potential to aid nodule detection, feature scoring, and malignancy prediction beyond manual TI-RADS alone (51-53). Beyond B-mode classification, CNNs trained on large US datasets achieve performance approaching that of expert readers for malignancy prediction and segmentation, and recent meta-analytic efforts continue to confirm the feasibility of such models (54-56).
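The TI-RADS workflows these systems automate ultimately reduce to assigning points across sonographic feature categories and mapping the total to a risk level. The sketch below encodes the commonly published ACR TI-RADS point scheme for illustration only; it is a simplified sketch, not validated clinical software, and the example nodule is hypothetical:

```python
# Simplified ACR TI-RADS point scheme (illustrative sketch only;
# consult the official ACR documentation for clinical use).
POINTS = {
    "composition": {"cystic": 0, "spongiform": 0, "mixed": 1, "solid": 2},
    "echogenicity": {"anechoic": 0, "hyperechoic": 1, "isoechoic": 1,
                     "hypoechoic": 2, "very_hypoechoic": 3},
    "shape": {"wider_than_tall": 0, "taller_than_wide": 3},
    "margin": {"smooth": 0, "ill_defined": 0, "lobulated": 2,
               "irregular": 2, "extrathyroidal_extension": 3},
    "echogenic_foci": {"none": 0, "comet_tail": 0, "macrocalcification": 1,
                       "peripheral_rim": 2, "punctate": 3},
}

def tirads_level(features):
    """Sum category points and map the total to a TR risk level."""
    total = sum(POINTS[cat][val] for cat, val in features.items())
    if total == 0:
        return total, "TR1"
    if total <= 2:
        return total, "TR2"
    if total == 3:
        return total, "TR3"
    if total <= 6:
        return total, "TR4"
    return total, "TR5"

# Hypothetical nodule: solid, hypoechoic, with punctate echogenic foci
nodule = {"composition": "solid", "echogenicity": "hypoechoic",
          "shape": "wider_than_tall", "margin": "smooth",
          "echogenic_foci": "punctate"}
print(tirads_level(nodule))  # (7, 'TR5')
```

The AI-augmented workflows cited above effectively automate the feature-assignment step (the dictionary lookups here), which is exactly where inter-reader variability arises in manual scoring.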

In cytopathology, AI applied to WSI of thyroid FNAB can identify ROI and assist in diagnosis and risk prediction, with performance comparable to pathologists. Such applications have potential use in indeterminate Bethesda categories, and clinical implementation could help reduce cytopathologists' workload (57).

AI-enhanced anatomical recognition during endoscopic thyroidectomy has shown remarkable accuracy for recurrent laryngeal nerve identification and could complement conventional intraoperative nerve monitoring (58). Moreover, during the postoperative course, ML-based risk calculators have been validated to predict early hypocalcemia after total thyroidectomy, enabling selective calcium supplementation pathways (59). For gland preservation, near-infrared autofluorescence, including probe-based systems (e.g., PTeye), improves parathyroid identification and may reduce hypocalcemia (60).

AI in Post-operative Care and Monitoring

Complications following head and neck surgery can affect both surgical outcomes and the timely administration of any necessary adjuvant treatment (61). AI is an emerging tool that forecasts survival rates, surgical outcomes, and potential complications, providing tailored strategies for individual patients. When used thoughtfully, AI has the potential to transform postoperative surgical care (62). AI can predict post-operative outcomes and the length of hospital stays (63). ML-based risk analysis of major post-operative events in head and neck surgery has not only helped predict complications but also indicated the level of care patients require after surgery (64). Kordzadeh et al. (65) applied AI to detect life-threatening complications after endovascular aneurysm repair.

Using thermal imaging, near-infrared spectroscopy, flow couplers, and implantable Dopplers for free flap viability monitoring in reconstructive surgery has proven superior to clinical assessment, thus improving salvage rates (66). Cardiorespiratory events are among the most common complications after surgery. While inpatient vital-sign monitoring is done periodically, often at 4-6-hour intervals, portable wireless monitoring devices provide continuous monitoring and timely detection of complications for each patient. This benefits patient management and is cost-effective, preventing extended hospital stays secondary to complications (67). Hence, AI uses integrated data and precise calculations to help predict and monitor post-operative complications and can be utilized for better surgical outcomes.
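Continuous monitoring systems of the kind described above typically compare each incoming reading against fixed limits and a rolling baseline, so that both absolute deviations and sudden changes trigger an alert. The sketch below is a deliberately minimal illustration with made-up thresholds (`spo2_floor`, `hr_jump`); it is not clinical guidance:

```python
from collections import deque

def make_vitals_monitor(window=30, spo2_floor=92, hr_jump=25):
    """Minimal continuous-monitoring alert: flag a reading when SpO2
    falls below a fixed floor, or when heart rate jumps well above
    the rolling baseline of recent readings. All thresholds here are
    illustrative assumptions, not validated clinical limits."""
    history = deque(maxlen=window)

    def check(hr, spo2):
        baseline = sum(history) / len(history) if history else hr
        alert = spo2 < spo2_floor or hr > baseline + hr_jump
        history.append(hr)
        return alert

    return check

check = make_vitals_monitor()
# Simulated stream of (heart rate, SpO2) readings from a wireless sensor
stream = [(78, 98), (80, 97), (82, 98), (79, 97), (110, 96), (81, 90)]
alerts = [check(hr, spo2) for hr, spo2 in stream]
print(alerts)  # the tachycardic spike and the desaturation both trigger alerts
```

Real systems replace these hand-set rules with learned models, but the structure is the same: a per-patient baseline, continuous comparison, and an alert channel that summons staff long before the next scheduled observation round.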

Challenges and Limitations

The limitations of AI in head and neck surgery, as well as medicine in general, are widely debated. Ethical considerations and patient privacy are the primary challenges in integrating AI into healthcare. Technical limitations related to data and analysis include (a) slight differences in training and testing dataset sizes, (b) varying performance metrics during regression or classification tasks, and (c) different validation methods, with some research lacking data validation (1).

An evident illustration of the ethical issues with AI can be seen in facial recognition software, which has the potential to perpetuate racial bias and pose significant risks in the field of plastic surgery (68). Additionally, there are potential risks, such as cyber-attacks on AI systems and the emergence of bias within the complex digital network. Addressing these challenges may require advanced techniques (69).

There are numerous challenges to the widespread implementation of AI as a diagnostic aid. Notably, the Royal College of Surgeons released a report highlighting several important issues regarding widespread adoption (70). There is growing concern among healthcare professionals and patients about the potential replacement of medical personnel by machines; a particular worry is the loss of the sensitive personal interaction that characterizes the doctor-patient relationship (71). Data governance and ethics remain a significant concern, as large amounts of patient data could be exposed to risk (72). Additionally, there is the "black box problem": people cannot see how AI arrives at its decisions, and this lack of transparency fuels scepticism about AI (73). The problem is compounded by an algorithm's heavy dependence on the information available to it. However complex they may be in making determinations, AI systems are intended to aid humans rather than replace them entirely (74).

Healthcare data segregation and limited access make it difficult to create effective AI systems. Protecting patients' confidential information should be given priority because there is a risk of unauthorized access to these data. AI algorithms biased by over-fitting or poor-quality data can lead to unfair results, worsening health inequities through misleading diagnoses. Many AI systems lack transparency, raising accountability concerns at various levels. Ethical concerns relate to responsibility and standards for AI-assisted decision-making, as well as fears of job losses within healthcare (75). To meet these challenges, we need to improve data sharing, enforce more robust security protocols, employ strategies that counteract bias in algorithmic decision-making, ensure transparency in systems where AI is used, create ethical frameworks for AI development, and validate its applications through empirical methods (76).

Future Directions and Opportunities

Advancements in preoperative preparation have streamlined the process from pre-surgery to post-surgery, making it more efficient and effective. While this may lead to higher patient expenses, the benefits include reduced operation times and shorter hospital stays. Future research could focus on a cost-benefit analysis to determine if virtual planning ultimately reduces overall healthcare costs (77).

In otorhinolaryngology and head and neck surgery, image-based AI assists surgeons in identifying disorders in high-demand situations. Enhanced communication among educational institutions will improve AI reliability and broader applicability, moving it from proof-of-concept to clinical use. Challenges include training algorithms on large, diverse datasets, standardizing data exchange to reduce waste, and adopting 3D-based AI in surgical education. Prospective studies are needed to identify practical issues and compare real-world performance to lab studies (78).

DL networks for bone structures in the head and neck primarily focus on skull and mandible segmentation. Skull segmentation networks mainly utilize US or magnetic resonance imaging. Current research aims to enhance mandibular segmentation accuracy by considering slice-layer correlations and multi-scale features and by adding attention mechanisms. However, further advancements are required, especially for mandibular segmentation compromised by metal artifacts and tumor erosion. Research on other tissues, such as the maxillary sinus, is limited due to the lack of publicly available data (79).

AI is expected to influence biomedical research and medicine significantly in the future. It is gaining popularity as a potential alternative to human efforts, especially in decision-making and reasoning. This capability could revolutionize its application in surgical procedures by mimicking a surgeon’s cognitive abilities and, when paired with robotics, serving as a reliable assistant. Robotic surgery within the field of rhinology will undoubtedly be the subject of further investigation, mainly to reduce surgical morbidity, elevate surgical dissection, and improve patient outcomes (80). Due to safety concerns, unsupervised AI use in surgery is unlikely in the near future. However, AI’s widespread implementation can significantly enhance diagnostic and non-invasive surgical aspects, reducing medical professionals’ workloads.

The rapid growth of data and AI integration in surgery is becoming increasingly necessary. Utilizing computers during surgery can enhance AI applications, making them more reliable and proficient tools in medicine. Over time, improved outcomes from AI involvement may further establish their importance in surgery. With ongoing technological advancements, there is optimism about AI integrated with robotics, revolutionizing surgical procedures. This approach could enhance the efficacy and safety of surgeries, with the surgeon acting as a guide.

CONCLUSION

Integrating AI and ML in head and neck surgery represents a significant transformation in the field. We highlighted the potential of AI to improve diagnostic accuracy, treatment planning, intraoperative decision-making, and postoperative care for patients with head and neck cancers. Despite the usefulness of AI in head and neck surgery, several challenges remain to be addressed. Integrating AI into clinical practice requires validation through large-scale clinical trials to ensure accuracy and reliability. Moreover, ethical considerations, including patient privacy and data security, must be addressed to increase trust and acceptance among healthcare professionals and patients. Interdisciplinary collaboration, continued innovation, and the establishment of clear regulatory frameworks can accelerate the adoption of AI-enhanced head and neck surgery techniques.

Author Contributions

Concept - S.S., N-H.B., M.A.; Design - S.S., N-H.B., M.A.; Data Collection or Processing - S.S., N-H.B., M.A., N.J., M.H.; Analysis or Interpretation - S.S., N-H.B., M.A.; Literature Search - S.S., N-H.B., M.A., N.J., M.H., G.K.; Writing - S.S., N-H.B., M.A., M.H., G.K., S.K., R.H.
Conflict of Interest: No conflict of interest was declared by the authors.
Financial Disclosure: The authors declared that this study received no financial support.

References

1
Tama BA, Kim DH, Kim G, Kim SW, Lee S. Recent advances in the application of artificial intelligence in otorhinolaryngology-head and neck surgery. Clin Exp Otorhinolaryngol. 2020;13:326-339.
2
Crowson MG, Ranisau J, Eskander A, Babier A, Xu B, Kahmke RR, et al. A contemporary review of machine learning in otolaryngology-head and neck surgery. Laryngoscope. 2020;130:45-51.
3
Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc. 2020;92:807-812.
4
Comprehensive assessment of evidence on oral cancer prevention released. Accessed June 3, 2024. Available from: https://www.who.int/news/item/29-11-2023-comprehensive-assessment-of-evidence-on-oral-cancer-prevention-released-29-november-2023
5
van Staveren HJ, van Veen RL, Speelman OC, Witjes MJ, Star WM, Roodenburg JL. Classification of clinical autofluorescence spectra of oral leukoplakia using an artificial neural network: a pilot study. Oral Oncol. 2000;36:286-293.
6
Shamim MZ, Syed S, Shiblee M, Usman M, Zaidi M, Ahmad Z. Detecting benign and precancerous tongue lesions using deep convolutional neural networks for early signs of oral cancer. Basic Clin Pharmacol Toxicol. 2019;125:184-185.
7
Fu Q, Chen Y, Li Z, Jing Q, Hu C, Liu H, et al. A deep learning algorithm for detection of oral cavity squamous cell carcinoma from photographic images: a retrospective study. EClinicalMedicine. 2020;27:100558.
8
Birur NP, Song B, Sunny SP, G K, Mendonca P, Mukhia N, et al. Field validation of deep learning based point-of-care device for early detection of oral malignant and potentially malignant disorders. Sci Rep. 2022;12:14283.
9
Mascharak S, Baird BJ, Holsinger FC. Detecting oropharyngeal carcinoma using multispectral, narrow-band imaging and machine learning. Laryngoscope. 2018.
10
Amin A, Cardoso SA, Suyambu J, Abdus Saboor H, Cardoso RP, Husnain A, et al. Future of artificial intelligence in surgery: a narrative review. Cureus. 2024;16:e51631.
11
Maxwell JH, Rath TJ, Byrd JK, Albergotti WG, Wang H, Duvvuri U, et al. Accuracy of computed tomography to predict extracapsular spread in p16-positive squamous cell carcinoma. Laryngoscope. 2015;125:1613-1618.
12
Kann BH, Aneja S, Loganadane GV, Kelly JR, Smith SM, Decker RH, et al. Pretreatment identification of head and neck cancer nodal metastasis and extranodal extension using deep learning neural networks. Sci Rep. 2018;8:14036.
13
Seidler M, Forghani B, Reinhold C, Pérez-Lara A, Romero-Sanchez G, Muthukrishnan N, et al. Dual-energy CT texture analysis with machine learning for the evaluation and characterization of cervical lymphadenopathy. Comput Struct Biotechnol J. 2019;17:1009-1015.
14
Chen L, Zhou Z, Sher D, Zhang Q, Shah J, Pham NL, et al. Combining many-objective radiomics and 3D convolutional neural network through evidential reasoning to predict lymph node metastasis in head and neck cancer. Phys Med Biol. 2019;64:075011.
15
Mahmood H, Shaban M, Rajpoot N, Khurram SA. Artificial intelligence-based methods in head and neck cancer diagnosis: an overview. Br J Cancer. 2021;124:1934-1940.
16
Zhang Y, Wu X, He L, Meng C, Du S, Bao J, et al. Applications of hyperspectral imaging in the detection and diagnosis of solid tumors. Transl Cancer Res. 2020;9:1265-1277.
17
Poedjiastoeti W, Suebnukarn S. Application of convolutional neural network in the diagnosis of jaw tumors. Healthc Inform Res. 2018;24:236-241.
18
Yang H, Jo E, Kim HJ, Cha IH, Jung YS, Nam W. Deep learning for automated detection of cyst and tumors of the jaw in panoramic radiographs. J Clin Med. 2020;9:1-14.
19
Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J. 2020;53:9.
20
Effiom OA, Ogundana OM, Akinshipo AO, Akintoye SO. Ameloblastoma: current etiopathological concepts and management. Oral Dis. 2018;24:307-316.
21
Yu K, Zhang Y, Yu Y, Huang C, Liu R, Li T, et al. Radiomic analysis in prediction of human papilloma virus status. Clin Transl Radiat Oncol. 2017;7:49-54.
22
Vallières M, Kumar A, Sultanem K, El Naqa I. FDG-PET image-derived features can determine HPV status in head-and-neck cancer. Int J Radiat Oncol Biol Phys. 2013;87:S467.
23
Buch K, Fujita A, Li B, Kawashima Y, Qureshi MM, Sakai O. Using texture analysis to determine human papillomavirus status of oropharyngeal squamous cell carcinomas on CT. AJNR Am J Neuroradiol. 2015;36:1343-1348.
24
Chi J, Walia E, Babyn P, Wang J, Groot G, Eramian M. Thyroid nodule classification in ultrasound images by fine-tuning deep convolutional neural network. J Digit Imaging. 2017;30:477-486.
25
Park VY, Han K, Seong YK, Park MH, Kim EK, Moon HJ, et al. Diagnosis of thyroid nodules: performance of a deep learning convolutional neural network model vs. radiologists. Sci Rep. 2019;9:17843.
26
Loftus TJ, Tighe PJ, Filiberto AC, Efron PA, Brakenridge SC, Mohr AM, et al. Artificial intelligence and surgical decision-making. JAMA Surg. 2020;155:148-158.
27
Hashimoto DA, Rosman G, Rus D, Meireles OR. Artificial intelligence in surgery: promises and perils. Ann Surg. 2018;268:70-76.
28
Bihorac A, Ozrazgat-Baslanti T, Ebadi A, Motaei A, Madkour M, Pardalos PM, et al. MySurgeryRisk: development and validation of a machine-learning risk algorithm for major complications and death after surgery. Ann Surg. 2019;269:652-662.
29
Vollmer A, Nagler S, Hörner M, Hartmann S, Brands RC, Breitenbücher N, et al. Performance of artificial intelligence-based algorithms to predict prolonged length of stay after head and neck cancer surgery. Heliyon. 2023;9:e20752.
30
Bouletreau P, Makaremi M, Ibrahim B, Louvrier A, Sigaux N. Artificial intelligence: applications in orthognathic surgery. J Stomatol Oral Maxillofac Surg. 2019;120:347-354.
31
Dot G, Rafflenbeul F, Arbotto M, Gajny L, Rouch P, Schouman T. Accuracy and reliability of automatic three-dimensional cephalometric landmarking. Int J Oral Maxillofac Surg. 2020;49:78.
32
Rasteau S, Sigaux N, Louvrier A, Bouletreau P. Three-dimensional acquisition technologies for facial soft tissues - applications and prospects in orthognathic surgery. J Stomatol Oral Maxillofac Surg. 2020;121:721-728.
33
Rokhshad R, Keyhan SO, Yousefi P. Artificial intelligence applications and ethical challenges in oral and maxillo-facial cosmetic surgery: a narrative review. Maxillofac Plast Reconstr Surg. 2023;45.
34
Rasteau S, Ernenwein D, Savoldelli C, Bouletreau P. Artificial intelligence for oral and maxillo-facial surgery: a narrative review. J Stomatol Oral Maxillofac Surg. 2022;123:276-282.
35
Pereira KR. Harnessing artificial intelligence in maxillofacial surgery. In: Lidströmer N, Ashrafian H, editors. Artificial Intelligence in Medicine. Springer; 2021.
36
Cristache CM, Tudor I, Moraru L, Cristache G, Lanza A, Burlibasa M. Digital workflow in maxillofacial prosthodontics - an update on defect data acquisition, editing and design using open-source and commercially available software. Appl Sci. 2021;11:973.
37
Żurek M, Jasak K, Niemczyk K, Rzepakowska A. Artificial intelligence in laryngeal endoscopy: systematic review and meta-analysis. J Clin Med. 2022;11:2752.
38
Elliott Range DD, Dov D, Kovalsky SZ, Henao R, Carin L, Cohen J. Application of a machine learning algorithm to predict malignancy in thyroid cytopathology. Cancer Cytopathol. 2020;128:287-295.
39
O'Malley BW Jr, Weinstein GS. Robotic skull base surgery: preclinical investigations to human clinical application. Arch Otolaryngol Head Neck Surg. 2007;133:1215-1219.
40
O'Malley BW Jr, Weinstein GS, Snyder W, Hockstein NG. Transoral robotic surgery (TORS) for base of tongue neoplasms. Laryngoscope. 2006;116:1465-1472.
41
Augustin F, Schmid T, Bodner J. The robotic approach for mediastinal lesions. Int J Med Robot. 2006;2:262-270.
42
Udelsman R, Donovan P, Shaw C. Cure predictability during parathyroidectomy. World J Surg. 2014;38:525-533.
43
Halicek M, Lu G, Little JV, Wang X, Patel M, Griffith CC, et al. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J Biomed Opt. 2017;22:60503.
44
Quellec G, Lamard M, Cazuguel G, Droueche Z, Roux C, Cochener B. Real-time retrieval of similar videos with application to computer-aided retinal surgery. Annu Int Conf IEEE Eng Med Biol Soc. 2011;2011:4465-4468.
45
André B, Vercauteren T, Buchner AM, Wallace MB, Ayache N. Endomicroscopic video retrieval using mosaicing and visual words. IEEE Trans Med Imaging. 2020.
46
André B, Vercauteren T, Buchner AM, Wallace MB, Ayache N. Learning semantic and visual similarity for endomicroscopy video retrieval. IEEE Trans Med Imaging. 2012;31:1276-1288.
47
André B, Vercauteren T, Perchant A, Buchner A, Wallace M, Ayache N. Endomicroscopic image retrieval and classification using invariant visual features. IEEE Trans Med Imaging. 2009:346-349.
48
Tafreshi MK, Linard N, André B, Ayache N, Vercauteren T. Semi-automated query construction for content-based endomicroscopy video retrieval. In: Medical image computing and computer-assisted intervention - MICCAI 2014. Springer International Publishing; 2014:89-96.
49
Tafreshi MK, Linard N, André B, Ayache N, Vercauteren T. Semi-automated query construction for content-based endomicroscopy video retrieval. Med Image Comput Comput Assist Interv. 2014;17:89-96.
50
Gu Y, Vyas K, Yang J, Yang GZ. Unsupervised feature learning for endomicroscopy image retrieval. In: Medical image computing and computer assisted intervention — MICCAI 2017. Springer International Publishing; 2017:64-71.
51
Barinov L, Jairaj A, Middleton WD, Beland MD, Kirsch J, et al. Improving the efficacy of ACR TI-RADS through deep learning-based descriptor augmentation. J Digit Imaging. 2023;36:2392-2401.
52
Hoang JK, Middleton WD, Tessler FN. Update on ACR TI-RADS: successes, challenges, and future directions, from the AJR special series on radiology reporting and data systems. AJR Am J Roentgenol. 2021;216:570-578.
53
Kim J, Kim MH, Lim DJ, Lee H, Lee JJ, Kwon HS, et al. Deep learning technology for classification of thyroid nodules using multi-view ultrasound images: potential benefits and challenges in clinical application. Endocrinol Metab. 2025;40:216-224.
54
Zhang P, Xu Q, Jiang F. The diagnostic value of convolutional neural networks in thyroid cancer detection using ultrasound images. Front Oncol. 2025;15:1534228.
55
Ni J, You Y, Wu X, Chen X, Wang J, Li Y. Performance evaluation of deep learning for the detection and segmentation of thyroid nodules: systematic review and meta-analysis. J Med Internet Res. 2025;27:e73516.
56
Yu D, Song T, Yu Y, Zhang H, Gao F, Wang Z, et al. Risk assessment of thyroid nodules with a multi-instance convolutional neural network. Front Oncol. 2025;15:1608963.
57
Dov D, Kovalsky SZ, Feng Q, Assaad S, Cohen J, Bell J, et al. Use of machine learning-based software for the screening of thyroid cytopathology whole slide images. Arch Pathol Lab Med. 2022;146:872-878.
58
Nishiya Y, Matsuura K, Ogane T, Hayashi K, Kinebuchi Y, Tanaka H, et al. Anatomical recognition artificial intelligence for identifying the recurrent laryngeal nerve during endoscopic thyroid surgery: a single-center feasibility study. Laryngoscope Investig Otolaryngol. 2024;9:e70049.
59
Muller O, Bauvin P, Bacoeur O, Michailos T, Bertoni MV, Demory C, et al. Machine learning-based algorithm for the early prediction of postoperative hypocalcemia risk after thyroidectomy. Ann Surg. 2024;280:835-841.
60
Kiernan CM, Thomas G, Baregamian N, Solórzano CC. Initial clinical experiences using the intraoperative probe-based parathyroid autofluorescence identification system-PTeye™ during thyroid and parathyroid procedures. J Surg Oncol. 2021;124:271-281.
61
Mesolella M, Allosso S, Di Lullo AM, Ricciardiello F, Motta G. Postoperative infectious complications in head and neck cancer surgery. Ann Ital Chir. 2022;93:637-647.
62
Giannitto C. The use of artificial intelligence in head and neck cancers: a multidisciplinary survey. J Pers Med. 2024;14:341.
63
Gupta P, Haeberle HS, Zimmer ZR, Levine WN, Williams RJ, Ramkumar PN. Artificial intelligence-based applications in shoulder surgery leaves much to be desired: a systematic review. JSES Rev Rep Tech. 2023;3:189-200.
64
Mascarella MA. Above and beyond age: prediction of major postoperative adverse events in head and neck surgery. Ann Otol Rhinol Laryngol. 2021;131:697-703.
65
Kordzadeh A, Hanif M, Ramirez M, Railton N, Prionidis I, Browne TR. Prediction, pattern recognition and modelling of complications post-endovascular infra renal aneurysm repair by artificial intelligence. Vascular. 2020;29:171-182.
66
Knoedler S. Postoperative free flap monitoring in reconstructive surgery—man or machine? Front Surg. 2023;10:1130566.
67
Khanna AK, Ahuja S, Weller RS, Harwood TN. Postoperative ward monitoring - why and what now? Best Pract Res Clin Anaesthesiol. 2019;33:229-245.
68
Gumbs AA, Perretta S, Dallemagne B, Chouillard E. What is artificial intelligence surgery? Art Int Surg. 2021;1:1-10.
69
Kiener M. Artificial intelligence in medicine and the disclosure of risks. AI Soc. 2021;36:705-713.
70
Royal College of Surgeons of England. Future of surgery; 2018. Available from: https://futureofsurgery.rcseng.ac.uk/?_ga=2.134153868.344240087.1578048159-1041599817.1578048159
71
Haan M, Ongena YP, Hommes S, Kwee TC, Yakar D. A qualitative study to understand patient perspective on the use of artificial intelligence in radiology. J Am Coll Radiol. 2019;16:1416-1419.
72
Jones LD, Golan D, Hanna SA, Ramachandran M. Artificial intelligence, machine learning and the evolution of healthcare: a bright future or cause for concern? Bone Jt Res. 2018;7:223-225.
73
Miragall MF, Knoedler S, Kauke-Navarro M, Saadoun R, Grabenhorst A, Grill FD, et al. Face the future-artificial intelligence in oral and maxillofacial surgery. J Clin Med. 2023;12:6843.
74
Devabalan Y. The use and challenges of artificial intelligence in otolaryngology. Authorea Preprints; 2020.
75
Mithany RH, Aslam S, Abdallah S, Abdelmaseeh M, Gerges F, Mohamed MS, et al. Advancements and challenges in the application of artificial intelligence in surgical arena: a literature review. Cureus. 2023;15:e47924.
76
Khan B, Fatima H, Qureshi A, Kumar S, Hanan A, Hussain J, et al. Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed Mater Devices. 2023:1-8.
77
Jandali D, Barrera JE. Recent advances in orthognathic surgery. Curr Opin Otolaryngol Head Neck Surg. 2020;28:246-250.
78
Wu Q, Wang X, Liang G, Luo X, Zhou M, Deng H, et al. Advances in image-based artificial intelligence in otorhinolaryngology-head and neck surgery: a systematic review. Otolaryngol Head Neck Surg. 2023;169:1132-1142.
79
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, et al. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol. 2022;67.
80
Amanian A, Heffernan A, Ishii M, Creighton FX, Thamboo A. The evolution and application of artificial intelligence in rhinology: a state of the art review. Otolaryngol Head Neck Surg. 2023;169:21-30.