Quantitative assessment of vocal cord abduction and adduction to measure movement symmetry using flexible fibreoptic endoscopy

  • Radhika Menon

Student thesis: Doctoral Thesis

Abstract

Vocal cord movement abnormalities are diagnosed by subjective visual assessment using endoscopy. Objective measures based on image processing have been proposed in previous studies to overcome the subjectivity of current clinical practice; however, these have mainly focussed on quantifying high-speed vocal cord vibrations using specialist, expensive acquisition systems. An approach more applicable to routine clinics is to quantify the slower vocal cord movements, i.e., abduction and adduction, which are recordable at normal camera capture rates. Moreover, in the UK the flexible fibreoptic endoscope is preferred for primary diagnosis, but it renders poorer image quality than the rigid laryngoscope commonly used for objective assessment. Therefore, in this thesis a generalisable technique is developed that quantifies vocal cord abduction and adduction through novel image processing methods applied to videos acquired at the routine voice clinic.

In the absence of publicly available data of vocal cord motion acquired at the voice clinic using flexible fibreoptic endoscopy at normal camera capture rates, such a database is created in this work, comprising 30 videos of normal and abnormal cases. A 5-category scale is designed for quantifying vocal cord motion because clinicians do not currently have a numerical grading system. Vocal cord motion in the video database is graded using the proposed rating scale by six clinicians through subjective visual assessment. Inter- and intra-rater agreement and reliability measures are computed to evaluate the clinicians' performance with the scale, and ground truth scores of vocal cord motion are obtained from their ratings for all videos in the database.

A novel framework is presented for the localisation and segmentation of the glottal area in a given image sequence of vocal cord abduction or adduction from the database.
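The pairwise rater-agreement analysis described above can be illustrated with a minimal sketch. Assuming ratings are encoded as integers on the 5-category scale, Cohen's kappa for a pair of clinicians (one common pairwise agreement statistic; the thesis does not specify which agreement and reliability measures were used) could be computed as:

```python
def cohens_kappa(ratings_a, ratings_b, categories):
    """Cohen's kappa for two raters scoring the same set of videos."""
    n = len(ratings_a)
    # Observed agreement: fraction of videos both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    p_e = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings from two clinicians on a 0-4 motion scale.
kappa = cohens_kappa([0, 1, 2, 1, 0], [0, 1, 2, 0, 0], range(5))
```

Kappa corrects raw percentage agreement for agreement expected by chance; for ordinal scales such as this one, a weighted variant or an intraclass correlation coefficient would penalise near-miss disagreements less than distant ones.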
The challenges specific to abducting and adducting vocal cords in fibreoptic endoscopy videos are addressed, since algorithms developed in previous studies for vibrating vocal cords imaged with rigid endoscopy cannot be applied directly to the present database. In particular, the honeycomb artefact is suppressed, and a knowledge-based approach is proposed for glottis localisation and for removal of spatial glottal drift, utilising a single user-defined reference point in one frame of a sequence. Techniques are proposed for image enhancement, initial contour estimation using SUSAN edge detection and thresholding, and glottal area segmentation with a localised region-based level set method. Together these form a novel framework that accounts for the variation in shape, size and illumination of the glottal area in a sequence of abducting/adducting vocal cords.

A novel model called SynGlotIm is developed to create synthetic image sequences of the glottal area during abduction and adduction. Analogous to the head phantoms used in MRI, this model is the first of its kind to synthesise glottal images over a realistic range of abduction angles, intensity inhomogeneity patterns of the glottal area, image contrast, blurring and noise, through modification of its input parameters. Four synthetic sequences that simulate real ones from the database are segmented, and the similarity between the segmented contours from the synthetic and real images demonstrates that SynGlotIm can be used to validate segmentation algorithms. This technique thus serves as an alternative to the laborious and time-consuming process of generating manually marked ground truth contours by clinicians.

The quantification of vocal cord abduction/adduction has so far been achieved only by measuring the angle between the straight edges of the vocal cords, which is prone to inaccuracies such as those caused by the tilt of the endoscope. Therefore, a
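The conventional angle measure that the thesis critiques can be sketched in a few lines. Assuming the anterior commissure and one landmark on each cord edge have been identified in image coordinates (the point names below are hypothetical, not from the thesis), the abduction angle is the angle between the two edge vectors meeting at the commissure:

```python
import math

def abduction_angle(commissure, left_tip, right_tip):
    """Angle in degrees between the two cord edges meeting at the anterior commissure."""
    # Vectors from the commissure along each vocal cord edge.
    ax, ay = left_tip[0] - commissure[0], left_tip[1] - commissure[1]
    bx, by = right_tip[0] - commissure[0], right_tip[1] - commissure[1]
    dot = ax * bx + ay * by
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    # Clamp to guard against floating-point values just outside [-1, 1].
    cos_t = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos_t))

angle = abduction_angle((0, 0), (1, 1), (1, -1))  # 90.0 degrees
```

Because this is a 2D projection of a 3D configuration, any tilt of the endoscope relative to the glottal plane changes the apparent angle, which is the inaccuracy noted above.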
Date of Award: 28 May 2020
Original language: English
Awarding Institution:
  • University of Strathclyde
Sponsors: University of Strathclyde & Beatson Institute for Cancer Research
Supervisors: Lykourgos Petropoulakis & John Soraghan
