Decision Support System for Clinical Diagnosis Based On Emotion Detection by Bhakti Sonawane
Item type | Current library | Collection | Call number | Status | Date due | Barcode | Item holds
---|---|---|---|---|---|---|---
 | NIMA Knowledge Centre | Reference | TT000118 SON | Not For Loan | | TT000118 |
 | NIMA Knowledge Centre | Reference | TT000118 SON | Not For Loan | | TT000118-1 |
Guided by: Dr. Priyanka Sharma
15EXTPHDE153
ABSTRACT:
One of the key aspects of Artificial Intelligence (AI) and a persistent prerequisite of
healthcare is to evaluate complicated circumstances, make forecasts, and determine
patterns. Computerized clinical decision support systems (CDSS) reflect a dramatic transformation in healthcare today, driven by machine learning and computer vision. This
field of AI hopes to incorporate the human capacities of data sensing, data interpretation,
and behavior based on historical and current findings into computers. An
evolving area that finds many therapeutic uses is the automated interpretation of facial expressions and the analysis of speech. Parkinson's disease (PD) is a movement disorder that impacts the neurological system. Facial bradykinesia, a significant motor symptom of PD, results in reduced and slowed facial movements. Dysarthria is also often observed in patients with PD, causing slurred or slow speech that can be difficult
to understand. Since there are currently no proven biomarkers for diagnostic tests, conventional PD diagnosis relies primarily on a patient's clinical history and physical examination. These time-consuming assessments are carried out by qualified medical professionals and can become burdensome when periodic re-evaluation is necessary. In this scenario, tools and techniques based on computer vision and machine learning may offer an alternative, automated evaluation across verbal and non-verbal modalities. This motivates the present research work: to develop an automated Decision Support System (DSS) for PD management. Such a DSS has the potential to
improve healthcare by bridging the gap between optimal practice and actual clinical
care. A major challenge in the implementation of this research work is the unavailability of data (facial expression images and raw speech data) from patients with PD due to ethical constraints. Thus, in the initial phase of implementation, the proposed
DSS modules were trained and tested using data from freely available standard
datasets. For this research work, four standard image datasets were used: the Karolinska Directed Emotional Faces (KDEF) [Lundqvist, Flykt, and Öhman 1998], FACES [Ebner, Riediger, and Lindenberger 2010], the Montreal Set of Facial Displays of Emotion (MSFDE) [Beaupré, Cheung, and Hess 2000], and the Amsterdam Dynamic Facial Expression Set-Bath Intensity Variations (ADFES-BIV) [Wingenbach, Ashwin, and Brosnan 2016]; for speech, the well-known TORGO dataset [Rudzicz, Namasivayam, and Wolff 2012] was used. Simultaneously during this phase, this research work collaborated with Lokmanya Tilak Municipal Medical College and Government Hospital (LTMMC & GH), Sion, Mumbai, and the process of real-life data collection was initiated. In the later phase
of this research work, analysis of the data collected from LTMMC & GH, Sion, Mumbai is carried out. In conclusion, this research work provides DSS modules, based on deep neural network architectures, that address different issues related to PD management. Each solution is a
distinct type of DSS. Patients with PD lose facial expressivity because of the movement disorder. Target facial expressions of patients with PD may thus be undetectable to a naive social observer during daily interpersonal life, which can affect the social quality of life of the patient or the observer. The first proposed DSS is a binary classifier that assists clinicians in detecting the masked face of patients with PD. The next two DSS modules provide real-time feedback on the emotional facial expressions of patients with PD: an emotional facial expression detector (for the seven basic emotional facial expressions) and an emotional facial expression grader (classifying extreme emotional facial expressions into three levels). These DSS
modules are intended to test social expressive ability. To ultimately help plan more targeted management for PD patients, this research work presents another DSS, a prediction model. Here, all patient data are classified into three new classes based on their Hoehn & Yahr (H & Y) stage, a widely used scale for describing how PD symptoms progress. Then, based on emotional facial expression, the proposed DSS predicts the class that corresponds to the H & Y stage. Voice performance degrades with PD progression. Thus, this research work presents
the next supporting DSS based on speech analysis to detect dysarthria and to predict
the class that corresponds to the H & Y stage based on labial sound phrases. The masked face detection module achieves 86.84% accuracy on real-life data collected from LTMMC & GH, Sion, Mumbai. The system shows 93.64% accuracy in predicting the class corresponding to the H & Y stage from emotional facial expression on test readings from LTMMC & GH, Sion, Mumbai, and 80.85% accuracy when labial sound phrases are used to predict that class on test samples from the same hospital. According to the findings of this research work, emotional facial expressions as well as labial sound phrases play a role in detecting the severity of PD.
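As an illustration of the decision step such an emotion-detection module might perform, the sketch below maps the output scores of a hypothetical seven-class network to an emotion label via softmax. The emotion list, label order, and logit values are assumptions for illustration only; the thesis does not specify them here.

```python
import math

# Assumed label set and order for the seven basic emotions; the
# thesis's actual label ordering is not given in this abstract.
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "neutral", "sadness", "surprise"]

def softmax(logits):
    """Turn raw network outputs into a probability distribution."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_emotion(logits):
    """Return the emotion label with the highest probability."""
    probs = softmax(logits)
    return EMOTIONS[probs.index(max(probs))]

# Hypothetical logits, as they might come from the network's final layer
print(predict_emotion([0.2, -1.0, 0.1, 3.5, 0.0, -0.5, 1.2]))
```

In practice the logits would come from the final dense layer of the trained deep network; the softmax/argmax step shown here is the standard closing stage of any such multi-class classifier.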
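A minimal sketch of the evaluation side of the prediction model: patients are grouped into three classes by H & Y stage, and module accuracy is the percentage of correct predictions. The stage cut-offs and the toy data below are assumptions; the thesis does not state its exact grouping in this abstract.

```python
# Assumption: one plausible three-way grouping of H & Y stages
# (the thesis's actual class boundaries are not given here).
def hy_class(stage: float) -> int:
    if stage <= 1.5:   # unilateral involvement
        return 1
    if stage <= 3.0:   # bilateral, still physically independent
        return 2
    return 3           # severe disability

def accuracy(y_true, y_pred):
    """Percentage of predictions matching the reference labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return 100.0 * correct / len(y_true)

# Toy example: map recorded H & Y stages to classes, then score
stages = [1.0, 2.0, 2.5, 4.0, 5.0]
true_classes = [hy_class(s) for s in stages]   # [1, 2, 2, 3, 3]
predicted = [1, 2, 1, 3, 3]                    # hypothetical DSS output
print(accuracy(true_classes, predicted))       # 80.0
```

The reported figures (86.84%, 93.64%, 80.85%) are accuracies of exactly this form, computed on the real-life data from LTMMC & GH, Sion, Mumbai.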