Research Areas
- Speech
- Dialogue
- Text to Speech
- Gait
- RRB
- Joint Attention
- Emotion
- Gaze
- Data-Provenance
- Mobile-Client
- SAR
Automatic Speech Recognition System to Screen Speech Impairments in Young Children to Detect ASD
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by social impairments, communication difficulties, and repetitive behaviors. Lack of speech is the main symptom for which 82.4% of children are referred to clinics, at an average age of 35.8 months, before finally being diagnosed with ASD. Children can be directed to early intervention by identifying symptoms that appear in speech and language, such as difficulties with prosody, abstract use of language, echolalia, delayed responses, and a limited number of functional words, which can appear as early as the first 6–18 months of life. A novel Kaldi-based Automatic Speech Recognition (ASR) system in Sinhala for children is to be developed that can recognize speech impairments in the early-childhood utterances of autistic children and help speech therapists carry out therapeutic activities to address ASD. Moreover, this research develops a Sinhala speech data corpus of both typical and atypical children for future research.
Research Question
How to detect Sinhala speech and language impairments related to ASD in children?
Sub Research Questions
- What are the main speech or language impairments related to ASD?
- How does the Sinhala language or speech of children vary with age?
- Which speech components should be analyzed to detect impairments?
- How to detect and analyze those speech components for impairments?
- How to convert children's Sinhala speech to text?
- How to collect data from typical and atypical children?
- How to carry out speaker diarization?
Objective of the study
To develop a screening tool to detect speech or language impairments related to ASD in children aged 12 months to 6 years, and to develop a Sinhala speech corpus of both typical and atypical children for future research.
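A Kaldi-style ASR system begins by converting raw audio into frame-level acoustic features. As a hedged illustration only (not the project's actual pipeline, which would use Kaldi's MFCC front end), the sketch below frames a signal and computes two simple per-frame features, short-time energy and zero-crossing rate, with NumPy:

```python
import numpy as np

def frame_features(signal, sample_rate=16000, frame_ms=25, hop_ms=10):
    """Split audio into overlapping frames and compute two simple
    per-frame acoustic features: short-time energy and zero-crossing rate."""
    frame_len = int(sample_rate * frame_ms / 1000)   # 400 samples at 16 kHz
    hop_len = int(sample_rate * hop_ms / 1000)       # 160 samples at 16 kHz
    n_frames = 1 + (len(signal) - frame_len) // hop_len
    feats = np.zeros((n_frames, 2))
    for i in range(n_frames):
        frame = signal[i * hop_len : i * hop_len + frame_len]
        feats[i, 0] = np.sum(frame ** 2)                            # energy
        feats[i, 1] = np.mean(np.abs(np.diff(np.sign(frame))) > 0)  # ZCR
    return feats

# Example: one second of a synthetic 440 Hz tone
t = np.arange(16000) / 16000.0
audio = 0.5 * np.sin(2 * np.pi * 440 * t)
features = frame_features(audio)
print(features.shape)  # (98, 2): 98 frames, 2 features each
```

In a real pipeline these frame-level features would be replaced by MFCCs and fed to Kaldi's acoustic model.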
System Overview Diagram
- Area of research: Speech Recognition, Natural Language Processing
- Supervisor: Prof. Koliya Pulasinghe
- Co-supervisor: Dr. Shyam Reyal
- Research Assistant: Ms. Veerandi Kulasekara
- Contact: veerandi.k@sliit.lk
- Current MPhil Progress:
✔ Completed MPhil Application
✔ Completed Interview
✔ Completed Initial MPhil Presentation
Dialogue Management
According to recent statistics, one in every 95 children in Sri Lanka is diagnosed with Autism Spectrum Disorder (ASD), a neurodevelopmental disorder. In Sri Lanka, children are identified at an average age of 35.6 months, although symptoms can be diagnosed as early as 6 months. Early diagnosis and clinical intervention help them coexist with typical students when they enter school. ASD involves a range of social interaction impairments, including speech and language impairments such as echolalia, poor reciprocity in conversations, self-talk, delayed responses, and responding in few words or not talking at all. A novel Sinhala Dialogue Management System based on the RASA framework is proposed to engage with atypical children, elicit the above hallmarks of language impairment to assess the level of impairment, and conduct therapeutic conversations to aid recovery. Culturally inherited role-plays and conversational games will be used to sustain the interest and engagement of the children.
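The hallmarks listed above can be flagged heuristically once a dialogue turn is transcribed. The sketch below is a hedged, illustrative stand-in (not the RASA system itself): it uses token overlap with the examiner's prompt as a crude proxy for echolalia, and simple thresholds, which are placeholders rather than clinical values:

```python
# Hedged sketch: flagging possible echolalia, delayed responses, and
# minimal verbal output from a transcribed dialogue turn. Thresholds
# are illustrative, not clinically validated.

def turn_flags(prompt, response, delay_seconds,
               echo_threshold=0.6, delay_threshold=3.0):
    prompt_tokens = set(prompt.lower().split())
    response_tokens = set(response.lower().split())
    overlap = (len(prompt_tokens & response_tokens) / len(response_tokens)
               if response_tokens else 0.0)
    return {
        "possible_echolalia": overlap >= echo_threshold,  # echoes the prompt
        "delayed_response": delay_seconds >= delay_threshold,
        "few_words": len(response.split()) <= 2,
    }

flags = turn_flags("what colour is the ball", "colour is the ball", 4.2)
print(flags)  # echoes most of the prompt, after a long pause
```

A production dialogue manager would track these signals across a whole session rather than per turn.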
- Area of research: Sinhala Speech Analysis, Dialogue Management, Natural Language Processing
- Supervisor: Prof. Koliya Pulasinghe
- Co-supervisor: Dr. Shyam Reyal
- Research Assistant:
- Contact:
Sinhalese Text to Speech with Culturally Sensitive Semantic Information to Stimulate ASD Traits in Children
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder which presents a spectrum of symptoms affecting an individual's social interactions. The challenges faced in non-verbal communication, and the delays and regression in speech associated with children's language development, present the need for early screening and diagnosis of autistic children, allowing them to lead a normal life with minimal hindrance from their symptoms. The lack of diagnostic and screening processes for autism in Sri Lanka hinders individuals from receiving the health care necessary to lead a normal life. This study aims to develop a system that translates Sinhalese child-directed text into audio output, incorporating prosodic features to grasp children's attention. The proposed Text-to-Speech system is the first attempt at developing a speech screening tool for children as early as 6 months of age. The analysis in the Sinhala language further highlights the novelty of this research. The proposed system could be used as a communication interface for maintaining a smooth back-and-forth conversation between child and robot, to assist in the screening of autistic traits in children.
Research Question
How can Sinhalese child-directed text be translated into computer-generated speech?
Sub Research Questions
- How to include culturally sensitive semantic information within the TTS system?
- How does the Sinhala language or speech of children vary with age?
- How to incorporate prosodic features to grasp children's attention?
- How to assist in the screening of autistic traits in children?
Objectives
- Add support for the Sinhala language in the MaryTTS environment
- Develop Sinhalese modules and build a new Sinhalese voice for the MaryTTS system
- Extensively pre-process the system to synthesize a more natural-sounding, intelligible voice
- Fine-tune the TTS system by adjusting voice features for communication with children, grasping attention and motivating a child to speak in the hope of stimulating autistic traits through the synthesized voice
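One common way to adjust prosodic features such as rate and pitch at synthesis time is to wrap the input text in SSML-style prosody markup, which MaryTTS-family systems accept. The sketch below is purely illustrative; the specific rate and pitch values are placeholders, not the project's tuned settings:

```python
# Hedged sketch: wrapping child-directed text in SSML-style prosody markup
# before synthesis. The rate/pitch values below are illustrative placeholders.

def child_directed_ssml(text, rate="slow", pitch="+15%"):
    """Return the text wrapped in an SSML prosody element."""
    return (
        '<speak>'
        f'<prosody rate="{rate}" pitch="{pitch}">{text}</prosody>'
        '</speak>'
    )

markup = child_directed_ssml("sinhala text goes here")
print(markup)
```

The generated markup would be passed to the synthesizer in place of plain text, letting the same voice be re-targeted for child-directed speech without retraining.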
System Overview Diagram
- Area of research: Computer Vision, Machine Learning, Natural Language Processing, Speech Synthesis, Text-To-Speech
- Supervisor: Prof. Koliya Pulasinghe
- Co-supervisor: Dr. Shyam Reyal
- Research Assistant: Ms. Manuri Senarathna
- Contact: manuri.s@sliit.lk
- Current MPhil Progress:
✔ Completed MPhil Application
✔ Completed Interview
Computer Vision Based Gait and Gesture Analysis
A variety of movement disturbances, including atypical gait, upper-limb movements, and postural control, are also important as early signs of autism. Atypical gait is defined as a style of walking that deviates from the normal pattern, and researchers have tried many different variables to test the abnormal gait patterns of children with ASD. Most studies have used basic gait measurements, kinematics, kinetics, or a combination of these. Limited research has been conducted on the link between infant motor skills and autism, and much of the literature tends to provide qualitative descriptions of gait and motion based on clinicians' observations. Thus, it is imperative to quantify these descriptions with real-time measuring tools, especially starting as early as 6 months of age. Therefore, automated tools for quantitative gait and motion analysis have become vital for assessing pathologies manifested by atypical motor behaviors. This research analyzes the gesture and gait patterns of children with autism.
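One basic quantitative gait measurement is stride time, which can be estimated from the trajectory of a single body keypoint produced by a pose estimator. The sketch below is a hedged illustration under that assumption: it finds peaks in a synthetic ankle trajectory and converts the peak spacing into seconds; a real pipeline would add smoothing and outlier handling:

```python
import numpy as np

# Hedged sketch: estimating average stride time from the vertical trajectory
# of an ankle keypoint (assumed to come from a pose estimator).

def mean_stride_time(ankle_y, fps):
    """Mean peak-to-peak spacing of the trajectory, in seconds."""
    peaks = [i for i in range(1, len(ankle_y) - 1)
             if ankle_y[i] > ankle_y[i - 1] and ankle_y[i] >= ankle_y[i + 1]]
    if len(peaks) < 2:
        return None  # not enough gait cycles observed
    return float(np.mean(np.diff(peaks))) / fps

# Synthetic trajectory: one gait cycle every 25 frames at 25 fps -> 1.0 s
t = np.arange(125)
trajectory = np.sin(2 * np.pi * t / 25)
print(mean_stride_time(trajectory, fps=25))  # 1.0
```

Stride-time variability across many cycles, rather than the mean alone, is the kind of kinematic feature the literature above compares between groups.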
- Area of research: Deep Learning, Computer Vision, Digital Signal Processing, Robotics, Gait and Gesture Pattern Recognition
- Supervisor: Prof. Chandimal Jayawardene (primary), Dr. Pradeepa Samarasinghe, Dr. Lasantha Senevirathne
- External Supervisor: Dr. Pratheepan Yogarajah
- Research Assistant: Mr. Gagan Hashentha Silva
- Contact: gagan.s@sliit.lk
Socially Assistive Robot as an Early ASD Diagnosis Agent
About 1 in every 160 children globally has Autism Spectrum Disorder (ASD). ASD is a developmental disability characterized by social, emotional, and communication challenges. Recent advancements in the research domain of socially assistive robots have shown a promising direction for using robots to help children with ASD. A socially assistive robot can be built with capabilities such as games to engage children, collect data, and identify the risk of developing ASD, thus enabling early interventions to improve their condition. Further, recent research has shown that socially assistive robots can also be successfully used to enhance the social skills of children with ASD. In this MPhil project, an already available programmable robot will be developed into a socially assistive robot to perform the above-mentioned functions. The primary focus will be on early detection of ASD-related behavior, although the same robot may be extended as a therapeutic agent as well.
Research Question
How to develop an early screening protocol to facilitate the identification of autism in children from rural areas?
Sub Research Questions
- How to develop the robot-based activities for the screening protocol?
- What criteria should be considered in developing activities that trigger the relevant ASD traits for observation?
- How can the robot decide which activities are suitable, according to the subject's behaviour?
- How to assist in the final screening of autistic traits in children?
Objectives
- Develop a reinforcement-learning-based dynamic activity selection system for an efficient robot-based ASD diagnosis protocol
- Develop robot-based diagnosis activities to be performed with the subject according to ADOS standards, to trigger the relevant ASD traits for observation
- Conduct ethical trials with ASD subjects to develop the optimal robot-based diagnosis protocol
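A minimal way to realize the reinforcement-learning activity selection named in the first objective is an epsilon-greedy bandit: the robot mostly repeats the activity that has engaged the child best so far, but occasionally explores the others. The activity names and the engagement "reward" below are hypothetical, and a real system would condition on the subject's observed behaviour:

```python
import random

# Hedged sketch: epsilon-greedy selection over robot-administered activities.
# Activity names and the engagement reward are hypothetical placeholders.

class ActivitySelector:
    def __init__(self, activities, epsilon=0.2):
        self.epsilon = epsilon
        self.values = {a: 0.0 for a in activities}   # mean observed reward
        self.counts = {a: 0 for a in activities}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.values))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, activity, reward):
        self.counts[activity] += 1
        n = self.counts[activity]
        # incremental mean of observed engagement rewards
        self.values[activity] += (reward - self.values[activity]) / n

random.seed(0)
selector = ActivitySelector(["imitation_game", "name_call", "joint_play"])
for _ in range(100):
    a = selector.choose()
    # pretend "joint_play" engages this child best
    reward = 1.0 if a == "joint_play" else 0.3
    selector.update(a, reward)
print(selector.counts)
```

Over many trials the counts concentrate on the activity with the highest observed engagement, while exploration keeps testing the alternatives.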
System Overview Diagram
NAO V6 Humanoid Robot
- Area of research: Robotics, Machine Learning
- Supervisor: Prof. Koliya Pulasinghe (primary), Prof. Chandimal Jayawardena
- External Supervisor:
- Research Assistant: Mr. Nadun Ranasinghe
- Contact: nadun.r@sliit.lk
- Current MPhil Progress:
✔ Completed MPhil Application
✔ Completed Interview
✔ Completed Initial MPhil Presentation
Detecting Restricted and Repetitive Behaviors
When a subject carries out a single action, or adheres to a specific sequence of actions (routine-type behavior), for long periods of time, this is known as Restricted and Repetitive Behavior. Key diagnostic manuals for psychological disorders used globally by psychiatrists, such as the Diagnostic and Statistical Manual (DSM) and the Classification of Mental and Behavioural Disorders (ICD), state that Restricted and Repetitive Behaviors (RRBs) are one of the key features considered when diagnosing a child with autism. The benefits of developing a screening tool for the early detection of RRBs are: early detection helps to start early interventions; automated screening helps overcome the scarcity of expertise by letting specialists focus on critical cases; it enables reaching communities that are unaware of autistic symptoms due to varying cultural beliefs; and it helps achieve the goal of Screen Early, Screen All, and Screen Often.
Research Objectives
- Developing a model to analyze and recognize children's actions.
- Developing a model to detect which actions are repetitive and which are not.
- Developing a model to analyze each repetitive action, or multiple repetitive actions, performed by a child with autism.
- Identifying special features through the developed model, such as the number of repetitions, the duration of each repetition, periodic and aperiodic analysis of the repetitions, unique repetitive actions, and motor movement patterns during the repetition.
- Comparing the differences and commonalities of repetitive actions performed by typical children and children with autism.
- Comparing differences in the repetitive actions performed by children with autism across age groups.
- Comparing how cultural variations affect restricted and repetitive behaviors in children with autism.
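Features such as repetition period and count can be recovered from a one-dimensional motion signal by autocorrelation: a repetitive movement produces a peak at the lag equal to its period. The sketch below illustrates this on a synthetic signal; a real system would first derive the signal from tracked body keypoints:

```python
import numpy as np

# Hedged sketch: estimating the period of a repetitive movement from a 1-D
# motion signal (e.g. per-frame wrist displacement) via autocorrelation.
# The first local maximum beyond lag zero gives the repetition period.

def repetition_period(signal):
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..len-1
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag
    return None  # no repetition detected

# Synthetic hand-flapping-like signal: one repetition every 20 frames
frames = np.arange(200)
motion = np.sin(2 * np.pi * frames / 20)
period = repetition_period(motion)
print(period, "frames per repetition")  # 20 frames per repetition
```

Given the video frame rate, the period converts directly to the duration of each repetition, and the signal length divided by the period gives the repetition count, two of the features listed in the objectives.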
System Overview Diagram
- Area of research: Computer Vision, Deep Learning, Signal Processing
- Supervisor: Dr. Pradeepa Samarasinghe (primary), Dr. Lasantha Senevirathne
- External Supervisor: Michela Papandrea, Prof. Alessandro Puiatti, Dr. Dulangi Dahanayake, Dr. Swarna Wijethunga
- Research Assistant: Mr. Nushara Wedasinghe
- Contact: nushara.w@sliit.lk
- Current MPhil Progress:
✔ Completed MPhil Application
✔ Completed Interview
✔ Completed Initial MPhil Presentation
Facial emotion analysis of autistic children
It has been found that there are significant differences in the social interaction of children with ASD. Emotion expression and emotion recognition play key roles in social interaction. Children with autism have shown poor emotion expression as well as poor recognition of others' emotions. This research aims to develop an automated tool to identify deficits in both the emotion expression and recognition of children with autism. Automated analysis of smiles, imitation of facial expressions, and responsiveness would signal an early indication of autistic symptoms.
Research Question
How to detect facial emotion expressions in ASD children?
Sub Research Questions
- How to develop a novel algorithm to predict facial emotion expressions in ASD children?
- How to develop an ensemble-based model to predict facial emotion expressions?
- What are the different techniques used to predict facial emotion expressions?
- Can existing algorithms be optimized using optimization techniques?
- How to overcome illumination and pose variances when detecting facial expressions?
- How do facial emotion expressions vary with age?
- How to collect data from typical and atypical children?
Objective of the study
To develop a novel screening tool to detect facial emotion expressions in ASD children and to identify how facial emotion expressions change with children's age.
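The simplest way to combine several base emotion classifiers into the ensemble model mentioned in the sub-questions is majority voting over their per-frame predictions. The sketch below is an illustrative stand-in; the labels and the number of base models are hypothetical:

```python
from collections import Counter

# Hedged sketch: majority-vote ensemble over per-frame emotion predictions
# from several base classifiers. Labels and models are illustrative.

def ensemble_vote(predictions):
    """predictions: list of labels, one per base classifier, for one frame."""
    counts = Counter(predictions)
    label, _ = counts.most_common(1)[0]
    return label

frame_predictions = ["happy", "happy", "neutral"]   # three base models
print(ensemble_vote(frame_predictions))  # happy
```

Weighted voting, where each classifier's vote is scaled by its validation accuracy, is a common refinement of this scheme.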
Project Mindmap
- Area of research: Computer Vision, Deep Learning
- Supervisor: Dr. Pradeepa Samarasinghe (primary), Dr. Anuradha Karunasena
- External Supervisor:
- Research Assistant: Ms. Madhuka Nadeeshani
- Contact: madhuka.n@sliit.lk
Gaze and Attention Analysis for the early detection of Autism Spectrum Disorder (ASD)
Children with ASD have shown variant behaviors in eye contact, disengagement of visual attention, visual tracking, and social interest and affect, in comparison to typically developing (TD) children. These social cues, gaze and attention in particular, can be recorded in greater detail and at an early age, which provides more opportunities for the diagnosis of ASD and early intervention. Although there have been a few research studies in developed countries using gaze patterns for autism screening, the devices and technologies they used are not affordable in Sri Lanka. This research area attempts to find affordable techniques for evaluating the gaze and attention of children with autism.
Research Question
How to detect atypical traits of gaze and attention in children with ASD?
Sub Research Questions
- How to develop a head pose estimation model targeted at children?
- How can the atypical traits of gaze and attention in autistic children be triggered?
- How to analyse gaze and head pose within a specific task for autistic children?
- How to derive an ensemble model that produces an overall prediction based on multiple tasks?
Objectives of the study
- Development of an algorithm optimized for the reliable estimation of head pose of children in a video.
- Development of a model to accurately identify the direction of gaze in children, incorporating head pose and eye gaze estimation.
- A comprehensive analysis of gaze and attention for individual tasks which trigger atypical traits in autistic children.
- Development of an ensemble model to screen children for ASD based on their patterns of gaze and attention.
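Fusing head pose with eye gaze, as the second objective describes, can be reduced in its simplest form to adding a pupil-offset correction to the head yaw and thresholding the result into coarse gaze directions. The sketch below is a hedged illustration; the 30-degree deflection scale and the threshold are placeholder values, not calibrated constants:

```python
# Hedged sketch: combining a head-pose yaw estimate with a pupil offset to
# classify coarse gaze direction. Angle values are illustrative placeholders.

def gaze_direction(head_yaw_deg, pupil_offset_ratio, threshold_deg=15.0):
    """pupil_offset_ratio: horizontal pupil position within the eye box,
    -1 (far left) .. 0 (centre) .. +1 (far right)."""
    # crude fusion: treat full pupil deflection as roughly +/-30 degrees
    gaze_yaw = head_yaw_deg + 30.0 * pupil_offset_ratio
    if gaze_yaw < -threshold_deg:
        return "left"
    if gaze_yaw > threshold_deg:
        return "right"
    return "centre"

print(gaze_direction(head_yaw_deg=5.0, pupil_offset_ratio=0.6))  # right
```

A learned fusion model would replace the fixed linear combination, but the same two inputs, head pose and pupil position, remain the signal.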
Project Mindmap
- Area of research: Computer Vision, Deep Learning
- Supervisor: Dr. Pradeepa Samarasinghe (primary), Dr. Anuradha Karunasena
- External Supervisor: Dr. Bryan Gardiner (Affiliation: Ulster University, UK), Dr. Pratheepan Yogarajah (Affiliation: Ulster University, UK)
- Research Assistant: Ms. Vidushani Dhanawansa
- Contact: vidushani.d@sliit.lk
- Current MPhil Progress:
✔ Completed MPhil Application
✔ Completed Interview
Provenance Preserving Scientific Data Store for Research Data
The CSAAT project aims to detect autism early in young children by applying machine learning / deep learning techniques to anomalies in emotion, gaze, movement, and speech. This requires the collection and storage of large volumes of video, speech, and image data, along with corresponding metadata. Metadata could be features automatically extracted from the data, or annotations made by researchers. Further, it is required to perform custom queries that retrieve certain subsets of this data on demand, using the above metadata as search parameters. This section addresses research questions such as what metadata could (and should) be extracted from the source data, how best to design the annotation interface, and how the metadata could be stored and indexed.
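The store-and-query workflow described above can be sketched with the standard library's sqlite3 module: clip metadata and annotations go into a table, and custom queries select subsets by metadata. The schema and field names below are hypothetical, chosen only for illustration:

```python
import sqlite3

# Hedged sketch: a minimal metadata store for media clips, illustrating
# annotation storage and on-demand retrieval by metadata. The schema and
# field names are hypothetical, not the project's actual design.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE clips (
        clip_id    TEXT PRIMARY KEY,
        modality   TEXT,            -- 'video', 'speech', or 'image'
        age_months INTEGER,
        annotation TEXT
    )""")
conn.executemany(
    "INSERT INTO clips VALUES (?, ?, ?, ?)",
    [("c001", "video", 24, "joint_attention"),
     ("c002", "speech", 30, "echolalia"),
     ("c003", "video", 48, "repetitive_motion")])

# custom query: all video clips of children under 36 months
rows = conn.execute(
    "SELECT clip_id FROM clips WHERE modality = ? AND age_months < ?",
    ("video", 36)).fetchall()
print(rows)  # [('c001',)]
```

At the project's scale, the media files themselves would live in object storage, with the relational store indexing only metadata and file references.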
- Area of research: Computer Vision, Deep Learning
- Supervisor: Dr. Shyam Reyal (primary), Dr. Pradeepa Samarasinghe, Dr. Anuradha Karunasena
- Research Assistant:
- Contact:
Federated Learning Models for Distributed Mobile Computing
The CSAAT project aims to detect autism early in young children by applying machine learning / deep learning techniques to anomalies in emotion, gaze, movement, and speech. The major objective of this study is to build a mobile app capable of handling various machine learning algorithms optimized for the mobile environment in the most suitable way, according to the available resources of the device and its network capabilities. The ultimate goal of this section is to implement a mobile app for the general public through which concerned parties (parents, carers, etc.) can upload a video, voice clip, or image of a child for preliminary autism screening in multiple ways, powered by the AI-based recognition systems implemented by the CSAAT project team. This section focuses on the mobile application endpoint of the CSAAT project.
Research Questions
Given a model and a specific edge device, can the model be deployed on the edge device itself, or must it be deployed on a cloud service in order to obtain the expected prediction or output?
Sub Research Questions
- How does benchmarking work, and what parameters are tested by benchmark tests?
- What does the result of a benchmark test indicate about the capability of a mobile device?
- How can benchmark tests be compared with each other?
- How to use benchmarking to decide whether a particular model can be deployed on a mobile device?
- Can an existing neural network be optimized using optimization techniques and converted to a mobile-friendly architecture?
- Given the use case of a neural network, what is the best way to optimize it for the mobile environment?
- What are the available cloud services for deploying machine learning models, and what are their capabilities, limitations, pros, and cons?
- What is the best way to run deep learning model predictions in a mobile app?
Objective of the study
The major objective of this study is to build a mobile app capable of handling various machine learning algorithms optimized for the mobile environment in the best way according to the available resources of the device and network capabilities.
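The benchmark-driven on-device versus cloud decision posed in the research question can be reduced, in its simplest form, to comparing the model's footprint and measured latency against the device's resources and a latency budget. The sketch below is a hedged stand-in; all thresholds and numbers are hypothetical:

```python
# Hedged sketch: deciding between on-device and cloud inference from simple
# benchmark numbers. All thresholds and measurements here are hypothetical.

def deployment_target(model_mb, device_free_mb,
                      on_device_latency_ms, latency_budget_ms):
    fits = model_mb <= 0.5 * device_free_mb     # leave headroom for the app
    fast_enough = on_device_latency_ms <= latency_budget_ms
    return "on-device" if (fits and fast_enough) else "cloud"

# e.g. a 40 MB quantized model on a phone with 512 MB free RAM
print(deployment_target(40, 512, 120, 200))   # on-device
print(deployment_target(400, 512, 120, 200))  # cloud
```

A fuller decision would also weigh network round-trip time and availability, since the cloud fallback only meets the latency budget when connectivity is good.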
Project Mindmap
- Area of research: Network Optimization and Acceleration, Mobile Computing, Distributed Computing, Machine Learning, Federated Learning
- Supervisor: Dr. Nuwan Kodagoda (primary), Dr. Dharshana Kasthurirathna, Dr. Shyam Reyal, Dr. Pradeepa Samarasinghe
- Research Assistant: Mr. Asiri Gawesha Lindamulage
- Contact: asiri.l@sliit.lk
Joint Attention Analysis
The main analysis in the CSAAT project involves videos of children for the early detection of Autism Spectrum Disorder (ASD). Monitoring and quantifying children's ability to interact with parents and professional therapists is often essential in identifying potential issues in their developmental growth. In this research, we plan to analyse the interactions between a child and the mother/examiner. This involves the use of image pre-processing techniques along with deep learning techniques, such as graph neural networks, to analyse the behaviours of the child as well as his/her interaction with others. The outcomes of this research may be useful in the early identification and continuous monitoring of children's development goals and social interaction expectations.
Research Question
How to detect and estimate the joint attention of children using video data of their interactions with adults?
Sub Research Questions
- How to develop a GNN-based action recognition model that can be used to detect child actions?
- What are the possible approaches to developing human-human interaction detection models?
- How to extract and improve human skeleton data using RGB images?
- How to develop a GNN-based child-adult interaction recognition model for detecting and estimating the child's joint attention?
Objectives of the study
- Conducting a comprehensive literature review of currently developed human-human interaction recognition systems, with an emphasis on deep learning methods.
- Analysing available child action/behavior datasets and looking into the possibility of developing a dataset.
- Developing a robust graph neural network (GNN) based action recognition model for detecting child actions/behaviors.
- Conducting a comprehensive literature review of graph neural network architectures which can be utilized in this research.
- Developing a GNN-based adult-child interaction recognition model in order to assess a child's joint attention.
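The GNN building block underlying the models above treats the skeleton as a graph: joints are nodes, bones are edges, and each layer propagates joint features along the edges. The sketch below is a hedged toy example with a synthetic 5-joint skeleton and random weights; a real model (e.g. in the ST-GCN family) would stack many such layers over time:

```python
import numpy as np

# Hedged sketch: one graph-convolution step over a toy 5-joint skeleton.
# The skeleton, features, and weights are synthetic placeholders.

# 5 joints: head(0)-neck(1), neck-left hand(2), neck-right hand(3), neck-hip(4)
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
A = np.eye(5)                      # adjacency matrix with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)
A_norm = A / deg[:, None]          # simple row normalization

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))        # per-joint input features (e.g. x, y, conf)
W = rng.normal(size=(3, 4))        # learnable weight matrix (random here)

H_next = np.maximum(0.0, A_norm @ H @ W)   # ReLU(A_norm @ H @ W)
print(H_next.shape)  # (5, 4): new 4-dim feature per joint
```

For child-adult interaction, the two skeletons can be joined into one graph with cross-person edges, so that message passing also captures the interaction between the two bodies.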
- Area of research: Deep Learning, Computer Vision
- Supervisor: Dr. Pradeepa Samarasinghe (primary), Dr. Dharshana Kasthurirathna
- External Supervisor: Dr. Charith Abhayaratne (Affiliation: Sheffield Hallam University, UK)
- Research Assistant: Mr. Sanka Mohottala
- Contact: sanka.m@sliit.lk