International Journal of Computer Science Engineering
and Information Technology Research (IJCSEITR)
ISSN (P): 2249-6831; ISSN (E): 2249-7943
Vol. 9, Issue 1, Jun 2019, 11-14
© TJPRC Pvt. Ltd.





IMPLEMENTATION OF STRESS DETECTION SYSTEM 

MRUNAL SAWANT 1, SHAUNAK PAI KANE 2, SHAILESH HALDANKAR 3,
SANJANA DIAS 4 & PRADNYA PRABHU KHOLKAR 5

1 Assistant Professor, Agnel Institute of Technology and Design, Bardez, Goa, India
2,3,4,5 Student, Agnel Institute of Technology and Design, Bardez, Goa, India

ABSTRACT 

In the proposed system, students are continuously monitored through a camera. Facial expressions of every
student are captured and processed further. Based on the expressions and the processed data, the system determines
whether the student is stressed. A notification is then sent to the teacher, counsellor or person in charge so that the
student can be guided. The proposed method helps the student cope with personal or other psychological problems,
perform well academically and build self-confidence, and helps the institution prepare and train quality students and
achieve good results.

KEYWORDS: Haar Cascade, Convolutional Neural Network, Viola Jones, Classifier & Random Forest 

Received: Feb 11, 2018; Accepted: Mar 01, 2019; Published: Mar 14, 2019; Paper Id.: IJCSEITRJUN20192 

1. INTRODUCTION 

The focus of this project is to identify whether a student is stressed. Stress affects a person like a disease,
yet due to a lack of awareness it is mostly ignored. It has serious outcomes and at times can even be fatal,
contributing to an increase in the number of suicide cases. In this paper we present a method to determine
whether a person is stressed by continuously determining the person's emotion over a particular duration of
time. We first classify the input image into one of seven expressions (sad, happy, angry, disgust, surprise,
fear and neutral) using a Haar Cascade classifier and a Convolutional Neural Network.

Facial expressions of every student are captured and processed, and students are monitored
continuously. Depending on the expressions, the system decides whether the student is stressed. A notification
is sent to the teacher, counsellor or person in charge to guide the student. 1050 images from the database
created by us, together with the Kaggle database, are divided into two parts in a 4:1 proportion for training and
testing. The simplified block diagram is shown in Figure 1.

2. RELATED WORKS 

Much research has been done on stress detection. One proposed method detects stress using
EEG signals and reduces it by introducing interventions into the system, but it is highly disadvantageous
due to its high cost [1]. Other studies employ physiological data, such as electrodermal, cardiovascular, and muscular
activity, to measure participants' stress, and instruments such as questionnaires and scales can also be used to estimate
a person's stress [2]. Further work monitors mental stress patterns to decide how clinical intervention could best be
applied to treat a stressed person, measuring galvanic skin response (GSR) with a GSR sensor system to detect
stress [3]; however, this method is not accurate. In our project, we determine whether an individual is stressed
using images of the students, based on Haar features and Convolutional Neural Networks.



The proposed method does not use any invasive methods or sensors, unlike the other approaches.

3. DATASET 

Our experiment is conducted on a student facial expression database assembled by the group members by taking
photos at Agnel Institute of Technology and Design, which consists of 1050 images. We also used images from the Kaggle
online repository, which consists of 48x48 pixel grayscale face images with 28709 training and 3589 testing images.
The number of images corresponding to each of the 7 expressions (happy, fear, disgust, anger, neutral, sadness and
surprise) is shown in Figure 2. These images are then converted to grayscale for faster processing.
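As an illustration of the 4:1 training/testing split described above, the following is a minimal loading sketch; the directory layout, file names and helper names are assumptions, not taken from the paper.

import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

EMOTIONS = ["happy", "fear", "disgust", "anger", "neutral", "sadness", "surprise"]

def load_dataset(root_dir, size=(48, 48)):
    """Load grayscale face images arranged as root_dir/<emotion>/<file>.png (assumed layout)."""
    images, labels = [], []
    for label, emotion in enumerate(EMOTIONS):
        folder = os.path.join(root_dir, emotion)
        for name in os.listdir(folder):
            img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            images.append(cv2.resize(img, size))
            labels.append(label)
    return np.array(images), np.array(labels)

X, y = load_dataset("dataset")  # hypothetical local folder
# 4:1 proportion for training and testing, as described in the paper
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)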




Figure 1: Simplified block diagram (Image -> Pre-processing -> Segmentation -> Feature Extraction -> Classifier -> Emotion Detector -> Database -> Stress Detector)


4. PROPOSED SYSTEM

4.1. Input Image 


We use a webcam to feed images to our model for processing. The 5 MP webcam takes a picture
every three minutes, which is then processed further. The raw image is 1280x720 pixels.
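A minimal capture loop along these lines could look as follows; the three-minute interval and 1280x720 resolution are from the paper, while the file naming is an assumption.

import time
import cv2

CAPTURE_INTERVAL_S = 3 * 60          # one picture every three minutes, as in the paper

cap = cv2.VideoCapture(0)            # default webcam
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok, frame = cap.read()           # raw 1280x720 BGR frame
    if ok:
        cv2.imwrite(f"capture_{int(time.time())}.png", frame)
    time.sleep(CAPTURE_INTERVAL_S)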


4.2. Pre-Processing 

Pre-processing operates at the lowest level of abstraction on the input image. The image goes through
pre-processing to reduce distortion so that we have a good-quality image to process, and it is enhanced. The image
is then converted to grayscale, since OpenCV works better with this form of image. Since the image is very
large (1280x720 pixels), it is resized to 256x256 for faster processing and computation.
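A sketch of this step with OpenCV follows; the exact enhancement used by the authors is not specified, so histogram equalization is shown only as an illustrative choice.

import cv2

def preprocess(frame):
    """Convert a raw 1280x720 BGR frame to an enhanced 256x256 grayscale image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale for faster processing
    gray = cv2.equalizeHist(gray)                    # simple contrast enhancement (assumed)
    return cv2.resize(gray, (256, 256))              # downscale as described in the paper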

4.3. Segmentation 

To identify a face in an image we need to find key attributes of the face such as the eyes, nose and mouth. This is
accomplished using the Viola-Jones algorithm; the machine is already trained with data sets to identify these attributes.
Once the face is detected, that part of the image is cropped to 48x48 and processed further for segmentation. The facial
attributes are then identified and cropped separately. Only one eye is segmented to avoid redundancy of data, since the
other eye will be a mirror image of it.
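A sketch of Viola-Jones face and eye detection using the Haar cascades shipped with OpenCV; the detection parameters below are assumptions, not values from the paper.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def segment(gray):
    """Return a 48x48 face crop and one eye crop, or None if no face is found."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face_region = gray[y:y + h, x:x + w]
    face = cv2.resize(face_region, (48, 48))
    # only one eye is kept, since the other is roughly its mirror image
    eyes = eye_cascade.detectMultiScale(face_region)
    eye = None
    if len(eyes) > 0:
        ex, ey, ew, eh = eyes[0]
        eye = face_region[ey:ey + eh, ex:ex + ew]
    return face, eye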




4.4. Feature Extraction 

To identify the emotion of the person we extract features from the different attributes of the face.
For example, the position of the eyebrows varies with emotion: when a person is angry they are raised high, and in a
neutral state they remain in their original position. These different positions of the various facial attributes help us
recognize different emotions. We make use of cross-correlation, where the raw image is compared with the faces of
different persons and the similarity is measured accordingly.
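The paper does not give the exact correlation formula; one common way to measure this similarity is normalized cross-correlation, sketched below with OpenCV's template matching as an assumed stand-in.

import cv2

def similarity(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized grayscale patches.

    Returns a value in [-1, 1]; values close to 1 indicate very similar patches.
    """
    result = cv2.matchTemplate(patch_a, patch_b, cv2.TM_CCOEFF_NORMED)
    return float(result[0, 0])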

4.5. Classifier 

The classification decision is taken here. The classifier has seven classes, namely Anger,
Disgust, Fear, Happy, Sad, Neutral and Surprise. It takes its input from the feature-extraction block and classifies the
input data; the data are grouped accordingly, and the grouping decides which class the input image belongs to.
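The paper names a Convolutional Neural Network but does not publish its architecture; the small Keras model below, for 48x48 grayscale inputs and seven classes, is purely an illustrative sketch.

from tensorflow.keras import layers, models

NUM_CLASSES = 7  # anger, disgust, fear, happy, sad, neutral, surprise

def build_model():
    """Small illustrative CNN; the authors' actual architecture is not published."""
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # cross-entropy loss, as in Section 5
                  metrics=["accuracy"])
    return model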

4.6. Emotion Detection 

The emotion detector program detects the emotion of an individual from the input image. Since the network is
already trained on the database, when a new image is given as input the program identifies the emotion and stores it in
the database under the profile of each student, from time to time.
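Using the hypothetical model sketched above, the prediction step could look like this; the label order is assumed to match the order used during training.

import numpy as np

EMOTIONS = ["happy", "fear", "disgust", "anger", "neutral", "sadness", "surprise"]

def detect_emotion(model, face_48x48):
    """Return the predicted emotion label for one 48x48 grayscale face crop."""
    x = face_48x48.astype("float32") / 255.0          # scale pixels to [0, 1]
    x = x.reshape(1, 48, 48, 1)                       # batch of one
    probs = model.predict(x, verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]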

4.7. Database 

The database holds profiles of all the students in the class. Once emotions are detected, they are stored in the
database at every time interval under the profile of the particular individual.
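The paper does not name a specific database; one simple way to store these per-student emotion logs is an SQLite table, sketched below with an assumed file name and schema.

import sqlite3
import time

conn = sqlite3.connect("stress.db")    # hypothetical file name
conn.execute("""
    CREATE TABLE IF NOT EXISTS emotion_log (
        student_id  TEXT NOT NULL,     -- profile of the particular individual
        emotion     TEXT NOT NULL,     -- one of the seven classes
        recorded_at INTEGER NOT NULL   -- Unix timestamp of the capture
    )""")

def record_emotion(student_id, emotion):
    conn.execute("INSERT INTO emotion_log VALUES (?, ?, ?)",
                 (student_id, emotion, int(time.time())))
    conn.commit()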

4.8. Stress Detector 

Depending on the emotions of an individual recorded over the entire day, a decision is made. If the percentage of
particular emotions crosses a certain threshold, the software determines that the individual is stressed.
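The paper does not state which emotions count as stress indicators or the threshold value; the sketch below assumes negative emotions and an illustrative 50% threshold.

# Assumption: sadness, fear and anger are treated as stress indicators; the 0.5
# threshold is illustrative only, since the paper does not publish its value.
STRESS_EMOTIONS = {"sadness", "fear", "anger"}
STRESS_THRESHOLD = 0.5

def is_stressed(day_emotions):
    """day_emotions: list of emotion labels recorded for one student over a day."""
    if not day_emotions:
        return False
    stressed = sum(1 for e in day_emotions if e in STRESS_EMOTIONS)
    return stressed / len(day_emotions) > STRESS_THRESHOLD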


4.9. Notify 

If the individual is stressed then the higher authority or the family members of the individual will be notified. 





4.10. Face Recognition 

This block is used to identify the individual so that the detected emotion is stored in the database under the
correct profile.
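The paper does not specify how individuals are recognized; one commonly used option is OpenCV's LBPH recognizer (available in the opencv-contrib package), sketched below as an assumed implementation, not the authors' method.

import cv2
import numpy as np

# Requires the opencv-contrib-python package for the cv2.face module.
recognizer = cv2.face.LBPHFaceRecognizer_create()

def train_recognizer(face_crops, student_ids):
    """face_crops: list of grayscale face images; student_ids: matching integer ids."""
    recognizer.train(face_crops, np.array(student_ids))

def identify(face_crop):
    """Return the id of the closest matching student profile."""
    student_id, _confidence = recognizer.predict(face_crop)
    return student_id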

5. RESULTS 

The loss function used in this case is the cross-entropy loss [6]. The results obtained are summarized below.
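For reference, the categorical cross-entropy loss over the C = 7 emotion classes takes the standard form below (notation ours, not from the paper), where y_c is the one-hot ground-truth indicator and \hat{y}_c the predicted class probability:

L = -\sum_{c=1}^{C} y_c \log \hat{y}_c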


No. of Images in Dataset | No. of Epochs in Training | Loss % | Accuracy %
36886 (Kaggle)           | 5000                      | 0.48   | 99.7

6. CONCLUSIONS 

In the past there have been many efforts in this field to find whether an individual is stressed, using methods
such as EEG signals, blood pressure measurement, GSR sensor systems and questionnaires. The method proposed by us
does not use any sensors or invasive methods for determining stress, but makes the decision based only on the facial
emotions of the individual. A questionnaire can also be used along with this method for improved efficiency. This will
help to determine the stress of an individual and thus help them cope through various measures, improving academic
performance in the case of students or efficiency in the workplace in the case of employees.

REFERENCES 

1. M. I. Rashid, M. Hasan, N. Yeasmin, C. Shahnaz, S. A. Fattah, W. P. Zhu & M. O. Ahmed, "Emotion Recognition based on Vertical Cross Correlation Sequence of Facial Expression Images", Bangladesh University of Engineering and Technology, Dhaka, Bangladesh.

2. Paul Viola & Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features".

3. Kamboj, R., & Rana, V. (2013), "Implementation of Attack Data Collection Incorporating Multi Level Detection Capabilities Using Low Interaction Honeypot", Science and Engineering (IJCSE), 2(4), 27-36.

4. Kruti Goyal, Kartikey Agarwal & Rishi Kumar, "Face Detection and Tracking using OpenCV".

5. Mehdi Ghayoumi, "Real Emotion Recognition by Detecting Symmetry Patterns with Dihedral Group", Artificial Intelligence Lab, Computer Science Department, Kent State University, Kent, Ohio, USA.

6. Hiremath, B., & Prasannakumar, S. (2015), "Automated Evaluation of Breast Cancer Detection Using SVM Classifier", International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR), 5(1), 11-20.

7. Leo Breiman, "Random Forests", Statistics Department, University of California, Berkeley.

8. Zhilu Zhang & Mert R. Sabuncu, "Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels".

