Developing an Emotion Detection Media Player Using AI Algorithms
Summary
This project details the development of an emotion detection media player using three key algorithms: edge detection, face detection, and Support Vector Machine (SVM). The system aims to automatically generate music playlists based on the user's emotional state, determined through facial expression analysis captured via webcam. The project's introduction establishes the background, aims, objectives, research questions, significance, and limitations, outlining the system's architecture and the role of each algorithm. The literature review explores existing research on emotion detection and facial expression-based music players, including the application of SVM. The methodology chapter describes the research philosophy, approach, design, data collection, and analysis methods. The data analysis section provides coding examples for face detection in Python and edge detection algorithms. The project concludes with a discussion of the findings, recommendations, and a comprehensive overview of the system's functionality, emphasizing its potential to enhance the user's music listening experience by dynamically adjusting playlists to match their current emotional state. The project highlights the significance of AI in the evolution of music technology and its impact on human-computer interaction.

Emotion detection media player using three different algorithms

Table of Contents
Chapter 1: Introduction
1.1 Background of the paper
1.2 Aim of the research
1.3 Objectives of the research
1.4 Research questions
1.5 Significance of the paper
1.6 Problem statement
1.7 Research limitation
1.8 Summary
1.9 Structure of the research project
Chapter 2: Literature review
2.1 Emotion detection of human
2.2 Facial expression based music player
2.3 Edge detection music player
2.4 Support Vector Machine or SVM
Chapter 3: Research Methodology
3.1 Introduction
3.2 Research Philosophy
3.3 Research Approach
3.4 Research design
3.5 Data collection process
3.6 Data analysis modes
Chapter 4: Data analysis
Face detection coding in Python using webcam
Edge detection algorithm
Chapter 5: Conclusion and Recommendation
References

Chapter 1: Introduction
1.1 Background of the paper
Music plays an important role in enhancing a person's life because it is a significant relaxation medium for music followers and listeners. Music technology has improved considerably, and listeners now use features such as local playback, multicast streaming, reverse play and other facilities. Listeners are satisfied with these features and play music according to their mood and behaviour. Audio feeling recognition provides lists of music suited to the various moods and emotions of the music lover; the received audio signal is classified into emotion categories by the audio feeling recognition module. An AN signal is used to explore some features of the audio, and MR is used to extract the important data from that signal. Listeners try to arrange their playlists according to their mood, but doing so consumes a great deal of time. Various music players provide different kinds of features, such as proper lyrics together with the singer's name. In this system, the playlist is arranged in response to the listener's emotions, saving the time otherwise spent curating playlists manually.
Facial expression and physical gesture are the best ways to express a person's emotion and mood. The system extracts the facial expression and, based on this expression, automatically generates a playlist that matches the listener's mood and behaviour. The system consumes less time, reduces hardware cost and removes memory overhead. Facial expression is classified into five categories: joy, surprise, excitement, anger and sadness. To extract the important, related data from an audio signal, a highly accurate audio extraction technique is used that takes less time. An emotion model is used to classify songs into seven types, such as unhappy, anger, excitement with joy, sad-anger, joy and surprise. The emotion-audio module combines the feeling extraction module with the audio feature extraction module, and this mechanism offers greater potential and better real-time performance than recent methods. Various kinds of approaches can be used to interact with such a system, such as the mouse or AI-based speech recognition. In this case, three algorithms are used: edge detection, face detection and SVM. The edge detection algorithm helps to reduce the computation needed to process a large image, while the face detection algorithm provides high accuracy in detecting faces and objects and is very fast.
The system can extract human emotion and mood and then interact with the user accordingly. It senses the user's facial expression and, based on that expression, arranges a playlist for them, so the user does not need to select songs manually according to their mood or emotion. Computers can now communicate with human beings by talking, reacting, extracting human emotion and automatically guessing a person's feelings. In recent years emotion detection has driven new technology for image processing, human-machine interaction and machine learning. Emotion detection now plays a vital role in neuroscience, computer science, medicine and many other fields, and it is important for making rational decisions and interacting on social media. This system can understand what the listener wants to hear according to their current state of mind.
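To make the role of the three algorithms concrete, the following is a minimal sketch in Python of the detection step of such a pipeline, assuming OpenCV (cv2) is installed; the bundled Haar cascade, the placeholder emotion label and the playlist names are illustrative and are not the project's actual implementation.

# Minimal sketch: capture a webcam frame, detect a face with a Haar cascade,
# and hand the result to a (separately trained) emotion classifier.
# The emotion label and playlist mapping below are placeholders.
import cv2

def detect_face(frame):
    """Return the first detected face region as a grayscale image, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]

def choose_playlist(emotion):
    """Map a predicted emotion label to a playlist name (illustrative only)."""
    playlists = {"joy": "upbeat", "sadness": "calm", "anger": "soothing",
                 "surprise": "fresh", "excitement": "energetic"}
    return playlists.get(emotion, "default")

cap = cv2.VideoCapture(0)          # open the default webcam
ok, frame = cap.read()
cap.release()
if ok:
    face = detect_face(frame)
    if face is not None:
        emotion = "joy"            # placeholder: replace with the SVM prediction
        print("Playing playlist:", choose_playlist(emotion))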
1.2 Aim of the research
The aim of the research is to understand the functioning of the three algorithms used to develop the emotion detection media player.
1.3 Objectives of the research
The objectives of the research are listed below:
To play songs automatically based on human emotions and feelings
To discuss the three algorithms used for detecting human emotion
To communicate with people and extract their gestures and state of mind
To discuss the impact of the three algorithms on the emotion detection media player
To develop a proper algorithm for recognising human emotion
To develop an appropriate system that identifies human gestures and emotions using suitable technology
1.4 Research questions
How can songs be played automatically according to the listener's mood?
What is the main purpose of the three algorithms in detecting human emotion?
How can people's emotions and gestures be extracted using the media player?
What are the impacts of the three algorithms on the emotion detection media player?
How can an appropriate algorithm for detecting emotion be developed?
How can an appropriate media player be developed to extract human emotion?
1.5 Significance of the paper
The significance of the emotion detection media player is that it looks at the user's emotion and uses it to drive the player. The system helps to play music automatically by understanding the user's emotion, which reduces computation time and the time spent searching for songs; memory overhead is also reduced. The system provides accurate and efficient results when detecting human gestures: it recognises the facial expression of the user and arranges a suitable playlist for them, focusing on the features of the detected emotion rather than the raw image. The listener does not have to choose songs from a playlist, maintain playlists or categorise songs according to their gestures, mental state and feelings. The SVM algorithm is used here for classification and regression analysis; it separates classes with a hyperplane. Images are acquired through computer devices and networks, typically as digital frames taken from video, and the SVM classifies the data according to its features and behaviour: the data is transformed and, based on the collected data set, boundaries are set between the possible outcomes. The second algorithm is edge detection. Edges are a basic element of any image, and an edge is a set of connected pixels; edge detection is one of the best techniques for extracting the edges of an image, and its output can be expressed as a two-dimensional function. The third algorithm used for emotion detection is face recognition. Its main advantage is that it identifies the actual emotion of a person from facial expressions such as joy, sadness and anger, and it is mainly responsible for reducing the computation time needed to detect a person's facial expression.
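To illustrate the hyperplane classification that the SVM performs, the following is a small sketch using scikit-learn on synthetic two-dimensional data; the points stand in for extracted facial and edge features and are not the project's real feature set.

# Toy illustration of SVM classification with a separating hyperplane.
# The two-dimensional points stand in for extracted features; real emotion
# features would be higher-dimensional.
import numpy as np
from sklearn.svm import SVC

# Synthetic two-class data: class 0 clustered near (1, 1), class 1 near (4, 4).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.5, (20, 2)),
               rng.normal(4.0, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear")      # linear kernel => a single separating hyperplane
clf.fit(X, y)

# The hyperplane is w . x + b = 0; w and b are learned from the data.
print("w =", clf.coef_[0], "b =", clf.intercept_[0])
print("prediction for (2, 2):", clf.predict([[2.0, 2.0]])[0])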
1.6 Problem statement
It is a difficult task for a music lover to create and organise playlists manually from a huge number of songs, and it is almost impossible to keep track of a large number of songs in a playlist. Songs that have not been played for a long time waste space, and the listener has to delete them manually. Listeners also have difficulty identifying a suitable song and playing it to match their changing moods. At present, people cannot change or update a playlist automatically in an easy way; the listener has to update it manually.
Moreover, the required playlist is not fixed; it changes frequently according to the listener's requirements, and there is currently no system that addresses these kinds of problems.
1.7 Research limitation
This research has some limitations in its methodology. Three algorithms are used to design the system, and there is scope to combine further algorithms to make the developed system more efficient; however, doing so increases the cost of the work. Processing each algorithm efficiently is also very time-consuming, which is a significant limitation of this research. With more time and budget, the research would be more effective and the improved system would give more accurate output.
1.8 Summary
Chapter 1 presents the background of the emotion detection media player. The aim of this research is to understand the functioning of the three algorithms used to develop the emotion detection media player. The main objective is to develop a new system that detects human emotion and, based on that emotion, generates song lists automatically. For this purpose the system uses three algorithms: the edge detection algorithm detects the edges of the digital image, the face detection algorithm detects the facial expression of the user, and SVM classifies the detected image according to its features and behaviour.
1.9 Structure of the research project
The project is structured into five chapters: Introduction, Literature Review, Methodology, Data Analysis and Findings, and Conclusion and Recommendations.

Chapter 2: Literature review
2.1 Emotion detection of human
According to Sarda et al. (2019), a new and improved music generation technology has been introduced in which a person's mood is detected from different sources such as audio, video, images and sensors. The mood is identified from facial expressions as well as the tone of speech, and physical activities can be recognised by the mobile phone the person carries. Given the large amount of data available, this computation is sufficient to determine the person's activities, and machine learning trained on this information is used to classify and predict the outcomes. Using such technologies to identify a person's mood and gestures is very advantageous: the media player continuously follows the user's listening habits and provides a playlist matched to the recognised mood, which is why it is also called a personalised playlist generator.
According to Rahman and Mohamed (2016), music plays an important role in human life: people listen to different kinds of music to lift their mood and divert their state of mind. The best way to capture human emotion is the facial expression, and facial expression recognition detects a person's emotion from their face; face recognition techniques help the system identify the user's emotion for this purpose. Well-known services such as Spotify offer mood- or emotion-based song lists, but the listener is not able to modify them. In the proposed approach, the playlist is based on the facial expression detected from the user's most recent emotion, and the listener then plays songs from this list according to their mood. The proposed system uses methodologies such as rapid application development and Android Studio, and the Affdex SDK is used for recognising facial expressions. The system provides a list of songs matched to the user's current emotion, giving the listener relevant songs.
2.2 Facial expression based music player
Kamble and Kulkarni (2016) describe a method of listening to music based on the user's current emotion, in which an automated system performs the task. An improved algorithm detects the current emotion of the user and plays suitable music, which is more cost-effective and less time-consuming than manually searching for and updating a playlist
based on the user's current emotion. The PCA algorithm and Euclidean distance are used to detect the facial expression of the user easily. An inbuilt camera captures the facial expression, which reduces both the time and the cost of the system design. The accuracy level of this system is about 84.82 per cent.
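A minimal sketch of the PCA plus Euclidean distance approach described above, assuming scikit-learn is available; the small random "face" vectors and labels are placeholders for real face images, not the data used by Kamble and Kulkarni.

# Sketch: emotion classification via PCA projection + nearest Euclidean distance.
# Random vectors stand in for flattened grayscale face images.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
train_faces = rng.random((10, 64 * 64))          # 10 flattened 64x64 "faces"
train_labels = ["joy", "sadness"] * 5

pca = PCA(n_components=5)
train_proj = pca.fit_transform(train_faces)      # project to a low-dimensional space

def classify(face_vector):
    """Return the label of the training face closest in Euclidean distance."""
    proj = pca.transform(face_vector.reshape(1, -1))
    distances = np.linalg.norm(train_proj - proj, axis=1)
    return train_labels[int(np.argmin(distances))]

print(classify(rng.random(64 * 64)))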
According to Noblejas et al. (2017), a new model identifies the emotion and mood conveyed by Filipino songs. The songs are classified according to their characteristics and the resulting lists are presented to listeners. Classifiers such as Naive Bayes, SVM and k-nearest neighbour are used to classify the songs, and the main aim of the researchers is to find the most suitable classifier for Filipino songs. To check the accuracy of the model, it is tested again with a different data set.
According to Iyer et al. (2017), music plays an important role in changing and diverting people's minds and particularly affects their emotions; it is more effective than other media such as video or images. When people feel low they listen to sad songs, and when they are happy they listen to happy songs, but they have to choose songs according to their current mood, which takes a lot of time when done manually. The researchers introduce an improved system called "EmoPlayer" to remove this problem and provide a better way of arranging such songs automatically based on the user's current emotion. The system captures an image of the person, detects the face using face detection methods and, after gathering this information, creates a suitable playlist to lift the listener's mood. It uses the Viola-Jones algorithm to detect the face and the Fisherfaces classifier to classify the emotion.
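The following sketch shows the general shape of that approach using OpenCV, which provides Viola-Jones detection through Haar cascades and a Fisherfaces recognizer in its contrib face module; the training images and labels here are synthetic placeholders, and this is not the EmoPlayer code itself.

# Sketch: Viola-Jones (Haar cascade) face detection + Fisherfaces emotion classification.
# Requires opencv-contrib-python for cv2.face; the training data below is synthetic.
import cv2
import numpy as np

# Fisherfaces needs at least two classes of equally sized grayscale images.
train_images = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(6)]
train_labels = np.array([0, 0, 0, 1, 1, 1])   # e.g. 0 = "happy", 1 = "sad"

recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.train(train_images, train_labels)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def predict_emotion(frame):
    """Detect a face with Viola-Jones and classify it with Fisherfaces."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
    label, confidence = recognizer.predict(face)
    return label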
According to Altieri et al. (2019), a new system has been developed that responds to human emotion in the current situation and manages multimedia on that basis. The person's emotion is detected mainly from facial expression using a specific algorithm; the collected information is stored in a 2D matrix and then mapped to lighting colours. The system is able to take account of both the current environment and the person's current emotions. Emotion can be detected with the edge detection algorithm, which separates the background of the captured image from the actual edges used to recognise the emotion, and based on this emotion the system provides the playlist. The playlist is updated automatically when the person's mood changes.
The system is also responsible for interacting with people and reacting to their emotions, and it reduces the cost of the system design as well as the time needed to detect emotion. Such a system is now very effective at providing good services to people within a short period of time.
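A minimal edge detection sketch in Python, assuming OpenCV; the Canny detector is used here only to illustrate separating edges from the background of a captured image, and the file name is a placeholder.

# Sketch: extract the edges of a captured image with the Canny edge detector.
# "face.jpg" is a placeholder file name for a frame captured by the system.
import cv2

image = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
if image is not None:
    blurred = cv2.GaussianBlur(image, (5, 5), 0)   # reduce noise before edge detection
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
    cv2.imwrite("face_edges.jpg", edges)           # edge map: white pixels mark edges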
2.3 Edge detection music player
According to Gilda et al. (2017), songs are a popular medium of expression for depicting as well as understanding human emotions. Several studies have addressed the classification of music according to human emotion, but they have not produced optimal results in practice. The authors propose a cross-platform emotion-based music player (EMP) that recommends music on the basis of the person's present mood; it provides smart mood-based recommendations by integrating emotion-reasoning capability into an adaptive music player. The player consists of three modules: an emotion module, a classification module and a recommendation module. The emotion module takes a picture of the face as input and uses deep learning algorithms to identify the mood with 90.23% accuracy (Aljanaki et al., 2016). The classification module then classifies songs into four mood classes using internal audio features with an accuracy of 97%. Lastly, the recommendation module suggests songs by mapping the different emotions to songs while considering the user's preferences.
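A short sketch of the recommendation step, assuming a plain emotion-to-songs mapping; the song titles, preference set and selection rule are illustrative and are not the module structure described by Gilda et al.

# Sketch: map a detected mood class to candidate songs and pick one,
# lightly biased toward the user's preferred songs. All data is illustrative.
import random

songs_by_mood = {
    "happy": ["song_a", "song_b", "song_c"],
    "sad": ["song_d", "song_e"],
    "angry": ["song_f"],
    "calm": ["song_g", "song_h"],
}
user_preferences = {"song_b", "song_e"}

def recommend(mood):
    """Return a song for the mood, preferring songs the user already likes."""
    candidates = songs_by_mood.get(mood, [])
    preferred = [s for s in candidates if s in user_preferences]
    pool = preferred if preferred else candidates
    return random.choice(pool) if pool else None

print(recommend("happy"))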
However, Sen et al. (2018) state that, to keep an individual stress-free, different options for releasing stress can be chosen by classifying the work that individuals do. A person's mood can be read from the emotional changes they show in the workplace, and their present psychological state can be analysed from their facial expressions. The authors propose an intuitive smart music player that detects the facial expressions of a user working at a computer and identifies their present emotional condition, so that the player can be used to relax users in their workplace. The smart music player can also analyse the processes the user is running on the computer or smartphone. Because different types of music can boost an individual's energy, the smart music player generates a
playlist for the user by analysing the tasks performed on their computer together with their present emotions. The user also has the option to modify the suggested playlist, which adds flexibility to the emotion detection and music recommendation. By using this music player, working professionals therefore have a good option for relaxing while doing stressful work (Sánchez-Moreno et al., 2016).
According to Kabani et al. (2015), different algorithms have been designed to generate an automatic playlist within a music player according to the user's emotional state. The authors note that the human face is the most important part of the body for extracting a person's current emotions and behaviour, and that manually selecting music to match the user's mood is a time-consuming and tedious task; the algorithms they design therefore generate the music playlist automatically based on the user's current mood or emotion. Existing algorithms used in music players are slow in practice or have low accuracy, and they can require additional external sensors. The proposed system instead works from facial expressions detected by the sensors, generating an automatic playlist and reducing the effort and time spent making a manual playlist. The algorithms used in this system minimise the processing time needed to obtain results as well as the overall cost of the system, so the accuracy of the smart music player is higher than that of the existing music players available on the market.
Bhardwaj et al. (2015) state that the systems are tested on user-dependent as well as user-independent data sets, in other words dynamic and static data sets. The device's inbuilt camera captures the user's facial expressions, and the accuracy of the emotion detection algorithm is approximately 85-90% with real-time images; with static images, the accuracy is approximately 98-100%. In addition, the algorithm can generate a music playlist based on the detected emotion in around 0.95-1.05 seconds on average. The algorithms proposed by the authors therefore provide improved accuracy and operating time while reducing the design cost compared with other existing algorithms. Different methods and approaches have been developed for classifying human emotions and behaviours. The approaches are basically based on some regular emotions and for