Designing Emotion-Aware Music Recommendation Systems
Emotion detection media player using three different algorithms
Table of Contents
Chapter 1: Introduction
1.1 Background of the paper
1.2 Aim of the research
1.3 Objectives of the research
1.4 Research questions
1.5 Significance of the paper
1.6 Problem statement
1.7 Research limitation
1.8 Summary
1.9 Structure of the research project
Chapter 2: Literature review
2.1 Emotion detection of human
2.2 Facial expression based music player
2.3 Edge detection music player
2.4 Support Vector Machine or SVM
Chapter 3: Research Methodology
3.1 Introduction
3.2 Research Philosophy
3.3 Research Approach
3.4 Research design
3.5 Data collection process
3.6 Data analysis modes
Chapter 4: Data analysis
Face detection coding in python using webcam
Edge detection algorithm
Chapter 5: Conclusion and Recommendation
References
Chapter 1: Introduction
1.1 Background of the paper
Music plays an important role in enhancing a person's life, since it is a significant medium of relaxation for music followers and listeners. Music technology has improved greatly, and listeners now use facilities such as local playback, multicast streaming and reverse playback. Listeners are satisfied with these facilities and play music according to their mood and behaviour. Audio emotion recognition provides lists of music suited to the various moods and emotions of the music lover; the received audio signal is classified into different categories of emotion. To explore the features of the audio, the AN signal is used and MR is used to extract important data from it. Listeners try to set their playlists according to their mood, but doing so consumes a lot of time. Various music players provide different features, such as proper lyrics with the singer's name. In this system, the playlist is arranged in reaction to the listener's emotions, saving the time spent building playlists manually.
A person's facial expression and physical gestures are the best way to express emotion and mood. This system extracts the facial expression and, based on that expression, automatically generates a playlist that matches the mood and behaviour of the listener. The system consumes less time, reduces hardware cost and removes memory overhead. The facial expression is classified into five parts: joy, surprise, excitement, anger and sadness. To extract important, related data from an audio signal in less time, a highly accurate audio feature-extraction technique is used. An emotion model is used to classify songs into seven types, such as unhappy, anger, excitement with joy, sad-anger, joy and surprise. The emotion-audio module is a combination of a feeling-extraction module and an audio feature-extraction module, and this mechanism offers greater potential and better real-time performance than recent methodologies. Various approaches are used to process the edge detection of an image, such as using a mouse or speech recognition with AI. In this work, three algorithms are used: edge detection, face detection and SVM. The edge detection algorithm helps to reduce the computation needed to process a large image, and the face detection algorithm detects objects and faces in an image with high accuracy.
This algorithm is also very fast. The system can extract a person's emotion and mood and then start communicating with that person. It senses the facial expression of the user and, based on this expression, arranges a playlist, so users do not need to select a playlist according to their mood or emotion themselves. Computers can communicate with human beings by talking, reacting, extracting human emotion and automatically guessing their feelings. In recent years, emotion detection has driven new technology for image processing, human-machine interaction and machine learning. Nowadays, emotion detection plays a vital role in neuroscience, computer science, medicine and many other fields, and it is very important for making rational decisions and interacting with social media. This system can understand what the listener wants to hear according to their current state of mind.
1.2 Aim of the research
The aim of the research is to understand the functioning of the three algorithms used to develop the emotion detection media player.
1.3 Objectives of the research
The objectives of the research are listed below:
To play songs automatically based on human emotions and feelings
To discuss three algorithms for detecting human emotion
To communicate with people and extract their gestures and state of mind
To discuss the impact of the three algorithms on the emotion detection media player
To develop a proper algorithm for recognizing human emotion
To develop an appropriate system that uses technology to identify human gestures and respond to human emotion
1.4 Research questions
How can a song be played automatically as per the listener's mood?
What is the main purpose of the three algorithms used to detect human emotion?
How can human emotions and gestures be extracted using the media player?
What are the impacts of the three algorithms on the emotion detection media player?
How can an appropriate algorithm for detecting emotion be developed?
How can an appropriate media player be developed to extract human emotion?
1.5 Significance of the paper
The significance of the emotion detection media player lies in its attention to the user's emotion: it detects human emotion and plays music automatically based on that emotion. It reduces computation time and the time spent searching for songs, and it lowers memory overhead. The system provides accurate and efficient results when detecting human gestures; it recognizes a person's facial expression and arranges a suitable playlist for them, focusing on the features of the detected emotion rather than the raw image. The listener does not have to choose songs from a playlist manually, no predefined playlists are required, and music lovers do not have to categorize songs according to their gestures, mental state and feelings. The SVM algorithm is used here for classification and regression analysis; it uses a hyperplane to separate the classes. The system accesses images with the help of computer devices and networks, and it mainly detects the digital image through video. The SVM classifies data according to its behaviour and features; the data are transformed and, based on the collected data set, boundaries are set between the possible outcomes. Another algorithm is the edge detection algorithm. Edges are the basic elements of any image, and an edge is a set of connected pixels; edge detection is one of the best techniques for extracting the edges of an image, and its output can be expressed as a two-dimensional function. The third algorithm used for emotion detection is face recognition. Its main advantage is that it identifies the actual emotion of a human being from facial expressions such as joy, sadness and anger, and it is mainly responsible for reducing the computation time needed to detect a person's facial expression.
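As a simple illustration of the hyperplane idea described above, the short sketch below trains a linear SVM on two invented clusters of two-dimensional feature vectors; the sample values and the two mood classes are placeholders for illustration only, not data from this project.

# Minimal sketch of hyperplane-based classification with a linear SVM.
# The feature vectors and the two mood classes are invented for illustration only.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.2, 0.1], [0.3, 0.2], [0.1, 0.3],    # class 0, e.g. "sad"
              [0.8, 0.9], [0.9, 0.7], [0.7, 0.8]])   # class 1, e.g. "happy"
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")          # finds a separating hyperplane in feature space
clf.fit(X, y)
print(clf.predict([[0.75, 0.85]]))  # -> [1], the "happy" side of the hyperplane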
1.6 Problem statement
It is a difficult task for a music lover to create and manage playlists manually from a huge number of songs, and it is almost impossible to keep track of a large number of songs in a playlist. Songs that have not been played for a long time waste space, and the listener has to delete them manually. Listeners also find it difficult to identify a suitable song and play it to match their changing moods. At present, people cannot change or update a playlist automatically in an easy way; the listener has to update it manually. Moreover, the required playlist is not the same all the
time; the listener's requirements change frequently. There is currently no system that reduces these kinds of problems.
1.7 Research limitation
There are some limitations to the methodologies used in this research. Three types of algorithms are used to design the system, and there is scope to combine two or more algorithms to make the newly developed system more efficient. However, using these algorithms increases the cost, and processing each algorithm efficiently is very time-consuming; this is a major limitation of the research. With more time and budget, the research would be more effective and the improved system would produce more accurate output.
1.8 Summary
Chapter 1 presents the background of the emotion detection media player. The aim of this research is to understand the functioning of the three algorithms used to develop the emotion detection media player, and its main objective is to develop a new system that detects human emotion and, based on that emotion, generates song lists automatically. For this purpose, the system uses three algorithms: the edge detection algorithm is used to detect the edges of the digital image, the face detection algorithm is used to detect the facial expression of the person, and SVM is used to classify the detected image according to its features and behaviour.
1.9 Structure of the research project
The research project is structured in five stages: Introduction, Literature Review, Methodology, Data Analysis and Findings, and Conclusion and Recommendations.
Chapter 2: Literature review
2.1 Emotion detection of human
According to Sarda et al. (2019), a new and improved music recommendation technology has been introduced. A person's mood is detected from different sources such as audio, video, images and sensors, and it is identified from facial expressions as well as the tone of speech. The physical activities of a person can also be recognized from the cell phone they carry. Given a large amount of data, the computation is sufficient to identify the person's actions, and machine learning, trained on this information, is used to classify and predict the outcomes. Using such improved technologies to identify a person's mood and gestures is very advantageous. The media player continuously follows the user's listening habits and provides a playlist according to the recognized mood; it is also called a personalized playlist generator.
According to Rahman and Mohamed (2016), music plays an important role in human life: to lift their mood and divert their state of mind, people listen to different kinds of music. The best way to capture human emotion is the facial expression, and facial expression recognition is used to detect a person's emotion from their face. Some well-known services such as Spotify offer mood- or emotion-based song lists, but the listener is not able to modify these lists. In the proposed system, the playlist is based on the facial expression detected from the user's most recent emotion, and the listener then plays songs from this list according to their mood. The system uses methodologies such as rapid application development and Android Studio, and the Affdex SDK is used for recognizing facial expressions. The system provides a list of songs matching the current emotion of the user and plays songs relevant to the listener.
2.2 Facial expression based music player
Kamble and Kulkarni (2016) describe a method of listening to music based on a person's current emotion, and an automated system is introduced to perform this task. An improved algorithm is used to detect the current emotion of the person and play suitable music; it is cost-effective and takes less time than manually searching for and updating a playlist
based on the person's current emotion. The PCA algorithm and Euclidean distance are used to detect the facial expression easily, and an inbuilt camera is used to capture the facial expression, which reduces both the time and the cost of the system design. The accuracy of this system is approximately 84.82%.
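A minimal sketch of this kind of PCA-plus-Euclidean-distance classification is given below. It assumes that pre-cropped grayscale face images and their emotion labels are already available; the function names, the number of principal components and the data handling are illustrative assumptions rather than details taken from the cited work.

# Sketch of PCA ("eigenfaces") plus Euclidean-distance classification.
# train_faces / train_labels are placeholder inputs: grayscale face crops and labels.
import numpy as np
from sklearn.decomposition import PCA

def train(train_faces, train_labels, n_components=20):
    X = np.array([face.ravel() for face in train_faces], dtype=np.float32)
    pca = PCA(n_components=n_components)
    projected = pca.fit_transform(X)               # faces projected onto principal components
    return pca, projected, np.array(train_labels)

def predict(face, pca, projected, labels):
    query = pca.transform(face.ravel().reshape(1, -1).astype(np.float32))
    distances = np.linalg.norm(projected - query, axis=1)   # Euclidean distance in PCA space
    return labels[np.argmin(distances)]                     # label of the nearest training face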
According to Noblejas et al. (2017), a new model based on Filipino songs is used to identify human emotion and gestures. The songs are classified according to their characteristics, and the resulting classification is presented to the listeners. Classifiers such as Naive Bayes, SVM and K-nearest neighbour are used to classify the songs, and the main aim of the researchers is to find the most suitable classifier for selecting songs from the Filipino song set. To check the accuracy of the model, it is then tested again with a different data set.
According to Iyer et al. (2017), music plays an important role in changing and diverting people's minds and particularly affects their emotions; it is more effective than other media such as video or images. When people feel low they listen to sad songs, and when they are happy they listen to happy songs, so they have to choose songs according to their current mood. Doing this manually takes a lot of time. The researchers introduce an improved system, "EmoPlayer", to remove this problem and provide a better way of arranging such songs automatically based on the person's current emotion. The system captures an image of the person, detects the face using face detection methods and, after gathering this information, creates a suitable playlist to lift the listener's mood. It uses the Viola-Jones algorithm to detect the face and the Fisherfaces classifier to classify the emotion.
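The following sketch illustrates such a pipeline using OpenCV's Haar cascade implementation of the Viola-Jones detector and its Fisherfaces recognizer (available in the opencv-contrib-python package). The label numbering, crop size and training data are assumptions made for illustration; the original EmoPlayer implementation may differ.

# Sketch of Viola-Jones face detection (Haar cascade) followed by a Fisherfaces
# emotion classifier; requires opencv-contrib-python. Training data are placeholders.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.FisherFaceRecognizer_create()

def train(train_faces, train_labels):
    # train_faces: equally sized grayscale face crops; train_labels: e.g. 0=happy, 1=sad
    recognizer.train(train_faces, np.array(train_labels))

def detect_emotion(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        label, confidence = recognizer.predict(crop)
        return label          # index into a label list such as ["happy", "sad", ...]
    return None               # no face found in this frame

In a full player, the returned label would then decide which playlist to queue.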
According to Altieri et al. (2019), a new system is developed to respond to human emotion in the current situation; it is responsible for managing multimedia based on the user's emotions. The person is mainly detected through their facial expression using a specific algorithm, and the collected information is stored in a 2D matrix and then mapped to lighting colours. The system is able to take account of both the current environment and the person's current emotions. The emotion can be detected with the edge detection algorithm, which separates the background of the captured image from the actual edges of the face in order to recognize emotion, and based on this emotion the system provides a playlist. The playlist is automatically updated by the system when the person's mood changes. The system is
also responsible for interacting with people and reacting to their emotion, and it reduces the cost of the system design as well as the time needed to detect people's emotion. Nowadays, such a system is very effective at providing good services to people within a short period of time.
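A minimal sketch of the edge-detection step mentioned above is shown below, using OpenCV's Canny detector on a captured frame; the file name and threshold values are illustrative assumptions.

# Sketch of an edge-detection step with OpenCV's Canny detector.
# The file name and thresholds are placeholders.
import cv2

image = cv2.imread("captured_face.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), 0)                # smooth to suppress noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)   # binary edge map
cv2.imwrite("edges.jpg", edges)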
2.3 Edge detection music player
According to Gilda et al. (2017), songs act as a medium of expression and are very popular for depicting as well as understanding human emotions. Several studies have addressed the classification of music according to human emotion, but they have not given optimal outcomes in practice. The authors propose an effective, cross-platform music player, EMP, that recommends music on the basis of the person's present mood; it provides mood-based recommendations by integrating emotion-reasoning capability into an adaptive music player. The player consists of three modules: an emotion module, a classification module and a recommendation module. The emotion module takes a picture of the face as input and uses deep learning algorithms to identify the mood with 90.23% accuracy (Aljanaki et al., 2016). The classification module then classifies the songs into four mood classes using internal audio features with an accuracy of 97%. Finally, the recommendation module suggests songs by mapping the detected emotion to the songs while taking the user's preferences into account.
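As a simple illustration of this recommendation step, the sketch below maps a predicted mood onto one of four song classes and returns a small playlist for it; the mood names, class names and song library are invented placeholders and are not taken from the EMP system.

# Illustrative mapping from a predicted mood to a song class and a small playlist.
# Mood names, class names and the song library are placeholders.
import random

MOOD_TO_CLASS = {"happy": "upbeat", "sad": "mellow", "angry": "calming", "neutral": "ambient"}
song_library = {
    "upbeat": ["song_a.mp3", "song_b.mp3"],
    "mellow": ["song_c.mp3"],
    "calming": ["song_d.mp3"],
    "ambient": ["song_e.mp3"],
}

def recommend(mood, k=5):
    songs = song_library.get(MOOD_TO_CLASS.get(mood, "ambient"), [])
    return random.sample(songs, min(k, len(songs)))   # a small playlist for this mood

print(recommend("happy"))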
Sen et al. (2018), however, state that to keep an individual stress-free, different options for releasing stress can be derived from classifying the work that the individual does. People's moods are reflected in their emotional changes within the workplace, and their current psychology can be analysed with the help of their facial expressions. The authors propose a user-intuitive smart music player that can detect the facial expressions of a user working at a computer and identify their present emotional condition, so the player can be used to relax users at their workplace. The smart music player can also analyse the processes that the user is executing on the computer or smartphone. Since different types of music can boost an individual's energy, the smart music player generates a
playlist for the user by analysing the tasks performed on their computer together with their present emotion. The user also has the option of modifying the suggested playlist, which adds flexibility to the emotion detection and music recommendation. In this way, working professionals have a good way to relax while doing stressful work (Sánchez-Moreno et al., 2016).
As per Kabani et al. (2015), different algorithms have been designed for automatic playlist generation within a music player according to the user's emotional state. The authors note that the human face is the most expressive part of the human body and plays a key role in extracting a person's current emotions and behaviour. Moreover, manually selecting music to match the user's mood and emotion is a time-consuming and tedious task, so the designed algorithms help to generate the music playlist automatically based on the user's current mood or emotions. The existing algorithms used in music players are comparatively slow or less accurate and may require additional external sensors. The system proposed by the authors works on the basis of facial expression detection by the sensors to generate an automatic playlist, reducing the effort and time needed to build a playlist manually. The algorithms used in this system minimise the processing time required to obtain results as well as the overall cost of the system. Because of these factors, the accuracy of the smart music player is higher than that of the existing music players available on the market.
Bhardwaj et al. (2015) state that the system is tested on user-dependent as well as user-independent datasets, in other words on dynamic and static datasets. The inbuilt camera of the device captures the user's facial expressions; with real-time images the accuracy of the emotion detection algorithm is approximately 85-90%, while with static images the accuracy is approximately 98-100%. In addition, the algorithm is able to generate a music playlist on the basis of the detected emotion in around 0.95-1.05 seconds on average. The algorithms proposed by the authors therefore provide better accuracy and operating time while reducing design cost compared with other existing algorithms. Different methods and approaches have been developed for classifying human emotions and behaviour. These approaches are based on a set of regular emotions, and for
feature recognition the facial expressions are divided into two forms of extraction, "appearance based" and "geometric based". In the geometric-based extraction methodology, the shapes of prominent areas of the face such as the eyes and mouth are considered (Schedl et al., 2015). Around 58 prominent points were chosen while designing the ASM, and the appearance, based on texture, was also considered in the development of the algorithm. In this system, effective techniques have been used to design, code and implement facial expression recognition along with sets of multiple orientations.
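The geometric-based extraction described above can be sketched as a set of distances and ratios computed from facial landmark coordinates, as shown below; the landmark ordering and the particular features are illustrative assumptions, not the 58-point ASM used in the cited work.

# Sketch of geometric feature extraction from facial landmark coordinates.
# The landmark ordering below is an assumption made only for this example.
import numpy as np

def geometric_features(landmarks):
    pts = np.asarray(landmarks, dtype=np.float32)     # (N, 2) array of (x, y) points
    left_eye, right_eye = pts[0], pts[1]
    mouth_left, mouth_right = pts[2], pts[3]
    mouth_top, mouth_bottom = pts[4], pts[5]
    eye_distance = np.linalg.norm(right_eye - left_eye)      # used to normalise for scale
    mouth_width = np.linalg.norm(mouth_right - mouth_left)
    mouth_open = np.linalg.norm(mouth_bottom - mouth_top)
    return np.array([mouth_width / eye_distance,              # smiling widens the mouth
                     mouth_open / eye_distance])              # surprise opens it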
Hsu et al. (2017) proposed a system, together with a playing method, that selects music suitable for the user through speech emotion recognition. The playing method consists of a few steps. A plurality of songs is mapped onto emotional coordinates to form a coordinate graph, which is stored in a database. Voice data are captured by voice recognition and then analysed to obtain the coordinates of the user's real-time emotion. These voice-data coordinates are mapped onto the emotion coordinate graph according to a second database, and in this way the coordinate setting is obtained. A specific song matching the real-time emotional coordinate is then found, and songs are played sequentially according to the emotion coordinates in the database.
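The coordinate lookup described above can be sketched as a nearest-neighbour search over per-song emotion coordinates, for example on a valence-arousal plane; the coordinates below are invented placeholders rather than values from the cited system.

# Sketch of selecting the song nearest to the user's detected emotion coordinate.
# The (valence, arousal) values below are invented placeholders.
import numpy as np

songs = {"song_a": (0.8, 0.7),     # cheerful, energetic
         "song_b": (-0.6, -0.4),   # sad, low energy
         "song_c": (0.1, 0.9)}     # tense, high energy

def nearest_song(user_coord):
    names = list(songs)
    coords = np.array([songs[name] for name in names])
    distances = np.linalg.norm(coords - np.array(user_coord), axis=1)
    return names[int(np.argmin(distances))]

print(nearest_song((0.7, 0.6)))    # -> "song_a"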
Knight (2016) proposed a music system that works on the basis of speech emotion recognition. Four units of the system play an important role in producing efficient and accurate output: two databases, a voice recognition unit and a control device. The first database stores a plurality of songs along with their emotional coordinates, mapped onto the emotion coordinate graph, while the parameters of the emotion recognition are stored in the other database. Voice data are received by the voice unit, and the control device analyses the input data to detect the user's present emotional coordinate, mapped according to the second database.
2.4 Support Vector Machine or SVM
As per Patel et al. (2016), it is a challenge to increase and maintain an individual's productivity across various tasks. The authors suggest that music helps to enhance mood as well as state of mind, acting as a catalyst for improving an individual's productivity.
If a person tends to listen to music continuously, a long personalised playlist has to be created and managed by the user, which requires a great deal of time. This challenge can be eliminated if the music player can create a playlist according to the user's real-time mood, and the mood can be detected from the user's facial expressions. Facial expression detection for an individual must address three problems: face detection in an image, extraction of facial features and classification of the facial expressions. Firstly, the face is detected in an image, which can involve different methods such as model-based face tracking, real-time face detection and matching of edge orientations. Face detection can be made robust by using the Hausdorff distance, the cascaded Viola-Jones algorithm and HOG.
After this, the next step is to extract facial features from the face. There are two approaches to feature extraction: Gabor filters and PCA, or "Principal Component Analysis". Finally, image classification is performed to detect the mood, and different classifiers can be used for this, including BrownBoost, SVM and AdaBoost. The system proposed by the authors implements HOG, or "Histograms of Oriented Gradients", together with the face detection method and passes the detected features to an SVM for predicting the user's mood (Van der Steen et al., 2018). This mood prediction is then used to generate the automatic music playlist in accordance with the user's real-time mood.
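A minimal sketch of this HOG-plus-SVM mood prediction step is given below, using scikit-image for the HOG features and scikit-learn for the SVM; the face crops, labels and parameter values are assumptions made for illustration.

# Sketch of HOG feature extraction followed by SVM mood classification.
# faces / labels are placeholder inputs: grayscale face crops and mood labels.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(face):
    # face: a 2-D grayscale crop, e.g. 64x64 pixels, produced by the face detector
    return hog(face, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_mood_classifier(faces, labels):
    X = np.array([hog_features(face) for face in faces])
    clf = SVC(kernel="linear")
    clf.fit(X, np.array(labels))        # labels such as 0=happy, 1=neutral, 2=sad
    return clf

def predict_mood(clf, face):
    return clf.predict([hog_features(face)])[0]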
The entire system can be divided into stages: capturing the image, detecting the face, finding landmark points on the detected face, classifying the facial features with the SVM classifier and then creating the music playlist according to the mood detected in real time. Initially, datasets are generated to initialise the SVM classifier. The SVM classifies moods such as happy, neutral and sad by analysing the images, after which the player is able to sort the songs by genre against these moods (Scirea et al., 2015). With the help of these sorted playlists, heuristics can be created within the music player.
Iyer et al. (2017) also proposed an Android application called EmoPlayer that recommends a playlist according to the user's real-time emotion. This helps to eliminate the
manual selection of songs and the effort of creating a playlist to suit the user's mood. Music skilfully plays with human emotions and directly affects mood; compared with movies, books and television shows, it is usually more effective at changing a person's state of mind and can lift an individual out of a bad mood. With the proposed system, the music player detects facial features through the device's camera. After capturing the current image of the user, the application detects the user's emotion and then generates a playlist of songs intended to improve that mood. EmoPlayer uses the Viola-Jones algorithm for face detection and the Fisherfaces classifier for emotion classification. As the popularity of digital music grows daily, good music recommendations become increasingly helpful. Earlier, recommendations were made purely from the user's stated music preferences; however, preferences change, and recommendations based on the user's emotion can improve the mood, which in a busy world makes a real difference.
The authors proposed a music player that uses a novel model for recommending music according to the user's emotions. They investigated the extraction of music features and adapted an affinity graph to link emotions with those features. In practical testing, the proposed system achieved an average accuracy of about 85%.
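A rough, hedged sketch of the Viola-Jones plus Fisherfaces combination described above is given below; it assumes the opencv-contrib-python package (which provides cv2.face) and a small set of equal-sized, labelled training faces, so it is an outline rather than the EmoPlayer code itself.

# Hedged sketch: Viola-Jones face detection + Fisherfaces emotion recognition.
# Requires opencv-contrib-python; training crops and integer labels are assumed.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.FisherFaceRecognizer_create()

def train(train_faces, train_labels):
    # train_faces: list of equal-sized grayscale face crops, train_labels: emotion ids
    recognizer.train(train_faces, np.array(train_labels))

def predict_emotion(frame, crop_size=(100, 100)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y+h, x:x+w], crop_size)  # same size as the training crops
        label, confidence = recognizer.predict(roi)
        return label, confidence
    return None, None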
Regarding mood-based music recommendation by streaming services, Yepes et al. (2018) state that sales of music albums, physical or digital, have decreased over the past few years. A major part of any streaming platform is its recommendation system, which helps the software deliver suitable music to users. The authors proposed a content-based recommendation approach built on what the user searches for and listens to. The system uses a one-class SVM classification algorithm that acts as an anomaly detector. Their main purpose in developing it was to create playlists matched to each user's taste while also integrating new releases. The proposed model can detect, with improved accuracy, the elements that reflect a user's preferences and operates in association with an Android application. This system
also detects changes in the user's preferences over time. The model thereby helps manage the recommended playlist and integrates new releases into the user's profile to form an updated music list.
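To illustrate the idea of a one-class SVM acting as an anomaly-style taste detector, a minimal sketch with scikit-learn is shown below; the audio feature vectors, the parameter values and the notion of "liked" tracks are assumptions rather than details taken from the cited system.

# Hedged sketch: one-class SVM as a model of the user's taste.
# liked_tracks and candidates are assumed to be numeric audio-feature vectors.
from sklearn.svm import OneClassSVM

def recommend_new_releases(liked_tracks, candidates, candidate_ids):
    taste_model = OneClassSVM(kernel='rbf', gamma='scale', nu=0.1)
    taste_model.fit(liked_tracks)                 # learn the region covered by liked songs
    verdict = taste_model.predict(candidates)     # +1 = fits the taste, -1 = outlier
    return [cid for cid, v in zip(candidate_ids, verdict) if v == 1]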
According to Kamble and Kulkarni (2016), a conventional music player needs human interaction to play music that suits an individual's mood. This conventional approach has been migrated towards automation through the integration of AI. To achieve this, algorithms classify human expressions and then generate a music playlist according to the real-time emotions the system detects, which reduces the time and effort of searching for a suitable song. The user's expression is detected by extracting facial features: the system uses the PCA algorithm for feature extraction and a Euclidean-distance classifier to map the detected mood to songs. The expressions are captured with the device's front camera, which keeps the cost of the design low compared with other models or methods.
Using the PCA algorithm, the system achieves an accuracy of up to 84% in recognising the user's facial expressions. The application is developed so that the captured content can be analysed and the mood identified: images of the user are taken with the built-in camera, the facial features are analysed, and a music playlist is generated accordingly to improve the user's emotional state.
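A small, hedged sketch of this PCA-plus-Euclidean-distance idea is given below; the flattened grayscale training faces, the number of components and the mood labels are assumptions used only to make the example concrete.

# Hedged sketch: PCA (eigenface-style) features with Euclidean-distance matching.
import numpy as np
from sklearn.decomposition import PCA

def build_model(train_faces, train_labels, n_components=20):
    # train_faces: (n_samples, n_pixels) array of flattened grayscale faces
    pca = PCA(n_components=n_components)
    projected = pca.fit_transform(train_faces)
    return pca, projected, np.array(train_labels)

def classify_mood(face_vector, pca, projected, labels):
    query = pca.transform(face_vector.reshape(1, -1))
    distances = np.linalg.norm(projected - query, axis=1)  # Euclidean distance to each training face
    return labels[np.argmin(distances)]                    # mood of the nearest training face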
Chapter 3: Research Methodology
3.1 Introduction
The methodology plays a key role in strengthening the outcomes of a research project (Kumar, 2019). The various methodology tools give the research a clear shape: one of the major requirements of any scientific research is a proper framework or guideline under which the work is carried out, and the research methodology fulfils this requirement and improves the outcomes. Such guidelines matter in modern research because research topics are complicated and call for stipulated rules and procedures (Taylor et al., 2015); this is where the methodology adds value and raises the quality of the project. Another necessity addressed by the methodology is the provision of defined milestones. Completing a project within a specified timeline has become important because of monetary constraints, and the methodology provides the parameters that allow the project to be finished within the stipulated milestones while maintaining its quality. For these reasons, a research methodology has been implemented in this study. The following sections describe the methodology tools adopted here: the research philosophy, research approach, research design, sample size distribution, data collection techniques and ethical considerations.
3.2 Research Philosophy
The research philosophy sheds light on the manner in which the researcher acquires information about the topic (Mackey and Gass, 2015), and in turn helps the researcher devise the aim of the research. It also serves to enhance the quality of the study, acting as the belief system on which data assimilation and assessment are based. The main research philosophies are positivism, interpretivism and realism. Positivism is characterised by its strength in
imparting a scientific assessment to the domains of the research topic; it plays a major role in the selection of the research objectives because it lays down the framework for their development. Interpretivism, by contrast, focuses on integrating perceptions of human nature into the research. This particular research has been conducted under the positivism philosophy. The topic involves the use of different algorithms to develop the Emotion Detection Music Player, and selecting the appropriate algorithm requires logical reasoning models; this is only possible when the research draws on the strengths of positivism and its scientific evaluation techniques. The choice of the positivism philosophy is therefore justified.
3.3 Research Approach
The research approach is significant in establishing the background against which the research proceeds (Silverman, 2016) and also plays a key role in shaping the research topic. It provides the platform on which the research objectives and the data analysis take form. Two approaches are commonly used: the deductive and the inductive model. The deductive model derives outcomes and results by interpreting the research objectives through existing theoretical frameworks and models, whereas the inductive model develops hypotheses and frameworks from the conclusive results drawn from the research itself. In this research the inductive model has been adopted. The topic concerns the development of algorithms for AI programs and therefore calls for the creation of new frameworks from the conclusive results of the project; this can only be accomplished with the inductive model, which also allows hypotheses to be examined as the work proceeds. The adoption of the inductive model is therefore justified.
3.4 Research design
The research design is one of the important tools of the methodology, as it provides the protocols that must be adhered to so that the research objectives are accomplished within the stipulated milestones (Flick, 2015). It supplies the guidelines needed for the objectives to be met with precision. The main designs are the exploratory,
explanatory and descriptive designs. The exploratory design focuses on identifying the possible factors that drive a particular occurrence, and its outcomes form the preliminary grounds on which more extensive research is conducted. The explanatory design is used where the research topic evaluates a cause-and-effect relationship, assessing how two variables are interrelated and how one affects the other. The descriptive design relies on statistical evaluation and numerical inference. This research has been performed as exploratory research. The topic is oriented towards developing algorithms in AI-related scripting languages and requires the identification of the feature vectors and Natural Language Processing entities to be implemented in the source code. Such identification is best supported by the exploratory design, which is concerned precisely with finding the factors that lead to a development or occurrence; its adoption in the project is therefore justified.
3.5 Data collection process
Data collection is the process by which the researcher gathers information from various resources in order to answer the research questions; it is also necessary for testing a theory or hypothesis and for evaluating the outcomes of the research. The researcher must identify the data collection method, including the data sources and the techniques to be applied (Johnston, 2017), and must also address who will provide the data and where it will be collected. The technique depends on the type of research being undertaken and falls into two categories: primary and secondary data collection (Nassaji, 2015). Primary collection produces first-hand data gathered by the researcher for the first time; it is used when the research problem is unique and the work has not been done before. The data collected in this way tend to be more accurate, but the method is more time-consuming and costly.
In this research, however, the secondary method of data collection has been selected. Secondary collection draws on data that are readily available from books, journals and web sources and can be analysed statistically. Its advantages are low cost and the easy availability of resources, which is why the secondary method has been chosen here.
3.6 Data analysis modes
In this context, three pieces of code, each implementing a different algorithm, have been developed by the researcher for the emotion recognition media player. The three algorithms perform three related tasks so that accurate music suggestions can be made according to the user's real-time mood. The first algorithm detects the face through the device's built-in camera; the second classifies the emotion from the facial features extracted from the detected face; and the third generates the recommended playlist intended to improve the user's mood. Since the application is Android-based and the code is written in Python, the components are built on a hybrid platform, with the SVM integrated into the system as the classifier. The data have been analysed from the outputs of these algorithms for the emotion detection music player. A simple outline of how the three stages fit together is sketched below.
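The following is a high-level, hedged outline of how the three stages could be chained; detect_face, classify_emotion and the playlist names are placeholders for the concrete implementations presented in the next chapter.

# Hedged outline of the three-stage pipeline; the helper functions and playlist
# names are placeholders, not the final implementation.
PLAYLISTS = {"happy": "upbeat_mix", "neutral": "daily_mix", "sad": "uplifting_mix"}

def recommend_playlist(frame, detect_face, classify_emotion):
    face = detect_face(frame)            # stage 1: face detection from the camera frame
    if face is None:
        return None
    emotion = classify_emotion(face)     # stage 2: SVM-based emotion classification
    return PLAYLISTS.get(emotion)        # stage 3: playlist selected for that emotion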
Chapter 4: Data analysis
Three algorithms are used to detect human emotion and build the playlist in the emotion detection media player: a support vector machine to classify the dataset based on its features, a face detection algorithm to locate the human face in an image, and an edge detection algorithm to identify the edges and background of the selected image. In this section each algorithm is briefly explained along with its code.
Face detection code in Python using a webcam
Source code
import cv2
import sys

# Path to the Haar cascade XML file is passed as the first command-line argument
cascPath = sys.argv[1]
faceCascade = cv2.CascadeClassifier(cascPath)

# Open the default webcam
video_capture = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces in the grayscale frame
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE  # cv2.cv.CV_HAAR_SCALE_IMAGE in OpenCV 2.x
    )

    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
Algorithm
Step 1: In the first step, the video source is provided, either the webcam or a video file name, so that the video can be captured. To decode a compressed video file, the ffmpeg installer is needed; it acts as the front end for working with video files.
Step 2: In the second step, the video is actually captured with the read() function. This function grabs one frame of the video at a time until the frames run out; each iteration of the loop captures one frame. The return value indicates whether frames are still available, but since it is not always possible to keep track of the number of frames, it is ignored here.
Step 3: This is the most important step. Here the actual face detection is performed on the captured frame; the face detection code locates the face in this step.
Step 4: The program waits for the 'q' key to be pressed in order to exit the script.
Step 5:
video_capture.release()
cv2.destroyAllWindows()
These two functions clean up and release the captured video.
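As a small, hedged extension of the listing above, the detected region can be cropped and resized so that it can later be handed to the emotion classifier; the 48 x 48 size and the helper name are only illustrative.

import cv2

def extract_face_roi(gray_frame, box, size=(48, 48)):
    # Crop the detected face from the grayscale frame and resize it to a fixed
    # size so the emotion classifier always receives the same input shape.
    x, y, w, h = box
    roi = gray_frame[y:y+h, x:x+w]
    return cv2.resize(roi, size)

# Possible usage inside the detection loop above (gray and faces come from that listing):
# for box in faces:
#     face_roi = extract_face_roi(gray, box)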
With the help of the face recognition algorithm, the emotion detection model is created. It identifies faces in the images captured from the webcam video. The model has three stages: feature extraction, which extracts the main features of the image; forming a subset of the detected features based on criteria such as age, colour and other factors; and finally classification, which detects the emotion in the image. The Haar method is used here to detect the face and extract regions such as the eyes, mouth and other parts of the face. Once this information is obtained, Sobel edge detection is used to obtain the characteristic values. Six types of emotion can be detected with this model, which indicates how suitable the algorithm is for detecting human emotion. The algorithm is designed to recognise emotion from various facial expressions; the expression helps the system capture the user's exact emotion, and based on this emotion the system generates an automatic playlist for the listener. There are various applications of face
recognition, such as preventing retail crime, unlocking phones, finding missing persons and helping the blind. In this report, face detection is mainly used to find the emotion in the captured data. The findings from this algorithm are the detected face and the emotion captured through the webcam: each frame of the video is captured separately, one frame at a time, and this process continues until the frames are exhausted. At the final stage, the actual face is detected from the captured video.
Edge detection algorithm
Source code and algorithm
1. Noise reduction

import numpy as np

def gaussian_kernel(size, sigma=1):
    # Build a (2k+1) x (2k+1) Gaussian kernel used to smooth the image
    size = int(size) // 2
    x, y = np.mgrid[-size:size+1, -size:size+1]
    normal = 1 / (2.0 * np.pi * sigma**2)
    g = np.exp(-((x**2 + y**2) / (2.0*sigma**2))) * normal
    return g
output:
2. Gradient calculation
from scipy import ndimage

def sobel_filters(img):
    # Sobel kernels for the horizontal and vertical derivatives
    Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
    Ky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], np.float32)

    Ix = ndimage.filters.convolve(img, Kx)
    Iy = ndimage.filters.convolve(img, Ky)

    # Gradient magnitude (scaled to 0-255) and gradient direction
    G = np.hypot(Ix, Iy)
    G = G / G.max() * 255
    theta = np.arctan2(Iy, Ix)
    return (G, theta)
Output
3. Non-maximum suppression
def non_max_suppression(img, D):
    # Keep only pixels that are local maxima along the gradient direction D (in radians)
    M, N = img.shape
    Z = np.zeros((M, N), dtype=np.int32)
    angle = D * 180. / np.pi
    angle[angle < 0] += 180

    for i in range(1, M-1):
        for j in range(1, N-1):
            try:
                q = 255
                r = 255

                # angle 0
                if (0 <= angle[i, j] < 22.5) or (157.5 <= angle[i, j] <= 180):
                    q = img[i, j+1]
                    r = img[i, j-1]
                # angle 45
                elif (22.5 <= angle[i, j] < 67.5):
                    q = img[i+1, j-1]
                    r = img[i-1, j+1]
                # angle 90
                elif (67.5 <= angle[i, j] < 112.5):
                    q = img[i+1, j]
                    r = img[i-1, j]
                # angle 135
                elif (112.5 <= angle[i, j] < 157.5):
                    q = img[i-1, j-1]
                    r = img[i+1, j+1]

                # Keep the pixel only if it is at least as strong as both neighbours
                if (img[i, j] >= q) and (img[i, j] >= r):
                    Z[i, j] = img[i, j]
                else:
                    Z[i, j] = 0

            except IndexError as e:
                pass

    return Z
Output:
4. Double-threshold
def threshold(img, lowThresholdRatio=0.05, highThresholdRatio=0.09):
    # Classify pixels as strong, weak or non-relevant using two thresholds
    highThreshold = img.max() * highThresholdRatio
    lowThreshold = highThreshold * lowThresholdRatio

    M, N = img.shape
    res = np.zeros((M, N), dtype=np.int32)

    weak = np.int32(25)
    strong = np.int32(255)
    strong_i, strong_j = np.where(img >= highThreshold)
    zeros_i, zeros_j = np.where(img < lowThreshold)
    weak_i, weak_j = np.where((img <= highThreshold) & (img >= lowThreshold))

    res[strong_i, strong_j] = strong
    res[weak_i, weak_j] = weak

    return (res, weak, strong)
5. Hysteresis tracks edges

def hysteresis(img, weak, strong=255):
    # Promote weak pixels to strong when any of their 8 neighbours is strong
    M, N = img.shape
    for i in range(1, M-1):
        for j in range(1, N-1):
            if (img[i, j] == weak):
                try:
                    if ((img[i+1, j-1] == strong) or (img[i+1, j] == strong) or (img[i+1, j+1] == strong)
                            or (img[i, j-1] == strong) or (img[i, j+1] == strong)
                            or (img[i-1, j-1] == strong) or (img[i-1, j] == strong) or (img[i-1, j+1] == strong)):
                        img[i, j] = strong
                    else:
                        img[i, j] = 0
                except IndexError as e:
                    pass
    return img

Final output:
This algorithm uses the Canny edge detector, an operator applied throughout the edge detection process to detect a wide range of edges in an image. The algorithm consists of five steps:
Noise reduction
Gradient calculation
Non-maximum suppression
Double threshold
Hysteresis edge tracking
The algorithm works on grayscale images, but the image captured by face detection is not grayscale; before the algorithm is applied, the image must therefore be converted to grayscale (for example with MATLAB). The result of edge detection is very sensitive to image noise, so the noise has to be removed first. This is done by smoothing the image with a Gaussian blur, applying image convolution with a Gaussian kernel. The desired amount of blurring determines the kernel size: the smaller the kernel, the less visible the blur. Here the Gaussian filter kernel has size (2k + 1) x (2k + 1) and is given by

H_ij = (1 / (2πσ²)) · exp( −[(i − (k+1))² + (j − (k+1))²] / (2σ²) ),   1 ≤ i, j ≤ 2k + 1

This is the equation of the Gaussian filter kernel.
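For illustration, the kernel defined earlier can be applied with a 2-D convolution to smooth a grayscale image before the gradient step; the kernel size and sigma below are just example values.

# Hedged usage of the gaussian_kernel function defined above.
from scipy import ndimage

def smooth(gray_img, kernel_size=5, sigma=1):
    # gray_img is assumed to be a 2-D float array (a grayscale face image)
    kernel = gaussian_kernel(kernel_size, sigma)
    return ndimage.convolve(gray_img, kernel)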
The next part of the algorithm is the gradient calculation. Here the edge intensity and edge direction are estimated from the image gradient using edge detection operators: filters are applied to reveal the change of pixel intensity in both the horizontal and the vertical direction. Once the smoothed image is available, the derivatives Ix and Iy with respect to x (horizontal) and y (vertical) are estimated with the Sobel kernels.
After that, the gradient magnitude G and direction ϴ are computed, matching the code above, as

G = √(Ix² + Iy²),   ϴ = arctan2(Iy, Ix)
As a result of this stage, both thick and thin edges are detected in the image. This is not the desired outcome, so the thick edges have to be removed with the non-maximum suppression method. The gradient intensity varies from 0 to 255 and is not consistent, whereas in the final output the intensity along an edge should be uniform.
To obtain a final image with thin edges, non-maximum suppression is applied. The idea is straightforward: the algorithm goes through every point of the gradient intensity matrix and keeps only the pixels that are maxima along the edge direction. Every pixel contributes two values: the direction of the edge and the intensity of the pixel. The steps of non-maximum suppression are as follows:
Step 1: Build a zero-filled matrix of the same size as the original gradient intensity matrix.
Step 2: Determine the edge direction from the angle matrix, which provides the angle value for each pixel.
Step 3: Check whether a pixel in the same direction has a higher intensity than the pixel currently being processed.
Step 4: Return the resulting matrix produced by the non-maximum suppression algorithm.
After this suppression, the image contains clear, thin edges; however, some of them are bright while others are faint. To address this, two more processes are needed: double thresholding and the hysteresis method for tracking edges. The main objective of double thresholding is to separate three kinds of pixels: strong, weak and non-relevant. Strong pixels have high intensity, and it is certain that they belong in the final image. Weak pixels have an intensity that is too low to be considered strong but not low enough to be discarded as non-relevant; all remaining pixels are treated as non-relevant for edge detection. Double thresholding uses two thresholds: the high threshold identifies the strong pixels and the low threshold identifies the non-relevant ones, while pixels whose intensity lies between the two thresholds are classed as weak. The next step then helps to identify which of these weak pixels should be
treated as strong and which as non-relevant for edge detection. Based on the result of double thresholding, hysteresis transforms weak pixels into strong pixels: if a weak pixel has at least one strong pixel among its neighbours, it is promoted to strong in the final image.
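Putting the five stages together, a hedged end-to-end helper might simply chain the functions defined above; the wrapper name and default parameters are assumptions.

# Hedged assembly of the five Canny stages using the functions defined above.
from scipy import ndimage

def canny_edge(gray_img, kernel_size=5, sigma=1):
    smoothed = ndimage.convolve(gray_img, gaussian_kernel(kernel_size, sigma))  # 1. noise reduction
    gradient, theta = sobel_filters(smoothed)                                   # 2. gradient calculation
    thin = non_max_suppression(gradient, theta)                                 # 3. non-maximum suppression
    thresholded, weak, strong = threshold(thin)                                 # 4. double threshold
    return hysteresis(thresholded, weak, strong)                                # 5. hysteresis edge tracking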
Support vector machine
import numpy as np
import matplotlib.pyplot as plt

# X (an n x 2 feature array) and Y (the class labels) are assumed to have been
# prepared beforehand, e.g. a small two-class sample dataset

# creating line space between -1 to 3.5
xfit = np.linspace(-1, 3.5)

# plotting scatter
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap='spring')

# plot candidate separating lines (with margins) between the two sets of data
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
    yfit = m * xfit + b
    plt.plot(xfit, yfit, '-k')
    plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',
                     color='#AAAAAA', alpha=0.4)

plt.xlim(-1, 3.5)
plt.show()
Output
Creating a separating line between the two datasets
# creating line space between -1 to 3.5
xfit = np.linspace(-1, 3.5)

# plotting scatter
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap='spring')

# plot a line between the different sets of data
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
    yfit = m * xfit + b
    plt.plot(xfit, yfit, '-k')
    plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',
                     color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
plt.show()
Output
Import the datasets
# importing required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# reading csv file and extracting class column to y.
x = pd.read_csv("C:\...\cancer.csv")
a = np.array(x)
y = a[:,30] # classes having 0 and 1
# extracting two features
x = np.column_stack((x.malignant,x.benign))
x.shape # 569 samples and 2 features
print(x, y)
Output
Fit a support vector machine with the dataset
# import support vector classifier
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
# fitting x samples and y classes
clf.fit(x, y)
Final Output
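As a brief, hedged usage note, the fitted classifier can then label new feature pairs; the two samples below contain placeholder values only, not real measurements.

# Hedged usage of the fitted classifier: the feature values are placeholders.
new_samples = [[120.0, 0.08], [15.0, 0.30]]
print(clf.predict(new_samples))   # predicted class (0 or 1) for each sample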
Algorithm
Step 1: In the first step, identify a suitable hyperplane; the algorithm chooses a candidate plane using a rule of thumb.
Step 2: Find the distance between the hyperplane and the nearest data points of each class. This distance is called the margin; the hyperplane with a larger margin than the other candidates is considered the right hyperplane.
Step 3: The support vector machine then selects the hyperplane with the highest margin and uses it to classify all data points into classes according to their characteristics. In this code, the support vector classifier shown above is used to classify the emotion of the image.
Based on this classification the emotion detection based media player makes the playlist for
music lover. The support vector machine has several advantages as well as disadvantages. The
support vector machine helps to provide the accurate and clear margin for separation. It is very
effective in nature for some spaces that is high dimensional. It gives best result when total
numbers of samples are lesser than dimensional number. Main advantage of support vector
machine is that various training sets are used in related decision-making function. It is very
efficient for memory. One of the disadvantages of support vector machine is it cannot give
accurate result when the training data set is huge and it takes huge time to work with it. It mainly
works with the noiseless data that is gathered from the face detection method in this paper. To do
this coding in an efficient way, python coding and library is used.
The main purpose of the support vector machine in emotion detection is to classify the human
emotion. With the help of this, classification is done among large number of data and as the
result of this; it produces different classes including those emotions that have some relevant
information. Sometimes, neural network is used to do 85% level of classification with good
accuracy. It is very helpful for the emotion detection media player to recognize the facial
expression of human and it classify the dataset for the purpose of regression. Support vector
machine classification algorithm uses hyperplane having N dimension. This N-dimensional
hyperplane separates similar kinds of data into different types of classes.
The main objective of this algorithm is to map the actual input dataset into a high-dimensional feature space, and it uses a kernel function to do this. The optimal separating hyperplane is then found in this new feature space. An emotion database is created to extract features and train the support vector machine; here, the Berlin emotion database is used. The support vector machine is mainly used for automatic facial expression recognition to mitigate some of the issues of manual techniques. There are many other classifiers, such as LDA, but SVM performs better than these alternatives for this task. For each expression, a displacement vector is estimated by taking the Euclidean distance between the neutral frame and the peak frame of the expression. This displacement vector is the input supplied to the support vector classifier to classify the unknown or unseen behaviour and features of the data. This classification is done dynamically, and it gives the best results on small data sets.
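A small sketch of this step is given below; the landmark arrays, the function name displacement_vector and the choice of an RBF kernel are illustrative assumptions rather than the project's exact pipeline.
# hypothetical sketch: displacement vector between neutral and peak frames,
# classified with a kernel SVM that maps it into a high-dimensional feature space
import numpy as np
from sklearn.svm import SVC

def displacement_vector(neutral_landmarks, peak_landmarks):
    # Euclidean distance of each facial landmark between the neutral frame
    # and the peak (most expressive) frame of the expression
    return np.linalg.norm(peak_landmarks - neutral_landmarks, axis=1)

# placeholder training data: one displacement vector per expression sample
X_train = np.random.rand(30, 68)            # 30 samples, 68 landmark distances
y_train = np.random.randint(0, 6, size=30)  # labels for the six basic emotions

clf = SVC(kernel='rbf', gamma='scale')      # kernel function performs the mapping
clf.fit(X_train, y_train)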
The classifier tries to place the six basic human emotions into different classes. Based on this result, the newly developed system sets the playlist for the user automatically; as the user's mood changes, the system updates the playlist with the help of this classification algorithm. The system first uses the face detection algorithm to detect the face in an image and identify the various emotions of the human being. After the face and its emotion are detected, the edge detection algorithm detects the edges of the selected face, confirms the detected image and collects the six different kinds of emotions. Finally, the support vector machine applies the classifier to separate them into different classes, so that a different playlist can be provided for each mood of the user. This flow is summarised in the sketch below.
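The sketch below expresses this flow in Python; the helper names detect_face, detect_edges, extract_features and playlist_for are hypothetical placeholders for the project's actual modules.
# hypothetical end-to-end flow of the emotion detection media player
def update_playlist(frame, classifier, player):
    face = detect_face(frame)                     # face detection algorithm
    if face is None:
        return                                    # no face found, keep current playlist
    edges = detect_edges(face)                    # edge detection confirms the detected face
    features = extract_features(face, edges)      # build the feature / displacement vector
    emotion = classifier.predict([features])[0]   # SVM assigns one of the six emotion classes
    player.load(playlist_for(emotion))            # choose a playlist for the detected mood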
Chapter 5: Conclusion and Recommendation
The research has helped in understanding the diverse parameters that are necessary for the development of the AI algorithm. It has also shed light on the procedures that need to be followed while integrating the algorithm within the software. The research shows that using Python as the scripting language has been effective for developing the three algorithms that serve the emotion detection modules. It has been understood that identifying human emotions is a complicated process that requires customized source code to adapt the neural network of the AI system. The research has provided credible information on developing algorithms for an AI environment, which could support extensive research in the future and help improve emotion detection processes. The report has been guided by clear objectives that have led to these outcomes, and a credible background study of the topic allowed the research to build on that foundation and accomplish the research aim. The development of an AI program is a complicated procedure that calls for improvisation, and this research has been able to deliver on that process. The research relied on an extensive Python implementation in order to align with the EMP emotion detection algorithm.
The Python library implemented within the project was selected for its strength in handling the complicated process of patching the neural network. The research has shown that using facial expressions as a means of emotion detection has been beneficial for developing the algorithms. The research objectives have been accomplished through the literature review, which provided key concepts used to set up the SVM classifier and influenced the development of the emotion detection algorithm. The objective of evaluating the script needed to play songs automatically has been realized by assessing the literature sourced through the data collection method. The algorithm used to identify the different facets of emotion has been created in accordance with the SVM classifier.
The prospects of the conclusive data have been valuable in developing the neural environment that is necessary for the music player to function optimally. The research has served the cause of evaluating the validity of the programs, but the algorithm still has room for improvement in classifying the different emotions.
The system should consider additional algorithms for better performance. Because a large amount of data is used, the system could combine Hadoop and Oracle to process it. SVM is an efficient classifier, but it does not reach a proper accuracy level when the data set is very large; for this reason, the system may incorporate other classifiers such as LDA. The face detection algorithm also has drawbacks, as it sometimes does not give accurate results. In the case of fingerprint recognition, the sensor may accumulate dirt and germs, and the output of the algorithm may then not match the expected results. Other disadvantages of the face detection approach are that storing and processing the data are difficult to maintain, and that it struggles with the size of the image or video as well as with the quality of the image captured by the emotion detection media player system. Another main disadvantage is that it is strongly influenced by the camera angle. Therefore, the research should use another detection process to mitigate these disadvantages.
The Canny edge detection process used in the research is a comparatively slow way to generate an accurate result. There are other operators for detecting image edges, and MATLAB tools can also be used for edge detection. In this edge detection algorithm, a Gaussian blur is used to smooth the image and remove noise from the detected image, and MATLAB is a useful tool for this kind of image processing. The Canny edge detection method works only when the image is in gray scale mode, which is a major issue because most images today use the RGB system, so every image must first be converted to gray scale, adding a pre-processing step and discarding colour information. For this reason, the research may use another technique to get better output in future. A brief sketch of the gray scale conversion and Canny step is given below.
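The following minimal sketch, assuming the OpenCV library, shows the gray scale conversion, Gaussian blur and Canny steps described above; the file name and threshold values are illustrative assumptions.
# illustrative sketch of the edge detection pre-processing with OpenCV
import cv2

# read a colour (BGR) image and convert it to gray scale,
# since the Canny operator works on single-channel images
image = cv2.imread('face.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Gaussian blur smooths the image and removes noise before edge detection
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Canny edge detection with illustrative threshold values
edges = cv2.Canny(blurred, 100, 200)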
References
Rahman, A. and Mohamed, F., 2016. Face Based Intelligence Classification for Music
Player. UTM Computing Proceedings, p.65.
Sarda, P., Halasawade, S., Padmawar, A. and Aghav, J., 2019. Emousic: Emotion and Activity-
Based Music Player Using Machine Learning. In Advances in Computer Communication and
Computational Sciences (pp. 179-188). Springer, Singapore.
Sen, A., Popat, D., Shah, H., Kuwor, P. and Johri, E., 2018. Music playlist generation using
facial expression analysis and task extraction. In Intelligent Communication and Computational
Technologies (pp. 129-139). Springer, Singapore.
Kabani, H., Khan, S., Khan, O. and Tadvi, S., 2015. Emotion based music player. International
Journal of Engineering Research and General Science, 3(1), pp.2091-2730.
Hsu, Y.L., Wang, J.S., Chiang, W.C. and Hung, C.H., 2017. Automatic ecg-based emotion
recognition in music listening. IEEE Transactions on Affective Computing.
Patel, A.R., Vollal, A., Kadam, P.B., Yadav, S. and Samant, R.M., 2016. MoodyPlayer: a mood
based music player. Int. J. Comput. Appl, 141(4), pp.0975-8887.
Yepes, F.A., López, V.F., Pérez-Marcos, J., Gil, A.B. and Villarrubia, G., 2018, June. Listen to
This: Music Recommendation Based on One-Class Support Vector Machine. In International
Conference on Hybrid Artificial Intelligence Systems (pp. 467-478). Springer, Cham.
Johnston, M.P., 2017. Secondary data analysis: A method of which the time has
come. Qualitative and quantitative methods in libraries, 3(3), pp.619-626.
Nassaji, H., 2015. Qualitative and descriptive research: Data type versus data analysis.
Aljanaki, A., Wiering, F. and Veltkamp, R.C., 2016. Studying emotion induced by music
through a crowdsourcing game. Information Processing & Management, 52(1), pp.115-128.
Sánchez-Moreno, D., González, A.B.G., Vicente, M.D.M., Batista, V.F.L. and García, M.N.M.,
2016. A collaborative filtering method for music recommendation using playing coefficients for
artists and users. Expert Systems with Applications, 66, pp.234-244.
Bhardwaj, A., Gupta, A., Jain, P., Rani, A. and Yadav, J., 2015, February. Classification of
human emotions from EEG signals using SVM and LDA Classifiers. In 2015 2nd International
Conference on Signal Processing and Integrated Networks (SPIN) (pp. 180-185). IEEE.
Schedl, M., Knees, P., McFee, B., Bogdanov, D. and Kaminskas, M., 2015. Music recommender
systems. In Recommender systems handbook (pp. 453-492). Springer, Boston, MA.
Knight, P.A., Nokia Technologies Oy, 2016. Method and apparatus for providing an emotion-
based user interface. U.S. Patent 9,386,139.
van der Steen, J.T., Smaling, H.J., van der Wouden, J.C., Bruinsma, M.S., Scholten, R.J. and
Vink, A.C., 2018. Music‐based therapeutic interventions for people with dementia. Cochrane
Database of Systematic Reviews, (7).
Kumar, R., 2019. Research methodology: A step-by-step guide for beginners. Sage Publications
Limited.
Mackey, A. and Gass, S.M., 2015. Second language research: Methodology and design.
Routledge.
Silverman, D. ed., 2016. Qualitative research. Sage.
Flick, U., 2015. Introducing research methodology: A beginner's guide to doing a research
project. Sage.
Taylor, S.J., Bogdan, R. and DeVault, M., 2015. Introduction to qualitative research methods: A
guidebook and resource. John Wiley & Sons.
Scirea, M., Nelson, M.J. and Togelius, J., 2015, April. Moody music generator: Characterising
control parameters using crowdsourcing. In International Conference on Evolutionary and
Biologically Inspired Music and Art (pp. 200-211). Springer, Cham.