Machine Learning for Pattern Recognition
SUPERVISED LEARNING
Abstract
Machine learning is a field of artificial intelligence that gives computer applications the ability to learn from their inputs. It is the study of algorithms that learn from training or historical data, building a model from sample inputs in order to make predictions about, or classifications of, new input data. Because of this ability to learn from data, machine learning has a wide range of applications, including pattern recognition, speech recognition, medical diagnosis, intrusion detection systems, congestion detection systems, optical character recognition, and computer vision. Machine learning methods are generally classified as supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the system is presented with a set of input conditions together with the desired outputs provided by an instructor, so that the system learns what answer to expect in a given scenario. Supervised learning is the most popular machine learning method because of its ease of use and the flexibility it offers. This work explores supervised learning in more detail.
Keywords: Machine learning, Supervised learning.
Table of Contents
Abstract
Introduction
Supervised learning
Supervised learning model and algorithm
Issues to be considered in supervised learning
Algorithm of supervised learning
Applications of supervised learning
Conclusion
References
Introduction
Machine Learning (ML) can be described as a branch of Artificial Intelligence, since it provides the methods that allow computers and systems to act more intelligently. The result is a unique, cumulative way of working in which various functions cooperate in an orderly manner, rather than the simple data insertion and retrieval of applications such as databases, enabling machines to make better decisions in different situations with minimal input from the user. Machine learning is a field of study drawn from several different disciplines, such as computer science, statistics, biology, and psychology. The core task of machine learning is to identify the best predictor model for making decisions by analysing and learning from previous scenarios, which is the job of a classifier. Classification is the prediction of unknown scenarios (outputs) by analysing known scenarios (inputs). Classification is performed over a data set D with the following structure:
each record has a set of attributes X = {x1, x2, ..., x|X|}, where |X| is the number of attributes, and
a target (class label) attribute Y = {y1, y2, ..., y|Y|}, where |Y| ≥ 2 is the number of classes.
The basic goal of machine learning is then prediction or classification over the data set D, that is, learning a mapping that relates the attributes in X to the classes in Y (Mohri et al, 2012).
Machine learning methods are classified on the basis of the type of input signal or feedback received by the learning system, as follows:
Supervised learning: the system is presented with a set of input conditions and an instructor providing the desired outputs, so that the system learns what answer to expect in a given scenario (Zhang et al, 2011).
Unsupervised learning: there are no labels in the data, leaving the system to discover patterns in the inputs on its own. This type of learning can be a goal in itself (uncovering patterns in the data) or a means to an end (feature learning).
Reinforcement learning: the system is placed in an unfamiliar environment in which it must perform (for instance, driving or playing against an opponent), and is rewarded or punished based on the performance it displays while navigating the problem space.
This paper deals with supervised learning, its processes, and its functionality.
Supervised learning
This type of learning is based on learning the mapping between certain attributes and outcomes in predefined scenarios, and then using the knowledge gathered in this process to make decisions, based on the learned mapping, in previously unseen scenarios. This way of learning is very important and is a key functionality in multimedia processing (Mohri et al, 2012).
In supervised learning, a computer model represents a learner system that uses two sets of data: a training data set and a testing data set. For example, consider a system for classifying a particular disease. Such a system will use a data set containing records of patients together with their diseases, split into a training data set and a testing data set. The idea behind this type of learning is to train the learner system on all possible outcomes in the training set so that it performs with the highest possible accuracy on the test set. In other words, the target of a learner is to recognise the pattern in the
inputs provided in the test set and find a solution based on what it has learned from the training set. In the disease example, the classes would be the categories of diseases. Similarly, the training set might include pictures of dogs such as terriers and spaniels, along with the identification of each; the test set would then include another group of unidentified images drawn from the same categories. The target for the learner is to derive a rule that guides it towards a solution in unknown scenarios (Mohri et al, 2012).
In supervised learning, the training set comprises ordered pairs (a1, b1), (a2, b2), ..., (an, bn), where each ai represents the set of measurements of a single example data point and bi is the label for that data point. For example, ai might be a group of five attributes for a cricket match, such as run rate, wickets in hand, strike rate, fielding plan, and individual performance. In that case the corresponding bi would be a classification of the game as a "win" or a "loss". The testing data generally comprise data points without labels: (an+1, an+2, ..., an+m). As discussed earlier, the target is to make an educated guess about "win" or "loss" on the test set by using what was learned from the training set (Mohri et al, 2012).
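To make the idea of learning from labelled (ai, bi) pairs concrete, here is a minimal sketch in Python using scikit-learn. The five match attributes, their values, the labels, and the choice of classifier are all invented for illustration and are not taken from the text.

```python
# Minimal supervised-learning sketch on labelled (a_i, b_i) pairs.
# The feature values, labels, and classifier choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Each row a_i holds five match attributes:
# run rate, wickets in hand, strike rate, fielding plan score, individual performance
A = np.array([
    [8.2, 7, 120.0, 3, 65],
    [5.1, 2,  80.5, 1, 20],
    [7.5, 5, 110.0, 2, 55],
    [4.8, 3,  75.0, 1, 30],
    [9.0, 8, 135.0, 3, 80],
    [6.0, 4,  95.0, 2, 40],
])
b = np.array(["win", "loss", "win", "loss", "win", "loss"])  # labels b_i

# Split the labelled pairs into a training set and a held-out test set
A_train, A_test, b_train, b_test = train_test_split(A, b, test_size=0.33, random_state=0)

model = LogisticRegression(max_iter=1000).fit(A_train, b_train)  # learn from the training pairs
predictions = model.predict(A_test)                              # educated guesses on unseen matches
print(predictions, accuracy_score(b_test, predictions))
```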
Supervised learning model and algorithm
The following steps are performed in order to solve a supervised learning problem:
1. Determine the type of training examples. Before proceeding further, the engineer must decide what kind of training data to use for the system: a single unit, a group of units, or a larger collection (Rambhajani et al., 2015).
2. Gather a training set. The training set should model real-world entities, so it is gathered accordingly. Along with it, the corresponding desired outputs are collected, either from experience or from empirical measurements (Mohri et al, 2012).
3. Determine the input representation of the learned function. The accuracy of the learned function depends directly on how the input is represented. Typically, the input is converted into a feature vector comprising the features that describe the object. The number of features should be neither so large that it confuses the prediction nor so small that it cripples the decision-making capability (Mohri et al, 2012).
4. Determine the form of the predictor function and the learning method to be used. The choice may be a support vector machine, a neural network, a decision tree, etc.
5. Finalise the design. The chosen method is applied repeatedly to sharpen its accuracy. Some methods may require tuning a feature on a held-out subset or through cross-validation (Rambhajani et al., 2015).
6. Evaluate the method's accuracy. Once acceptable accuracy is achieved on the training set, the system must be presented with a test set to ascertain its final accuracy in a real-world scenario (Mohri et al, 2012).
Figure 1: Supervised learning model
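The six steps above can be sketched end to end in code. In the sketch below, scikit-learn's bundled iris data stand in for the gathered training set, and a support vector machine is one arbitrary choice for step 4; none of these specifics come from the text.

```python
# End-to-end sketch of steps 1-6: gather data, represent it as feature vectors,
# choose a predictor, refine it with cross-validation, then measure test accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                              # steps 1-3: data as feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC())                  # step 4: choose the predictor
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10]}, cv=5)      # step 5: refine via cross-validation
grid.fit(X_train, y_train)

print("test accuracy:", grid.score(X_test, y_test))            # step 6: accuracy on the held-out test set
```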
The first step in supervised learning is handling the data set. A subject-matter expert can be employed to help select the features from a given data set. When a subject-matter expert is not available, the next best way to identify features is the "brute force" method: using empirical measurements, judging every possible scenario with all factors in mind, and applying statistical techniques to arrive at a conclusion, i.e. a feature. This method is not applicable in every case, however, because features obtained this way often contain significant noise that must be removed before induction, which creates the overhead of pre-processing the data set. The next step deals with preparing and pre-processing the information to be used in Assisted Machine Learning (AML) (Rambhajani et al., 2015).
Various techniques from several researchers are available for dealing with missing data. Rambhajani et al. (2015) performed a survey and described a method for removing noise from the system. Zaremba et al (2016) derived another noise-removal method that is used in several other systems. Konkol (2014) made a comparative study of six different noise-removal techniques, working over base data sets and a hypothetical test data set (Rambhajani et al., 2015).
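As a generic illustration of this kind of pre-processing (and not of the specific techniques surveyed in the papers above), missing values can be imputed and features rescaled before any learning takes place; a minimal sketch with invented measurements:

```python
# Minimal pre-processing sketch: impute missing values, then standardise features.
# The raw measurements below are invented placeholders.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X_raw = np.array([
    [5.1, np.nan, 1.4],
    [4.9, 3.0,    np.nan],
    [6.2, 2.8,    4.8],
    [5.9, 3.1,    5.1],
])

prep = make_pipeline(
    SimpleImputer(strategy="mean"),   # fill each missing entry with its column mean
    StandardScaler(),                 # remove scale differences between features
)
X_clean = prep.fit_transform(X_raw)   # cleaned data, ready for a learner
print(X_clean)
```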
Issues to be considered in supervised learning
The main issues to be considered while designing a supervised learning method are as follows:
a. The trade-off between bias and variance. Consider a scenario with several different training data sets. In such a scenario, if the learning model is biased for a particular
variable y, then when trained on these data sets the model systematically predicts the wrong value for y. Similarly, if the model has high variance for an input y, it predicts different output values for that variable when trained on different training sets. The prediction error of a learning model is generally the sum of its bias and its variance, so there is a trade-off between the two. If the learning model is too flexible, it will fit each training data set differently and will therefore exhibit high variance.
b. The second issue is the complexity of the classifier function relative to the quantity of training data. If the classifier function is simple, it tends to be inflexible; the learning model then has high bias and low variance and can learn from a very small quantity of training data. If, on the other hand, the function is complex, a large amount of data is required, because the learning model will be flexible, with low bias and high variance (Marsland, 2015).
c. The dimensionality of the input space also poses challenges to the learning model. If the input feature vectors have many dimensions, the learning problem becomes harder, because high dimensionality causes the model to have high variance. The model must then be tuned towards lower variance and higher bias to obtain a correct classifier (Marsland, 2015).
d. Another issue is noise in the output values. If the desired output values contain errors, whether caused by humans or by sensors, the learning model should never try to compute a function that exactly matches the training
examples; doing so may lead to overfitting of the function (Marsland, 2015).
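The bias-variance trade-off in issue (a) and the overfitting risk in issue (d) can be seen in the usual polynomial-fitting illustration; the data below are synthetic and the polynomial degrees are arbitrary assumptions for the sketch.

```python
# Synthetic bias-variance illustration: a degree-1 fit underfits (high bias),
# while a degree-12 fit chases the noise in the labels (high variance, overfitting).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.2, 30)   # noisy training targets
x_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_true = np.sin(2 * np.pi * x_test).ravel()                  # noise-free test targets

for degree in (1, 12):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(x, y)
    err = mean_squared_error(y_true, model.predict(x_test))
    print(f"degree {degree:2d}: test error = {err:.3f}")
```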
Algorithm of supervised learning
Supervised learning has attracted many researchers, and many efficient algorithms have been designed for it. These algorithms have been used in a wide range of applications, such as classification of disease and non-disease attributes, pattern recognition, and speech recognition (Ling et al, 2015; Nguyen et al, 2014; Marsland, 2015). Each supervised learning algorithm has its own advantages and disadvantages, and no single algorithm works on every supervised learning problem. Some of the popular supervised learning algorithms are as follows:
a. Decision Trees:
Decision trees classify input instances by sorting them according to their feature values, forming a tree-like structure in which each node represents a feature of the instance to be classified and each branch represents a value that the feature can take. Classification of an input instance starts at the root node, and the instance is routed down the tree according to its feature values (Karimi et al, 2011). Example:
Figure 2: Decision tree and data
The figure above shows a decision tree for the data given in the table on its right. In this example, an input instance comprising at1 with value a1, at2 with value a2, and so on, can be sorted down the tree and classified as Yes or No (Kamiński et al, 2017).
Decision trees are widely used for classifying input data because they are flexible enough to handle a wide range of classification problems. Generally, the tree paths, or the resulting set of rules, are mutually exclusive, so each input record is covered by exactly one rule. This enhances the accuracy of predictive systems and makes them more scalable (Kamiński et al, 2017).
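A minimal decision-tree sketch in scikit-learn is given below; the attribute values and Yes/No labels are invented placeholders standing in for the table in Figure 2, not the actual data from the figure.

```python
# Minimal decision-tree sketch: fit a tree on a tiny attribute table, print the
# learned (mutually exclusive) rules, and classify a new instance.
from sklearn.tree import DecisionTreeClassifier, export_text

# Three attributes (at1, at2, at3) encoded as integers, with a Yes/No class label
X = [
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 1],
]
y = ["No", "Yes", "Yes", "No", "Yes"]

tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(export_text(tree, feature_names=["at1", "at2", "at3"]))  # the rule set as text
print(tree.predict([[0, 1, 1]]))                               # class for a new instance
```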
b. Naïve Bayes Classifier:
The Naïve Bayes classifier belongs to the family of Bayesian-network classifiers, which are generally statistical learning algorithms (Huang et al, 2014). A Naïve Bayes network is a directed acyclic graph with a single parent node, representing the unobserved class, and many child nodes representing the observed attributes. There is a strong assumption of independence among the child nodes (Archana et al, 2014).
Naïve Bayes uses Bayes' theorem to calculate probabilities by counting the value frequencies and value combinations in the historical data (Cohen et al, 2014). The posterior probability is calculated using the Bayes formula given below:
P(c|x) = P(x|c) P(c) / P(x)
where
P(c|x) is the posterior probability of the class (target) given the predictor (attribute),
P(c) is the prior probability of the class,
P(x|c) is the likelihood, i.e. the probability of the predictor given the class, and
P(x) is the prior probability of the predictor.
For example, a frequency table and the corresponding likelihood table are shown below.
Figure 3: Frequency table and likelihood table
The posterior probability can now be calculated using the equation above, and the class with the highest posterior probability becomes the predicted result. The advantage of this method is that it requires only a very small amount of training data and is fast at predicting the outcome. The biggest challenge in using it is that a genuinely independent set of predictors is not always available in real-life situations (Ghahramani et al, 2015).
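A minimal Naïve Bayes sketch follows; the single categorical attribute and its labels are invented, merely mimicking the kind of frequency/likelihood table referred to in Figure 3.

```python
# Minimal Naïve Bayes sketch on invented categorical data: the classifier counts
# value frequencies to estimate P(c) and P(x|c), then picks the class with the
# highest posterior P(c|x).
from sklearn.naive_bayes import CategoricalNB

# One categorical attribute x (0 = sunny, 1 = overcast, 2 = rainy) and a 0/1 class label
X = [[0], [0], [1], [2], [2], [2], [1], [0], [0], [2], [0], [1], [1], [2]]
y = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0]

nb = CategoricalNB().fit(X, y)
print(nb.predict([[1]]))          # most probable class when x = overcast
print(nb.predict_proba([[1]]))    # the posterior probabilities P(c|x)
```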
c. K-Nearest Neighbor
This supervised learning method is used for classification when there is little or no prior information about the distribution of the input data, which makes it a good choice when discriminant analysis is required but the probability densities are unknown. Classification is based on the majority class among the k nearest neighbours (Hamid et al., 2010). A new input is classified by comparing it with the training samples and their attributes, with the k nearest objects taking a majority vote. For example, consider the sample data below:

X1  X2  Result
7   7   No
7   4   No
3   4   Yes
1   4   Yes

Table 1: Sample data

The values of the variables X1 and X2 determine whether the outcome is Yes or No. Now suppose the input values of X1 and X2 are 4 and 7 respectively. There is no need to perform a lengthy survey: the k nearest training points vote on the label of the new input, and in this example the outcome is Yes (Michalski et al, 2013).
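The example can be sketched with scikit-learn as below; note that the predicted label for (4, 7) depends on the choice of k and the distance metric, so the Yes outcome stated above corresponds to one particular setting.

```python
# k-NN sketch using the four training rows from Table 1. The prediction for the
# query (4, 7) depends on k and on the distance metric chosen.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[7, 7], [7, 4], [3, 4], [1, 4]]
y_train = ["No", "No", "Yes", "Yes"]

knn = KNeighborsClassifier(n_neighbors=3)      # majority vote among the 3 nearest rows
knn.fit(X_train, y_train)

print(knn.predict([[4, 7]]))                   # predicted class for the new input
print(knn.kneighbors([[4, 7]]))                # distances and indices of the nearest neighbours
```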
d. Support Vector Machine
A Support Vector Machine represents the examples as points in a space and separates them into classes by a clear gap, the hyperplane. New examples are then mapped into the same space and their classes are predicted according to which side of the hyperplane they fall on (Karamizadeh et al, 2014). The hyperplane can be used for both classification and regression. In its basic form, the Support Vector Machine is a linear binary classifier that divides the input into two categories separated by as wide a gap as possible around the hyperplane, according to some objective criterion. SVMs have been widely used for classification because they are effective in high-dimensional spaces, are memory efficient, and are versatile in supporting different kernel functions for decision making. The main challenge with this method is the refinement needed for multi-class classification, which requires the linear binary classifier to be executed repeatedly and therefore consumes more time (Liu et al, 2012).
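A minimal SVM sketch is given below; scikit-learn's SVC handles the multi-class case internally by combining binary classifiers, which reflects the repeated binary classification mentioned above. The iris data are used only as a convenient stand-in.

```python
# Minimal SVM sketch: a kernel SVC separates classes with a maximum-margin boundary;
# the three-class problem is handled internally by combining binary classifiers.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                 # 3 classes, 4-dimensional feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", C=1.0)                    # kernel choice is one of the SVM's strengths
svm.fit(X_train, y_train)
print("test accuracy:", svm.score(X_test, y_test))
```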
e. Artificial neural network and Deep Learning:
Supervised learning can be implemented efficiently using artificial neural network algorithms such as the back-propagation algorithm. Neural networks are machine models of human neurons and have a set of weighted inputs; each neuron produces an output based on a threshold function. In the training phase, various combinations of inputs and weights are used to train the network to produce an output, and the error in the obtained output is propagated backwards so that the network learns from it (LeCun et al, 2015). Deep artificial neural networks are a variant of neural networks that has become popular in pattern recognition and machine learning. In the deep learning model, neural networks with many layers are trained in a layer-wise manner; each layer's learning enhances the quality of learning and the accuracy of the
output, and hence deep learning is used in many applications such as computer vision, speech recognition, handwriting recognition, and natural language processing (Ruslan et al., 2012; Dalessandro, 2013).
Today deep learning has been implemented in various products of companies such as Microsoft, Google, and Apple for data analysis and natural language processing. Google voice search and Apple Siri use deep learning for natural language processing and are thus able to recognise spoken text (Vincent et al, 2010). These applications learn from user usage every time they are used and so improve in quality over time (Dahl et al., 2012).
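A small sketch of a multi-layer network trained with back-propagation is shown below, using scikit-learn's MLPClassifier as a shallow stand-in for the deeper architectures described above; the synthetic data and layer sizes are arbitrary assumptions.

```python
# Sketch of a multi-layer neural network trained with back-propagation.
# MLPClassifier is a shallow stand-in for the deeper models discussed above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # synthetic labelled data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32, 16),   # two layers of weighted neurons
                    max_iter=1000, random_state=0)
net.fit(X_train, y_train)                          # weights adjusted by back-propagating the error
print("test accuracy:", net.score(X_test, y_test))
```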
Applications of supervised learning
Supervised learning has been used in a variety of applications. There are many supervised learning algorithms, and many of them have proven useful in practice. Many researchers have successfully implemented applications that use supervised learning for prediction, classification, and recognition. The popularity of supervised learning algorithms is due mainly to their simplicity and to the fact that they allow applications to be developed from the available input sets and expected output sets, which makes automation easier. Supervised learning has been used for pattern recognition, speech recognition, handwriting recognition, optical character recognition, image classification, data mining, knowledge mining, data and text classification, spam filtering, signal filtering, intrusion detection systems, automated traffic-light systems, flight scheduling, congestion control systems, disease classification and prediction systems, and more. Some notable works are given below.
Disease classification & prediction
Supervised learning can be used to diagnose a disease: the existing disease patterns and the symptoms of previously diagnosed patients are given as the training inputs. After training, a new set of inputs is supplied to the model and the model predicts whether the disease is present or not (Jordan et al, 2015), i.e. it classifies each case into the class present or absent. Such work has been done for predicting diseases such as breast cancer, diabetes, and heart disease (Yugowati et al, 2013).
In a study by Vembandasamy et al. (2015), the authors used the Naïve Bayes supervised learning algorithm to predict heart disease in patients. The data set comprised 500 patients, the classification was performed with a 70 % data split, and the system accuracy was 87 %. Similarly, Iyer et al (2015) predicted diabetes using a decision tree and a Naïve Bayes classifier.
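In the same spirit (though not reproducing the specific studies above), a disease-classification sketch with a Naïve Bayes classifier on scikit-learn's bundled breast-cancer data might look as follows; the dataset and split are assumptions for illustration.

```python
# Sketch of disease classification with Naïve Bayes, in the spirit of the studies
# cited above (not a reproduction of them). The bundled breast-cancer data serve
# as a readily available medical-style dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)          # patient measurements and a benign/malignant label
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=0)

nb = GaussianNB().fit(X_train, y_train)             # learn class priors and per-feature likelihoods
print("test accuracy:", nb.score(X_test, y_test))
print("prediction for one new patient:", nb.predict(X_test[:1]))
```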
Pattern Recognition
Supervised learning algorithms can be used to classify different patterns of images, shapes, handwritten characters, and so on. The model is trained on an existing set of input and output patterns; after training, it is used to classify new input patterns into the classes identified in the training phase. Pattern-recognition abilities have helped in the development of computer vision applications using supervised learning (Murty et al, 2011). Many applications that use the supervised learning model for image processing and pattern recognition have been devised and successfully implemented. For example, an automated smart-card recharge system is able to recognise the dollar notes submitted to the machine through pattern recognition.
Many researchers have used algorithms such as neural networks, Naïve Bayes, SVM, and K-Means to classify and identify patterns in handwriting, speech, images, shapes, etc. (Smith et al, 2011; Sharma et al, 2015).
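A minimal handwriting-recognition sketch along these lines uses scikit-learn's small digits dataset with an SVM classifier; the dataset and parameter value are illustrative assumptions rather than details from the cited works.

```python
# Minimal pattern-recognition sketch: classify 8x8 handwritten-digit images with
# an SVM, one of the algorithms listed above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)                  # each row is a flattened 8x8 digit image
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(gamma=0.001).fit(X_train, y_train)         # gamma chosen small for pixel-valued features
print("test accuracy:", clf.score(X_test, y_test))
```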
Intrusion detection:
The signatures of intrusions and their consequences are given as training input to a supervised model. Based on this learning, the model classifies new inputs into an intrusion category or a safe category; in this way supervised learning helps in intrusion detection applications. Supervised learning methods also help in detecting the signatures of cyber-attacks and viruses on networks. Much research has been carried out in this area using algorithms such as neural networks and SVMs (Poojitha et al, 2010).
Conclusion
One of the main objectives of machine learning is to give the computer the ability to learn from data or past experience and thereby help solve classification and prediction problems. Machine learning can be done in three different ways: supervised learning, unsupervised learning, and reinforcement learning. The main aim of supervised learning is to build a model from the training inputs and their known outputs; the result is a classifier that assigns to each test instance one of the class labels formed in the training phase. Supervised learning has proven to be one of the most efficient and easiest methods of machine learning and hence is widely used in applications such as speech recognition, pattern recognition, computer vision, intrusion detection, and medical diagnosis systems. Supervised learning algorithms are more powerful than other machine learning models,
such as unsupervised methods, because the availability of training data in supervised learning provides clear criteria for optimising the model.
References
Alpaydin, E., (2014). Introduction to machine learning. MIT press.
Alsheikh, M.A., Lin, S., Niyato, D. and Tan, H.P., (2014). Machine learning in wireless sensor
networks: Algorithms, strategies, and applications. IEEE Communications Surveys & Tutorials,
16(4), pp.1996-2018.
Archana, S. and DR Elangovan, K. (2014) Survey of Classification Techniques in Data Mining.
International Journal of Computer Science and Mobile Applications, 2, 65-71
Cohen, P.R., and Feigenbaum, E.A. eds., (2014). The handbook of artificial intelligence (Vol. 3).
Butterworth-Heinemann.
Dahl, G.; Yu, D.; Deng, L.; Acero, A. (2012). "Context-Dependent Pre-Trained Deep Neural
Networks for Large-Vocabulary Speech Recognition". IEEE Transactions on Audio, Speech, and
Language Processing. 20 (1): 30–42.
Dalessandro, B., (2013). Bring the noise: Embracing randomness is the key to scaling up
machine learning algorithms. Big Data, 1(2), pp.110-112.
Deng, H.; Runger, G.; Tuv, E. (2011). Bias of importance measures for multi-valued attributes and solutions. Proceedings of the 21st International Conference on Artificial Neural Networks (ICANN).
Ghahramani, Z., (2015). Probabilistic machine learning and artificial intelligence. Nature,
521(7553), p.452.
Hamid, P, Alizadeh, H., and Minati, B. (2010). “A Modification on K-Nearest Neighbor
Classifier”, Global Journal of Computer Science and Technology, Vol. 10, No. 14 (Ver.1.0), pp.
37-41.
Huang, G., Huang, G.B., Song, S. and You, K., (2015). Trends in extreme learning machines: A
review. Neural Networks, 61, pp.32-48.
Huang, G., Song, S., Gupta, J.N. and Wu, C., (2014). Semi-supervised and unsupervised extreme
learning machines. IEEE transactions on cybernetics, 44(12), pp.2405-2417.
Iyer, A., Jeyalatha, S. and Sumbaly, R. (2015) Diagnosis of Diabetes Using Classification
Mining Techniques. International Journal of Data Mining & Knowledge Management Process
(IJDKP), 5, 1-14.
Jordan, M.I., and Mitchell, T.M., (2015). Machine learning: Trends, perspectives, and prospects.
Science, 349(6245), pp.255-260.
Kamiński, B.; Jakubczyk, M.; Szufel, P. (2017). "A framework for sensitivity analysis of
decision trees". Central European Journal of Operations Research. doi:10.1007/s10100-017-
0479-6
Karamizadeh, S., Abdullah, S.M., Halimi, M., Shayan, J. and Rajabi, M.J. (2014) Advantage and
Drawback of Support Vector Machine Functionality. 2014 IEEE International Conference on
Computer, Communication and Control Technology (I4CT), Langkawi, 2-4 September 2014, 64-
65
Karimi, K. and Hamilton, H.J. (2011). "Generation and Interpretation of Temporal Decision
Rules", International Journal of Computer Information Systems and Industrial Management
Applications, Volume 3.
Konkol, M. (2014). Brainy: A machine learning library. In International Conference on Artificial Intelligence and Soft Computing (pp. 490-499). Springer, Cham.
LeCun, Y., Bengio, Y. and Hinton, G., (2015). Deep learning. Nature, 521(7553), pp.436-444.
Ling, J. and Templeton, J., (2015). Evaluation of machine learning algorithms for prediction of
regions of high Reynolds averaged Navier Stokes uncertainty. Physics of Fluids, 27(8),
p.085103.
Liu, X.Y., Gao, C.H. and Li, P. (2012). A comparative analysis of support vector machines and extreme learning machines. Neural Networks, 33, pp.58-66.
Marsland, S., (2015). Machine learning: an algorithmic perspective. CRC press.
Michalski, R.S., Carbonell, J.G. and Mitchell, T.M. eds., (2013). Machine learning: An artificial
intelligence approach. Springer Science & Business Media.
Mohri, M., Rostamizadeh, A. and Talwalkar, A. (2012). Foundations of Machine Learning. Cambridge, MA: MIT Press. ISBN 9780262018258.
Murty, N.M. and Susheela Devi, V. (2011). Pattern Recognition: An Algorithmic Approach. ISBN 0857294946.
Nguyen, D.H. and Patrick, J.D. (2014). Supervised machine learning and active learning in the classification of radiology reports. Journal of the American Medical Informatics Association, 21(5), pp.893-901.
Poojitha, G., Kumar, K.N. and Reddy, P.J. (2010). Intrusion Detection Using Artificial Neural Network. International Conference on Computing Communication and Networking Technologies (ICCCNT), pp. 1-7.
Rambhajani, M., Deepanker, W. and Pathak, N. (2015). A Survey on Implementation of
Machine Learning Techniques for Dermatology Diseases Classification. International Journal of
Advances in Engineering & Technology, 8, 194-195.
Ruslan, S. and Joshua, T. (2012). Learning with Hierarchical-Deep Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, pp.1958-1971. doi:10.1109/TPAMI.2012.269
Sharma, P. and Kaur, M. (2013) Classification in Pattern Recognition: A Review. International
Journal of Advanced Research in Computer Science and Software Engineering, 3, 298.
Singh, Y., Bhatia, P.K., and Sangwan, O. (2007). A Review of Studies on Machine Learning
Techniques. International Journal of Computer Science and Security, 1, 70-84.
Smith, M.R. and Martinez, T. (2011). Improving Classification Accuracy by Identifying and Removing Instances that Should Be Misclassified. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2011), pp. 2690-2697.
Vembandasamy, K., Sasipriya, R. and Deepa, E. (2015). Heart Diseases Detection Using Naive
Bayes Algorithm. IJISET-International Journal of Innovative Science, Engineering &
Technology, 2, 441-444.
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y. and Manzagol, P. (2010). Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. The Journal of Machine Learning Research, 11, pp.3371-3408.
Wenger, E., (2014). Artificial intelligence and tutoring systems: computational and cognitive
approaches to the communication of knowledge. Morgan Kaufmann.
Witten, I.H., Frank, E., Hall, M.A. and Pal, C.J., (2016). Data Mining: Practical machine
learning tools and techniques. Morgan Kaufmann.
Xia, R., Pan, Y., Lai, H., Liu, C. and Yan, S., (2014). Supervised Hashing for Image Retrieval
via Image Representation Learning. In AAAI (Vol. 1, pp. 2156-2162).
Yugowati, P., Shaou-Gang, M. and Wee, H. (2013). Supervised learning approaches and feature selection - a case study in diabetes. International Journal of Data Analysis Techniques and Strategies, 5(3), pp.323-337.
Zaremba, W., Mikolov, T., Joulin, A. and Fergus, R., (2016). Learning simple algorithms from
examples. In International Conference on Machine Learning (pp. 421-429).
Zhang, J., Zhan, Z., Lin, Y., Chen, N., Gong, Y., Zhong, J., Chung, H., Li, Y. and Shi, Y. (2011). Evolutionary Computation Meets Machine Learning: A Survey. IEEE Computational Intelligence Magazine, 6(4), pp.68-75.