
Comparative Exploration of KNN, J48 and Lazy IBK Classifiers in Weka


Added on  2023-06-07

Abstract
Classification is a data mining technique that predicts the group to which a data instance in a record belongs, subject to certain constraints. The data classification problem arises in several areas of data mining (Xhemali, Hinde, & Stone, 2009): given a number of characteristic variables, the task is to predict a target variable. Customer targeting, medical diagnostics, social network analysis, and artificial intelligence are some areas of application. This article examines different classification techniques and their pros and cons. The J48 (decision tree), k-nearest neighbor, and naive Bayes classification algorithms are used, and a comparative evaluation of the three in connection with voting preferences is performed. The comparison presented in this document covers classification accuracy and cost analysis, and the results reveal the efficacy and precision of the classifiers.
Table of Contents
Abstract
Introduction
K-Nearest Neighbor Classification
Naive Bayes Classification
Decision Tree
Comparison among K-NN (IBK), Decision Tree (J48) and Naive Bayes Techniques
Instrument for the Comparison
Data Exploration
Performance Investigation of the Classifiers
Classification by Naïve Bayes
K-Nearest Neighbor Classification
Decision Tree (J48) Classification
Conclusion
References
Introduction
In data classification, data are organized into categories according to similarity and specificity, so that objects in different groups are dissimilar, and the algorithm assigns each instance to a class so as to minimize the error (Brijain, Patel, Kushik, & Rana, 2014). A classification model is built from an associated training data set together with its class labels, and that model then assigns labels to new data within the constraints it has learned. Classification is a supervised approach that proceeds in two steps. First, a training and pre-processing phase constructs the classification model. Second, the model is applied to a test data set whose class variables are to be predicted (Jadhav & Channe, 2016). The current article explores three classification techniques in data mining, comparing K-NN classification, decision tree, and naive Bayes on the precision of each algorithm. This comparative guide may help future researchers develop innovative algorithms in the field of data mining (Islam, Wu, Ahmadi, & Sid-Ahmed, 2007).
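The two-step supervised workflow described above can be sketched as follows. This is a deliberately trivial majority-class baseline, not any of the classifiers compared in this article; it only illustrates the separation between building a model from labeled training data and applying it to test instances.

```python
from collections import Counter

# Step 1 (training): build a model from the labeled training data.
def fit_majority(labels):
    """A deliberately trivial 'model': just the most frequent class label."""
    return Counter(labels).most_common(1)[0][0]

# Step 2 (application): apply the learned model to unlabeled test instances.
def predict(model, test_instances):
    # The majority-class model predicts the same label for every instance.
    return [model for _ in test_instances]

train_labels = ["yes", "yes", "no"]
model = fit_majority(train_labels)
predictions = predict(model, ["instance-1", "instance-2"])
```

Real classifiers such as J48 or naive Bayes differ only in what is learned in step 1 and how it is applied in step 2; the two-phase structure is the same.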
K-Nearest Neighbor Classification
The KNN algorithm is an instance-based learning method that assumes similar samples lie close to one another. Classifiers of this type are also known as lazy learners: KNN builds its classifier simply by storing all the training samples, without constructing an explicit model. Lazy learning algorithms therefore take little computation time during the learning phase and rely on similarity between instances. That is, to classify a sample X, the algorithm searches for its K nearest neighbors and assigns X the class label to which most of those neighbors belong. The performance of the k-nearest neighbor algorithm is strongly influenced by the choice of the value of k.
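As a minimal illustration of this store-then-vote procedure (a plain Python sketch, not Weka's IBk implementation), a k-nearest-neighbor classifier using Euclidean distance can be written as:

```python
import math
from collections import Counter

def knn_classify(train, x, k=3):
    """Classify sample x by majority vote among its k nearest
    training neighbors, using Euclidean distance."""
    # "Training" was just storing the samples; all work happens here.
    neighbors = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    # Majority vote over the neighbors' class labels.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data: (feature vector, class label) pairs.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B")]
label = knn_classify(train, (1.1, 1.0), k=3)  # a query point near class A
```

Note how changing k changes the vote: with k=1 only the single closest sample decides, while a large k can drown out a small local class, which is why the choice of k matters.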