PREDICTIVE MAINTENANCE FOR INDUSTRIAL MACHINES USING ARTIFICIAL NEURAL NETWORK
Abstract:
Predictive analysis plays a vital role across industries because of the scale and complexity of the data involved. Complex information retrieval and categorization systems are needed to process queries and to filter, store, and organize huge amounts of data, much of it textual. As datasets grow large, many algorithms stop performing well; for example, an algorithm that must repeatedly load the data into memory may run out of memory on large datasets. Prediction matters greatly for today's industries because it can reduce heavy asset losses: it helps stop the cause of an error before any damaging activity can occur. Predictive maintenance is a method that uses direct monitoring of the mechanical condition of plant equipment to determine the actual mean time to failure for each monitored machine. From the mechanical construction of the equipment, we can estimate the faults that may occur in a machine and the time at which they would become critical. This prediction must be done effectively, and for that purpose we turn to machine learning, which is shaping the future of industry. In this dissertation we work with the Artificial Neural Network (ANN), which is well suited to prediction compared with other algorithms because it excels at function approximation, clustering, and forecasting. Finally, ANN is compared with the other algorithms in terms of accuracy, precision, and specificity.
Table of Contents
1 Predictive Maintenance
1.1 Introduction
1.2 Technical Overview
1.3 Objective
1.4 Problem Formulation
1.5 Research Outcome
1.6 Research Impact
2 Literature Survey
2.1 Background
2.2 Literature
3 Machine Learning Algorithms
3.1 Theoretical Background
3.2 Dimension Reduction Techniques
3.2.1 Feature Selection
3.2.2 Sequential Forward Selection
3.2.3 Sequential Backward Selection
3.2.4 Feature Extraction
3.3 Machine Learning Algorithms
3.3.1 k-Nearest Neighbor
3.3.2 Support Vector Machines (SVM)
3.3.3 Random Forest
3.3.4 Naïve Bayes
3.3.5 AdaBoost
4 Artificial Neural Network
4.1 Technical Overview
4.2 Specification
4.3 Design
4.4 Methodology
4.4.1 Fuzzy-Based Decision System
5 Classification Techniques
5.1 Comparative Study
6 Result
6.1 Evaluation
6.2 Analysis
7 Conclusion
References
Figures
Figure 1: Manufacturing industry quality and analysis
Figure 2: Predictive analysis with the degree of intelligence
Figure 3: General process in the artificial neural network
Figure 4: Feature selection
Figure 5: Xn set of independent features or dimensions reduced to Yn
Figure 6: k-Nearest Neighbour, modified from k-Nearest Neighbour and Dynamic Time Warping (2016)
Figure 7: Classification based on linear SVM
Figure 8: Classification based on hard SVM
Figure 9: Schematic of AdaBoost
Figure 10: 2-dimensional neural network model
Figure 11: Kohonen self-organizing feature map
Figure 12: Block diagram of the fuzzy logic system
Figure 13: Comparison of classification accuracy
Figure 14: Prediction of false alarm rate
Tables
Table 1: Advantages of different classification algorithms
Table 2: Feature comparisons
Table 3: Comparison of classification algorithms
Table 4: Comparison of classifiers employing the method of cross-validation
Chapter 1: Predictive Maintenance
1.1 Introduction:
In the modern era, new technologies, innovations, and developments emerge daily, leading to the mass production of voluminous data. This huge volume of data is highly informative and drives the improvement of manufacturing processes across industries. Prediction is valuable in virtually every field because it can help stop the cause of an error before any damaging activity occurs, thereby protecting the assets of a production industry. Prediction is also central to health monitoring, where it allows a patient's health to be tracked continuously without a caretaker nearby. Motivated by quality of life and less expensive healthcare, the existing healthcare system should shift toward a home-centered setting, moving from managing illness to preserving wellness. An innovative, user-centered preventive healthcare model is a promising vehicle for this transformation: it does not replace traditional healthcare but complements it. The technology behind pervasive healthcare can be considered from two aspects: i) as pervasive computing tools for healthcare, and ii) as making healthcare available anywhere, anytime, and to anyone. It builds on biomedical engineering (BE), medical informatics (MI), and pervasive computing. Biomedical engineering integrates engineering and medical science to improve hospital equipment; medical informatics manages large collections of medical resources to enhance their storage, retrieval, and use in healthcare [1]. Advances now allow a patient's health to be monitored and the details forwarded to caretakers in remote areas in real time over the internet; because the patient is monitored in real time, the caretaker can give advice on the patient's vital signs through a video conference.
Another key aspect of predictive maintenance is condition monitoring, or condition-based maintenance. Condition-based monitoring analyzes a machine without interrupting its regular work. Monitoring the condition of equipment is a decision-making strategy that can avert faults or failures that would otherwise strike the equipment and its components in the near future. It parallels the Prognostics and Health Monitoring (PHM) mentioned above: a prognostic analyzes the situation a patient is heading toward, and the machine analogue is the Remaining Useful Life (RUL). Recent advances in computerized control, information technology, and communication networks have made it possible to accumulate masses of operating and process-condition data, to harvest them for automated Fault Detection and Diagnosis (FDD), and to develop more resourceful approaches to intelligent preventive maintenance behavior, also termed predictive maintenance.
It is estimated that about 20 to 30 percent of the equipment periodically monitored for predictive maintenance suffers production and quality problems that must be examined regularly. In fact, weekly or monthly monitoring is not enough to detect certain abnormalities in the machines [2]. Moving the equipment from periodic to continuous monitoring can considerably lower its operating expenditure, saving the cost of PC- and accelerometer-based on-line monitoring systems. An Artificial Neural Network (ANN) can be used effectively in this kind of evaluation, detecting abnormal patterns for sensor validation and for trend evaluation. This chapter discusses the effect of predictive maintenance in industry and the ANN methods that can support such maintenance.
1.2 Technical Overview:
Predictive maintenance is a method that uses direct monitoring of the mechanical condition of plant equipment to determine the actual mean time to failure for each monitored machine [3]. From the mechanical construction of the equipment, we can estimate the faults that may occur in a machine and the time at which they would become critical. With predictive maintenance, we can detect which equipment will be seriously affected before the situation actually arises, so a warning can be generated before any hazardous condition develops. One of the most widely used estimation techniques is vibration signature analysis, which lets us infer the mechanical condition of a machine. This analysis alone, however, cannot estimate every mechanical failure: it does not cover the lubricating-oil condition, shaft displacement, and many other parameters. A complementary technique, failure-mode analysis, assesses individual components, and the magnitudes it produces can reveal fault evolution and machine operating conditions.
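To suggest how a vibration signature might be turned into features for such an analysis, the following minimal Python sketch computes simple spectral features from a vibration signal with NumPy. The sampling rate, the synthetic signal, and the feature choices are illustrative assumptions, not the equipment or parameters of this project.

import numpy as np

def vibration_features(signal, fs):
    """Compute simple spectral features from a vibration signal.

    signal: 1-D array of accelerometer samples (assumed units: g)
    fs: sampling rate in Hz (an assumption for illustration)
    """
    spectrum = np.abs(np.fft.rfft(signal))            # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # frequency axis in Hz
    return {
        "rms": float(np.sqrt(np.mean(signal ** 2))),    # overall vibration level
        "peak_freq": float(freqs[np.argmax(spectrum)]), # dominant frequency
        "peak_amp": float(spectrum.max()),              # its amplitude
    }

# Hypothetical example: a 50 Hz machine tone plus noise, sampled at 1 kHz.
fs = 1000
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(fs)
print(vibration_features(sig, fs))

A shift of the dominant frequency or a rise in RMS level in such features is the kind of deviation from the machine's normal signature that the analysis described above looks for.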
In general, prediction finds patterns in large quantities of data. The right statistical model must be chosen to extract genuine insight from the data at our disposal [4]; the hidden patterns a process discloses are what let us make predictions, and this is what is termed predictive analysis. In a production or manufacturing environment we face the concerns of optimizing processes, scheduling the job shop, sequencing and arranging, organizing cells, controlling quality, maintaining the workforce, planning required materials, resource planning under pressure, supply-chain management, and future-worth examination of cost implications, yet familiarity with the data-mining tools that could ease these routine burdens is not widely available. This applies not only to manufacturing but equally to retail, where the wish list of what customers will purchase in a particular month and period, based on fashion, weather, and so on, must be anticipated. The basic formula is: 1) collect data; 2) clean data (remove the unwanted data from the list); 3) identify patterns
(group the data into similarity lists drawn from the insights); and finally 4) make predictions (foresight).
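The four-step formula can be sketched as a tiny pipeline. The snippet below is a hedged illustration with pandas and scikit-learn; the synthetic sensor columns, the clustering step used for "identify patterns," and the logistic model are assumptions for illustration, not artifacts of this project.

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# 1) Collect data: a synthetic stand-in for a machine sensor log (hypothetical columns).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "vibration": rng.normal(5.0, 2.0, 300),
    "temperature": rng.normal(70.0, 10.0, 300),
})
df["failed"] = ((df["vibration"] > 6) & (df["temperature"] > 75)).astype(int)

# 2) Clean data: drop missing or clearly invalid readings.
df = df.dropna()
df = df[df["temperature"] > 0]

# 3) Identify patterns: group similar operating states.
df["state"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    df[["vibration", "temperature"]])

# 4) Make predictions: learn which readings and states precede failure.
X = df[["vibration", "temperature", "state"]]
model = LogisticRegression().fit(X, df["failed"])
print("training accuracy:", model.score(X, df["failed"]))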
Related forms include predictive modeling, decision analysis and optimization, and profiling, together with predictive search and exploration. Predictive analytics can be applied to an assortment of business strategies and plays a key role in search marketing and recommendation engines, as shown in the figure below.
Figure 1: Manufacturing industry quality and analysis
1.3 Objective:
The main objective of this project is to predict malfunctions that may occur in the future during operation. Such malfunctions can be identified from data derived from the machine. For instance, every machine has its own natural, or harmonic, frequency, and every natural frequency produced by the equipment has its own vibrational mode, or wave pattern. A disturbance in these natural frequencies indicates that the machine may have a fault. The frequency patterns arising from the machine's vibration can be measured with a vibration sensor, and beyond vibration we can also include temperature, pressure, and similar parameters. Extreme deviation of any of these parameters from its limit or set point will drive the machine toward malfunction, so it must be known before the error actually occurs in the machine, which is exactly what prediction makes possible. This kind of prediction is very difficult for humans to handle, since it requires considerable manpower with strong monitoring skills. What if the intelligence were attached to the machine itself? That is the idea of machine learning. There are several machine learning algorithms, but the artificial neural network may be the best
suited to prediction, since it can perform intelligent tasks the way the human brain does [5]. An ANN gains knowledge through learning, and this knowledge is stored in the inter-neuron connections known as weights. A multilayer perceptron consists of an input layer, followed by one or more hidden layers, and finally an output layer. Backpropagation is the process in which the input data are repeatedly presented at the input layer and the network's weights are adjusted; in this process the data must be used for training. Supervised training passes historical data through the input layer and the hidden layers, and the network's output is then measured against the desired output. This produces a model that maps inputs to outputs using historical data, and the model can then supply the output when the desired output is not known. The concept used in this project is a supervised neural network model built on a multilayer perceptron (MLP) classifier trained with backpropagation. The MLP works with two arrays: X_array, which holds the samples and features for training, and Y_array, which holds the target values for the training samples. Finally, accuracy, precision, and specificity are derived at the end of the project in comparison with the other algorithms.
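A minimal sketch of the supervised MLP setup just described is given below. It uses scikit-learn's MLPClassifier, which trains by backpropagation; scikit-learn itself, the layer sizes, and the synthetic X_array/Y_array standing in for the project's real data are all assumptions for illustration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in data: 500 samples, 4 sensor features, binary fault label.
rng = np.random.default_rng(0)
X_array = rng.normal(size=(500, 4))                        # samples and features
Y_array = (X_array[:, 0] + X_array[:, 1] > 0).astype(int)  # target values

X_train, X_test, y_train, y_test = train_test_split(
    X_array, Y_array, test_size=0.25, random_state=0)

# Input layer, two hidden layers, output layer; weights learned by backpropagation.
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))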
1.4 Problem Formulation:
Predictive modeling and analysis may be the most important factor in any business organization: it leads to noteworthy improvements in quality and operations and plays a major role in the decisions that increase asset value. Consider Amazon and eBay, which depend on online ad networks such as Facebook and Google. Every organization can run statistical analyses of its data and understand its environment within certain limits: current clients, turnover activities, tracking of supplies, and so on. Predictive maintenance can yield maximum profit by identifying the best course of action and working accordingly [6]. Beyond practitioners' success stories of integrated predictive analytics delivering large, tangible payback to organizations, information systems researchers likewise emphasize the significance of predictive analytics when discussing future trends in the evolution of decision support systems.

Data can also be collected for analyzing a machine in manufacturing, but a noteworthy question is what type of data should be considered for predictive maintenance [7]. Millions upon millions of records are generated annually; considering all of them would be tedious and a waste of time, so the data that matter must be identified. A second consideration is that, once the data are generated, some operator must be alerted when a malfunction is about to occur; if the operator misses it, what happens to the machine? Human error is normal in every setting, but in large industries, when errors persist, the whole industry can suffer heavy losses [8]. This has to be noted.
1.5 Research Outcome:
For the problem stated above, an Artificial Neural Network (ANN) can be an excellent solution. On-line monitoring of the mechanical condition of equipment becomes affordable with a low-cost, highly reliable data acquisition system. Before applying an ANN, the first point to settle is whether the data call for a supervised or an unsupervised approach. A supervised approach produces a specified output for a given input, while in an unsupervised approach the output is unknown. The unsupervised method involves experimentation: selecting sample data from the existing dataset for further handling, clustering objects according to the inner structure to which the data belong, and constructing inverse or direct methods for quantitative prediction. Once the type of dataset is determined, the ANN can be designed accordingly. In our strategy, the data belong to the supervised approach.
The essence of an Artificial Neural Network lies in the word "neural," from neuron: the term describes how "artificial neurons" are networked together and how each individual neuron performs its operation. Artificial neurons mimic biological neurons, accepting various signals from neighboring neurons and processing them in a predefined way. Based on the result of this processing, the neuron decides whether to produce an output. The output signal can be 0 or 1, or any real value between 0 and 1, depending on the values being dealt with.
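The behavior of a single artificial neuron described here can be written out directly. A minimal sketch follows, with illustrative weights and inputs as assumptions; a sigmoid maps the weighted sum to a real value between 0 and 1, which can be thresholded to a hard 0/1 decision.

import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, then a sigmoid."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))  # real value in (0, 1)

# Illustrative signals from three neighboring neurons, with assumed weights.
x = np.array([0.9, 0.2, 0.4])
w = np.array([1.5, -0.8, 0.3])
activation = neuron(x, w, bias=-0.5)
print(activation, "fires" if activation >= 0.5 else "does not fire")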
A fuzzy logic system associated with the artificial neural network can help in finding reliable data and can indicate and identify features that permit a consistent diagnosis of the machines. Any abnormal or deteriorating condition in a machine can be reported by the artificial neural network once the system has been trained appropriately. Deploying fuzzy logic together with neural networks has recently become more common, and this integration forms a successful path toward hybrid neural networks and expert systems built on fuzzy rule-based systems rather than traditional ones.
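To suggest how such a fuzzy rule-based layer might sit alongside the network, the sketch below evaluates one hand-written rule over triangular membership functions in plain Python; the variable ranges and the rule itself are invented for illustration and are not the rule base of this project.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fault_risk(vibration_mm_s, temperature_c):
    """Rule: IF vibration is high AND temperature is high THEN risk is high.

    Membership ranges are assumptions for illustration only.
    """
    vib_high = triangular(vibration_mm_s, 4.0, 9.0, 14.0)
    temp_high = triangular(temperature_c, 60.0, 85.0, 110.0)
    return min(vib_high, temp_high)  # AND taken as the minimum, a common fuzzy choice

print(fault_risk(8.0, 90.0))  # degree of membership in "high risk"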
Certain points must be considered when applying these rules. The machines are trained to detect faults, and a fault should be noted at the very first indication of a defect, since similar signal patterns from different fault mechanisms can lead a machine to exhibit analogous vibrational symptoms. The inherent generalization capacity of an ANN is very useful in handling this situation. Expert systems are particularly useful where the knowledge of an expert is unambiguously available; an ANN can "extract" that knowledge from the accessible information even when the operator is not available. For alarm systems and fault diagnosis, the best system to integrate with the neural network is an expert system. Connectionist expert systems, with an ANN in their
knowledge bases, can provide maximum benefit in terms of speed, robustness, and knowledge acquisition.
The software we will use to implement the ANN is Python. Python is a popular programming language used for software development, web development, mathematics, and scripting. We chose Python because it is a simple language that runs on any platform, including Linux, Windows, and macOS, and its syntax is simpler than that of other scripting languages. It runs on an interpreter, so code can be executed as soon as it is written, and it supports object-oriented, procedural, and functional styles. Large amounts of computation can be carried out very easily with Python.
1.6 Research Impact:
The impacts of predictive analysis are many. A common question is why predictive analysis is needed and why companies invest so much money in the technical equipment required for prediction. In a study conducted at Honeywell, respondents reported that companies engaged in predictive maintenance make real-time decisions at a rate of 64%, limit waste at a rate of 74%, and assess risk at a rate of 73%. The same is not true of companies that decline to adopt such a model, that try to build the process later, or that do not want to spend on this maintenance [5]: these companies face heavy asset expenditure compared with the companies that invested in it.
The companies engaged in predictive maintenance achieve practical, attainable results thanks to the following factors:
Assembling the correct quality data:
Every manufacturing company needs to collect data on product quality, machine safety, personnel performance, and record keeping. These data can be collected with the help of the product and quality managers, professionals, and the operators in charge of each particular task. Through these data we can measure the efficiency of a machine [6], circumvent injuries, and prevent defective products from passing through the process; companies also need these details to face audits. The details are usually collected with pen and paper and digitized later. It is commonly thought that collecting all this data automatically improves the quality of the company, but the reality is different: the problem lies in the quantity of data, some useful and some useless. What matters is not the quantity of data but the right data and better analysis; the collected data should be meaningful and concise. Applying the Statistical Process Control (SPC) method to accurate data enables a proactive control process and averts manufacturing-quality malfunctions before they appear.
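SPC can be illustrated with a minimal control-chart check: readings outside the mean plus or minus three standard deviations of an in-control history are flagged before they become quality malfunctions. The data below are synthetic assumptions, not measurements from this study.

import numpy as np

# Hypothetical in-control history of a quality measurement.
rng = np.random.default_rng(1)
history = rng.normal(loc=10.0, scale=0.2, size=200)

mean = history.mean()
ucl = mean + 3 * history.std()  # upper control limit
lcl = mean - 3 * history.std()  # lower control limit

def in_control(reading):
    """Flag readings outside the 3-sigma control limits."""
    return lcl <= reading <= ucl

for reading in [10.1, 9.9, 11.2]:  # the last value should trip the alarm
    print(reading, "OK" if in_control(reading) else "OUT OF CONTROL")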
Knowledge regarding the prediction of suitable opportunity:
Here the plant must identify the best opportunities for improving the company. For instance, suppose a client buys one of the company's products, finds a defect, and informs the plant manager. The manager should act immediately if the client's information is valid: he investigates and determines the defect that caused the product to miss its requirements. Fixing the defect costs something, but that expenditure comes back to the company as profit; within a certain period the plant can run at its best, reducing both cost and defects.
Maintenance of best practices:
Adjustments must be made enterprise-wide, with maintenance applied across the entire operation, not just one plant. In a company, both the quality manager and the plant manager should decide wisely. For example, if the plant manager tries to cut $1 million in costs by cutting back on raw materials that seem to add little to a product, the knock-on loss of profit could exceed a million dollars in that year [7]. So decisions must be taken wisely, not quickly.
Manufacturers concentrate most on product quality, but they also need to ensure optimal functioning across the whole manufacturing plant: up-to-date processes, efficient workers, appropriate measurements, and the best possible production. With predictive analytics it is possible not only to improve manufacturing quality, increase equipment return on investment (ROI) and overall equipment effectiveness (OEE), and anticipate needs across the plant and enterprise, but also to improve the brand's reputation, outpace the competition, and ensure customer safety. Hence, focusing on the benefits of predictive analysis, it is necessary to find the routes by which it can be used within the organization to protect the business.
Predictive maintenance offers several advantages:
1) Minimization of downtime cost: Downtime is the period between a machine's malfunction and the restoration of its normal functionality. It can be greatly reduced by predictive analysis, which detects the machine's abnormal condition at a very early stage.
2) Minimized production loss: A major advantage of predictive analysis is that it indicates which part needs replacement, rather than calling for maintenance across the whole machine.
3) Reduction of manual work: Since predictive analysis with an ANN detects the defect in a specific part of the machine, it reduces the manpower required; labor costs therefore drop sharply.
4) Revenue maximization: Increased machine production normally brings huge revenue to the industry. Predictive maintenance helps monitor unusual shutdowns and supports smoother operation of the machines.
5) Improved use of maintenance staff: Predictive maintenance frees up maintenance labor by pinpointing the exact repair job needed and reducing the scope of work.
Chapter 2: Literature Survey
2.1 Background
Analytics is associated with the extensive use of data, quantitative or statistical analysis, explanatory or predictive models, and fact-based management "to drive decisions and actions" [9]. There are many variations among these analytical models, and they lead to the following categorization: descriptive, prescriptive, and predictive. Descriptive analytics is purely about reporting on the phenomenon of interest. With its help we can gather, collect, organize, tabulate, and depict data, but it gives the user no information about how an event occurred or about what may happen in the near future, which is a drawback.
Predictive analytics, on the other hand, is about forecasting what may happen to the equipment in the near future [10]. To make this clear and precise: predictive analysis examines events likely to occur in the near future so that, if an event would cause harm, it can be stopped at once. A condensed form of this analysis is described in the table given below. Suggestions made in [11] [12] add experimental design and optimization. Such experimental designs explain why an event happens by running experiments in which independent variables are manipulated and external factors are controlled, concluding with the actions the decision maker should follow. Davenport and Harris [13] produced a report on intelligence-based predictive modeling, shown in the figure given below.
Figure 2: Predictive analysis with the degree of intelligence
Machine learning classification algorithms are widely used across many fields. Kehagias et al., employing many algorithms, compared the classification accuracy of classifiers based on words with that of classifiers based on senses [13]. Nardiello et al. [14] present a novel algorithmic approach to "boosting"-based learners for automated text classification. Vinciarelli deals with categorization experiments applied to noisy texts [15]. Naive Bayes is widely employed in text classification applications; its simplicity and effectiveness are its key features [16]. Schneider presented the problems encountered and provided solutions for them [17] [18].
Classification works on the basis of objects' characteristics and nature: given a set of classes, a classifier evaluates which class a given object belongs to. Documents are distinguished by their subjects or by other attributes such as document type. Text databases contain word descriptions of objects, usually long sentences or paragraphs. The commonly used data mining functionalities are association analysis, characterization and discrimination, and classification and prediction.
Machine learning algorithms are the key factor for driving the data from the prediction and
failure detection using certain algorithms that could be either data analysis or statistical
techniques [19] [20] [21]. For the generation of this data-driven model could be obtained by
these data records and outputs that occur through the historical data of these predictive models
constructed into its training stage. For the decision making [22], the test data could be appraised
through this model. The conclusion made by this decision analysis could be the predicted data
either regarding the asset of the industries or the prediction of event and the type of breakdown.
In certain industries the management team uses robots for this job; robots adapt easily to this kind of detection and thus provide a safer industrial environment [23]. Industrial robots are manipulators created to move parts and materials and to perform various tasks in the way they have been programmed. Owing to the arrangement of their mechanical parts and the control-system technologies used, several causes can be attributed to malfunctions in these robots [24]. Failures may take the form of brake malfunctions or faults in the electrical motors used for the robots' movements, which can result in a short circuit. When a single model is used as the fault detection framework, such different fault types cannot be brought under a single evaluation and are not easy to combine. To address this, data-driven models are used, as the sketch below illustrates.
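To make this data-driven workflow concrete, the sketch below trains a classifier on stand-in historical records and appraises it on held-out test data. It is a minimal illustration rather than the method of any cited study: the synthetic generator replaces real sensor logs, and the feature and label meanings are hypothetical.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in for historical records: 1000 time windows, 6 sensor-derived
# features (e.g. vibration, motor current); failures are the rare class.
X, y = make_classification(n_samples=1000, n_features=6,
                           weights=[0.9, 0.1], random_state=42)

# Hold out test data to appraise the trained model, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)                      # training stage
print(classification_report(y_test, model.predict(X_test)))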
2.2 Literature:
Lu et al. (2001) [25] took certain failure modes into consideration and brought forward a technique for estimating system performance reliability in real time. The technique involves on-line monitoring of multiple variables and brings out the performance of the system, thereby estimating its reliable operating condition [26]. It was cast as a multivariate state-space approach over various time series. To obtain the reliability of the system, the estimated covariance matrix and the mean vectors are taken into consideration. With the developed system one can anticipate and calculate the functional degradation of each structure in a dynamic environment in real time. Greitzer et al. (1999) [27] built a model for calibration and prognostics of gas turbine engines; artificial neural network algorithms are used to predict the engine condition. Roemer and Kacprzynski (2000) [28] illustrated novel diagnostic and prognostic technologies for assessing the risk factors of the gas turbine engine. The general steps followed in the ANN are shown in the figure given below.
They also built diagnostic and prognostic technologies that form an integrated turbo-machinery supervision suite [29]. Such machinery can be fitted in aircraft that use turbo-machinery, in mid-sized pumps, and in ground-based gas engines, and the work showed noteworthy potential for reducing operating cost. Moreover, neural networks play a major role in fields where they help improve operating systems in real time [30]. Several sensors have to be deployed to detect vibration in the machines; the sensor outputs are fed as inputs to the neural network algorithm. According to Sottile and Holloway (1994) [31], neural networks can also be used in hydro-electric power plants and for component replacement in surface-mount technologies.
Figure 3: General process in the Artificial neural network
Chapter 3: Machine Learning Algorithms
3.1 Theoretical Background:
To produce a complete description of the classification function, a learning algorithm is typically provided with a set of candidate features. Nevertheless, it is predominantly the case that many candidates are irrelevant to the learning task, leading to the problem of over-fitting and degrading the efficiency of the learning algorithm. Training speed as well as learning accuracy may be significantly degraded by these excessive features [51]. Hence it is of basic concern to select the relevant and significant features in the preprocessing step. This paper outlines fundamental feature selection issues and current research objectives. Machine learning is an evolving field today owing to the increase in the amount of available data. It helps derive observations from quantities of data that are too cumbersome for humans to handle and at times inaccessible. Machine learning is a specialization of computer science, a rapidly trending topic in today's context, and is anticipated to have an even greater impact in upcoming years. In theory, more discriminating power can be attained by increasing the size of the feature vector [52]. However, the learning procedure and the model generalization are slowed and compromised by an excessively large feature vector. Feature selection is especially necessary when one is handling a vast dataset with dimensionality up to the thousands. The prime benefits of feature selection are: (i) reducing the measurement cost and storage requirements; (ii) coping with the degradation of classification performance due to the finiteness of training sample sets; (iii) minimizing training and evaluation time; and (iv) promoting data understanding and data visualization. Evaluating the significance of an individual feature may not be difficult; the real challenge, however, is to evaluate the significance of subsets of features.
In our world, data is created continuously every day. According to CSC, a big data analytics solutions company, the volume of data in 2020 could be 44 times larger than in 2009. Hence it is essential to understand data and draw observations from it for a better understanding of the human world. The quantity of data is now so large that conventional procedures cannot cope. Evaluating data or constructing predictive models by hand is almost unfeasible in some circumstances, and it also consumes more time and takes away productivity [53]. Machine learning produces consistent, reproducible results and teaches itself from previous computations.
There are two types of data used in machine learning [54] [55]: labeled and unlabeled. Labeled data carry an essential attribute, a tag attached to the information, and are used in supervised learning; the tag may be categorical or numerical. Numerical data can be used to estimate a value in regression, and categorical data can be used for classification. Unlabeled data consist of data points only and are used in unsupervised learning, where the machine finds the structures and patterns present in the data set. These two data types are thus used with supervised and unsupervised learning respectively. Supervised learning learns a mapping between a set of input variables X and output variables Y; this mapping is applied to predict the output for unseen data [56]. Given a set of data, the algorithm generalizes it and produces a hypothesis H for the given data.
Supervised learning further divides into two types of problems, known as regression and classification. Regression models the statistical relationship among two or more variables, while classification arises very commonly in day-to-day life [57].
Essentially, classification consists of partitioning objects: each object is assigned to one of a number of mutually exclusive and exhaustive categories known as classes. Each object must be assigned to exactly one class, i.e. no object may belong to more than one class [58]. Unsupervised learning examines how systems can learn to represent particular input patterns in a way that reflects the statistical structure of the overall collection of input patterns. In contrast with supervised learning, there are no explicit target outputs or environmental evaluations associated with each input [59].
3.2 Dimension Reduction Techniques:
Dimension reduction is the procedure of reducing the number of random input variables without losing any information [60]. A large number of input variables and huge data samples increase the intricacy of a dataset [61]. To decrease the memory and computation time, the dimensionality of the dataset is reduced. The reduction also helps to eliminate unnecessary input variables, such as duplicated variables or variables with a low significance
level. These reduction techniques are of two types, namely feature selection and feature extraction [62] [63].
3.2.1 Feature selection:
This framework comprises two parts: 1) a search technique for proposing candidate feature subsets, and 2) an evaluation criterion for scoring them and selecting the best candidate. The search for an optimal variable subset is an NP-hard problem, so a good solution cannot be guaranteed unless an exhaustive search of the solution space is performed [64]. A filter method ranks features independently of the classifier, while a wrapper method uses a classifier itself to evaluate feature subsets. These selection schemes are plentiful and can be used effectively with a classifier. Filtering criteria such as mutual information, independent component analysis, class separability measures, or variable ranking can be used effectively for classification.
Feature construction, occasionally termed feature extraction, refers to transforming the original representation of the sample data set by extracting new features, so that the problem is expressed in a more discriminative, informative space that makes the classification function more proficient [65]. By combining several features, these methods transform the feature set into a lower-dimensional feature vector. Extraneous and redundant features are discarded by supervised feature selection, which judges relevant features by their association with the corresponding class labels.
In feature selection, k of the d dimensions are chosen, namely those that provide the most information, and the remaining (d - k) dimensions are rejected. This selection is also termed subset selection. The best subset consists of the smallest number of dimensions that contribute the most to accuracy, and it is found with an appropriate error function.
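As an illustration of the filter approach, the sketch below ranks features by mutual information and keeps the k highest-scoring ones; the synthetic generator merely stands in for a real feature matrix.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Stand-in data: 20 features, of which only 5 are informative.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

# Keep the k = 5 best-scoring features and reject the (d - k) others.
selector = SelectKBest(score_func=mutual_info_classif, k=5)
X_reduced = selector.fit_transform(X, y)

print("selected feature indices:", np.flatnonzero(selector.get_support()))
print("reduced shape:", X_reduced.shape)         # (500, 5)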
Figure 4: Feature Selection
The training data pass through a definite process of subset generation, shown in Figure 4, for instance sequential backward selection. The resulting subset is then put through the procedure to test its performance [66] [67]. If the performance meets the anticipated conditions, it is designated as the final subset; otherwise, the derived subset is put once more through subset generation for further fine-tuning. Feature selection has two different approaches: sequential forward and sequential backward selection.
3.2.2 Sequential Forward Selection:
This selection starts from a model containing no predictors, and the predictors are then added to the model one at a time until all predictors are included. In particular, at each step the variable that provides the maximum additional enhancement to the fit is added to the model. Denote by P a set containing the variables Xi, i = 1, ..., d, and let E(P) be the error measured on the test sample [68]. Sequential forward selection starts with the empty set P = {∅}. At every step, a single variable is added to the set, a model is trained, and the error E(P ∪ Xi) is estimated on the test set. An error criterion is chosen as appropriate, for instance the mean squared error or the misclassification error. Among these errors, the input variable Xj that causes the least error is chosen and added to P. The model is trained again with the remaining variables, and the procedure continues to add variables to P while E(P ∪ Xi) < E(P).
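A from-scratch sketch of this greedy loop under the notation above; the logistic-regression scorer and synthetic data are assumptions made only for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

def test_error(features):
    """Misclassification error E(P) of a model trained on the subset."""
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, features], y_tr)
    return 1.0 - clf.score(X_te[:, features], y_te)

P, best_err = [], np.inf                 # start with the empty set
while True:
    candidates = [j for j in range(X.shape[1]) if j not in P]
    if not candidates:
        break
    errs = {j: test_error(P + [j]) for j in candidates}
    j_best = min(errs, key=errs.get)     # variable causing the least error
    if errs[j_best] >= best_err:         # stop once E(P ∪ Xj) no longer < E(P)
        break
    P.append(j_best)
    best_err = errs[j_best]

print("selected variables:", P, "test error:", best_err)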
3.2.3 Sequential Backward Selection:
This selection is an effective alternative to best-subset selection, but unlike sequential forward selection it starts with the full set of features. It iteratively eliminates the least significant feature, one at a time. The selection begins with the full set of variables P = {1, 2, 3, ..., d}. At every step, the model is trained with the current set of variables and the error is calculated on the test set [69]. The variable Xj whose removal yields the lowest error is deleted from the set P. The model is trained once more with the new set of variables P, and the procedure continues to eliminate variables from P while E(P - Xj) < E(P).
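Both search directions are also available off the shelf; the sketch below uses scikit-learn's SequentialFeatureSelector on stand-in data, with the direction argument switching between the forward and backward procedures.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=12, random_state=2)
knn = KNeighborsClassifier(n_neighbors=3)

forward = SequentialFeatureSelector(knn, n_features_to_select=4,
                                    direction="forward").fit(X, y)
backward = SequentialFeatureSelector(knn, n_features_to_select=4,
                                     direction="backward").fit(X, y)

print("forward pick: ", forward.get_support(indices=True))
print("backward pick:", backward.get_support(indices=True))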
3.2.4 Feature Extraction:
In this technique, the features, or independent variables, of the given data set are transformed into new independent variables, forming what is called a new feature space [70]. The freshly assembled feature space explains most of the data, and only significant components are kept.
Let a set consist of n features, say X1, ..., Xn. After feature extraction there are m attributes, where n > m, and the extraction is carried out with some mapping function F.
Figure 5: Xn set of independent features or dimensions are reduced to Yn
Xn, the set of independent features or dimensions, is condensed to the set of independent features Yn, as shown in Figure 5. In this extraction process a method named principal component analysis is implemented [71]; it keeps only the non-redundant, significant components of Xn and transforms them into the fresh feature space Yn. With this technique, interpretability is lost, because the Yn features obtained after extraction are not equal to Xn; they are not a direct subset of Xn.
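A minimal sketch of feature extraction with principal component analysis, reducing the 64 original dimensions of a stand-in dataset to 10 extracted components.

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)      # 64 original features per sample
pca = PCA(n_components=10)               # the mapping function F: 64 -> 10
Y = pca.fit_transform(X)

print("original shape: ", X.shape)       # (1797, 64)
print("extracted shape:", Y.shape)       # (1797, 10)
print("variance explained:", pca.explained_variance_ratio_.sum())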
3.3 Machine Learning Algorithms:
3.3.1 k-Nearest Neighbor:
One of the simplest machine learning algorithms is k-Nearest Neighbor. The algorithm first learns the training set; it then predicts the label of any fresh instance on the basis of the labels of its nearest
neighbors in the training set [72]. The logic behind this method rests on the hypothesis that the features used to portray the domain points are consistent with their labeling, which makes nearby points likely to have the same label. Moreover, even with a massive training set, determining the nearest neighbors can in certain cases be made very fast.
Formally, let \rho : X \times X \to \mathbb{R} be a function that calculates the distance between two points x, x' \in X. The Euclidean distance between two points can be computed by the following formula:

\rho(x, x') = \sqrt{\sum_{i=1}^{d} (x_i - x'_i)^2}
In k-nearest neighbor, k denotes the number of data points nearest to the new instance. For instance, if k = 1 the algorithm selects the single closest instance, and if k = 3 it chooses the three closest neighbor instances and classifies the new point accordingly [73]. Figure 6, given below, explains this procedure.
Figure 6: k-Nearest Neighbour. Modified from k-Nearest Neighbour and Dynamic Time Warping (2016)
Referring to the figure, the green star marks the data point to be classified. The blue circles are class A data points, and the red squares are class B data points. The Euclidean distance between the green star and all the blue and red points is calculated. The
star is then assigned to the class whose data points lie at the least distance. If k = 7, the distance to each of the seven nearest points is measured and the star is assigned to the class of the majority of those points at minimum distance, in this particular case the blue class [74].
k-NN is a very simple algorithm: it stores all available cases and classifies new ones by a similarity measure. The algorithm was already in use in statistical estimation [36] and pattern recognition in the early 1970s as a non-parametric technique. In k-NN, nearer neighbors contribute more to the result than distant objects when computing the weights, in both classification and regression.
Each training example has a class label and is represented as a feature vector in a multidimensional feature space. In the training phase, only the feature vectors and the class labels of the training objects are stored. In the k-NN algorithm, k is a constant given by the user; in the classification step, a test point is assigned the label that is most common among its k nearest training samples. Euclidean distance is mostly used for continuous variables, while Hamming distance is used for discrete variables, as in text classification. Pearson and Spearman correlation coefficients have been used for microarray data [37]. The performance of k-NN can also be improved by learning the distance metric and by neighborhood components analysis [38] [39] [40].
The commonly used distance functions are the Euclidean distance d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} and, for discrete variables, the Hamming distance d(x, y) = \sum_{i=1}^{n} \mathbf{1}[x_i \neq y_i].
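A minimal from-scratch k-NN sketch matching the description above: Euclidean distances to all training points, then a majority vote among the k closest. The toy coordinates are illustrative only.

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by the majority label of its k nearest neighbours."""
    dists = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))  # Euclidean
    nearest = np.argsort(dists)[:k]                        # k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote

# Class A (label 0) clusters near the origin; class B (label 1) near (5, 5).
X_train = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]])
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.5, 0.5]), k=3))  # -> 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.5]), k=3))  # -> 1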
3.3.2 Support Vector Machines (SVM):
SVMs are supervised learning models used in machine learning for classification and regression analysis. In an SVM model the training samples are represented as points in space, mapped so that the samples of the separate categories are divided by a clear gap [75]. New test examples are then mapped into the same space and assigned to a category based on which side of the gap they fall. Using the kernel trick, SVMs can also perform non-linear classification in addition to linear classification. SVMs implicitly map the inputs into high-dimensional feature spaces. In general, an SVM constructs a hyperplane in such a space, and this can be employed for tasks such
as regression, prediction or classification. A good separation is achieved by the hyperplane that has the largest distance to the nearest training data points [76]; this distance is generally known as the functional margin [77]. As a rule, the larger the margin, the lower the generalization error of the classifier.
The main aim of classification is to accurately forecast the target class for each case in the data. The classification process includes two steps:
1. Classifier building: this is the learning phase. The classifier is constructed by the classification algorithm from a training set of database instances/tuples. Each instance/tuple composing the training set is referred to as a sample; these tuples can also be referred to as data points [78].
2. Usage of the classifier: the trained model/classifier is then used to classify the objects/tuples of a test data set.
• Relevance analysis: a database may contain some unrelated attributes [79]. Correlation analysis identifies whether any two given attributes are associated.
• Data transformation and reduction: the following methods help in data transformation:
– Normalization: this transformation involves scaling every value so that it falls within a specific range [80] [81] [82]. It is mainly used in the learning step when neural networks or distance-based methods are employed.
– Generalization: a major transformation concept in which concept hierarchies are used.
Given a training set of n points (\vec{x}_1, y_1), ..., (\vec{x}_n, y_n), where each y_i \in \{-1, 1\} indicates the class to which the sample \vec{x}_i belongs and each \vec{x}_i is a p-dimensional real vector [83], our interest is in finding the "maximum-margin hyperplane" that separates the samples with y_i = 1 from those with y_i = -1, such that the distance between the hyperplane and the nearest sample \vec{x}_i is maximized.
It is evident that H1 does not separate the classes at all, H2 separates them but only by a small margin, while H3 separates them with the greatest margin [84]. The hyperplane is defined as the set of points \vec{x} satisfying \vec{w} \cdot \vec{x} - b = 0, where \vec{w} is the normal vector of the hyperplane. The samples lying exactly on the margin are the support vectors. The parameter b / \|\vec{w}\| determines the offset of the hyperplane from the origin along the normal vector \vec{w}.
For the hard margin, two parallel hyperplanes that separate the two classes of data can be selected if the training data are linearly separable [85], chosen so that the distance between them is as large as possible. The "margin" is the region bounded by these two hyperplanes.
The maximum-margin hyperplane lies halfway between these planes [86]. The two hyperplanes can be described by the equations \vec{w} \cdot \vec{x} - b = 1 and \vec{w} \cdot \vec{x} - b = -1. The distance between the two hyperplanes is 2 / \|\vec{w}\|, so by minimizing \|\vec{w}\| we maximize the distance between the planes [87]. Adding the constraint that for each i either \vec{w} \cdot \vec{x}_i - b \geq 1 (if y_i = 1) or \vec{w} \cdot \vec{x}_i - b \leq -1 (if y_i = -1) prevents the data points from falling into the margin.
Figure 7: Classification based on linear SVM
Figure 8: Classification based on Hard SVM
According to these constraints, every data point must lie on the correct side of the margin [88] [89]. The combined constraint is:

y_i (\vec{w} \cdot \vec{x}_i - b) \geq 1, \quad \text{for all } 1 \leq i \leq n \qquad (1)

Putting all this together gives the optimization problem: "minimize \|\vec{w}\| subject to y_i (\vec{w} \cdot \vec{x}_i - b) \geq 1 for i = 1, 2, ..., n". The maximum-margin hyperplane is completely determined by the \vec{x}_i that lie nearest to it; these \vec{x}_i are named support vectors [90].
The soft margin uses the hinge loss function \max(0, 1 - y_i(\vec{w} \cdot \vec{x}_i - b)). It is introduced to extend the support vector machine to data that are not linearly separable. If the constraint in (1) is satisfied, this function is zero, which means \vec{x}_i lies on the correct side of the margin. The function to be minimized is:

\left[ \frac{1}{n} \sum_{i=1}^{n} \max(0, 1 - y_i(\vec{w} \cdot \vec{x}_i - b)) \right] + \lambda \|\vec{w}\|^2 \qquad (2)

Here \lambda determines the trade-off between increasing the margin size and ensuring that each \vec{x}_i falls on the correct side of it. For sufficiently small values of \lambda, the soft-margin support vector machine behaves the same as the hard-margin one whenever the training data are linearly separable.
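A minimal sketch of a soft-margin linear SVM using scikit-learn's SVC on stand-in data. Note that its regularization parameter C is inversely related to the \lambda above, so a large C approximates the hard-margin case.

from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=6)  # stand-in data

clf = SVC(kernel="linear", C=1000.0)   # large C: nearly hard margin
clf.fit(X, y)

print("w =", clf.coef_[0], " b =", clf.intercept_[0])
print("number of support vectors:", clf.support_vectors_.shape[0])
print("prediction for (0, 2):", clf.predict([[0.0, 2.0]]))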
3.3.3 Random Forest:
A random forest is a classifier consisting of a collection of decision trees, where each tree is constructed by applying an algorithm A to the training set S and an additional random vector, sampled independently and identically from some distribution [91]. The prediction of the random forest is obtained by a majority vote over the predictions of the individual trees.
The random forest algorithm works by the following procedure (a minimal sketch follows the list):
1. Select K random data points from the training data.
2. Build a decision tree on those K data points.
3. Repeat steps 1 and 2 for the chosen number of trees, Ntree.
4. Determine the class of a new point by the majority of the trees' votes.
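A minimal sketch of the procedure just listed, using scikit-learn's RandomForestClassifier on stand-in data; the per-tree votes for one point are tallied to show the majority decision. (Internally scikit-learn averages the trees' predicted probabilities, which for fully grown trees amounts to the same majority vote.)

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=3)
forest = RandomForestClassifier(n_estimators=100,      # Ntree trees
                                max_features="sqrt",
                                random_state=3).fit(X, y)

x_new = X[:1]                                          # one query point
votes = np.array([tree.predict(x_new)[0] for tree in forest.estimators_])
print("votes per class:", np.bincount(votes.astype(int)))
print("forest decision:", forest.predict(x_new)[0])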
3.3.4 Naïve Bayes
This classifier uses Bayes' theorem to categorize the data. It assumes that the contribution of a given feature X is entirely independent of any other feature Y. The theorem can be simply demonstrated with the following scenario [92].
Machine A produces spanners with a probability of 0.6 and machine B with a probability of 0.4. One percent of the spanners from the entire assembly are defective. Of the defective spanners, 50 percent are produced by machine A and 50 percent by machine B. In this example, Bayes' theorem can be used to answer, for instance, what the probability is that a spanner produced by machine B is defective.
Bayes' theorem provides a way to estimate the probability of a hypothesis given prior knowledge about the problem.
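As a worked application of the theorem to the figures above, assuming the question asks for the probability that a spanner produced by machine B is defective:

P(\text{defect} \mid B) = \frac{P(B \mid \text{defect}) \, P(\text{defect})}{P(B)} = \frac{0.5 \times 0.01}{0.4} = 0.0125

That is, roughly 1.25 percent of machine B's output would be expected to be defective.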
Naïve Bayes classifiers come in three main types: multinomial naïve Bayes [93] [94], applied to multinomially distributed data; Gaussian naïve Bayes, used for classification problems with continuous features; and Bernoulli naïve Bayes, applied to data with a multivariate Bernoulli distribution.
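A minimal sketch of the Gaussian variant, which models each continuous feature with a per-class normal distribution; the iris dataset is a stand-in.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

nb = GaussianNB().fit(X_tr, y_tr)       # fits one mean/variance per class
print("test accuracy:", nb.score(X_te, y_te))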
3.3.5 Ada Boost
This is a boosting approach proposed by Freund and Schapire [95] and is the most widely implemented boosting algorithm. Boosting is an ensemble method used for constructing a highly precise estimator, or strong classifier, from comparatively weak and imprecise classifiers. The algorithm is an iterative process in which a model is trained on the data and the weak learners are evaluated. The next model learns from the errors made in the prior round and concentrates on those errors, and the progression continues until the data are properly estimated.
Consider a binary classification problem with 20 training samples and an evenly distributed training set D1 [96]. Let the initial weight of every sample be w = 1/20. After the samples are used to train a model G1, five of them are misclassified. Hence the error rate is e = 0.25, and the weight of model G1 is

\alpha_1 = \frac{1}{2} \ln\left( \frac{1 - e}{e} \right) = \frac{1}{2} \ln 3 \approx 0.55

Correctly classified samples are weighted down by multiplying with e^{-\alpha_1} and the misclassified samples are weighted up by multiplying with e^{\alpha_1}, and finally all weights are normalized to sum to 1. Next, the second model G2 is trained, and the same procedure continues until as many samples as possible are correctly classified; the final model is the weighted sum of all the models.
Figure 9: Schematic of Adaboost
The training sample is the initial set of independent predictor variables to be modeled. Once the model's error rate is obtained, the weak learners are identified [97] [98] [99]. Those weak learners are re-weighted, the samples are modeled once again, and this procedure is repeated up to M times; at last, the outputs of the various models are combined by a weighted average to produce the boosted output.
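A minimal AdaBoost sketch with scikit-learn, whose default weak learners are depth-1 decision stumps; between rounds, misclassified samples are up-weighted as described above. The synthetic data are stand-ins.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)

boost = AdaBoostClassifier(n_estimators=50, random_state=5)  # 50 stumps
boost.fit(X_tr, y_tr)
print("test accuracy:", boost.score(X_te, y_te))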
Chapter 4: Artificial Neural Network
4.1 Technical overview:
This network is modeled after biological neural networks and attempts to allow computers to learn in a way analogous to human reinforcement learning. The fundamental unit of a neural network is termed the perceptron [99]. A perceptron consists of one or more inputs, a processor, and a single output [100]. A perceptron follows the "feed-forward model": the inputs are received by a neuron, processed, and the result is produced as output.
There are four main processes in a perceptron:
1. Receive inputs
2. Weight inputs
3. Sum inputs
4. Generate output
Every input received by a neuron must be weighted, i.e. multiplied by some value. Creating a perceptron usually starts by assigning random weights. Every input is taken individually and multiplied by its weight [101]. The output is generated by passing the weighted sum through an activation function. For a simple binary output, the activation function decides whether the perceptron is to be "fired" or not. For instance, if the sum is a positive number, the output is 1; if the sum is negative, the output is -1.
An additional factor to consider is the bias. If both inputs are equal to zero, the sum will be zero regardless of the multiplicative weights [102]. To resolve this problem, a third input, called the bias input, with a value of 1, is added to the sum.
To train the perceptron the following steps are used:
1. Provide the perceptron with inputs for which the correct answer is known.
2. Ask the perceptron to guess an answer.
3. Calculate the error.
4. Adjust all the weights according to the error, then return to step 1 and repeat.
We repeat this process until the error is satisfactorily small. That is how a single perceptron functions. To produce a neural network, we then link perceptrons [103] in layers between the input and the output; the layers between them are hidden layers. The result resembles the biological human brain, a collection of interconnected neurons, and is considered the borderline between approximation algorithms and artificial intelligence [104]. Because it learns through training, in a way resembling structured biological neuron networks, it is known as a nonlinear predictive model. Neural networks suit applications that involve pattern detection, making predictions, and learning from the past, as biological systems do. Artificial neural networks are computer programs that enable the computer to learn in a manner similar to a human being; they cannot mimic the human brain completely and have some limitations, but they are highly accurate predictive models applicable to a large range of problems. A minimal single-perceptron sketch is given below.
The terms neuron and processing element refer to an operator that maps R^N -> R and is described by the equations

Vi = G(qi) = G(ai c^T + ai0)

qi = Σ (j = 1 to N) aij cj + ai0
where c = [c1, c2, …, cN] is the input vector and ai = [ai1, ai2, …, aiN] is the weight vector of the ith of the N processing elements. G(.) denotes a monotone activation function, G: R -> (-1, 1) or (0, 1), and Vi denotes the output. An artificial neural network is a set of such interconnected processing elements, arranged in layers l = 0, 1, 2, …, L, where layer l takes its input from the output of layer l-1 [105]. Because of this structure, the network is called a feed-forward network.
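The layer computation above can be written directly in code; below is a small NumPy sketch of one feed-forward layer, with tanh standing in for the monotone function G: R -> (-1, 1) (the weight values are arbitrary placeholders).

```python
import numpy as np

def layer_forward(A, a0, c):
    """Compute V = G(A c + a0) for one layer, with G = tanh mapping into (-1, 1)."""
    q = A @ c + a0          # weighted sums q_i = sum_j a_ij * c_j + a_i0
    return np.tanh(q)       # monotone activation G applied element-wise

# Example: a layer with 3 processing elements and 4 inputs.
A = np.random.uniform(-1, 1, size=(3, 4))   # weight matrix, rows are a_i
a0 = np.zeros(3)                            # bias terms a_i0
c = np.array([0.5, -0.2, 0.1, 0.9])         # input vector
print(layer_forward(A, a0, c))              # outputs V_i in (-1, 1)
```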
In the current scenario we use backpropagation, which comes under the concept of supervised neural networks. Developments in this learning rule are what aroused much of the present interest in artificial neural networks. The rule is computationally efficient at propagating weight changes through units with differentiable activation functions, which lets the network learn input-output relations very effectively. Backpropagation was first developed in a Harvard doctoral dissertation (Werbos, 1974). Several procedures have since been adopted to improve backpropagation's speed, tolerance, behavior near local minima, and so on, and it is used to solve diverse problems such as pattern classification, function approximation, time-series prediction, and the modeling of non-linear systems [106]. The correction ∆aij for the weight aij is, as in the least-mean-square algorithm, proportional to the negative of the sensitivity factor ∂ϵk / ∂aij(k). The weight update is written as
aij(k+1) = aij(k) + uk ∆aij(k) + bk ∆aij(k-1)
where uk denotes the learning-rate coefficient and bk ∆aij(k-1) is the momentum term [107]. The two-dimensional neural network layer is shown in the figure given below.
Figure 10: 2-dimensional neural network model
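For concreteness, this update rule can be sketched in a few lines; the learning coefficient u and momentum coefficient b below are illustrative values, not ones taken from the text.

```python
def update_weight(a, delta, delta_prev, u=0.1, b=0.9):
    """Backpropagation weight update with a momentum term:
    a(k+1) = a(k) + u*delta(k) + b*delta(k-1)."""
    return a + u * delta + b * delta_prev

# Example: one weight over two successive corrections.
a = 0.5
a = update_weight(a, delta=0.02, delta_prev=0.0)    # first step, no momentum history
a = update_weight(a, delta=0.015, delta_prev=0.02)  # momentum term now contributes
print(a)
```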
The concept of self-organization comes under the category of unsupervised learning, in which the training data are used to detect clusters. The input is compared with the elements of the feature map, and the element that matches it best is the winner. This winning element is then trained further so that it becomes still more responsive to that input. Kohonen self-organizing feature maps are a further development of competitive learning: the best-matching element also makes its neighboring elements take part in the training step [108]. This process makes adjacent elements similar, so that elements with analogous characteristics form clusters, and the resulting feature map can be treated as a two-dimensional data set. The Kohonen self-organizing neural network feature map is shown in the figure given below, in which two input-layer nodes are connected to each element of the output feature map [109].
Figure 11: Kohonen self-organizing feature map.
The functionality of the self-organizing map can be stated as follows: the weights are initialized to random values, and at each training step k = 0, 1, 2, 3, … the winning element c is the one satisfying

||c(k) - ac(k)|| = min over i { ||c(k) - ai(k)|| }

where the index c denotes the best-matching element. In recent years, neural networks integrated with fuzzy logic have been used widely in a variety of applications [110]. This natural integration combines the hybrid neural network with fuzzy based systems rather than using a traditional system, since the two approaches have certain universal and complementary properties in common.
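A minimal sketch of this best-matching search and the neighborhood update on a small two-dimensional map follows; the map size, learning rate, and neighborhood radius are illustrative assumptions.

```python
import numpy as np

def som_step(weights, x, lr=0.1, radius=1.0):
    """One Kohonen training step: find the best-matching unit (BMU) and
    pull it and its grid neighbors toward the input x."""
    # Distance of the input to every map element's weight vector.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # winning element
    # Move each element within the radius toward x.
    for idx in np.ndindex(dists.shape):
        if np.linalg.norm(np.subtract(idx, bmu)) <= radius:
            weights[idx] += lr * (x - weights[idx])
    return weights

# Example: a 5x5 map of 2-dimensional weight vectors.
weights = np.random.rand(5, 5, 2)
weights = som_step(weights, np.array([0.2, 0.7]))
```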
4.2 Specification:
The foremost objective of the artificial neural network is to detect abnormal conditions and worsening mechanical behavior of the equipment, and the network has to be trained to detect these patterns [111]. The detection has to be declared at the first sign, since the machine could later produce the same vibration signature and thereby give a false pattern of the malfunction; hence the most significant mechanical faults should be evaluated first. In this scenario the inherent generalization capacity of the neural network is particularly suitable. Fault identification is done by feeding a malfunction dataset into the trained neural network model, which produces the fault classification pattern. For instance, the
dataset of the first and second harmonics of the vibration data could be fed to the network. If the 1st harmonic is present in the machine, a value for imbalance and the probability of misalignment can be detected [112]. Further, if the 2nd harmonic is present, another pair of misalignment and imbalance values is produced, which the neural network can analyze.
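The harmonic amplitudes used as such inputs can be extracted from a raw vibration signal with a Fourier transform. Below is a brief sketch; the sampling rate and running speed are assumed values for illustration only.

```python
import numpy as np

def harmonic_amplitudes(signal, fs, run_freq, n_harmonics=2):
    """Estimate the amplitudes of the first n harmonics of the machine's
    running frequency from a vibration signal sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    # Pick the spectral line closest to each harmonic of the running speed.
    return [spectrum[np.argmin(np.abs(freqs - k * run_freq))]
            for k in range(1, n_harmonics + 1)]

# Example: 25 Hz running speed (1500 rpm) with a weaker 2nd harmonic, fs = 1 kHz.
fs, f0 = 1000, 25.0
t = np.arange(0, 1, 1 / fs)
sig = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
print(harmonic_amplitudes(sig, fs, f0))  # approximately [1.0, 0.3]
```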
For our project analysis with a Python script we have to convert our file to CSV format. CSV is the comma-separated values format, which stores tabular data as plain text. The .csv extension is used widely in every spreadsheet environment, whether with a Microsoft Excel spreadsheet or in a Linux environment. CSV files are useful for several purposes:
• CSV files are plain-text files, which reduces the effort web developers spend in their projects.
• No special software is required to use a CSV file; a normal spreadsheet application or any database can handle it, and because it is plain text it is simple to import into a spreadsheet.
• CSV allows better organization of large amounts of data.
The next question is how to save a spreadsheet in CSV format, which is very simple: click Save or Save As -> choose the desired folder -> choose the file type CSV (*.csv). The file will be saved with the .csv extension, and this procedure is the same in the Windows, Apple, and Linux environments. We have to do exactly this with our dataset too.
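Once the dataset is in CSV form it can be loaded into Python directly. A minimal sketch with pandas follows; the file name and column names are placeholders for whatever the sensor export actually contains.

```python
import pandas as pd

# Hypothetical file name and columns; substitute the real sensor export.
df = pd.read_csv("sensor_data.csv")

print(df.shape)        # number of records and parameters
print(df.columns)      # e.g. voltage, temperature, vibration, pressure
print(df.describe())   # quick sanity check of value ranges before training
```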
4.3 Design:
This section concerns the monitoring of the machine. Suppose a vibration sensor is fitted to the machine; its data then have to be validated. This is done by the data acquisition system, a board that converts the analog values coming from the transmitter into digital form. It can monitor 256 channels connected to 8 machines, and it generates alarm data when abnormal circumstances occur, indicated either by an alarm system fitted to the equipment or by LED lights. Typical operating conditions or run-out spectra may be chosen, together with operational parameter data. The on-line processing module performs the following operations:
1) The RMS value is estimated for the vibration signals and compared with pre-defined upper and lower limits. As long as the RMS value stays within its normal range, the LED indication remains normal.
2) If the RMS value crosses its limit line, there is an indication, say an alarm sound or a different LED light. If the value returns within its normal limit, no problem occurs and the system regains its normal state as described in point 1. If, after a
certain interval of time, the RMS value continues to exceed its limit, the vibration spectrum is captured, indicated, and recorded. When the value falls back within the limit, the system regains its normal condition. Different LED lights help us distinguish the different states of the equipment.
3) This is repeated several times a day, and the readings obtained from the vibration sensors and the RMS values are recorded and stored on the hard disk for later use as a reference.
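The RMS check in steps 1 and 2 can be expressed compactly in code. The window length and limit below are assumed placeholder values.

```python
import numpy as np

def rms(window):
    """Root-mean-square value of one window of vibration samples."""
    return np.sqrt(np.mean(np.square(window)))

def check_vibration(signal, window_size=1024, limit=1.5):
    """Yield (RMS value, alarm flag) for each window of the signal."""
    for start in range(0, len(signal) - window_size + 1, window_size):
        value = rms(signal[start:start + window_size])
        yield value, value > limit

# Example: a signal whose amplitude grows, eventually tripping the alarm.
t = np.linspace(0, 10, 10240)
signal = (0.5 + 0.2 * t) * np.sin(2 * np.pi * 50 * t)
for value, alarm in check_vibration(signal):
    print(f"RMS = {value:.2f}  alarm = {alarm}")
```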
4.4 Methodology:
It is necessary to validate the sensor data properly in both the time and the frequency domain. An important point to note is that the supervised learning methodology applied to the vibration data does not use real-time operating data. The features of good sensor input, obtained under normal working conditions, must be taken into account: once the sensor's normal-condition behavior is known, the initial processing flags the potential malfunction modes, and the wider processing then points to the most probable fault and its cause. For this, a fuzzy based system is best suited to working with a family of sensors.
With the help of trend analysis we can determine the maximum risk factors that could arise in the machines of the industry. For this evaluation we use measurement datasets for the measured points of the machine. With the help of regression curves we can infer when a particular operating parameter will reach its alarm limit; the relevant variables can be predefined by the operators. Predictive maintenance is really effective only if the development of the vibration over time is interpreted. Prediction is the problem of extrapolating a known time series into the future. A hybrid neural network, i.e. a neural network combined with a fuzzy logic system, can be helpful in making such decisions.
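The regression-curve idea can be sketched as follows: fit a trend line to past RMS readings and solve for the time at which the alarm limit would be reached. The limit and the readings are invented for illustration.

```python
import numpy as np

def time_to_alarm(times, rms_values, limit):
    """Fit a linear trend to RMS history and estimate when it crosses the limit."""
    slope, intercept = np.polyfit(times, rms_values, deg=1)
    if slope <= 0:
        return None                     # no rising trend, no predicted alarm
    return (limit - intercept) / slope  # time at which the trend reaches the limit

# Example: daily RMS readings drifting upward toward an alarm limit of 2.0.
days = np.array([0, 1, 2, 3, 4, 5])
rms_hist = np.array([1.00, 1.08, 1.15, 1.26, 1.33, 1.41])
print(time_to_alarm(days, rms_hist, limit=2.0))  # roughly day 12
```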
4.4.1 Fuzzy based decision system:
The fuzzy system is really helpful for certain kinds of decision making. It uses ordinary IF-THEN statements for its decisions. Combining a fuzzy system with the artificial neural network helps provide a crisp output: the crisp input data measured from the machines is fuzzified by the fuzzy logic system, combined with the decision-making rules, and filtered down to the logically necessary data. This filtered fuzzy data is then sent through defuzzification, where the fuzzy values are converted back into a crisp output. The block diagram of this process is shown in the figure given below.
Figure 12: Block diagram for the fuzzy logical system
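A toy sketch of this fuzzify -> IF-THEN rules -> defuzzify pipeline for a single vibration input follows; the membership functions, the rules, and the urgency values are all illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_maintenance_urgency(vibration):
    # Fuzzification: degrees of membership for the crisp vibration input.
    low  = tri(vibration, 0.0, 0.0, 1.0)
    high = tri(vibration, 0.5, 2.0, 2.0)
    # IF-THEN rules: IF vibration is low THEN urgency 0.1; IF high THEN 0.9.
    rules = [(low, 0.1), (high, 0.9)]
    # Defuzzification by the weighted average (centroid) of the rule outputs.
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(fuzzy_maintenance_urgency(0.3))   # mostly "low": urgency near 0.1
print(fuzzy_maintenance_urgency(1.8))   # mostly "high": urgency near 0.9
```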
Chapter 5: Classification techniques
Classification techniques fall into five categories on the basis of different mathematical concepts: statistics, distance, decision trees, neural networks, and rules [32]. There are several algorithms in every category, but the most widely used are K-Nearest Neighbors and the Back-propagation Neural Network [33] [34] [35].
5.1 Comparative study
Neural Networks:
A neural network is analogous to the biological human brain, a collection of neurons, and is also considered the border line between approximation algorithms and artificial intelligence [41]. Because it learns through training, resembling structured biological neuron networks, it is known as a nonlinear predictive model [42]. Neural networks suit applications that involve detecting patterns, making predictions, and learning from the past, as biological systems do [43]. Artificial neural networks are computer programs that enable a computer to learn somewhat like a human being; they cannot mimic the human brain completely and have some limitations, but they are highly accurate predictive models that can be applied to a large range of problems.
Decision Tree algorithm – ID3
A decision tree has the shape of a tree and consists of a set of decisions that generate rules for classifying the dataset [44]. Each record is routed from the root toward a leaf node, moving to a child node at each step according to the splitting criterion, which evaluates a branching condition on the current node with respect to the input records. Decision tree construction has two stages: the first is to build the tree, the second is to prune it. In most algorithms the tree grows top-down with a greedy approach, starting at the root node and partitioning the database records at each intermediate node [45]. Most algorithms created for learning trees are variations on a core algorithm that performs a top-down, greedy search through the space of possible trees. This approach is demonstrated by ID3 (Quinlan 1986) and its successors C4.5 (Quinlan 1993) [46], C5.0, and many more. The best attribute is selected and used as the test at the root node of the tree; a descendant of the root node is created for every possible value of this attribute [47], and the training examples are sorted to the appropriate descendant node. The whole process is then repeated to choose the best attribute to test at that point in the tree. This constitutes a greedy search for an acceptable decision tree, in which the algorithm never backtracks to reconsider its earlier choices.
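"Best attribute" in ID3 means the attribute with the highest information gain. A small sketch of that computation under the usual entropy definition follows; the toy labels are invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) = -sum p_i * log2(p_i) of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def information_gain(labels, partitions):
    """Gain = H(S) minus the size-weighted entropy of the partitions
    induced by one attribute's values."""
    total = len(labels)
    remainder = sum(len(p) / total * entropy(p) for p in partitions)
    return entropy(labels) - remainder

# Example: 10 records, 5 faulty / 5 healthy, split by a binary attribute.
labels = ["fault"] * 5 + ["ok"] * 5
split = [["fault"] * 4 + ["ok"], ["fault"] + ["ok"] * 4]  # fairly pure split
print(information_gain(labels, split))   # about 0.28 bits
```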
Decision Tree – ID3 | Naive Bayes | K-Nearest Neighbor | SVM | Neural Networks
Easily observed; develops generated rules | Fast, highly scalable model building (parallelized) and scoring | Robust to noisy training data and effective if the training data is large | More accurate than decision tree classification | High tolerance of noisy data and ability to classify patterns in untrained data
Table 1: Advantages of different classification algorithms
In Table 1 we have compared several machine learning algorithms, and it can be seen that ANN handles noisy data the most efficiently and effectively, with high tolerance [18][19].
Features | Decision Tree | Naive Bayes | K-Nearest Neighbor | SVM | Neural Networks
Learning type | Eager learner | Eager learner | Lazy learner | Eager learner | Eager learner
Speed | Fast | Very fast | Slow | Fast with active learning | Slow
Accuracy | Good in many domains | Good in many domains | High, robust | Significantly high | Good
Scalability | Efficient for small data sets | Efficient for large data sets | — | — | Slow
Interpretability | Good | — | — | — | Bad
Transparency | Rules | No rules | Rules | No rules | No rules
Missing value interpretation | Missing value | Missing value | Missing value | Sparse data | —
Table 2: Feature comparisons
In Table 2 the feature comparison [48][49] is shown more precisely.
Algorithm | Merits | Demerits
Decision Tree | Handles continuous, discrete, and numeric data; gives fast results when classifying unknown records; supports redundant attributes; very good results for small trees (results are not as good for large ones) | Cannot predict the value of a continuous class attribute; gives error-prone results when too many classes are used; tree construction is affected by irrelevant attributes
Naive Bayesian | Provides high accuracy and speed on large databases; minimum error rate compared to other classifiers; easy to understand | Assumes independence of features, so it provides less accuracy when the features are correlated
Neural Networks | High tolerance of noisy data; good for continuous values; untrained patterns can also be classified | Complex to interpret; takes much time to train the model
Table 3: Comparison of Classification Algorithms
In the above tables we have discussed certain classification techniques. Although a neural network consumes far more training time than the other algorithms [50], it provides high tolerance of noisy data, and its accuracy does not suffer in the way Naïve Bayes's does.
Chapter 6: Result
We use a Python program for the prediction of the output data, with the Anaconda software package as the Python platform. If Anaconda is installed on the system, it is not
necessary to install any other packages. Likewise, Pandas and NumPy need not be installed separately, since they are among the default packages found in the platform. Anaconda takes a fairly long time to install, since the package is large and ships with many default packages. The following are the guidelines for installing Anaconda:
1) Download Anaconda from https://www.anaconda.com/download/
2) Select the desired operating system and download whichever version is convenient.
3) Double-click the .exe file.
4) During installation, a pop-up menu may ask whether or not to "Add Anaconda to my PATH environment variable" and "Register Anaconda as my default Python 3.5." We suggest accepting both, or at least one, of these options.
It takes around 20 minutes to finish all these procedures.
5) You can use IPython, a Jupyter notebook, or the Anaconda prompt (for a remotely served notebook, configuring the server's IP address may also be necessary). At the Anaconda prompt, type python and the two lines below appear:
(C:\Users\akshay\Anaconda3) C:\Users\akshay>Python
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
6) If the lines above do not appear, there is a problem with your installation.
7) To work with your data it is necessary to load your dataset into the working directory. Convert it to a CSV file as mentioned earlier in this report, and load the dataset carefully, since errors may occur if something is wrong. While working on the project, the following steps were carried out on the dataset.
Screenshots:
The initial step of the program is to load the dataset into it. The dataset was collected from sensors and consists of various parameters such as voltage, temperature, vibration, and pressure, and the result has been predicted with the help of Python. We have also compared several algorithms with the artificial neural network. It is determined that ANN works well for industry-oriented predictive analysis, and the results are evaluated as follows.
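As a sketch of how such a comparison can be run in Python, the following trains a small neural network on a sensor CSV with scikit-learn (which ships with Anaconda). The file name, column names, and network size are illustrative assumptions, not the project's actual configuration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical sensor dataset: feature columns plus a 0/1 fault label.
df = pd.read_csv("sensor_data.csv")
X = df[["voltage", "temperature", "vibration", "pressure"]].values
y = df["fault"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)
scaler = StandardScaler().fit(X_train)   # ANN training benefits from scaling

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000,
                      random_state=42)
model.fit(scaler.transform(X_train), y_train)
print("accuracy:", model.score(scaler.transform(X_test), y_test))
```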
6.1 Evaluation:
The true positive and true negative classifications are the accurate classifications, and they lie along the main diagonal of the confusion matrix; the remaining fields represent the model's errors. Performance metrics can be derived from the confusion matrix. Accuracy, the percentage of the total number of predictions that were correct, is determined by equation 1:
Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100% (eqn 1)
The True Positive (TP) rate is the percentage of positive cases that were correctly recognized, as determined by the equation:
Recall = TP / (TP + FN) × 100% (eqn 2)
Precision is the percentage of the predicted positive cases that were correct, as obtained by the equation:
Precision = TP / (TP + FP) × 100% (eqn 3)
Detection time: to recognize an attack packet during detection we measured the detection time, and a shorter detection time is considered better.
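These metrics follow directly from the confusion matrix counts. A small sketch computing them from predicted and true labels follows; the example labels are made up.

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # 1 = fault, 0 = healthy (toy data)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy  = (tp + tn) / (tp + tn + fp + fn) * 100   # eqn 1
recall    = tp / (tp + fn) * 100                    # eqn 2 (sensitivity)
precision = tp / (tp + fp) * 100                    # eqn 3
print(f"accuracy={accuracy:.1f}%  recall={recall:.1f}%  precision={precision:.1f}%")
```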
6.2 Analysis:
The comparison of all the classifiers using cross validation, with and without the score feature, is given in the table below.
Evaluation criteria | Naïve Bayes (with score) | SVM (with score) | ANN (with score) | Naïve Bayes (without score) | SVM (without score) | ANN (without score)
Accuracy (%) | 94.53 | 95.52 | 97.01 | 74.62 | 79.10 | 82.08
Sensitivity (%) | 87.87 | 96.96 | 96.96 | 60.60 | 81.81 | 81.81
Specificity (%) | 97.05 | 94.11 | 97.05 | 74.66 | 82.35 | 82.35
Table 4: Comparison of classifiers employing the method of cross validation
Figure 13: Comparison of classification accuracy
Under cross validation, the best performance of each classifier was 94.53% for Naïve Bayes, 95.52% for SVM, and 97.01% for the proposed classifier, as shown in figure 13. Among the classifiers, the proposed ANN classifier performs better than Naïve Bayes and SVM, with a classification accuracy of 97% when tested under cross validation. Moreover, the model also improves classification quality in terms of both sensitivity and specificity when the better feature vector is employed.
Figure 14: Prediction of false alarm rate
As mentioned earlier, the false alarm rate can also be detected easily by this algorithm. This is done effectively with the artificial neural network, and the predicted graph is shown in figure 14.
Chapter 7: Conclusion
The main aim of this project was to investigate the development and application of the "predictive maintenance" concept for industrial machines. In addition to the survey, different machine learning approaches and their applicability to predictive maintenance were explained and compared, estimating accuracy, precision, and specificity. Finally, a detailed analysis was made between these approaches to determine the best method for overcoming some of the current industrial predictive maintenance challenges. We have shown that the ANN (artificial neural network) performs the best of all the algorithms compared, with the SVM (support vector machine) next. In the SVM model, the training and test samples are represented as points in space, mapped so that a clear gap appears between the categories and separates the samples; new test samples are then mapped into the same space and classified according to the side of the gap on which they fall. Using the kernel trick, SVMs can also perform non-linear classification in addition to linear classification, implicitly mapping the inputs into high-dimensional feature spaces. In general, the SVM can construct a hyperplane in any such space, which can be employed for tasks such as regression, prediction, or classification.
The need to model many complicated functions with the same idea gave rise to the concept of the artificial neural network. The technique behind its learning is to iteratively determine the coefficients relating the input to the output, which leads to the prediction. For function approximation, the neural network need not fit a straight line: while the standard linear regression model draws a straight line through the values of the independent variable, an ANN can fit any shape of curve. An extension of function approximation is forecasting, which predicts time-series data. If a target variable's value is to be predicted at a future point in time, then at each step of the training phase the historical data is presented to the neural network; for instance, data up to the 60th week is fed in and the network is asked to estimate the 61st week of the time series. This is done best by an ANN, and the same kind of analysis is used in the stock market, where share values must be estimated (a task that runs entirely on prediction). The ANN also works well for sample classification, mapping input data to the various categories and classes. Another category related to classification is clustering, where the number of classes is not known beforehand; with an ANN it is easy to cluster the classified records, since the output node that reacts most strongly to an input claims that sample.
References:
[1] Nyce, Charles (2007), Predictive Analytics White Paper (PDF), American Institute for
Chartered Property Casualty Underwriters/Insurance Institute of America.
[2] Edward R. Jones, “Neural Networks' Role in Predictive Analytics”, DM Review Special
Report, February 12, 2008.
[3] Christopher Krauss, Xuan Anh Do, Nicolas Huck (2017), "Deep neural networks, gradient-boosted trees, random forests: Statistical arbitrage on the S&P 500", European Journal of Operational Research.
[4] Buettner, Ricardo (2016). "Predicting user behavior in electronic markets based on personality-mining in large online social networks: A personality-based product recommender framework". Electronic Markets: The International Journal on Networked Business. Springer: 1–19.
[4] Schiff, Mike (March 6, 2012), BI Experts: Why Predictive Analytics Will Continue to
Grow, The Data Warehouse Institute.
[5] Ian Goodfellow, Yoshua Bengio, and Aaron Courville (2016). Deep Learning. MIT
Press. Online.
[6] Schmidhuber, Jürgen (2015). "Deep Learning". Scholarpedia. 10 (11): 32832.
[7] Bengio, Y.; Courville, A.; Vincent, P. (2013). "Representation Learning: A Review and
New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35
(8): 1798–1828.
[8] J. Schmidhuber, “Deep Learning in Neural Networks”, Technical Report IDSIA-03-14
arXiv: 1404.7828.
[9] J. Schmidhuber, (2001), “LSTM Recurrent Networks Learn Simple Context Free and
Context Sensitive Languages”, IEEE Transaction on Neural Networks, Vol-12.
[10] J. Hall, P. Mars (2000), Proceedings of IEEE Fuzzy System Conference.
[11] Zhao Xia Wang et al, (2006), “Research on fuzzy neural network algorithms for
nonlinear network traffic prediction”, Optoelectronics Letters.
[12] Dong Yeong Kim, Ju Hyun Kim, Kwae Hwan Yoo, Man Gyun Na, (2015)
“Prediction of hydrogen concentration in containment during severe accidents using
fuzzy neural network”, Nuclear Engineering and Technology.
[13] E. Lughofer (2011), “Evolving Fuzzy Systems: Methodologies, Advanced
Concepts and Applications”, Springer Heidelberg.
[14] Granger, C. W.J., & Morgenstern, O, (1970), “Predictability of stock market
prices”, DC Heath Lexington, Mass.
[15] Bondt, W. F., & Thaler, R. (1985). Does the stock market overreact? The Journal
of finance, 40(3), 793–805.
[16] Campbell, J. Y., & Thompson, S. B. (2008). Predicting excess stock returns out
of sample: Can anything beat the historical average? Review of Financial Studies, 21(4),
1509–1531.
[17] Agrawal, J., Chourasia, V., & Mittra, A. (2013). State-of-the-art in stock
prediction techniques. International Journal of Advanced Research in Electrical,
Electronics and Instrumentation Engineering,2(4), 1360–1366.
[18] Eunsuk Chong, Chulwoo Han, Frank C. Park, (2017), “Deep Learning Networks
for Stock Market Analysis and Prediction: Methodology, Data Representations, and Case
Studies”, Expert Systems With Applications.
[19] Alaa Sheta, “Software Effort Estimation and Stock Market Prediction Using
Takagi-Sugeno Fuzzy Models” IEEE International Conference on Fuzzy System, pp.171-
178, July 2006.
[20] Chu S.C. And Kim H.S, "Automatic knowledge generation from the stock market
data", Proceedings of 93 Korea Japan joint conference on expert systems, pp. 193-208,
1993.
[21] Justin Wolfers, Eric Zitzewitz, “Prediction markets in theory and practice”,
national bureau of economic research, pp.1-11, March 2006.
[22] Khaled Hammouda, Prof. Fakhreddine Karray, “A comparative study of data
clustering techniques” pp.1- 11, March 2006.
[23] R. Babuska, J. A. Roubos, and H. B. Verbruggen, “Identification of MIMO
systems by input-output TS fuzzy models,” in Proceedings of Fuzzy-IEEE’98,
Anchorage, Alaska, 1998.
[24] R. J. Van Eyden, “Application of Neural Networks in the Forecasting of Share
Prices” Finance and Technology Publishing, 1996.
[25] Ypke Hiemstra, “A Stock Market Forecasting Support System Based on Fuzzy
Logic”, Proceedings of the Twenty-Seventh Annual Hawaii International Conference on
System, Sciences, IEEE,pp.281- 287,1994.
[26] Chiu, S. L.; 1994, "Fuzzy model identification based on cluster estimation",
Journal of Intelligent and Fuzzy Systems, 2, John Wiley & Sons, pp. 267-278.
[27] Available online (Process modelling)
http://www.itl.nist.gov/div898/handbook/pmd/pmd.ht m.
[28] Available online: www.wc3.com
[29] Available online: www.ibm.com/developerWorks
[30] Balabanov, T, Hadjiski, M., Koprinkova-Hristova, P., Doukovska, L., Beloreshki,
S. “Neural Network Model of Mill-Fan Systems Elements Vibration for Predictive
Maintenance.” 2011.
[31] Zhang, Hong-tao, and GAO, Ming-xu. "The Application of Support Vector
Machine (SVM) Regression E Method in Tunnel Fires." IEEE. N.p., Dec. 2017
[32] Simon Fossier & Pierre-Olivier Robic. “Maintenance for Complex Systems-from
Preventive to Predictive” 2017.
[33] Yingying Zhao, Dongsheng Li, Ao Dong, Dahai Kang, Qin Lv and Li Shang, "Fault Prediction and Diagnosis of Wind Turbine Generators Using SCADA Data," August 2017.
[34] Jung, Marcel, Niculita, Octavian, Skaf, Zakwan. “Comparison of Different
Classification Algorithms for Fault Detection and Fault Isolation in Complex Systems”.
Nov. 2017.
[35] Russell, S. and Norvig, P. (1995). Artificial Intelligence: A Modern Approach. 3rd ed.
[36] Elattar, H., Elminir, H. and Riad, A. (2016). Prognostics: a literature review.
Complex & Intelligent Systems, 2(2), pp.125-154.
[37] Wilson, A. (2016). Big-Data Analytics for Predictive Maintenance Modeling:
Challenges and Opportunities. Journal of Petroleum Technology, 68(10), pp.71-72.
[38] Mobley, R. (2002). An introduction to predictive maintenance. Amsterdam:
Butterworth-Heinemann.
[39] Computational Statistics and Predictive Analysis in Machine Learning. (2016).
International Journal of Science and Research (IJSR), 5(1), pp.1521-1524.
[40] Susto, G., Schirru, A., Pampuri, S., McLoone, S. and Beghi, A. (2015). Machine
Learning for Predictive Maintenance: A Multiple Classifier Approach. IEEE Transactions
on Industrial Informatics, 11(3), pp.812-820.
[41] Yin, Z. and Hou, J. (2016). Recent advances on SVM based fault diagnosis and
process monitoring in complicated industrial processes. Neurocomputing, 174, pp.643-
650.
[42] Borkowski, M., Fdhila, W., Nardelli, M., Rinderle-Ma, S. and Schulte, S. (2017).
Event-based failure prediction in distributed business processes. Information Systems.
[43] Soffer, P., Hinze, A., Koschmider, A., Ziekow, H., Ciccio, C., Koldehofe, B.,
Kopp, O., Jacobsen, A., Sürmeli, J. and Song, W. (2017). From event streams to process
models and back: Challenges and opportunities. Information Systems.
[44] Peng, H. and Bai, X. (2018). Improving orbit prediction accuracy through
supervised machine learning. Advances in Space Research, 61(10), pp.2628-2646.
[45] Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems
research. Journal of information technology, 20(2), 67-87.
[46] Bakos, J. Y., & Treacy, M. E. (1986). Information technology and corporate
strategy: a research perspective. MIS quarterly, 107-119.
[47] Bhattacherjee, A. (2012). Social science research: principles, methods, and
practices.
[48] Blenko, M. W., Mankins, M. C., & Rogers, P. (2010). The decision-driven
organization. Harvard Business Review, 88(6), 54-62.
[49] Boland, R. J. (2008). Decision Making and Sensemaking, in Burstein, F. & Holsapple, C. W. (2008), Handbook on Decision Support Systems, Vol. 1, pp. 55-63.
[50] Boyer, J., Harris, T., Green, B., Frank, B., & Van De Vanter, K. (2012). 5 keys to
business analytics program success. MC Press.
[51] Brinkmann, S., & Kvale, S. (2005). Confronting the ethics of qualitative
research. Journal of constructivist psychology, 18(2), 157-181.
[52] Brydon, M., & Gemino, A. (2008). You've Data Mined. Now What?.
Communications of the Association for Information Systems, 22(1), 33.
[53] CGI Group, (2013). Predictive analytics – The rise and value of predictive
analytics in enterprise decision making. White paper.
[54] Calhoun, K. J., Teng, J. T., & Cheon, M. J. (2002). Impact of national culture on information technology usage behaviour: an exploratory study of decision making in Korea and the USA. Behaviour & Information Technology, 21(4), 293-302.
[55] Carter, N. M., Hoffman, J. J., & Cullen, J. B. (1994). The Effects of Computer Technology and decision-making structure on organizational performance: A dual-core model approach. The Journal of High Technology Management Research, 5(1), 59-76.
[56] Clark, T. D., Jones, M. C., & Armstrong, C. P. (2007). The dynamic structure of management support systems: theory development, research focus, and direction. MIS Quarterly, 31(3), 579-615.
[57] Chen, H., Chiang, R. H., & Storey, V. C. (2012). Business Intelligence and
Analytics: From Big Data to Big Impact. MIS quarterly, 36(4), 1165-1188.
[58] Choo, C. W. (1996). The knowing organization: How organizations use
information to construct meaning, create knowledge and make decisions. International
journal of information management, 16(5), 329-340.
[59] Cooper, M. R., & Wood, M. T. (1974). Effects of member participation and
commitment in group decision making on influence, satisfaction, and decision riskiness.
Journal of Applied Psychology, 59(2), 127.
[60] Courtney, J. F. (2001). Decision making and knowledge management in
inquiring organizations: toward a new decision-making paradigm for DSS. Decision Support Systems, 31(1), 17-38.
[61] Daft, R. L. (1978). A dual-core model of organizational innovation. Academy of
management journal, 21(2), 193-210.
[62] Davenport, T. H. (2006). Competing on analytics. Harvard Business Review, (84), 98-107.
[63] Davenport, T. H., & Harris, J. G. (2007). Competing on analytics: the new
science of winning. Harvard Business Press.
[64] Davenport, T. H., & Kim, J. (2013). Keeping Up with the Quants: Your Guide to
Understanding and Using Analytics. Harvard Business Review Press.
[65] Delen, D., & Demirkan, H. (2013). Data, information and analytics as services.
Decision Support Systems, 55(1), 359-363.
[66] Dewett, T., & Jones, G. R. (2001). The role of information technology in the
organization: a review, model, and assessment. Journal of management, 27(3), 313-346.
[67] Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. Heath.
[68] C-Retail (2011), About Us, [Online] (accessed 2015-05-02).
[69] Duncan, R. B. (1973). Multiple decision-making structures in adapting to
environmental uncertainty: The impact on organizational effectiveness. Human Relations.
[70] Duncan, R. B. (1974). Modifications in decision structure in adapting to the
environment: Some implications for organizational learning. Decision Sciences, 5, 705-
725.
[71] Fiedler, K. D., Grover, V., & Teng, J. T. (1996). An empirically derived
taxonomy of information technology structure and its relationship to organizational
structure. Journal of Management Information Systems, 9-34.
[72] Ge, M., & Helfert, M. (2013). Impact of Information Quality on Supply Chain
Decisions. Journal of Computer Information Systems, 53(4).
[73] Golfarelli, M., Rizzi, S., & Cella, I. (2004). Beyond data warehousing: what's
next in business intelligence?. In Proceedings of the 7th ACM international workshop on
Data warehousing and OLAP (pp. 1-6). ACM.
[74] Gorry, G. A., & Morton, M. S. S. (1971). A framework for management
information systems (Vol. 13). Massachusetts Institute of Technology.
[75] Griffith, T. L., Northcraft, G. B., & Fuller, M. A. (2008). Borgs in the org?
Organizational decision making and technology. The Oxford Handbook of
Organizational Decision Making, Oxford University Press, New York.
[76] Hosack, B., Hall, D., Paradice, D., & Courtney, J. F. (2012). A look toward the
future: decision support systems research is alive and well. Journal of the Association for
Information Systems, 13(5), 3.
[77] Huber, G. P. (1990). A theory of the effects of advanced information
technologies on organizational design, intelligence, and decision making. Academy of
management review, 15(1), 47-71.
[78] IKEA (2014), Facts & Figures for business year 2014, [Online] Available at:
http://franchisor.ikea.com/wp-content/uploads/2015/04/IKEA-2014_IKEA-retailingfacts-
and-figures.pdf (accessed 2015-05-20).
[79] Keen, P. G. (1981). Information systems and organizational change.
Communications of the ACM, 24(1), 24-33.
[80] Klatt, T., Schläfke, M., & Möller, K. (2011). Integrating business analytics into
strategic planning for better performance. Journal of business strategy, 32(6), 30-39.
[81] Kvale, S., & Brinkmann, S. (2009). Learning the craft of qualitative research
interviewing. Thousand Oaks: Sage Publications.
[82] Kvale, S. (1996). The 1,000-page question. Qualitative Inquiry, 2(3), 275-284.
Lawler, J. J., & Elliot, R. (1996). Artificial intelligence in HRM: an experimental study of an expert system. Journal of Management, 22(1), 85-111.
[83] Laursen, G., &Thorlund, J. (2010). Business analytics for managers: Taking
business intelligence beyond reporting (Vol. 40). John Wiley & Sons.
[84] Wixom, B. H., Yen, B., & Relich, M. (2013). Maximizing value from business
analytics. MIS Quarterly Executive, 12(2), 111-123.
[85] Yoerger, M., Crowe, J., & Allen, J. A. (2015). Participate or else! The effect of
participation in decision-making in meetings on employee engagement. Consulting
Psychology Journal: Practice And Research, 67(1), 65-80.
[86] Zhang, X., Hu, T., Janz, B. D., & Gillenson, M. L. (2006). Radio frequency
identification: The initiator of a domino effect. In Proceedings of the 2006 Southern
Association for Information Systems Conference (pp. 191-196).
[87] SAP (2013). Predicting the future of predictive analytics. White paper. SAP and Loudhouse, 2013.
[88] Schläfke, M., Silvi, R., & Möller, K. (2012). A framework for business analytics in performance management. International Journal of Productivity and Performance Management, 62(1), 110-122.
[89] Semos Group (2008). [Online] Available at: http://www.semos.com.mk/ (accessed 2015-05-02).
Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5(4), 465-478.
[90] Sharda, R., Asamoah, D. A., & Ponna, N. (2013, June). Business analytics:
Research and teaching perspectives. In Information Technology Interfaces (ITI),
Proceedings of the ITI 2013 35th International Conference on (pp. 19-27). IEEE.
[91] Shmueli, G., & Koppius, O. (2010). Predictive analytics in information systems
research. Robert H. Smith School Research Paper No. RHS, 06-138.
[92] Shumway, C. R., Maher, P. M., Baker, M. R., Souder, W. E., Rubenstein, A. H.,
& Gallant, A. R. (1975). Diffuse decision-making in hierarchical organizations: an
empirical examination. Management Science, 21(6), 697-707.
[93] Siegel, E. (2013). Predictive analytics: The power to predict who will click, buy,
lie, or die. John Wiley & Sons.
[94] Simon, H. (1977). The New Science of Management Decision. Englewood
Cliffs: Prentice Hall.
[95] Song, S. K., Kim, D. J., Hwang, M., Kim, J., Jeong, D. H., Lee, S., & Sung, W.
(2013, December). Prescriptive Analytics System for Improving Research Power. In
Computational Science and Engineering (CSE), 2013 IEEE 16th International
Conference on (pp. 1144-1145). IEEE.
[96] Stein, E. W. (1995). Organization memory: Review of concepts and
recommendations for management. International journal of information management,
15(1), 17-32.
[97] DBank (2013), Annual report, [Online] (accessed 2015-05-02).
T-Mobile Macedonia AD (2013), Annual report 2013, [Online] Available at: http://www.tmobile.mk/public/documents/report-2013-en.pdf (accessed 29/04/2015).
[98] Te'eni, D. (2001). Review: A cognitive-affective model of organizational
communication for designing IT. MIS quarterly, 25(2), 251-312.
[99] Tracy, K., & Dimock, A. (2004). Meetings: Discursive sites for building and
fragmenting community. Communication yearbook, 28, 127-166.
[100] Tsitsiklis, J. N. (1984). Problems in Decentralized Decision making and
Computation. Massachusetts Inst of Tech. Cambridge Lab for Information and Decision
Systems.
[101] Tuckman B (1965) Developmental sequence in small groups. Psychol Bull
63:384–399.
[102] Turban, E., Liang, T. P., & Wu, S. P. (2011). A framework for adopting
collaboration 2.0 tools for virtual group decision making. Group decision and negotiation,
20(2), 137-154.
[103] Walsh, J. P., & Ungson, G. R. (1991). Organizational memory. Academy of
management review, 16(1), 57-91.
[104] Watson, H. J. (2009). Tutorial: Business intelligence-Past, present, and future.
Communications of the Association for Information Systems, 25(1), 39.
[105] Witte, E., Joost, N., & Thimm, A. L. (1972). Field research on complex decision-
making processes: the phase theorem. International Studies of Management &
Organization, 2(2), 156-182.
[106] Wixom, B. H., Watson, H. J., & Werner, T. (2011). Developing an enterprise
business intelligence capability: The Norfolk Southern journey. MIS Quarterly
Executive, 10(2), 61-71.
[107] Blogs.technet.microsoft.com. (2018). Evaluating Failure Prediction Models for
Predictive Maintenance. [online] Available at:
https://blogs.technet.microsoft.com/machinelearning/2016/04/19/evaluating-failure-
prediction-models-for-predictive-maintenance/
[108] Reliabilityweb.com. (2018). Pushing IIoT Predictive Maintenance Forward: Two
Challenges to Overcome - Reliabilityweb. [online] Available at:
https://reliabilityweb.com/articles/entry/pushing-iiot-predictive-maintenance-forward-
two-challenges-to-overcome
[109] Dspace.lib.cranfield.ac.uk. (2018). Prognostics: design, implementation and
challenges. [online] Available at:
https://dspace.lib.cranfield.ac.uk/bitstream/handle/1826/10206/Prognostics-
Design_Implementation_and_Challenges-
2015.pdf;jsessionid=719CBF0463FDA21738936E3991990455?sequence=1
[110] Deshpande, B. (2018). 6 benefits of using predictive maintenance. [online]
Simafore.com. Available at: http://www.simafore.com/blog/bid/204816/6-benefits-of-
using-predictive-maintenance
[111] Durocher, D. and Feldmeier, G. (2004). Predictive versus preventive maintenance - Future control technologies in motor diagnosis and system wellness. IEEE Industry Applications Magazine, 10(5), pp.12-21.
[112] Jain, Y. (2018). 5 Applications: Artificial Intelligence & Machine Learning in
Banking. [online] Newgenapps.com. Available at:
https://www.newgenapps.com/blog/applications-artificial-intelligence-machine-learning-
in-banking-finance
[113] Opocenska, H., Nahodil, P. and Hammer, M. (2017). Use of multiparametric diagnostics in predictive maintenance. MM Science Journal, 2017(05), pp.2090-2093.
[114] Susto, G., Schirru, A., Pampuri, S., McLoone, S. and Beghi, A. (2015). Machine
Learning for Predictive Maintenance: A Multiple Classifier Approach. IEEE Transactions
on Industrial Informatics, 11(3), pp.812-820.
[115] Balaban, E., Saxena, A., Narasimhan, S., Roychoudhury, I., Koopmans, M., Ott, C., Goebel, K. (2015) Prognostic health-management system development for electromechanical actuators. J Aerosp Inf Syst 12(3):329-344. doi:10.2514/1.I010171
[116] Yan, J., Koc, M., Lee, J. (2004) A prognostic algorithm for machine performance assessment and its application. Prod Plan Control 15(08):796-801
[117] Bonissone, P., Goebel, K. (2002) When will it break? A hybrid soft computing model to predict time-to-break margins in paper machines. In: Proceedings of SPIE 47th annual meeting, international symposium on optical science and technology, Seattle, Washington, USA, vol 4787, pp 53-64