Bayesian Network Construction
A Bayesian Network (BN) helps utilise probabilities in Artificial Intelligence. It is a form of
probabilistic graphical model that shows the dependencies among variables. One example is
the representation of diseases and symptoms: given the symptoms an individual presents, the
network can be used to estimate the likelihood of various diseases. The network is
represented through nodes and arrows. Nodes denote quantities that can be identified directly
by observers. Some variables are not known completely but can be inferred from the
situation; such latent variables are also used as nodes (Jensen, 1996). Moreover, hypothetical
variables defined by the modeller to understand a situation may also appear as nodes. The
diagram given below shows a simple model of such a network, consisting of nodes with
dependencies shown through arrows:
The variables shown through the nodes are written as X = {X1, X2, …, Xn}, and each arc in
the diagram above is written Xi → Xj. An arc shows a direct dependency between the two
variables it connects. One important ingredient of the network is conditional probability,
which quantifies the strength of each dependency. However, one thing that should be kept in
mind while forming the nodes and arcs is that directed cycles must not be present (Nielsen & Jensen,
2009). If one could return to a node by moving along the directed arcs, the graph would
contain a cycle and the network would not yield a feasible solution.
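The acyclicity requirement can be checked mechanically. The sketch below represents a network's arcs as parent-child pairs (the variable names are illustrative, not taken from the text's diagram) and detects whether following the arcs could ever return to a starting node:

```python
# Illustrative arcs of a small disease network; each pair runs cause -> effect.
arcs = [("Pollution", "Cancer"), ("Smoker", "Cancer"),
        ("Cancer", "XRay"), ("Cancer", "Dyspnoea")]

def has_directed_cycle(arcs):
    """Return True if following the directed arcs can revisit a node."""
    children = {}
    for parent, child in arcs:
        children.setdefault(parent, []).append(child)
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / finished
    colour = {}

    def visit(node):
        colour[node] = GREY
        for c in children.get(node, []):
            state = colour.get(c, WHITE)
            if state == GREY:             # back-arc found: a directed cycle
                return True
            if state == WHITE and visit(c):
                return True
        colour[node] = BLACK
        return False

    nodes = {n for arc in arcs for n in arc}
    return any(visit(n) for n in nodes if colour.get(n, WHITE) == WHITE)
```

Running the check on the example arcs confirms they form a valid directed acyclic graph.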
As far as inference is concerned, there are three main tasks within a BN. They are given below:
Inferring unobserved variables
Because the observed variables are all present in the Bayesian Network, it becomes easier to
infer the variables that are unobserved, and queries can be answered with the help of
probability (Murphy, 2002).
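As a minimal sketch of such a query, assuming invented numbers for a single cause-effect pair (Cancer → Dyspnoea), the posterior probability of the unobserved cause given an observed symptom follows from Bayes' rule:

```python
# Hypothetical numbers, chosen only to illustrate the computation.
p_cancer = 0.01                            # prior P(Cancer = true)
p_dysp_given = {True: 0.65, False: 0.30}   # P(Dyspnoea = true | Cancer)

# Posterior P(Cancer = true | Dyspnoea = true) by Bayes' rule:
joint_true = p_cancer * p_dysp_given[True]          # P(Cancer, Dyspnoea)
joint_false = (1 - p_cancer) * p_dysp_given[False]  # P(no Cancer, Dyspnoea)
posterior = joint_true / (joint_true + joint_false)
```

Observing the symptom roughly doubles the probability of the disease relative to the prior, even though it remains small.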
Parameter learning
Parameters show the likelihood of one cause over another. They are generally unknown and
require estimation from data.
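One common estimation approach, sketched here with invented observations, is maximum likelihood: the conditional probability of an effect given a cause is estimated by the corresponding relative frequency in the data:

```python
# Invented (smoker, cancer) observations, purely for illustration.
data = [(True, True), (True, False), (True, True), (True, False),
        (False, False), (False, False), (False, True), (False, False)]

def mle_conditional(data):
    """Estimate P(cancer = true | smoker) by relative frequency."""
    counts, totals = {}, {}
    for smoker, cancer in data:
        totals[smoker] = totals.get(smoker, 0) + 1
        if cancer:
            counts[smoker] = counts.get(smoker, 0) + 1
    return {s: counts.get(s, 0) / totals[s] for s in totals}

cpt = mle_conditional(data)
```

With these counts the estimate is 0.5 for smokers and 0.25 for non-smokers; with real data, smoothing or a Bayesian prior would usually be preferred for sparse counts.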
Structure learning
The structure of a BN can be defined with the help of experts, after which inferences are
derived. In most cases, however, defining the structure by hand becomes challenging because
of its complexity, so data are used to learn the structure.
The construction of a Bayesian Network requires understanding a few steps. These steps are
given below:
Nodes and values
This is the first step, where the individual tries to understand the variables that are in play.
Some questions are considered while selecting the variables for the nodes, such as which
values are applicable and which states the variables can take. Discrete nodes are preferred
here for discussion (Heckerman, 1998). Some general discrete node types are Boolean nodes,
ordered nodes, and integral nodes. The Boolean nodes refer to the
nodes that represent a proposition that is either true or false. For instance, if an individual
with shortness of breath visits a doctor, the possible conditions may be tuberculosis,
bronchitis, and cancer; a node stating whether the patient has cancer is such a proposition. On
the other hand, if pollution is the likely cause, the exposure level has to be shown, and it can
be represented by the values low, medium, and high; this representation uses ordered values.
The last type, integral values, holds a numerical value, such as a node showing the age of the
patient.
At this stage of model formation, modelling choices can be made. For instance, instead of
representing the patient's exact age on a node, the patient can be placed into an age group
such as baby, adolescent, or young adult. Whatever values are selected, the focus should be
on representing the domain clearly.
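These value choices can be sketched as plain data structures; the variable names and age boundaries below are illustrative assumptions, not fixed conventions:

```python
# Possible value domains for the three discrete node types discussed above.
domains = {
    "Cancer":    [True, False],               # Boolean node (a proposition)
    "Pollution": ["low", "medium", "high"],   # ordered node
    "Age":       range(0, 120),               # integral node
}

def age_group(age):
    """One possible discretisation of the integral Age node."""
    if age < 2:
        return "baby"
    if age < 13:
        return "child"
    if age < 20:
        return "adolescent"
    if age < 40:
        return "young adult"
    return "older adult"
```

Replacing the integral Age node with its discretised groups shrinks the number of states the network must handle, at the cost of some precision.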
Structure
The structure of the network needs to reflect the relationships among the variables: nodes that
are directly related need to be directly connected. The direct relation can take the form of
cause and effect, and the arc should show which one causes what. The earlier example
illustrates this: if pollution is the suspected cause of cancer, then the tail of the arc should be
on the node representing pollution and the arrowhead on the cancer node. Understanding the
terminology used in constructing the nodes is also important. Four family terms are generally
used, namely child, parent, ancestor, and descendant. The node an arc points to is called the
child, and the node the arc comes from is its parent. Nodes further back along the arcs are
called ancestors, and nodes beyond the child are called descendants.
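These four family relations can be computed directly from the arc list; the two-arc network below is the illustrative pollution/cancer example:

```python
# Illustrative chain: Pollution -> Cancer -> XRay.
arcs = [("Pollution", "Cancer"), ("Cancer", "XRay")]

def parents(node, arcs):
    return {p for p, c in arcs if c == node}

def children(node, arcs):
    return {c for p, c in arcs if p == node}

def ancestors(node, arcs):
    """All nodes reachable by repeatedly following arcs backwards."""
    found = set()
    frontier = parents(node, arcs)
    while frontier:
        found |= frontier
        frontier = {a for n in frontier for a in parents(n, arcs)} - found
    return found

def descendants(node, arcs):
    """All nodes reachable by repeatedly following arcs forwards."""
    found = set()
    frontier = children(node, arcs)
    while frontier:
        found |= frontier
        frontier = {d for n in frontier for d in children(n, arcs)} - found
    return found
```

In this chain, Pollution is an ancestor of XRay even though no single arc connects them.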
Conditional Probabilities
The next step in the construction of a BN is to quantify the relationships among the nodes.
Conditional probabilities are used for this purpose: probabilistic values are defined for each
node, and together they show the strength with which one node influences another.
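Concretely, each node stores one conditional probability per combination of its parents' values. The table below, with invented probabilities, quantifies Cancer given its assumed parents Pollution and Smoker:

```python
# Conditional probability table for Cancer given (Pollution, Smoker);
# the probabilities are invented purely for illustration.
cpt_cancer = {
    ("high", True):  0.05,    # P(Cancer = true | Pollution = high, Smoker = true)
    ("high", False): 0.02,
    ("low",  True):  0.03,
    ("low",  False): 0.001,
}

# Each row implicitly defines P(Cancer = false | parents) = 1 - p,
# so every row of the table sums to one.
assert all(0.0 <= p <= 1.0 for p in cpt_cancer.values())
```

Note that the table grows exponentially with the number of parents, which is one reason value domains are kept small in the earlier step.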
The Markov Property
The Markov Property is considered while forming the network. It states that the only direct
dependencies in the system being modelled are the ones expressed by arcs: if no arc exists
between two nodes, no direct dependency between them should be taken into consideration.
Conversely, an arc expresses only the dependency it is drawn for, and no further
interdependencies should be construed from it.
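A practical consequence of the Markov Property is that the joint probability factorises into a product of each node's conditional probability given its parents. For the two-node chain Pollution → Cancer, with invented numbers:

```python
# Invented local probabilities for the chain Pollution -> Cancer.
p_pollution_high = 0.1
p_cancer_given = {"high": 0.05, "low": 0.001}  # P(Cancer = true | Pollution)

# By the Markov Property the joint factorises into local conditionals:
# P(Pollution = high, Cancer = true)
#   = P(Pollution = high) * P(Cancer = true | Pollution = high)
joint = p_pollution_high * p_cancer_given["high"]
```

For larger networks the same rule applies node by node, which is what keeps the representation compact compared with storing the full joint table.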
Bayesian Networks need to be converted into junction trees for inference, because this helps
apply the inference algorithm universally, without any directional constraint. There are three
steps in total:
- In the first step, an undirected (moral) graph is formed from the directed graph.
- In the second step, the graph is triangulated so that it becomes chordal.
- In the third step, the cliques of the chordal graph are connected to form the junction tree.
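The first of these steps, moralisation, can be sketched as follows: every pair of parents sharing a child is joined ("married") and arc directions are dropped. The example network is the illustrative pollution/smoker/cancer fragment:

```python
def moralise(arcs):
    """Build the moral graph: drop directions and marry co-parents."""
    # Undirected edges, each stored as a sorted pair.
    undirected = {tuple(sorted(a)) for a in arcs}
    parent_sets = {}
    for p, c in arcs:
        parent_sets.setdefault(c, set()).add(p)
    # Connect every pair of parents that share a child.
    for ps in parent_sets.values():
        ps = sorted(ps)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                undirected.add((ps[i], ps[j]))
    return undirected

arcs = [("Pollution", "Cancer"), ("Smoker", "Cancer")]
moral = moralise(arcs)
```

In the result, Pollution and Smoker become linked even though no arc joins them in the original network, because they are co-parents of Cancer.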
The connection of automatic BN construction to Machine Learning is that it involves
formulating knowledge about a particular situation in a probabilistic way. To do this, data are
gathered, and the posterior probability of the parameters is computed (Heckerman et al.,
1995). This posterior distribution is then used to draw scientific conclusions, make
predictions, and form decisions that minimise the posterior loss.
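A minimal sketch of such a posterior computation, assuming a single Boolean parameter with a uniform Beta(1, 1) prior and invented counts, is the standard Beta-Bernoulli update:

```python
# Posterior over P(Cancer = true) after observing invented data,
# using the conjugate Beta-Bernoulli model.
alpha, beta = 1, 1                 # uniform Beta(1, 1) prior
cases, total = 3, 100              # invented observations: 3 cases in 100

alpha += cases                     # posterior is Beta(alpha, beta)
beta += total - cases
posterior_mean = alpha / (alpha + beta)   # point estimate for decisions
```

The posterior mean (4/102) is pulled slightly towards the prior compared with the raw frequency 3/100, and the full Beta distribution can be used to quantify remaining uncertainty when forming decisions.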
References
Jensen, F. V. (1996). An introduction to Bayesian networks (Vol. 210). London: UCL Press.
Nielsen, T. D., & Jensen, F. V. (2009). Bayesian networks and decision graphs. Springer
Science & Business Media.
Heckerman, D., Geiger, D., & Chickering, D. M. (1995). Learning Bayesian networks: The
combination of knowledge and statistical data. Machine learning, 20(3), 197-243.
Heckerman, D. (1998). A tutorial on learning with Bayesian networks. In Learning in
graphical models (pp. 301-354). Springer Netherlands.
Murphy, K. P. (2002). Dynamic Bayesian networks: Representation, inference and learning
(Doctoral dissertation, University of California, Berkeley).