Given an influence diagram model, a decision variable, and observations on the informational parents of the decision, the task is to identify the variable that is most informative with respect to the decision variable.
We consider one-step lookahead, hypothesis-driven value of information analysis on discrete random variables relative to discrete decision variables.
Figure 1 shows the value of information pane appearing after
activating value of information analysis on Operate. In this
pane the results of the value of information analysis are shown (Figure 3)
and the set of information variables can be defined (Figure 2).
![]() |
Figure 1: The maximum expected utility of the decision variable Operate is MEU(Operate)=0.3079. |
Figure 1 shows that the maximum expected utility of the decision variable Operate is MEU(Operate)=0.3079.
Value of information analysis on a decision variable is performed
relative to a set of information variables. The set of information
variables can be selected as indicated in Figure 2.
![]() |
Figure 2: Selecting the set of information variables. |
Selecting information variables proceeds in the same way as selecting Target(s) of Instantiations in the d-Separation pane.
After selecting the set of information variables the value of
information analysis can be performed by pressing Perform. The
results for the example are shown below in Figure 3.
![]() |
Figure 3: The results of value of information analysis on Operate relative to the selected set of information variables. |
The results show the value of information of each of the selected information variables relative to the decision. There is one bar for each information variable. The name of the information variable and its value of information relative to the decision are associated with each bar. The size of each bar is proportional to the value of information measured as a multiple of the maximum expected utility of the decision.
The value displayed for each observation node is the difference between the maximum expected utility of the decision node with and without the node observed before the decision.
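This difference can be sketched with a one-step lookahead computation on a toy model. The variable S, the test T, the decision alternatives, and all numbers below are hypothetical illustrations, not taken from the example model in the figures:

```python
# Hypothetical two-state model: a chance variable S, an observable test T
# with P(T | S) given below, and a decision with two alternatives and
# utilities U(S, decision). Illustrative numbers only.
p_s = {"present": 0.3, "absent": 0.7}                  # prior P(S)
p_t_given_s = {                                        # observation model P(T | S)
    ("pos", "present"): 0.9, ("neg", "present"): 0.1,
    ("pos", "absent"): 0.2, ("neg", "absent"): 0.8,
}
utility = {                                            # U(S, decision)
    ("present", "operate"): 10, ("present", "dont"): -10,
    ("absent", "operate"): -4, ("absent", "dont"): 2,
}

def meu(belief):
    """Maximum expected utility of the decision under a belief over S."""
    return max(
        sum(belief[s] * utility[(s, d)] for s in belief)
        for d in ("operate", "dont")
    )

# MEU without observing T before the decision.
meu_prior = meu(p_s)

# MEU with T observed first: the expectation over T of the MEU
# of the posterior belief P(S | T).
meu_with_t = 0.0
for t in ("pos", "neg"):
    p_t = sum(p_t_given_s[(t, s)] * p_s[s] for s in p_s)
    posterior = {s: p_t_given_s[(t, s)] * p_s[s] / p_t for s in p_s}
    meu_with_t += p_t * meu(posterior)

# The value displayed on the bar for T is the difference.
voi_t = meu_with_t - meu_prior
```

Because observing first can never decrease the maximum expected utility, `voi_t` is always non-negative.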
Given a Bayesian network model and a hypothesis variable, the task is to identify the variable that is most informative with respect to the hypothesis variable.
We consider myopic, hypothesis-driven value of information analysis on discrete random variables relative to discrete random variables.
Figure 4 shows the value of information pane appearing after activating value of information analysis on B. In this pane the results of the value of information analysis are shown (Figure 6) and the set of information variables can be defined (Figure 5).
![]() |
Figure 4: The entropy of the hypothesis variable B is H(B)=0.6876. |
Figure 4 shows that the entropy of B is H(B)=0.6876.
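The entropy of a discrete variable is computed from its marginal distribution. The marginal below is a hypothetical stand-in; the value H(B)=0.6876 in the figure comes from whatever marginal the example model produces, and the logarithm base used by the tool may differ from the natural logarithm assumed here:

```python
import math

def entropy(dist, base=math.e):
    """Shannon entropy H(X) = -sum_x P(x) log P(x);
    zero-probability states contribute nothing."""
    return -sum(p * math.log(p, base) for p in dist if p > 0)

# Hypothetical marginal for the hypothesis variable B.
p_b = [0.55, 0.45]
h_b = entropy(p_b)
```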
Value of information analysis on a hypothesis variable is performed relative to a set of information variables. The set of information variables can be selected as indicated in Figure 5.
![]() |
Figure 5: Selecting the set of information variables. |
Selecting information variables proceeds in the same way as selecting Target(s) of Instantiations in the d-Separation pane.
After selecting the set of information variables the value of information analysis can be performed by pressing Perform. The results for the example are shown below in Figure 6.
![]() |
Figure 6: The results of value of information analysis on B relative to the selected set of information variables. |
The results show the mutual information I(T,X) between the target node T and each of the selected information variables X. There is one bar for each information variable. The name of the information variable and the mutual information between the target node and the information variable are associated with each bar. The size of the bar is proportional to the ratio between the mutual information and the entropy of the target node.
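The two quantities behind each bar can be sketched as follows; the joint distribution over the target T and an information variable X is a hypothetical illustration, not taken from the example model:

```python
import math

# Illustrative joint distribution P(T, X) over a binary target T and a
# binary information variable X.
joint = {
    ("t0", "x0"): 0.35, ("t0", "x1"): 0.15,
    ("t1", "x0"): 0.10, ("t1", "x1"): 0.40,
}

# Marginals P(T) and P(X) by summing out the other variable.
p_t, p_x = {}, {}
for (t, x), p in joint.items():
    p_t[t] = p_t.get(t, 0.0) + p
    p_x[x] = p_x.get(x, 0.0) + p

# Mutual information I(T, X) = sum_{t,x} P(t,x) log( P(t,x) / (P(t) P(x)) ).
mi = sum(p * math.log(p / (p_t[t] * p_x[x]))
         for (t, x), p in joint.items() if p > 0)

# Entropy of the target, H(T) = -sum_t P(t) log P(t);
# the bar length is proportional to the ratio I(T, X) / H(T).
h_t = -sum(p * math.log(p) for p in p_t.values() if p > 0)
ratio = mi / h_t
```

Since I(T,X) is bounded above by H(T), the ratio always lies between 0 and 1.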
The examples above consider the case of discrete observation nodes. For possible observations on continuous chance nodes an approximation is used. Instead of using the true mixture of Normal distributions for a continuous node, a single Normal distribution with the same mean and variance is used as an approximation.
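The moment-matching step described above amounts to computing the overall mean and variance of the mixture. The component weights, means, and variances below are hypothetical:

```python
# Illustrative mixture of Normal components as (weight, mean, variance).
components = [(0.4, -1.0, 0.5), (0.6, 2.0, 1.0)]

# Mixture mean: E[X] = sum_i w_i * mu_i.
mean = sum(w * mu for w, mu, _ in components)

# Mixture variance via E[X^2] - E[X]^2,
# where E[X^2] = sum_i w_i * (var_i + mu_i^2).
second_moment = sum(w * (var + mu ** 2) for w, mu, var in components)
variance = second_moment - mean ** 2
```

The single Normal distribution with this `mean` and `variance` then replaces the mixture in the analysis.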
You may find more information on value of information analysis in "Help Topics" under "Help".