What is Value of Information Analysis?

Value of information analysis (VOIA) may be performed in both limited memory influence diagrams (LIMIDs) and Bayesian networks (BNs). The approach to value of information analysis in influence diagrams, however, differs slightly from the approach in Bayesian networks. Thus, the following two sections consider in turn value of information analysis in influence diagrams and in Bayesian networks. Each of the two sections is self-contained: each may be read separately, but this also means that some overlap may be present.

VOIA in LIMIDs

Consider the situation where a decision maker has to make a decision given a set of observations. It could, for instance, be a physician deciding on a treatment of a patient given observations on symptoms and risk factors related to a specific disease. The decision could, for instance, be whether or not to operate on the patient immediately. Prior to making the decision, the physician may have the option to gather additional information about the patient, such as waiting for the result of a test or asking further questions. Given a range of options, which option should the physician choose next? That is, which of the given options will produce the most information? These questions can be answered by a value of information analysis.

Given an influence diagram model, a decision variable, and observations on the informational parents of the decision, the task is to identify the variable that is most informative with respect to the decision variable.

We consider a one-step lookahead, hypothesis-driven value of information analysis on discrete random variables relative to discrete decision variables.

Performing VOIA in LIMIDs

Figure 1 shows the value of information pane appearing after activating value of information analysis on Operate. In this pane, the results of the value of information analysis are shown (Figure 3), and the set of information variables can be defined (Figure 2).

Figure 1: The maximum expected utility of the decision variable Operate is MEU(Operate)=0.3079.

Figure 1 shows that the maximum expected utility of the decision variable Operate is MEU(Operate)=0.3079.

Value of information analysis on a decision variable is performed relative to a set of information variables. The set of information variables can be selected as indicated in Figure 2.

Figure 2: Selecting the set of information variables.

Selecting information variables proceeds in the same way as selecting Target(s) of Instantiations in the d-Separation pane.

After selecting the set of information variables the value of information analysis can be performed by pressing Perform. The results for the example are shown below in Figure 3.

Figure 3: The results of value of information analysis on B relative to the selected set of information variables.

The results show the value of information of each of the selected information variables relative to the decision. There is one bar for each information variable. The name of the information variable and its value of information relative to the decision are associated with each bar. The length of each bar is proportional to the value of information measured as a multiple of the maximum expected utility of the decision.

The value displayed for each observation node is the difference between the maximum expected utility of the decision node with and without the node observed before the decision.
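Hugin computes this difference internally; the following is a minimal, self-contained sketch of the same one-step lookahead computation for a toy decision problem. All model numbers (the disease prior, the test characteristics, and the utilities) are hypothetical and are not taken from the example model in the figures.

```python
# One-step lookahead value of information for observing a test T before
# a decision, in a toy influence diagram with disease D, test T, and a
# decision Operate in {'no', 'yes'}. All numbers are hypothetical.
p_d = [0.9, 0.1]                      # P(D = absent), P(D = present)
p_t_given_d = [[0.95, 0.05],          # P(T | D = absent)
               [0.20, 0.80]]          # P(T | D = present)
U = {('no', 0): 0.0, ('no', 1): -100.0,    # utility U(decision, D)
     ('yes', 0): -10.0, ('yes', 1): -20.0}

def eu(decision, p_disease):
    """Expected utility of a decision under a belief over D."""
    return sum(p_disease[d] * U[(decision, d)] for d in (0, 1))

# Maximum expected utility WITHOUT observing T before the decision.
meu_prior = max(eu(a, p_d) for a in ('no', 'yes'))

# Maximum expected utility WITH T observed before the decision:
# average over the possible test results of the best decision given each.
meu_with_t = 0.0
for t in (0, 1):
    p_t = sum(p_d[d] * p_t_given_d[d][t] for d in (0, 1))
    posterior = [p_d[d] * p_t_given_d[d][t] / p_t for d in (0, 1)]
    meu_with_t += p_t * max(eu(a, posterior) for a in ('no', 'yes'))

# The value displayed for the observation node: MEU with minus MEU without.
voi = meu_with_t - meu_prior
```

Because observing a variable can never decrease the maximum expected utility, this difference is always non-negative; a value of zero means the observation cannot change the optimal decision.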

VOIA in BNs

Consider the situation where a decision maker has to make a decision based on the probability distribution of a hypothesis variable. It could, for instance, be a physician deciding on a treatment of a patient given the probability distribution of a disease variable. For instance, if the probability of the patient suffering from the disease is above a certain threshold, then the patient should be treated immediately. Prior to deciding on a treatment, the physician may have the option to gather additional information about the patient, such as performing a test or asking a certain question. Given a range of options, which option should the physician choose next? That is, which of the given options will produce the most information? These questions can be answered by a value of information analysis.

Given a Bayesian network model and a hypothesis variable, the task is to identify the variable that is most informative with respect to the hypothesis variable.

We consider a myopic, hypothesis-driven value of information analysis on discrete random variables relative to discrete random variables.

Performing VOIA in BNs

Figure 4 shows the value of information pane appearing after activating value of information analysis on B. In this pane, the results of the value of information analysis are shown (Figure 6), and the set of information variables can be defined (Figure 5).


Figure 4: The entropy of the hypothesis variable B is H(B)=0.6876.

Figure 4 shows that the entropy of B is H(B)=0.6876.
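The entropy display summarizes how uncertain the current belief in the hypothesis variable is. As a hedged illustration, entropy can be computed from a marginal distribution as follows; the marginal below is hypothetical (not the one from the example network), and the natural logarithm is assumed as the base.

```python
import math

# Entropy H(B) of a hypothesis variable B from its marginal distribution.
# The marginal is hypothetical; natural logarithm assumed for the base.
p_b = [0.55, 0.45]
H_b = -sum(p * math.log(p) for p in p_b if p > 0)
```

For a binary variable, the entropy ranges from 0 (one state is certain) up to log 2 ≈ 0.6931 nats (both states equally likely), which is why a value such as H(B)=0.6876 indicates a belief close to uniform.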

Value of information analysis on a hypothesis variable is performed relative to a set of information variables. The set of information variables can be selected as indicated in Figure 5.


Figure 5: Selecting the set of information variables.

Selecting information variables proceeds in the same way as selecting Target(s) of Instantiations in the d-Separation pane.

After selecting the set of information variables the value of information analysis can be performed by pressing Perform. The results for the example are shown below in Figure 6.


Figure 6: The results of value of information analysis on B relative to the selected set of information variables.

The results show the mutual information I(T,H) between the target node and each of the selected information variables. There is one bar for each information variable. The name of the information variable and the mutual information between the target node and the information variable are associated with each bar. The size of the bar is proportional to the ratio between the mutual information and the entropy of the target node.
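The mutual information and the displayed ratio can be sketched directly from a joint distribution over the target and an information variable. The joint distribution below is hypothetical, and the natural logarithm is assumed; the sketch is an illustration of the quantities, not of Hugin's internal implementation.

```python
import math

# Mutual information I(T, X) between a target T and an information
# variable X, from a hypothetical joint distribution P(T, X).
p_joint = [[0.30, 0.10],   # P(T=0, X=0), P(T=0, X=1)
           [0.15, 0.45]]   # P(T=1, X=0), P(T=1, X=1)
p_t = [sum(row) for row in p_joint]
p_x = [sum(p_joint[t][x] for t in (0, 1)) for x in (0, 1)]

# I(T, X) = sum over t, x of P(t, x) * log( P(t, x) / (P(t) * P(x)) )
mi = sum(p_joint[t][x] * math.log(p_joint[t][x] / (p_t[t] * p_x[x]))
         for t in (0, 1) for x in (0, 1) if p_joint[t][x] > 0)

# Entropy of the target; the bar length is proportional to mi / h_t,
# i.e. the fraction of the target's uncertainty the observation resolves.
h_t = -sum(p * math.log(p) for p in p_t if p > 0)
bar_fraction = mi / h_t
```

Since I(T,X) is at most H(T), the ratio always lies between 0 (observing X tells nothing about T) and 1 (observing X determines T completely).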

The examples above consider the case of discrete observation nodes. For possible observations on continuous chance nodes an approximation is used. Instead of using the true mixture of Normal distributions for a continuous node, a single Normal distribution with the same mean and variance is used as an approximation.
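The moment-matching step described above, replacing a mixture of Normals with a single Normal having the same mean and variance, can be sketched as follows. The mixture components are hypothetical, chosen only to illustrate the computation.

```python
# Moment matching: approximate a mixture of Normal distributions by a
# single Normal with the same mean and variance. Components are given as
# (weight, mean, variance) triples; the numbers here are hypothetical.
components = [(0.7, 0.0, 1.0), (0.3, 5.0, 2.0)]

# Mean of the mixture: weighted average of the component means.
mean = sum(w * m for w, m, _ in components)

# Variance via Var = E[X^2] - mean^2, with E[X^2] = sum of w * (v + m^2).
second_moment = sum(w * (v + m * m) for w, m, v in components)
var = second_moment - mean * mean
```

The approximating Normal matches the first two moments of the mixture exactly, but it cannot represent features such as multiple modes, which is why this step is an approximation.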

You may find more information on value of information analysis in "Help Topics" under "Help".