A decision network is a graphical model that contains chance nodes, which behave exactly like Bayes' net nodes; action nodes, which are similar to Q-states and represent decisions that the agent determines; and utility nodes, which condition on all of their parents and output the utility of each configuration.
Conventions. This course uses ovals to represent chance nodes, rectangles to represent action nodes, and diamonds to represent utility nodes.
The principle of maximum expected utility states that the optimal set of actions in a decision network is the one that maximizes expected utility. This set can be found by exhaustively iterating through every possible configuration of actions, computing the expected value of the utility node given the posterior over its parents and any evidence, and choosing the best configuration.
$$ \text{MEU}(e_1, \dots, e_n) = \max _{a_1, \dots, a_k} \text{EU} (a_1, \dots, a_k \space | \space e_1, \dots, e_n) $$
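This exhaustive maximization can be sketched on a tiny hypothetical network with one chance node (the weather $W$) and one action node; the distribution and utility table below are invented for illustration:

```python
# Hypothetical decision network: one chance node W (weather) and one action
# node with choices "leave" and "take" (an umbrella). Numbers are made up.
P_W = {"sun": 0.6, "rain": 0.4}                      # prior over the chance node
U = {("leave", "sun"): 100, ("leave", "rain"): 0,    # utility node conditions on
     ("take",  "sun"): 70,  ("take",  "rain"): 70}   # its parents (action, W)

def expected_utility(action, P):
    """EU(a | e) = sum_w P(w | e) * U(a, w)."""
    return sum(P[w] * U[(action, w)] for w in P)

def meu(P):
    """Maximize expected utility over every possible action configuration."""
    return max(expected_utility(a, P) for a in ("leave", "take"))

print(round(meu(P_W), 2))  # EU(leave) = 60, EU(take) = 70, so MEU = 70.0
```

With evidence, `P` would simply be replaced by the posterior over $W$ given that evidence.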
The value of perfect information measures how many utility points it is worth to reveal a piece of evidence in a decision network.
$$ \text{VPI}(E' \space | \space e_1, \dots, e_n) = \text{MEU}(e_1, \dots, e_n , E') - \text{MEU}(e_1, \dots , e_n) $$
Here $\text{MEU}(e, E')$, the maximum expected utility when $E'$ will be observed but its value is not yet known, is an expectation over the possible outcomes $e'$ of $E'$:

$$ \text{MEU}(e, E') = \mathbb{E}_{e'}[\text{MEU}(e, e')] = \sum _{e'} \Pr[e' \space | \space e] \space \text{MEU}(e, e') $$
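To make this expansion concrete, the sketch below computes the VPI of a single forecast variable in the same hypothetical umbrella network; all probabilities are invented for illustration:

```python
# Hypothetical network: chance node W (weather), forecast F that could be
# revealed as evidence, actions "leave"/"take". All numbers are illustrative.
P_W = {"sun": 0.6, "rain": 0.4}
P_F_given_W = {("good", "sun"): 0.8, ("good", "rain"): 0.25,
               ("bad",  "sun"): 0.2, ("bad",  "rain"): 0.75}
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take",  "sun"): 70,  ("take",  "rain"): 70}

def meu(P):
    """MEU under a distribution P over W."""
    return max(sum(P[w] * U[(a, w)] for w in P) for a in ("leave", "take"))

def vpi_of_forecast():
    # Pr[f] = sum_w Pr[f | w] Pr[w]
    P_F = {f: sum(P_F_given_W[(f, w)] * P_W[w] for w in P_W)
           for f in ("good", "bad")}
    # MEU(F) = sum_f Pr[f] * MEU under the posterior Pr[w | f] (Bayes' rule)
    meu_with_F = sum(
        P_F[f] * meu({w: P_F_given_W[(f, w)] * P_W[w] / P_F[f] for w in P_W})
        for f in ("good", "bad"))
    return meu_with_F - meu(P_W)        # VPI(F) = MEU(F) - MEU()

print(round(vpi_of_forecast(), 2))  # -> 7.4
```

Intuitively, a good forecast pushes the agent toward leaving the umbrella, a bad one toward taking it, and acting on that information is worth about 7.4 utility points in expectation.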
Since a rational agent can always ignore new evidence and act exactly as it would have without it, revealing evidence cannot lower the maximum expected utility, so this quantity is nonnegative. Furthermore, the order in which evidence is observed does not matter:
$$ \text{VPI}(E_i, E_j \space | \space e ) =\text{VPI}(E_i \space | \space e) + \text{VPI}(E_j \space | \space e, E_i ) = \text{VPI}(E_j \space | \space e) + \text{VPI}(E_i \space | \space e, E_j ) $$
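Both properties can be checked numerically. The sketch below extends the hypothetical umbrella network with two forecasts that are conditionally independent given the weather, and confirms that the two observation orders yield the same total VPI (all numbers are invented):

```python
from itertools import product

# Hypothetical network: chance node W (weather) with two forecasts F1, F2,
# each conditionally independent given W. All numbers are illustrative.
P_W  = {"sun": 0.6, "rain": 0.4}
P_F1 = {("good", "sun"): 0.8, ("good", "rain"): 0.25,
        ("bad",  "sun"): 0.2, ("bad",  "rain"): 0.75}
P_F2 = {("good", "sun"): 0.7, ("good", "rain"): 0.3,
        ("bad",  "sun"): 0.3, ("bad",  "rain"): 0.7}
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take",  "sun"): 70,  ("take",  "rain"): 70}
FORECASTS = {"F1": P_F1, "F2": P_F2}
VALS = ("good", "bad")

def meu_after_observing(names):
    """MEU(E_i, ...): expectation over outcomes of the named forecasts of
    the MEU under the resulting posterior on W."""
    names = tuple(names)
    total = 0.0
    for outcome in product(VALS, repeat=len(names)):
        # unnormalized posterior: P(w) * prod_i P(f_i | w)
        post = {w: P_W[w] for w in P_W}
        for name, f in zip(names, outcome):
            for w in post:
                post[w] *= FORECASTS[name][(f, w)]
        z = sum(post.values())          # = Pr[outcome]
        total += z * max(sum(post[w] / z * U[(a, w)] for w in post)
                         for a in ("leave", "take"))
    return total

def vpi(new, observed=()):
    """VPI(new | observed) = MEU(observed + new) - MEU(observed)."""
    return (meu_after_observing(tuple(observed) + tuple(new))
            - meu_after_observing(observed))

both   = vpi(("F1", "F2"))
order1 = vpi(("F1",)) + vpi(("F2",), observed=("F1",))
order2 = vpi(("F2",)) + vpi(("F1",), observed=("F2",))
print(abs(both - order1) < 1e-9, abs(both - order2) < 1e-9)  # -> True True
```

The equality holds because each side telescopes into the same difference of MEU terms, $\text{MEU}(e, E_i, E_j) - \text{MEU}(e)$.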