In information theory, the conditional entropy quantifies the amount of information needed to describe the outcome of a random variable given that the value of another random variable is known. Here, information is measured in shannons, nats, or hartleys.
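As a quick illustration of the units: a fair coin flip has entropy \(\log_2 2 = 1\) shannon, which is the same quantity expressed as \(\ln 2 \approx 0.693\) nats or \(\log_{10} 2 \approx 0.301\) hartleys; only the base of the logarithm changes.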
Definition. The conditional entropy of \(X\) given \(Y\) is \[H(X \mid Y) = - \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(x \mid y) = - E[\log p(X \mid Y)].\] The conditional entropy is a measure of how much uncertainty remains about the random variable \(X\) once the value of \(Y\) is known.
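As a minimal numerical sketch of this definition (the joint pmf values and the helper name conditional_entropy below are illustrative assumptions, not taken from the source), \(H(X \mid Y)\) can be evaluated directly from a joint distribution:

import math

# Hypothetical joint pmf p(x, y) over two binary variables (illustrative values).
p_xy = {
    (0, 0): 0.25, (0, 1): 0.25,
    (1, 0): 0.40, (1, 1): 0.10,
}

def conditional_entropy(p_xy, base=2):
    # H(X|Y) = -sum_{x,y} p(x, y) * log p(x|y), where p(x|y) = p(x, y) / p(y).
    p_y = {}
    for (x, y), p in p_xy.items():
        p_y[y] = p_y.get(y, 0.0) + p          # marginal p(y)
    h = 0.0
    for (x, y), p in p_xy.items():
        if p > 0:
            h -= p * math.log(p / p_y[y], base)
    return h

print(conditional_entropy(p_xy))  # about 0.93 shannons for these values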
The conditional entropy is the entropy of the conditional probability distribution. It reveals how much variability remains in \(y\) when \(x\) is fixed. By definition, this part of the entropy of \(y\) is uninformative about \(x\) (which is given). Therefore, this part of the entropy of \(y\) is not shared with \(x\). Because the conditional entropy of \(y\) is uninformative about \(x\), it is also not counted in the mutual information between \(x\) and \(y\).
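In other words, the entropy of \(y\) splits into a part shared with \(x\) and a part that is not: \(H(Y) = I(X; Y) + H(Y \mid X)\), where \(I(X; Y)\) denotes the mutual information.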
Related information measures include entropy, mutual information, conditional entropy, conditional information, and relative entropy (also known as discrimination or Kullback–Leibler information), along with the limiting normalized versions of these quantities.
The joint entropy \(H(X, Y)\) of a pair of discrete random variables \((X, Y)\) with a joint distribution \(p(x, y)\) is defined as \[H(X, Y) = - \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(x, y) = - E \log p(X, Y)\]
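As a small sketch under illustrative assumptions (the pmf values below are made up for the example), the joint entropy is simply the entropy of the joint pmf viewed as a single distribution over pairs:

import math

# Illustrative joint pmf p(x, y); the values are assumptions for the example.
p_xy = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.40, (1, 1): 0.10}

# H(X, Y) = -sum_{x,y} p(x, y) * log2 p(x, y), measured in shannons.
H_XY = -sum(p * math.log2(p) for p in p_xy.values() if p > 0)
print(H_XY)  # about 1.86 shannons for these values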
Conditional entropy. Let \((X, Y) \sim p\). For \(x \in \mathrm{Supp}(X)\), the random variable \(Y \mid X = x\) is well defined. The entropy of \(Y\) conditioned on \(X\) is defined by \[H(Y \mid X) := \mathop{E}_{x \leftarrow X}\big[ H(Y \mid X = x) \big],\] which measures the uncertainty in \(Y\) given \(X\). Let \(p_X\) and \(p_{Y \mid X}\) be the marginal and conditional distributions induced by \(p\). Then \[H(Y \mid X) = \sum_{x \in \mathcal{X}} p_X(x) \cdot H(Y \mid X = x).\]
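The weighted-average form above translates directly into a short computation; a sketch, again with made-up pmf values:

import math

# Illustrative joint pmf p(x, y); the values are assumptions for the example.
p_xy = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.40, (1, 1): 0.10}

# Marginal p_X(x), obtained by summing the joint pmf over y.
p_x = {}
for (x, _), p in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + p

# H(Y|X) = sum_x p_X(x) * H(Y | X = x): the expectation of the per-x entropies.
H_Y_given_X = 0.0
for x, px in p_x.items():
    cond = [p / px for (xx, _), p in p_xy.items() if xx == x]   # p_{Y|X}(. | x)
    H_Y_given_X += px * -sum(q * math.log2(q) for q in cond if q > 0)

print(H_Y_given_X)  # about 0.86 shannons for these values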
The naturalness of the definition of joint entropy and conditional entropy is exhibited by the fact that the entropy of a pair of random variables is the entropy of one plus the conditional entropy of the other. This is proved in the following theorem. Theorem 2.2.1 (Chain rule): \(H(X, Y) = H(X) + H(Y \mid X)\).
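As a quick numerical check of the chain rule (a sketch with illustrative pmf values, not a proof), computing \(H(X, Y)\), \(H(X)\), and \(H(Y \mid X)\) from the same joint pmf shows the identity holding to floating-point precision:

import math

# Illustrative joint pmf p(x, y); the values are assumptions for the example.
p_xy = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.40, (1, 1): 0.10}

def entropy(probs):
    # Shannon entropy (base 2) of an iterable of probabilities.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Marginal p_X(x).
p_x = {}
for (x, _), p in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + p

H_XY = entropy(p_xy.values())
H_X = entropy(p_x.values())
H_Y_given_X = sum(
    px * entropy(p / px for (xx, _), p in p_xy.items() if xx == x)
    for x, px in p_x.items()
)

# Chain rule: H(X, Y) = H(X) + H(Y|X).
print(H_XY, H_X + H_Y_given_X)  # the two numbers agree (about 1.86)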