A Note on the Expectation-Maximization (EM) Algorithm

ChengXiang Zhai
Department of Computer Science
University of Illinois at Urbana-Champaign

March 11, 2007

1 Introduction

The Expectation-Maximization (EM) algorithm is a general algorithm for maximum-likelihood estimation where the data are "incomplete" or the likelihood function involves latent variables. Note that the notions of "incomplete data" and "latent variables" are related: when we have a latent variable, we may regard our data as incomplete since we do not observe the values of the latent variable; similarly, when our data are incomplete, we can often associate some latent variable with the missing data. For language modeling, the EM algorithm is often used to estimate parameters of a mixture model, in which the exact component model from which a data point is generated is hidden from us.

Informally, the EM algorithm starts by randomly assigning values to all the parameters to be estimated. It then iteratively alternates between two steps, called the expectation step (the "E-step") and the maximization step (the "M-step"). In the E-step, it computes the expected likelihood of the complete data (the so-called Q-function), where the expectation is taken w.r.t. the conditional distribution of the latent variables (the "hidden variables") given the current parameter settings and our observed (incomplete) data. In the M-step, it re-estimates all the parameters by maximizing the Q-function. Once we have a new generation of parameter values, we repeat the E-step and the M-step. This process continues until the likelihood converges, i.e., reaches a local maximum. Intuitively, what EM does is iteratively "augment" the data by "guessing" the values of the hidden variables and then re-estimate the parameters by assuming that the guessed values are the true values.

The EM algorithm is a hill-climbing approach, so it can only be guaranteed to reach a local maximum. When there are multiple local maxima, whether we actually reach the global maximum clearly depends on where we start; if we start on the "right hill", we will be able to find the global maximum, but when there are multiple local maxima it is often hard to identify the "right hill". There are two commonly used strategies for this problem. The first is to try many different initial values and choose the solution that has the highest converged likelihood value. The second is to use a much simpler model (ideally one with a unique global maximum) to determine an initial value for the more complex model. The idea is that the simpler model can hopefully help locate the rough region where the global optimum lies, and we then start from a value in that region to search for a more accurate optimum using the more complex model.

There are many good tutorials on the EM algorithm (e.g., [2, 5, 1, 4, 3]). In this note, we introduce the EM algorithm through a specific problem: estimating a simple mixture model.

2 A simple mixture unigram language model

In the mixture model feedback approach [6], we assume that the feedback documents F = {d_1, ..., d_k} are "generated" from a mixture model with two multinomial component models. One component is the background model p(w|C) and the other is an unknown topic language model p(w|θ_F) to be estimated (w denotes a word). The idea is to model the common (non-discriminative) words in F with p(w|C), so that the topic model θ_F attracts more of the discriminative, content-carrying words.
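To make the generative assumption concrete, the following is a minimal Python sketch (an illustration, not code from this note) of how a word of F would be drawn under such a two-component mixture; the two distributions and the value of lam are made-up placeholders.

```python
import random

# Illustration only: each word is drawn from the background model p(w | C)
# with probability lambda, and from the topic model p(w | theta_F) otherwise.
# The distributions and lambda below are hypothetical placeholders.
p_background = {"the": 0.5, "algorithm": 0.2, "em": 0.1, "likelihood": 0.2}  # p(w | C)
p_topic      = {"the": 0.1, "algorithm": 0.3, "em": 0.4, "likelihood": 0.2}  # p(w | theta_F)
lam = 0.5  # amount of "background noise", set empirically

def generate_word():
    """Draw one word from the two-component mixture."""
    component = p_background if random.random() < lam else p_topic
    words, probs = zip(*component.items())
    return random.choices(words, weights=probs, k=1)[0]

print([generate_word() for _ in range(5)])
```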
The log-likelihood of the feedback document data under this mixture model is

\[
\log L(\theta_F) = \log p(F \mid \theta_F) = \sum_{i=1}^{k} \sum_{j=1}^{|d_i|} \log\big( (1-\lambda)\, p(d_{ij} \mid \theta_F) + \lambda\, p(d_{ij} \mid C) \big)
\]

where d_{ij} is the j-th word in document d_i, |d_i| is the length of d_i, and λ is a parameter that indicates the amount of "background noise" in the feedback documents, which is set empirically. We thus assume λ to be known, and want to estimate p(w|θ_F).

3 Maximum Likelihood Estimation

A common method for estimating θ_F is the maximum likelihood (ML) estimator, in which we choose the θ_F that maximizes the likelihood of F. That is, the estimated topic model (denoted by θ̂_F) is given by

\begin{align}
\hat{\theta}_F &= \arg\max_{\theta_F} L(\theta_F) \tag{1} \\
&= \arg\max_{\theta_F} \sum_{i=1}^{k} \sum_{j=1}^{|d_i|} \log\big( (1-\lambda)\, p(d_{ij} \mid \theta_F) + \lambda\, p(d_{ij} \mid C) \big) \tag{2}
\end{align}

The right-hand side of this equation is easily seen to be a function with the probabilities p(w|θ_F) as variables. To find θ̂_F, we can in principle use any optimization method. Since the function involves the logarithm of a sum of two terms, it is difficult to obtain a simple analytical solution via the Lagrange multiplier approach, so in general we must rely on numerical algorithms. There are many possibilities; EM happens to be one of them that is quite natural and is guaranteed to converge to a local maximum, which in our case is also the global maximum, since the likelihood function can be shown to have a unique maximum.

4 Incomplete vs. Complete Data

The main idea of the EM algorithm is to "augment" our data with some latent/hidden variables so that the "complete" data has a much simpler likelihood function, i.e., one that is simpler for the purpose of finding a maximum. The original data are thus treated as "incomplete". As we will see, we maximize the incomplete-data likelihood (our original goal) through maximizing the expected complete-data likelihood (since it is much easier to maximize), where the expectation is taken over all possible values of the hidden variables (since the complete-data likelihood, unlike our original incomplete-data likelihood, contains hidden variables).

In our example, we introduce a binary hidden variable z for each occurrence of a word w to indicate whether the word has been "generated" from the background model p(w|C) or the topic model p(w|θ_F). Let d_{ij} be the j-th word in document d_i. We have a corresponding variable z_{ij} defined as follows:

\[
z_{ij} = \begin{cases} 1 & \text{if word } d_{ij} \text{ is from the background model} \\ 0 & \text{otherwise} \end{cases}
\]

We thus assume that our complete data would contain not only all the words in F, but also their corresponding values of z. The log-likelihood of the complete data is thus

\[
L_c(\theta_F) = \log p(F, z \mid \theta_F) = \sum_{i=1}^{k} \sum_{j=1}^{|d_i|} \big[ (1-z_{ij}) \log\big((1-\lambda)\, p(d_{ij} \mid \theta_F)\big) + z_{ij} \log\big(\lambda\, p(d_{ij} \mid C)\big) \big]
\]

Note the difference between L_c(θ_F) and L(θ_F): the sum is outside of the logarithm in L_c(θ_F), and this is possible because we assume that we know which component model has been used to generate each word d_{ij}.

What is the relationship between L_c(θ_F) and L(θ_F)? In general, if our parameter is θ, our original data is X, and we augment it with a hidden variable H, then p(X, H|θ) = p(H|X, θ) p(X|θ). Thus,

\[
L_c(\theta) = \log p(X, H \mid \theta) = \log p(X \mid \theta) + \log p(H \mid X, \theta) = L(\theta) + \log p(H \mid X, \theta)
\]
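The following is a small, self-contained Python sketch (with toy data assumed purely for illustration) that contrasts the two quantities just defined: in the incomplete-data log-likelihood L(θ_F) the logarithm is applied to a sum over the two components, while in the complete-data log-likelihood L_c(θ_F) each word contributes only the term of its (assumed known) generating component.

```python
import math

# Toy feedback documents and hypothetical z_ij assignments (illustration only).
F = [["em", "algorithm", "the"], ["likelihood", "the", "em"]]   # d_1, d_2
z = [[0, 0, 1], [0, 1, 0]]                                      # assumed z_ij values
p_background = {"the": 0.5, "algorithm": 0.2, "em": 0.1, "likelihood": 0.2}  # p(w | C)
p_topic      = {"the": 0.1, "algorithm": 0.3, "em": 0.4, "likelihood": 0.2}  # p(w | theta_F)
lam = 0.5

def incomplete_log_likelihood():
    """log L(theta_F): log of a two-component sum for every word occurrence."""
    return sum(math.log((1 - lam) * p_topic[w] + lam * p_background[w])
               for doc in F for w in doc)

def complete_log_likelihood():
    """L_c(theta_F): the sum moves outside the log because each z_ij is known."""
    ll = 0.0
    for doc, z_doc in zip(F, z):
        for w, z_ij in zip(doc, z_doc):
            ll += (math.log(lam * p_background[w]) if z_ij == 1
                   else math.log((1 - lam) * p_topic[w]))
    return ll

print(incomplete_log_likelihood(), complete_log_likelihood())
```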
5 A Lower Bound of Likelihood

Algorithmically, the basic idea of EM is to start with some initial guess of the parameter values θ^(0) and then iteratively search for better values of the parameters. Assuming that the current estimate of the parameters is θ^(n), our goal is to find another value θ^(n+1) that improves the likelihood L(θ).

Let us consider the difference between the likelihood at a potentially better parameter value θ and the likelihood at the current estimate θ^(n), and relate it to the corresponding difference in the complete-data likelihood:

\[
L(\theta) - L(\theta^{(n)}) = L_c(\theta) - L_c(\theta^{(n)}) + \log \frac{p(H \mid X, \theta^{(n)})}{p(H \mid X, \theta)} \tag{3}
\]

Our goal is to maximize L(θ) − L(θ^(n)), which is equivalent to maximizing L(θ). Now take the expectation of this equation w.r.t. the conditional distribution of the hidden variable given the data X and the current parameter estimate θ^(n), i.e., p(H|X, θ^(n)). We have

\[
L(\theta) - L(\theta^{(n)}) = \sum_{H} L_c(\theta)\, p(H \mid X, \theta^{(n)}) - \sum_{H} L_c(\theta^{(n)})\, p(H \mid X, \theta^{(n)}) + \sum_{H} p(H \mid X, \theta^{(n)}) \log \frac{p(H \mid X, \theta^{(n)})}{p(H \mid X, \theta)}
\]

Note that the left side of the equation remains the same, since the variable H does not occur there. The last term can be recognized as the KL-divergence between p(H|X, θ^(n)) and p(H|X, θ), which is always non-negative. We thus have

\[
L(\theta) - L(\theta^{(n)}) \geq \sum_{H} L_c(\theta)\, p(H \mid X, \theta^{(n)}) - \sum_{H} L_c(\theta^{(n)})\, p(H \mid X, \theta^{(n)})
\]

or

\[
L(\theta) \geq \sum_{H} L_c(\theta)\, p(H \mid X, \theta^{(n)}) + L(\theta^{(n)}) - \sum_{H} L_c(\theta^{(n)})\, p(H \mid X, \theta^{(n)}) \tag{4}
\]

We thus obtain a lower bound on the original likelihood function. The main idea of EM is to maximize this lower bound so as to maximize the original (incomplete-data) likelihood. Note that the last two terms of this lower bound can be treated as constants, since they do not contain the variable θ; the lower bound is therefore essentially the first term, which is the expectation of the complete-data likelihood, the so-called "Q-function", denoted by Q(θ; θ^(n)):

\[
Q(\theta; \theta^{(n)}) = E_{p(H \mid X, \theta^{(n)})}\big[L_c(\theta)\big] = \sum_{H} L_c(\theta)\, p(H \mid X, \theta^{(n)})
\]

The Q-function for our mixture model is the following:

\begin{align}
Q(\theta_F; \theta_F^{(n)}) &= \sum_{z} L_c(\theta_F)\, p(z \mid F, \theta_F^{(n)}) \tag{5} \\
&= \sum_{i=1}^{k} \sum_{j=1}^{|d_i|} \Big[ p(z_{ij}=0 \mid F, \theta_F^{(n)}) \log\big((1-\lambda)\, p(d_{ij} \mid \theta_F)\big) + p(z_{ij}=1 \mid F, \theta_F^{(n)}) \log\big(\lambda\, p(d_{ij} \mid C)\big) \Big] \tag{6}
\end{align}

6 The General Procedure of EM

Clearly, if we find a θ^(n+1) such that Q(θ^(n+1); θ^(n)) > Q(θ^(n); θ^(n)), then we will also have L(θ^(n+1)) > L(θ^(n)). Thus the general procedure of the EM algorithm is the following:

1. Initialize θ^(0) randomly or heuristically, according to any prior knowledge about where the optimal parameter value might be.

2. Iteratively improve the estimate of θ by alternating between the following two steps:

   (a) The E-step (expectation): compute Q(θ; θ^(n)).

   (b) The M-step (maximization): re-estimate θ by maximizing the Q-function: θ^(n+1) = argmax_θ Q(θ; θ^(n)).

3. Stop when the likelihood L(θ) converges.

As mentioned earlier, the complete-data likelihood L_c(θ) is much easier to maximize because the values of the hidden variable are assumed to be known. This is why the Q-function, which is an expectation of L_c(θ), is often much easier to maximize than the original likelihood function. In cases where no natural latent variable exists, we often introduce a hidden variable so that the complete-data likelihood function becomes easy to maximize.

The major computation to be carried out in the E-step is p(H|X, θ^(n)), which is sometimes very complicated. In our case, it is simple:

\[
p(z_{ij}=1 \mid F, \theta_F^{(n)}) = \frac{\lambda\, p(d_{ij} \mid C)}{\lambda\, p(d_{ij} \mid C) + (1-\lambda)\, p(d_{ij} \mid \theta_F^{(n)})} \tag{7}
\]

And of course, p(z_{ij}=0 | F, θ_F^(n)) = 1 − p(z_{ij}=1 | F, θ_F^(n)). Note that, in general, z_{ij} may depend on all the words in F. In our model, however, it depends only on the corresponding word d_{ij}.
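As a quick illustration of this E-step, here is a minimal Python sketch of equation (7); the current topic model, the background model, and lam are hypothetical placeholders, and because the posterior depends only on the word identity it is computed once per word rather than once per occurrence.

```python
# Hypothetical current estimates (illustration only, not data from the note).
p_background = {"the": 0.5, "algorithm": 0.2, "em": 0.1, "likelihood": 0.2}  # p(w | C)
p_topic      = {"the": 0.1, "algorithm": 0.3, "em": 0.4, "likelihood": 0.2}  # p(w | theta_F^(n))
lam = 0.5

def p_from_background(w):
    """Equation (7): probability that an occurrence of word w was generated
    by the background model, given the current topic model estimate."""
    background = lam * p_background[w]
    topic = (1 - lam) * p_topic[w]
    return background / (background + topic)

print(round(p_from_background("the"), 3))  # common word -> mostly background (0.833)
print(round(p_from_background("em"), 3))   # content word -> mostly topic (0.2)
```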
The M-step involves maximizing the Q-function. This may sometimes be quite complex as well but, again, in our case we can find an analytical solution. To achieve this, we use the Lagrange multiplier method, since we have the following constraint on the parameter variables {p(w|θ_F)}_{w∈V}, where V is our vocabulary:

\[
\sum_{w \in V} p(w \mid \theta_F) = 1
\]

We thus consider the following auxiliary function

\[
g(\theta_F) = Q(\theta_F; \theta_F^{(n)}) + \mu \Big( 1 - \sum_{w \in V} p(w \mid \theta_F) \Big)
\]

and take its derivative w.r.t. each parameter variable p(w|θ_F):

\[
\frac{\partial g(\theta_F)}{\partial p(w \mid \theta_F)} = \Bigg[ \sum_{i=1}^{k} \sum_{j=1,\, d_{ij}=w}^{|d_i|} \frac{p(z_{ij}=0 \mid F, \theta_F^{(n)})}{p(w \mid \theta_F)} \Bigg] - \mu \tag{8}
\]

Setting this derivative to zero and solving the equation for p(w|θ_F), we obtain

\begin{align}
p(w \mid \theta_F) &= \frac{\sum_{i=1}^{k} \sum_{j=1,\, d_{ij}=w}^{|d_i|} p(z_{ij}=0 \mid F, \theta_F^{(n)})}{\sum_{i=1}^{k} \sum_{j=1}^{|d_i|} p(z_{ij}=0 \mid F, \theta_F^{(n)})} \tag{9} \\
&= \frac{\sum_{i=1}^{k} p(z_w=0 \mid F, \theta_F^{(n)})\, c(w, d_i)}{\sum_{i=1}^{k} \sum_{w' \in V} p(z_{w'}=0 \mid F, \theta_F^{(n)})\, c(w', d_i)} \tag{10}
\end{align}

where c(w, d_i) is the count of word w in document d_i. Note that we have changed the notation so that the sum over each word position in document d_i is now a sum over all the distinct words in the vocabulary. This is possible because p(z_{ij} | F, θ_F^(n)) depends only on the corresponding word d_{ij}. Using the word w, rather than the word occurrence d_{ij}, to index z, we have

\[
p(z_w = 1 \mid F, \theta_F^{(n)}) = \frac{\lambda\, p(w \mid C)}{\lambda\, p(w \mid C) + (1-\lambda)\, p(w \mid \theta_F^{(n)})} \tag{11}
\]

We therefore have the following EM updating formulas for our simple mixture model:

\begin{align}
\text{E-step:}\quad & p(z_w = 1 \mid F, \theta_F^{(n)}) = \frac{\lambda\, p(w \mid C)}{\lambda\, p(w \mid C) + (1-\lambda)\, p(w \mid \theta_F^{(n)})} \tag{12} \\
\text{M-step:}\quad & p(w \mid \theta_F^{(n+1)}) = \frac{\sum_{i=1}^{k} \big(1 - p(z_w = 1 \mid F, \theta_F^{(n)})\big)\, c(w, d_i)}{\sum_{i=1}^{k} \sum_{w' \in V} \big(1 - p(z_{w'} = 1 \mid F, \theta_F^{(n)})\big)\, c(w', d_i)} \tag{13}
\end{align}

Note that we never need to explicitly compute the Q-function; instead, we compute the distribution of the hidden variable z and then directly obtain the new parameter values that maximize the Q-function.
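Putting the E-step (12) and the M-step (13) together, the following is a minimal, self-contained Python sketch of the resulting iterative procedure; the toy feedback documents, the background model, the value of lam, the uniform initialization, and the stopping threshold are all assumptions made for illustration rather than choices prescribed by this note.

```python
import math
from collections import Counter

# Toy inputs (hypothetical placeholders).
F = [["em", "algorithm", "the", "em"], ["likelihood", "the", "em", "algorithm"]]
p_background = {"the": 0.5, "algorithm": 0.2, "em": 0.1, "likelihood": 0.2}  # p(w | C)
lam = 0.5

# Pooled counts sum_i c(w, d_i) over the feedback documents.
counts = Counter(w for doc in F for w in doc)
vocab = list(counts)

# Initialize p(w | theta_F^(0)) uniformly (one possible heuristic).
p_topic = {w: 1.0 / len(vocab) for w in vocab}

def log_likelihood(p_topic):
    """Incomplete-data log-likelihood, used only to check convergence."""
    return sum(c * math.log((1 - lam) * p_topic[w] + lam * p_background[w])
               for w, c in counts.items())

prev_ll = log_likelihood(p_topic)
for iteration in range(100):
    # E-step (12): posterior probability that each word came from the background.
    p_z1 = {w: lam * p_background[w] /
               (lam * p_background[w] + (1 - lam) * p_topic[w])
            for w in vocab}
    # M-step (13): normalize the expected topic-generated counts.
    weighted = {w: (1 - p_z1[w]) * counts[w] for w in vocab}
    total = sum(weighted.values())
    p_topic = {w: weighted[w] / total for w in vocab}

    ll = log_likelihood(p_topic)
    if ll - prev_ll < 1e-8:  # stop when the likelihood has converged
        break
    prev_ll = ll

print({w: round(p, 3) for w, p in sorted(p_topic.items())})
```

On this toy input, the estimated θ_F shifts probability mass toward words such as "em" that the background model explains poorly, which is exactly the behavior the mixture is designed to produce.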
References

[1] J. Bilmes. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Technical Report ICSI-TR-97-021, University of California, Berkeley, 1997.

[2] J. Lafferty. Notes on the EM algorithm. Online article. http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/11761-s97/WWW/tex/em.ps

[3] G. J. McLachlan and T. Krishnan. The EM Algorithm and Extensions. John Wiley and Sons, Inc., 1997.

[4] T. P. Minka. Expectation-maximization as lower bound maximization. Online article. http://citeseer.nj.nec.com/minka98expectationmaximization.html

[5] R. Rosenfeld. The EM algorithm. Online article. http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/11761-s97/WWW/tex/EM.ps

[6] C. Zhai and J. Lafferty. Model-based feedback in the KL-divergence retrieval model. In Tenth International Conference on Information and Knowledge Management (CIKM 2001), pages 403–410, 2001.