Stepping A Little Deeper into Additive Smoothing

This post is inspired by a discussion thread on the Piazza forum of 10-605/10-805, a course I am taking at CMU. The question is summarized below:

In the classification process of a Naive Bayes (NB) classifier, using Maximum Likelihood Estimation (MLE) to compute the joint distribution of the label Y and the d words x_1, \ldots, x_d drawn from a vocabulary of size |V| as follows:

Pr(y,x_1,\ldots,x_d) = \prod_{j=1}^d \frac{Count(X=x_{j},Y=y)}{Count(X=ANY,Y=y)}\frac{Count(Y=y)}{Count(Y=ANY)}

may overfit.

Thus to mitigate the effect of overfitting, we will use a smoothing method:

Pr(y,x_1,\ldots,x_d) = \prod_{j=1}^d \frac{Count(X=x_{j},Y=y) + mq_{x}}{Count(X=ANY,Y=y)+m}\frac{Count(Y=y)+mq_{y}}{Count(Y=ANY)+m}

Here q_{x} = 1/|V|, q_{y} = 1/|dom(Y)|, and m = 1.
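
To make the difference concrete, here is a minimal Python sketch (the function names, vocabulary size, and counts are made up by me, not from the course) comparing the MLE estimate with the smoothed estimate for a single conditional probability Pr(X=x|Y=y):

# A minimal sketch with made-up counts, comparing the MLE word probability
# with the additively smoothed (m, q_x) version from the formula above.

def mle_estimate(count_word_and_y, count_any_and_y):
    # MLE: Count(X=x, Y=y) / Count(X=ANY, Y=y); zero for unseen words.
    return count_word_and_y / count_any_and_y

def smoothed_estimate(count_word_and_y, count_any_and_y, vocab_size, m=1.0):
    # Additive smoothing: (Count + m*q_x) / (Count(ANY) + m), with q_x = 1/|V|.
    q_x = 1.0 / vocab_size
    return (count_word_and_y + m * q_x) / (count_any_and_y + m)

V = 10000             # vocabulary size (hypothetical)
n_words_with_y = 500  # Count(X=ANY, Y=y) for some label y (hypothetical)

for c in (3, 0):      # a word seen 3 times with label y, and a word never seen with it
    print(c, mle_estimate(c, n_words_with_y), smoothed_estimate(c, n_words_with_y, V))

Note how the unseen word gets probability 0 under MLE but a small positive probability under smoothing.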

Actually, back in last Thursday's class I was confused by this too. It was not THAT obvious to me at first glance. Then my intuition, that this makes sense because it barely changes the probabilities yet avoids a zero probability for features that never appeared with a given label, convinced me to stop thinking and go out for a spicy pot. Yes, it is a kind of smoothing, but can we really call it a mitigation of overfitting? Mmm... Should go for dinner first.

If I had forgotten about this, I would still have nothing on my blog. :( Fortunately I found the discussion thread on Piazza. I think it is a good question, and my classmates and the TA gave some inspiring explanations of the extreme case (the zero-probability case). Professor Cohen, the lecturer of the course, shared a great reading in the discussion, which shows that the improved method is called MAP (also an example of "additive smoothing", which is part of the title) and is obtained using a Dirichlet prior. That took me as far as I have gotten. I wish I could make the thread available to the reader.

Well, that really makes sense. Actually, I have always thought that MLE causes some kind of overfitting, while Bayesian Inference (BI) may mitigate the problem a little: MLE only returns a single optimal parameter value, but BI returns a posterior distribution over the whole parameter space. We can then use the expectation of a parameter under that distribution as the desired output. By taking the expectation the parameters are naturally smoothed, since it is a kind of weighted average over the whole parameter space. I think this mitigates the overfitting problem to some extent.
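
A toy illustration of this point (my own example, not from the course or the reading): estimate a coin's heads probability after seeing 3 heads in 3 flips, once by MLE and once by the posterior mean under a uniform Beta(1, 1) prior.

# Toy illustration: MLE vs. posterior mean for a coin's heads probability.
# After 3 heads in 3 flips, the MLE claims the coin never lands tails,
# while the posterior mean under a uniform Beta(1, 1) prior stays smoothed.

heads, flips = 3, 3
a, b = 1.0, 1.0                                 # Beta(1, 1) = uniform prior

mle = heads / flips                             # 1.0
posterior_mean = (heads + a) / (flips + a + b)  # 4/5 = 0.8

print(mle, posterior_mean)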

So the rest is just the beautiful math, mostly from ref [2]. Please enjoy.

For the NB classifier, we suppose that X_{1}, X_{2}, \ldots, X_{n} are i.i.d. draws from a multinomial distribution with parameter \theta (one word per draw), where n = d is the document length. Then

P(X=x|\theta) \propto \theta_{1}^{\sum_{j=1}^n 1(x_{j} = 1)} \ldots \theta_{k}^{\sum_{j=1}^n 1(x_{j} = k)}


When we use a Dirichlet prior with parameter \alpha \in \mathbb{R}^k, all of whose elements are equal to mq_{x}, and k = |V|, we have:

P(\theta|\alpha) \propto \theta_{1}^{mq_{x}-1} \ldots \theta_{k}^{mq_{x}-1}


Then the posterior distribution over \theta is:

P(\theta|x,\alpha) \propto \theta_{1}^{\sum_{j=1}^n 1(x_{j} = 1)+mq_{x}-1} \ldots \theta_{k}^{\sum_{j=1}^n 1(x_{j} = k)+mq_{x}-1}

The normalizing constant is given by

\frac{\Gamma\left(\sum_{i=1}^k mq_{x} + n\right)}{\prod_{i=1}^k \Gamma\left(mq_{x} + \sum_{j=1}^n 1(x_{j}=i)\right)}
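
In other words, writing n_{i} = \sum_{j=1}^n 1(x_{j}=i) for the count of word i, the posterior is again a Dirichlet; and since q_{x} = 1/|V| with k = |V| categories, the prior pseudo-counts sum to exactly m:

\theta|x,\alpha \sim Dirichlet(n_{1}+mq_{x}, \ldots, n_{k}+mq_{x}), \qquad \sum_{i=1}^k mq_{x} = k \cdot \frac{m}{|V|} = m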

To estimate a specific parameter \theta_{i}, which is also the estimate of P(X=x_{i}|x, \alpha), we calculate the mean of this posterior:

\mathbb{E}[\theta_i|x,\alpha] = \frac{mq_{x} + \sum_{j=1}^{n}1(x_{j}=i)}{n+\sum_{l=1}^k mq_{x}} = \frac{Count(X=x_{i}) + mq_{x}}{Count(X=ANY)+m}
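
As a quick sanity check, here is a short Python sketch (the vocabulary size and the word counts are made up by me) that computes the Dirichlet posterior mean directly and compares it with the additive-smoothing formula above:

# Sanity check: with a Dirichlet(m*q_x, ..., m*q_x) prior and observed word
# counts n_i, the posterior mean (n_i + m*q_x) / (n + m) should match the
# additive-smoothing estimate. The counts below are made up for illustration.

V = 5                     # vocabulary size k = |V| (hypothetical)
m, q_x = 1.0, 1.0 / V     # prior strength and pseudo-count share
counts = [3, 0, 1, 0, 2]  # n_i = number of times word i appears in the document
n = sum(counts)           # document length

posterior_params = [c + m * q_x for c in counts]
total = sum(posterior_params)                     # equals n + m

posterior_mean = [p / total for p in posterior_params]
additive_smoothing = [(c + m * q_x) / (n + m) for c in counts]

print(posterior_mean)
print(additive_smoothing)
assert all(abs(a - b) < 1e-12 for a, b in zip(posterior_mean, additive_smoothing))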

Finally, using the NB assumption that the X_{j} are conditionally independent given Y, plus some basic facts about conditional probability to bring Y in (which I deliberately left out for simplicity; a rough sketch of the step is below), we get the original formula from the course.
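
Roughly, the omitted step is the usual NB factorization, with the smoothed estimate applied within each class:

Pr(y,x_1,\ldots,x_d) = Pr(y)\prod_{j=1}^d Pr(x_{j}|y), \qquad \widehat{Pr}(x_{j}|y) = \frac{Count(X=x_{j},Y=y) + mq_{x}}{Count(X=ANY,Y=y)+m}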

Thank you for reading. Feel free to point out any mistakes or to discuss them with me.

----------------

References:

1. Christopher M. Bishop. 1995. Neural Networks for Pattern Recognition. Oxford University Press, New York, NY, USA.

2. http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture4.pdf

3. http://www.cs.cmu.edu/~wcohen/10-605/prob-tour+bayes.pdf

4. http://www.cs.cmu.edu/~wcohen/10-601/prob-tour+bayes.pdf

5. https://en.wikipedia.org/wiki/Additive_smoothing
