Found a nice introduction to CRFs.
Reposted from: http://blog.echen.me/2012/01/03/introduction-to-conditional-random-fields/
So it turns out that CRFs tune their parameters with gradient-based methods.
I also learned that unconstrained optimization problems are relatively easy to solve: as long as the objective is differentiable, whether quadratic, cubic, or of higher degree, you can apply steepest descent, Newton's method, and so on.
Imagine you have a sequence of snapshots from a day in Justin Bieber’s life, and you want to label each image with the activity it represents (eating, sleeping, driving, etc.). How can you do this?
One way is to ignore the sequential nature of the snapshots, and build a per-image classifier. For example, given a month’s worth of labeled snapshots, you might learn that dark images taken at 6am tend to be about sleeping, images with lots of bright colors tend to be about dancing, images of cars are about driving, and so on.
By ignoring this sequential aspect, however, you lose a lot of information. For example, what happens if you see a close-up picture of a mouth – is it about singing or eating? If you know that the previous image is a picture of Justin Bieber eating or cooking, then it’s more likely this picture is about eating; if, however, the previous image contains Justin Bieber singing or dancing, then this one probably shows him singing as well.
Thus, to increase the accuracy of our labeler, we should incorporate the labels of nearby photos, and this is precisely what a conditional random field does.
Let’s go into some more detail, using the more common example of part-of-speech tagging.
In POS tagging, the goal is to label a sentence (a sequence of words or tokens) with tags like ADJECTIVE, NOUN, PREPOSITION, VERB, ADVERB, ARTICLE.
For example, given the sentence “Bob drank coffee at Starbucks”, the labeling might be “Bob (NOUN) drank (VERB) coffee (NOUN) at (PREPOSITION) Starbucks (NOUN)”.
So let’s build a conditional random field to label sentences with their parts of speech. Just like any classifier, we’ll first need to decide on a set of feature functions $f_j$.
In a CRF, each feature function takes as input a sentence $s$, the position $i$ of a word in the sentence, the label $l_i$ of the current word, and the label $l_{i-1}$ of the previous word, and outputs a real-valued number (though the numbers are often just either 0 or 1).
(Note: by restricting our features to depend on only the current and previous labels, rather than arbitrary labels throughout the sentence, I’m actually building the special case of a linear-chain CRF. For simplicity, I’m going to ignore general CRFs in this post.)
For example, one possible feature function could measure how much we suspect that the current word should be labeled as an adjective given that the previous word is “very”.
Next, assign each feature function fj a weight λj (I’ll talk below about how to learn these weights from the data). Given a sentence s, we can now score a labeling l of s by adding up the weighted features over all words in the sentence:
$$\mathrm{score}(l \mid s) = \sum_{j=1}^{m} \sum_{i=1}^{n} \lambda_j f_j(s, i, l_i, l_{i-1})$$
(The first sum runs over each feature function j , and the inner sum runs over each position i of the sentence.)
Finally, we can transform these scores into probabilities p(l|s) between 0 and 1 by exponentiating and normalizing:
$$p(l \mid s) = \frac{\exp[\mathrm{score}(l \mid s)]}{\sum_{l'} \exp[\mathrm{score}(l' \mid s)]} = \frac{\exp\left[\sum_{j=1}^{m} \sum_{i=1}^{n} \lambda_j f_j(s, i, l_i, l_{i-1})\right]}{\sum_{l'} \exp\left[\sum_{j=1}^{m} \sum_{i=1}^{n} \lambda_j f_j(s, i, l'_i, l'_{i-1})\right]}$$
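To make the scoring and normalization concrete, here is a minimal Python sketch (my own illustration, not code from the original post): the `score` and `probability` functions, the toy feature, and the tag set are all assumptions, and the normalizer is computed by brute-force enumeration of every candidate labeling, which is only practical for toy sentences (real implementations use dynamic programming).

```python
import math
from itertools import product

def score(labels, sentence, feature_fns, weights):
    """Weighted sum of feature values over all positions of the sentence.
    Each feature_fns[j](sentence, i, curr_label, prev_label) returns a real number."""
    total = 0.0
    for i in range(len(sentence)):
        prev = labels[i - 1] if i > 0 else "START"  # assumed placeholder label before position 0
        for w, f in zip(weights, feature_fns):
            total += w * f(sentence, i, labels[i], prev)
    return total

def probability(labels, sentence, feature_fns, weights, tag_set):
    """p(l|s): exponentiate the score and normalize over every possible labeling.
    Brute force (|tags|^n labelings) -- fine for a toy example, not for real use."""
    numerator = math.exp(score(labels, sentence, feature_fns, weights))
    normalizer = sum(
        math.exp(score(list(cand), sentence, feature_fns, weights))
        for cand in product(tag_set, repeat=len(sentence))
    )
    return numerator / normalizer

# Tiny illustrative example: one feature that prefers NOUN right after ADJECTIVE.
tags = ["NOUN", "VERB", "ADJECTIVE"]
features = [lambda s, i, li, lprev: 1.0 if lprev == "ADJECTIVE" and li == "NOUN" else 0.0]
weights = [2.0]
sentence = ["hot", "coffee"]
print(probability(["ADJECTIVE", "NOUN"], sentence, features, weights, tags))
```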
So what do these feature functions look like? Examples of POS tagging features could include an indicator $f_1(s, i, l_i, l_{i-1})$ that equals 1 when the $i$-th word ends in "-ly" and $l_i = \text{ADVERB}$ (a large positive weight $\lambda_1$ says we prefer labelings that tag "-ly" words as adverbs), and an indicator $f_2(s, i, l_i, l_{i-1})$ that equals 1 when $l_{i-1} = \text{ADJECTIVE}$ and $l_i = \text{NOUN}$ (a large positive weight $\lambda_2$ says adjectives tend to be followed by nouns); a sketch of such features in code appears below.
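As a sketch, the two example features just described might look like this in Python (function names and details are my own illustrative assumptions, not code from the post):

```python
def ends_in_ly_is_adverb(sentence, i, curr_label, prev_label):
    # Fires when the current word ends in "-ly" and is tagged ADVERB;
    # a large positive weight makes such labelings more likely.
    return 1.0 if sentence[i].endswith("ly") and curr_label == "ADVERB" else 0.0

def noun_follows_adjective(sentence, i, curr_label, prev_label):
    # Fires when an ADJECTIVE is immediately followed by a NOUN.
    return 1.0 if prev_label == "ADJECTIVE" and curr_label == "NOUN" else 0.0

# Example: both features evaluated at position 1 of a toy sentence.
sentence = ["very", "quickly"]
print(ends_in_ly_is_adverb(sentence, 1, "ADVERB", "ADVERB"))    # 1.0
print(noun_follows_adjective(sentence, 1, "ADVERB", "ADVERB"))  # 0.0
```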
And that’s it! To sum up: to build a conditional random field, you just define a bunch of feature functions (which can depend on the entire sentence, a current position, and nearby labels), assign them weights, and add them all together, transforming at the end to a probability if necessary.
Now let’s step back and compare CRFs to some other common machine learning techniques.
The form of the CRF probabilities $p(l \mid s) = \exp\left[\sum_{j=1}^{m} \sum_{i=1}^{n} \lambda_j f_j(s, i, l_i, l_{i-1})\right] \big/ \sum_{l'} \exp\left[\sum_{j=1}^{m} \sum_{i=1}^{n} \lambda_j f_j(s, i, l'_i, l'_{i-1})\right]$ might look familiar.
That’s because CRFs are indeed basically the sequential version of logistic regression: whereas logistic regression is a log-linear model for classification, CRFs are a log-linear model for sequential labels.
Recall that Hidden Markov Models are another model for part-of-speech tagging (and sequential labeling in general). Whereas CRFs throw any bunch of functions together to get a label score, HMMs take a generative approach to labeling, defining
$$p(l, s) = p(l_1) \prod_i p(l_i \mid l_{i-1}) \, p(w_i \mid l_i)$$
where $p(l_i \mid l_{i-1})$ are transition probabilities (e.g., the probability that a PREPOSITION is followed by a NOUN), and $p(w_i \mid l_i)$ are emission probabilities (e.g., the probability that a NOUN emits the word "coffee").
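A minimal sketch of this generative scoring in Python, where the transition and emission tables and their numbers are invented purely for illustration:

```python
def hmm_joint_probability(labels, words, initial, transition, emission):
    """p(l, s) = p(l1) * prod_i p(l_i | l_{i-1}) * prod_i p(w_i | l_i)."""
    p = initial[labels[0]] * emission[labels[0]].get(words[0], 0.0)
    for i in range(1, len(words)):
        p *= transition[labels[i - 1]].get(labels[i], 0.0)  # transition probability
        p *= emission[labels[i]].get(words[i], 0.0)         # emission probability
    return p

# Toy tables: all probabilities here are made up for illustration.
initial = {"NOUN": 0.6, "VERB": 0.4}
transition = {"NOUN": {"NOUN": 0.3, "VERB": 0.7}, "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emission = {"NOUN": {"Bob": 0.5, "coffee": 0.5}, "VERB": {"drank": 1.0}}

print(hmm_joint_probability(["NOUN", "VERB", "NOUN"], ["Bob", "drank", "coffee"],
                            initial, transition, emission))
```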
So how do HMMs compare to CRFs? CRFs are more powerful – they can model everything HMMs can and more. One way of seeing this is as follows.
Note that the log of the HMM probability is $\log p(l, s) = \log p(l_1) + \sum_i \log p(l_i \mid l_{i-1}) + \sum_i \log p(w_i \mid l_i)$. This has exactly the log-linear form of a CRF if we consider these log-probabilities to be the weights associated with binary transition and emission indicator features.
That is, we can build a CRF equivalent to any HMM as follows. For each HMM transition probability $p(l_i = y \mid l_{i-1} = x)$, define a CRF transition feature $f_{x,y}(s, i, l_i, l_{i-1})$ that equals 1 if $l_i = y$ and $l_{i-1} = x$ (and 0 otherwise), and give it the weight $w_{x,y} = \log p(l_i = y \mid l_{i-1} = x)$. Similarly, for each HMM emission probability $p(w_i = z \mid l_i = x)$, define a CRF emission feature $g_{x,z}(s, i, l_i, l_{i-1})$ that equals 1 if $w_i = z$ and $l_i = x$ (and 0 otherwise), and give it the weight $w_{x,z} = \log p(w_i = z \mid l_i = x)$.
Thus, the score p(l|s) computed by a CRF using these feature functions is precisely proportional to the score computed by the associated HMM, and so every HMM is equivalent to some CRF.
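Here is a hedged sketch of that construction in Python (my own illustration, not from the post): it turns the toy HMM tables into indicator features weighted by log-probabilities and checks numerically that the resulting CRF score equals $\log p(l, s)$. An extra indicator for the initial label is included so the $\log p(l_1)$ term is covered as well.

```python
import math

# Toy HMM tables (invented numbers, for illustration only).
initial = {"NOUN": 0.6, "VERB": 0.4}
transition = {"NOUN": {"NOUN": 0.3, "VERB": 0.7}, "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emission = {"NOUN": {"Bob": 0.5, "coffee": 0.5}, "VERB": {"drank": 1.0}}

features, weights = [], []

# One indicator feature per transition probability, weighted by its log.
for x, row in transition.items():
    for y, p in row.items():
        features.append(lambda s, i, li, lprev, x=x, y=y:
                        1.0 if i > 0 and lprev == x and li == y else 0.0)
        weights.append(math.log(p))

# One indicator feature per emission probability, weighted by its log.
for x, row in emission.items():
    for z, p in row.items():
        features.append(lambda s, i, li, lprev, x=x, z=z:
                        1.0 if li == x and s[i] == z else 0.0)
        weights.append(math.log(p))

# One indicator per initial label, so log p(l1) is also part of the score.
for x, p in initial.items():
    features.append(lambda s, i, li, lprev, x=x: 1.0 if i == 0 and li == x else 0.0)
    weights.append(math.log(p))

def crf_score(labels, sentence):
    total = 0.0
    for i in range(len(sentence)):
        prev = labels[i - 1] if i > 0 else None
        total += sum(w * f(sentence, i, labels[i], prev) for w, f in zip(weights, features))
    return total

labels, sentence = ["NOUN", "VERB", "NOUN"], ["Bob", "drank", "coffee"]
hmm_log_p = (math.log(initial[labels[0]]) + math.log(emission[labels[0]][sentence[0]])
             + sum(math.log(transition[labels[i - 1]][labels[i]])
                   + math.log(emission[labels[i]][sentence[i]])
                   for i in range(1, len(sentence))))
print(crf_score(labels, sentence), hmm_log_p)  # the two numbers match
```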
However, CRFs can model a much richer set of label distributions as well, for two main reasons. First, CRFs can define a much larger set of features: HMMs are necessarily local in nature, since each word can depend only on its own label and each label only on the previous label, whereas CRF features can look at any part of the sentence (for example, the feature above that considers whether the sentence ends in a question mark when tagging the first word). Second, CRFs can have arbitrary weights: HMM probabilities must lie between 0 and 1 and sum to 1 appropriately, while CRF weights are unrestricted real numbers.
Let’s go back to the question of how to learn the feature weights in a CRF. One way is (surprise) to use gradient ascent.
Assume we have a bunch of training examples (sentences and their associated part-of-speech labels), and randomly initialize the weights of our CRF model. To shift these randomly initialized weights toward the correct ones, go through each training example and, for each feature function $f_j$, compute the gradient of the log probability of the training example with respect to $\lambda_j$:

$$\frac{\partial}{\partial \lambda_j} \log p(l \mid s) = \sum_{i=1}^{n} f_j(s, i, l_i, l_{i-1}) - \sum_{l'} p(l' \mid s) \sum_{i=1}^{n} f_j(s, i, l'_i, l'_{i-1})$$

The first term is the contribution of $f_j$ under the true labeling, and the second term is the expected contribution of $f_j$ under the current model. Then move $\lambda_j$ a small step in the direction of the gradient, $\lambda_j \leftarrow \lambda_j + \alpha \frac{\partial}{\partial \lambda_j} \log p(l \mid s)$ for some learning rate $\alpha$, and repeat until a stopping condition is reached (for example, the updates become negligible). A minimal training sketch follows below.
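Below is a minimal sketch of this training loop in Python (the names, toy data, and brute-force expectation are my own assumptions; real CRF implementations compute the expected feature counts with the forward-backward algorithm instead of enumerating labelings):

```python
import math
from itertools import product

TAGS = ["NOUN", "VERB", "ADJECTIVE"]

# Two illustrative indicator features.
FEATURES = [
    lambda s, i, li, lprev: 1.0 if lprev == "ADJECTIVE" and li == "NOUN" else 0.0,
    lambda s, i, li, lprev: 1.0 if s[i].endswith("ed") and li == "VERB" else 0.0,
]

def feature_counts(labels, sentence):
    """Sum of each feature over all positions for one labeling."""
    counts = [0.0] * len(FEATURES)
    for i in range(len(sentence)):
        prev = labels[i - 1] if i > 0 else "START"
        for j, f in enumerate(FEATURES):
            counts[j] += f(sentence, i, labels[i], prev)
    return counts

def log_probability_gradient(labels, sentence, weights):
    """Gradient of log p(l|s) w.r.t. the weights:
    observed feature counts minus expected feature counts under the model.
    The expectation is brute force here, enumerating every labeling."""
    def total_score(cand):
        return sum(w * c for w, c in zip(weights, feature_counts(cand, sentence)))
    candidates = [list(c) for c in product(TAGS, repeat=len(sentence))]
    scores = [total_score(c) for c in candidates]
    z = sum(math.exp(s) for s in scores)
    observed = feature_counts(labels, sentence)
    expected = [0.0] * len(FEATURES)
    for cand, s in zip(candidates, scores):
        p = math.exp(s) / z
        for j, c in enumerate(feature_counts(cand, sentence)):
            expected[j] += p * c
    return [o - e for o, e in zip(observed, expected)]

# One gradient-ascent pass over a single toy training example.
weights = [0.0, 0.0]
training = [(["hot", "coffee", "spilled"], ["ADJECTIVE", "NOUN", "VERB"])]
learning_rate = 0.1
for sentence, labels in training:
    grad = log_probability_gradient(labels, sentence, weights)
    weights = [w + learning_rate * g for w, g in zip(weights, grad)]
print(weights)
```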