A support vector machine is inherently a binary classifier (multi-class problems are typically handled by decomposition strategies such as one-vs-rest).

SVM stands for Support Vector Machine: it looks for a hyperplane that separates the samples into two classes with the maximum margin.

Hard margin: if the samples are linearly separable, find the maximum margin under the constraint that every sample is classified correctly; with outliers or linearly inseparable data, a hard margin is infeasible.

Soft margin: allow some samples to fall inside the margin, or even on the wrong side, while still looking for the maximum margin; the goal is a good balance between keeping the margin wide and limiting margin violations.

Penalty coefficient: this balance is controlled by the penalty coefficient C. The smaller C is, the wider the margin and the more samples are misclassified; conversely, the larger C is, the narrower the margin and the fewer samples are misclassified.
API: `class sklearn.svm.LinearSVC(C=1.0)`
```python
from plot_util import plot_decision_boundary_svc, plot_decision_boundary
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
from sklearn.svm import LinearSVC

# Keep only the first two classes and the first two features
X, y = load_iris(return_X_y=True)
x = X[y < 2, :2]
y = y[y < 2]
plt.scatter(x[y == 0, 0], x[y == 0, 1], c='r')
plt.scatter(x[y == 1, 0], x[y == 1, 1], c='b')
plt.show()

# Feature preprocessing
transform = StandardScaler()
x_tran = transform.fit_transform(x)

# Model training (large C: narrow margin, fewer violations)
model = LinearSVC(C=30)
model.fit(x_tran, y)
y_pre = model.predict(x_tran)
print(accuracy_score(y, y_pre))

# Visualization
plot_decision_boundary_svc(model, axis=[-3, 3, -3, 3])
plt.scatter(x_tran[y == 0, 0], x_tran[y == 0, 1], c='r')
plt.scatter(x_tran[y == 1, 0], x_tran[y == 1, 1], c='b')
plt.show()

# Model training (small C: wide margin, more violations)
model = LinearSVC(C=0.01)
model.fit(x_tran, y)
y_pre = model.predict(x_tran)
print(accuracy_score(y, y_pre))

# Visualization
plot_decision_boundary_svc(model, axis=[-3, 3, -3, 3])
plt.scatter(x_tran[y == 0, 0], x_tran[y == 0, 1], c='r')
plt.scatter(x_tran[y == 1, 0], x_tran[y == 1, 1], c='b')
plt.show()
```
We want to find parameters $(w, b)$ whose hyperplane optimally separates the two sample sets.

The distance from any point $x$ in the sample space to the hyperplane $(w, b)$ can be written as $r = \frac{|w^{\mathrm{T}}x+b|}{\|w\|}$. Finding the separating hyperplane with the maximum margin means finding parameters $w$ and $b$ that satisfy the constraints below while maximizing the margin $\gamma$:

$$\begin{cases} w^{\mathrm{T}} x_{i} + b \geqslant +1, & y_{i} = +1; \\ w^{\mathrm{T}} x_{i} + b \leqslant -1, & y_{i} = -1. \end{cases}$$
The few training samples closest to the hyperplane make the equalities above hold; they are called "support vectors". The sum of the distances from two support vectors of opposite classes to the hyperplane (the maximum margin width) is $\frac{2}{\|w\|}$.
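As a quick check, the margin width follows directly from the distance formula above (a sketch of the step the notes skip):

```latex
% A positive support vector x_+ satisfies w^T x_+ + b = +1, so its distance is
%   r_+ = |w^T x_+ + b| / ||w|| = 1 / ||w||.
% A negative support vector x_- satisfies w^T x_- + b = -1, so r_- = 1 / ||w||.
% The margin is the sum of the two distances:
\gamma = r_+ + r_- = \frac{1}{\|w\|} + \frac{1}{\|w\|} = \frac{2}{\|w\|}
```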
Since the two constraint cases can be combined into $y_i(w^{\mathrm{T}}x_i+b) \geqslant 1$, the objective can be written as:

$$\max_{w, b} \frac{2}{\|w\|} \quad \text{s.t.} \quad y_i(w^{\mathrm{T}}x_i+b) \geqslant 1, \quad i = 1, 2, 3, \dots, m$$
Maximizing $\frac{2}{\|w\|}$ is equivalent to minimizing $\|w\|$, so the objective is usually rewritten in the more convenient quadratic form:

$$\min_{w, b} \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i(w^{\mathrm{T}}x_i+b) \geqslant 1, \quad i = 1, 2, 3, \dots, m$$
Adding a kernel (feature map $\Phi$), the objective becomes:

$$\min_{w, b} \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i\left(w^{\mathrm{T}}\Phi(x_i)+b\right) \geqslant 1, \quad i = 1, 2, \dots, n$$
Construct the Lagrangian, where the $\alpha_i$ are Lagrange multipliers (playing the role of $\lambda_i$):

$$L(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i = 1}^{n} \alpha_i\left(y_{i}\left(w^{\mathrm{T}} \Phi(x_{i}) + b\right) - 1\right) \quad \text{……①}$$
To obtain the minimum of the primal, the inner part over $\alpha$ takes its maximum; by duality the order of optimization can be swapped:

$$\min_{w, b}\max_{\alpha} L(w, b, \alpha) \Longleftrightarrow \max_{\alpha}\min_{w, b} L(w, b, \alpha)$$
To find $\min_{w, b} L(w, b, \alpha)$, first take the partial derivatives with respect to $w$ and $b$.
Take the partial derivative with respect to $w$ and set it to 0:

$$L = \frac{1}{2}\|w\|^2 - \sum_{i = 1}^n \alpha_i\left(y_i w^{\mathrm{T}} \Phi(x_i) + y_i b - 1\right) = \frac{1}{2}\|w\|^2 - \sum_{i = 1}^n \left(\alpha_i y_i w^{\mathrm{T}} \Phi(x_i) + \alpha_i y_i b - \alpha_i\right)$$

$$\frac{\partial L}{\partial w} = w - \sum_{i = 1}^n \alpha_i y_i \Phi(x_i) = 0$$

which gives: $w = \sum_{i = 1}^n \alpha_i y_i \Phi(x_i)$
Take the partial derivative with respect to $b$ and set it to 0:
$$L = \frac{1}{2}\|w\|^2 - \sum_{i = 1}^n \left(\alpha_i y_i w^{\mathrm{T}}\Phi(x_i) + \alpha_i y_i b - \alpha_i\right)$$

$$\frac{\partial L}{\partial b} = -\sum_{i = 1}^n \alpha_i y_i = 0$$

which gives: $\sum_{i = 1}^n \alpha_i y_i = 0$
Substituting the two results above into ① gives the dual problem:

$$\max_{\alpha} \sum_{i = 1}^n \alpha_i - \frac{1}{2}\sum_{i = 1}^n\sum_{j = 1}^n \alpha_i \alpha_j y_i y_j \Phi(x_i)^{\mathrm{T}}\Phi(x_j) \quad \text{s.t.} \quad \sum_{i = 1}^n \alpha_i y_i = 0, \; \alpha_i \geqslant 0$$
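The stationarity condition $w = \sum_i \alpha_i y_i \Phi(x_i)$ can be checked numerically with scikit-learn: for a linear kernel, `SVC` exposes `coef_` (the primal $w$), `dual_coef_` (the products $y_i\alpha_i$ for the support vectors), and `support_vectors_`, so both sides can be compared directly. A minimal sketch, not part of the original derivation:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

# Binary subproblem of iris (first two classes, first two features)
X, y = load_iris(return_X_y=True)
X, y = X[y < 2, :2], y[y < 2]

model = SVC(kernel='linear', C=1.0)
model.fit(X, y)

# w recovered from the dual solution: sum_i (y_i * alpha_i) * x_i
w_from_dual = model.dual_coef_ @ model.support_vectors_

# ...matches the primal weight vector found by the solver
print(np.allclose(model.coef_, w_from_dual))  # True
```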
Formula: $K(x, y) = e^{-\gamma\|x-y\|^2}$, where $\gamma = \frac{1}{2\sigma^2}$
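The formula can be checked against scikit-learn's implementation, `sklearn.metrics.pairwise.rbf_kernel`; a minimal sketch with made-up sample points:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

x = np.array([[1.0, 2.0]])
y = np.array([[2.0, 0.0]])
gamma = 0.5

# Direct evaluation of K(x, y) = exp(-gamma * ||x - y||^2)
k_manual = np.exp(-gamma * np.sum((x - y) ** 2))

# scikit-learn's pairwise RBF kernel
k_sklearn = rbf_kernel(x, y, gamma=gamma)[0, 0]

print(np.isclose(k_manual, k_sklearn))  # True
```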
API (a large $\gamma$ tends to overfit; a small $\gamma$ tends to underfit)
```python
from sklearn.svm import SVC
SVC(kernel='rbf', gamma=gamma)
```
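The effect of $\gamma$ can be illustrated on a noisy toy dataset (`make_moons` is an assumption here, not from the original notes): a large $\gamma$ makes the kernel very local, so the boundary nearly memorizes the training points, while a small $\gamma$ gives an almost linear, underfitted boundary:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.25, random_state=42)

# Large gamma: very local kernel, boundary hugs individual points (overfit)
acc_hi = SVC(kernel='rbf', gamma=100).fit(X, y).score(X, y)

# Small gamma: very smooth kernel, almost linear boundary (underfit)
acc_lo = SVC(kernel='rbf', gamma=0.01).fit(X, y).score(X, y)

# On the training set, the large-gamma model scores at least as high
print(acc_hi >= acc_lo)
```

Note that the high training accuracy of the large-$\gamma$ model is a symptom of overfitting, not of a better model; held-out accuracy would tell the real story.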