Multi-label Classification: Reproduction

Abstract

Existing theoretical results show that SA (Subset Accuracy) and HL (Hamming Loss) are conflicting measures.
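For reference, the standard definitions of the two measures are as follows (stated in the usual form; the paper's own notation appears in the images below). For a multi-label predictor $h = (h_1, \dots, h_c)$ and an example $(x, y)$ with $y \in \{-1, +1\}^c$:

$$
\mathrm{HL}(h; x, y) = \frac{1}{c}\sum_{j=1}^{c}\mathbb{1}\left[h_j(x) \neq y_j\right], \qquad
\mathrm{SA}(h; x, y) = \mathbb{1}\left[h(x) = y\right],
$$

and the subset (0/1) loss is $1 - \mathrm{SA}$. A single wrong label drops SA to $0$ but adds only $1/c$ to HL, which is the usual intuition behind treating them as conflicting measures.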

1 Introduction

An algorithm usually performs well on some measures while performing poorly on others. A few works have studied the behavior of various measures. Although they provide valuable insights, the generalization analysis of algorithms across different measures is still largely open. This paper focuses on kernel-based learning algorithms, which have been widely used for MLC, and is the first to provide generalization error bounds for these learning algorithms across the different measures in MLC.

The number of labels (i.e., $c$) plays an important role in the generalization error bounds.

3 Generalization analysis techniques

[Image 1]
[Image 2]

4 Learning guarantees between Hamming and Subset Loss

[Image 3]

4.1 Learning guarantees of Algorithm $A^h$

[Image 4]
[Image 5]

4.2 Learning guarantees of Algorithm $A^s$

[Image 6]

4.3 Comparison

For Hamming Loss, $A^h$ has a tighter bound than $A^s$; thus $A^h$ is expected to perform better than $A^s$ on Hamming Loss.

5 Learning guarantees between Hamming and Ranking Loss

[Image 7]

5.1 Learning guarantee of Algorithm $A^h$

[Image 8]
When $c$ is small, $A^h$ can have promising performance on Ranking Loss.
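For completeness, the (instance-wise) Ranking Loss referred to here is usually defined as follows, with $Y^{+} = \{j : y_j = +1\}$, $Y^{-} = \{j : y_j = -1\}$, and real-valued scores $f_j(x)$ (tie-handling conventions vary slightly between papers):

$$
\mathrm{RL}(f; x, y) = \frac{1}{|Y^{+}|\,|Y^{-}|}\sum_{p \in Y^{+}}\sum_{q \in Y^{-}} \mathbb{1}\left[f_p(x) \le f_q(x)\right].
$$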

5.2 Learning guarantee of Algorithm $A^r$

[Image 9]

6 Experiments

The goal of the experiments is to validate the paper's theoretical results rather than to demonstrate performance superiority. For $A^h$ and $A^s$, the authors use linear models with the hinge base loss function for simplicity.
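As a concrete illustration (a minimal sketch, not the authors' code), the two algorithms can be read as minimizing different aggregations of the per-label hinge loss with a linear model: an $A^h$-style objective averages the hinge losses over the $c$ labels (Hamming surrogate), while an $A^s$-style objective takes the maximum over labels (a common subset-loss surrogate). The exact objectives, regularization, and solver used in the paper may differ; all names below are illustrative.

```python
import numpy as np

def hinge(margins):
    """Element-wise hinge loss max(0, 1 - margin)."""
    return np.maximum(0.0, 1.0 - margins)

def train_linear_mlc(X, Y, surrogate="hamming", lam=1e-3, lr=0.01, epochs=200, seed=0):
    """Train a linear multi-label model W (d x c) with the hinge base loss.

    X: (n, d) features; Y: (n, c) labels in {-1, +1}.
    surrogate="hamming": average hinge loss over labels (A^h-style objective).
    surrogate="subset":  maximum hinge loss over labels  (A^s-style objective).
    Plain (sub)gradient descent with L2 regularization; illustrative only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    c = Y.shape[1]
    W = rng.normal(scale=0.01, size=(d, c))
    for _ in range(epochs):
        margins = Y * (X @ W)                   # (n, c): y_ij * <w_j, x_i>
        losses = hinge(margins)                 # (n, c)
        active = (margins < 1.0).astype(float)  # hinge subgradient indicator
        if surrogate == "hamming":
            weight = active / c                 # every label contributes 1/c
        else:  # "subset": only each example's worst label contributes
            worst = losses.argmax(axis=1)
            weight = np.zeros_like(losses)
            weight[np.arange(n), worst] = active[np.arange(n), worst]
        grad = -(X.T @ (weight * Y)) / n + lam * W
        W -= lr * grad
    return W

def predict(W, X):
    """Threshold the linear scores at zero to get labels in {-1, +1}."""
    return np.where(X @ W >= 0.0, 1, -1)
```

With `surrogate="hamming"` the objective decomposes into $c$ independent binary problems, whereas `surrogate="subset"` couples the labels through the per-example maximum, matching the intuition that the subset surrogate targets exact-match prediction.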

Table 2

[Image: Table 2]

Reproduction results for $A^h$:
image: $\textbf{0.192} \pm \textbf{0.010}$, enron: $\textbf{0.176} \pm \textbf{0.013}$, rcv1-subset1: $\textbf{0.109} \pm \textbf{0.002}$
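The evaluation measures themselves can be computed with standard scikit-learn utilities, as in the small example below (the exact train/test splits and thresholds behind the numbers above are shown in the images and may differ; the arrays here are toy data):

```python
import numpy as np
from sklearn.metrics import hamming_loss, accuracy_score, label_ranking_loss

# Toy data: ground truth and thresholded predictions in {0, 1}, plus real-valued scores.
Y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
Y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])
scores = np.array([[0.9, 0.2, 0.4], [0.1, 0.8, 0.3], [0.7, 0.4, 0.2]])

print("Hamming loss   :", hamming_loss(Y_true, Y_pred))        # fraction of wrongly predicted labels
print("Subset accuracy:", accuracy_score(Y_true, Y_pred))      # exact-match ratio
print("Ranking loss   :", label_ranking_loss(Y_true, scores))  # needs real-valued scores
```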

Table 3

[Image: Table 3]
Reproduction results:

$$
\begin{array}{l|ccc}
\text{Dataset} & \text{image} & \text{enron} & \text{rcv1-subset1} \\
\hline
A^h & \textbf{0.417} \pm \textbf{0.021} & \textbf{0.051} \pm \textbf{0.021} & \text{---} \\
A^s & \text{---} & \text{---} & \textbf{0.050} \pm \textbf{0.004}
\end{array}
$$
Some deviation from the reported numbers is normal when using the default settings.

Reflections

The fourth paper fully reproduced!
