Description | Paper |
Early deep learning framework for NLP | A unified architecture for natural language processing: deep neural networks with multitask learning |
Topic models: LDA | Latent Dirichlet Allocation |
Conditional random fields | Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data |
word2vec | Efficient Estimation of Word Representations in Vector Space |
GloVe | GloVe: Global Vectors for Word Representation |
ELMo | Deep Contextualized Word Representations https://arxiv.org/pdf/1802.05365.pdf |
CNNs in NLP | Convolutional Neural Networks for Sentence Classification |
RNN-based seq2seq | Sequence to Sequence Learning with Neural Networks; Neural Machine Translation by Jointly Learning to Align and Translate |
Attention is all you need (absolute classic) | Attention Is All You Need |
BERT (key paper) | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding |
Transformer-XL | Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context (arXiv 2019) |
Long-document transformers | Longformer: The Long-Document Transformer |
BERT compression | TinyBERT: Distilling BERT for Natural Language Understanding; DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter; ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
Incorporating knowledge graph information | K-BERT: Enabling Language Representation with Knowledge Graph https://arxiv.org/pdf/1909.07606.pdf; ERNIE: Enhanced Representation through Knowledge Integration https://arxiv.org/pdf/1904.09223.pdf |
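The word2vec entry above trains embeddings by predicting context words. A minimal NumPy sketch of skip-gram with negative sampling (the toy corpus, embedding size, and uniform negative sampling are all simplifications; the paper samples negatives from a smoothed unigram distribution):

```python
import numpy as np

# Toy corpus (hypothetical; real word2vec trains on billions of tokens)
corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 16

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))    # center-word ("input") embeddings
W_out = rng.normal(0, 0.1, (V, D))   # context-word ("output") embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, neg_k = 0.05, 2, 3
for _ in range(100):
    for i, center in enumerate(corpus):
        c = w2i[center]
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j == i:
                continue
            # One positive (observed) pair plus k random negative samples
            targets = [(w2i[corpus[j]], 1.0)] + \
                      [(int(n), 0.0) for n in rng.integers(0, V, neg_k)]
            for t, label in targets:
                g = sigmoid(W_in[c] @ W_out[t]) - label  # logistic-loss gradient
                d_out = g * W_in[c]                      # grad wrt W_out[t] (old values)
                W_in[c] -= lr * g * W_out[t]
                W_out[t] -= lr * d_out
```

After training, `W_in` holds the word vectors; GloVe arrives at similar vectors by factorizing global co-occurrence statistics instead of streaming windows.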
[Youtube-DNN] Deep Neural Networks for YouTube Recommendations (Google 2016, a very classic paper)
[Pinterest] Graph Convolutional Neural Networks for Web-Scale Recommender Systems (Pinterest 2018)
[DL Recsys Intro] Deep Learning based Recommender System: A Survey and New Perspectives (UNSW 2018)
Retrieval (recall):
[DSSM two-tower model] Learning Deep Structured Semantic Models for Web Search using Clickthrough Data (UIUC 2013)
[TDM] Learning Tree-based Deep Model for Recommender Systems(Alibaba 2018)
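DSSM above introduced the two-tower pattern that dominates retrieval: user and item are encoded by independent networks and scored by a dot product, so item embeddings can be pre-indexed for nearest-neighbor lookup. A toy sketch (random features and layer sizes are hypothetical; DSSM itself uses letter-trigram inputs):

```python
import numpy as np

rng = np.random.default_rng(42)

def tower(x, weights):
    # Each tower is a small MLP ending in an L2-normalized embedding
    for W in weights:
        x = np.tanh(x @ W)
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical sizes: 32-d raw features -> 16 hidden -> 8-d embedding
user_w = [rng.normal(0, 0.5, (32, 16)), rng.normal(0, 0.5, (16, 8))]
item_w = [rng.normal(0, 0.5, (32, 16)), rng.normal(0, 0.5, (16, 8))]

user = tower(rng.normal(size=(1, 32)), user_w)      # one user request
items = tower(rng.normal(size=(1000, 32)), item_w)  # candidate corpus

# Cosine similarity = dot product of normalized embeddings; at serving
# time the argsort is replaced by an approximate-nearest-neighbor index
scores = items @ user.ravel()
top_k = np.argsort(-scores)[:10]
```

TDM and Deep Retrieval (listed later) replace the flat nearest-neighbor index with learned tree or path structures to escape the inner-product constraint.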
Ranking:
[ESMM] Entire Space Multi-Task Model - An Effective Approach for Estimating Post-Click Conversion Rate (Alibaba 2018, multi-task)
[MMOE] Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts (Google 2018, widely adopted across major companies; multi-task)
[DIEN] Deep Interest Evolution Network for Click-Through Rate Prediction (Alibaba 2019)
[DIN] Deep Interest Network for Click-Through Rate Prediction (Alibaba 2018)
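DIN's core mechanism is attention pooling: each historical behavior is weighted by its relevance to the candidate ad before summing, so the interest vector adapts per candidate. A simplified sketch (the real model learns the weights with a small MLP over concatenated/interacted vectors; a dot-product softmax stands in for it here, and all embeddings are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
behaviors = rng.normal(size=(20, D))  # user's historical item embeddings
candidate = rng.normal(size=(D,))     # candidate ad embedding

# Relevance of each behavior to the candidate -> softmax weights
logits = behaviors @ candidate
weights = np.exp(logits - logits.max())
weights /= weights.sum()

# Attention-pooled user interest vector, fed to the downstream CTR MLP
user_interest = weights @ behaviors
```

DIEN extends this by modeling how the weighted interests evolve over time with a GRU-based sequence layer.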
Others:
[tutorial] Learning to Rank for Information Retrieval (Microsoft 2010, Tie-Yan Liu's classic survey)
[DeepWalk] DeepWalk: Online Learning of Social Representations(2014)
[item2vec] Item2Vec: Neural Item Embedding for Collaborative Filtering (Microsoft 2016)
[node2vec] node2vec: Scalable Feature Learning for Networks(2016)
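DeepWalk and node2vec both reduce graph embedding to word2vec: generate truncated random walks over the graph and feed them to skip-gram as if they were sentences (node2vec additionally biases the walk with return/in-out parameters). The walk-generation step, on a hypothetical toy graph:

```python
import random

# Toy undirected graph as adjacency lists (hypothetical example)
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}

def random_walk(g, start, length, rng):
    # Uniform random walk of at most `length` nodes (DeepWalk's sampler;
    # node2vec would reweight neighbors by its p/q parameters here)
    walk = [start]
    for _ in range(length - 1):
        nbrs = g[walk[-1]]
        if not nbrs:
            break
        walk.append(rng.choice(nbrs))
    return walk

rng = random.Random(7)
# Several truncated walks per node, later treated as word2vec "sentences"
walks = [random_walk(graph, v, 10, rng) for v in graph for _ in range(5)]
```

The resulting `walks` corpus is exactly what item2vec-style skip-gram training consumes, which is why these graph methods inherit word2vec's scalability.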
Description | Paper |
Few-shot learning | Multi-Label Few-Shot Learning for Aspect Category Detection https://arxiv.org/abs/2105.1417; Few-Shot Text Ranking with Meta Adapted Synthetic Weak Supervision https://arxiv.org/abs/2012.1486; Generalizing from a Few Examples: A Survey on Few-Shot Learning (few-shot learning survey) |
NER | Named Entity Recognition with Small Strongly Labeled and Large Weakly Labeled Data https://arxiv.org/abs/2106.0897; Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker https://arxiv.org/abs/2105.1492 |
Dialogue | Answering Ambiguous Questions through Generative Evidence Fusion and Round-Trip Prediction https://arxiv.org/abs/2011.1313 |
Generation | Prefix-Tuning: Optimizing Continuous Prompts for Generation https://arxiv.org/abs/2101.0019 |
Summarization | Cross-Lingual Abstractive Summarization with Limited Parallel Resources https://arxiv.org/abs/2105.1364; Long-Span Summarization via Local Attention and Content Selection https://arxiv.org/abs/2105.0380 |
Pre-trained models | Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling; NEZHA: Neural Contextualized Representation for Chinese Language Understanding; ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding; ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
Representation learning | DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations https://arxiv.org/abs/2006.03659 (contrastive learning); ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer https://arxiv.org/abs/2105.1174; Self-Guided Contrastive Learning for BERT Sentence Representations https://arxiv.org/abs/2106.0734 |
Knowledge graphs | Dynamic Knowledge Graph Construction for Zero-shot Commonsense Question Answering; Case-based Reasoning for Natural Language Queries over Knowledge Bases https://arxiv.org/pdf/2104.08762.pdf |
[Graph learning] Graph Learning Approaches to Recommender Systems: A Review(2021)
Retrieval (recall):
[JTM] Joint Optimization of Tree-based Index and Deep Model for Recommender Systems(Alibaba 2019)
[Deep Retrieval] Deep Retrieval: Learning A Retrievable Structure for Large-Scale Recommendations (ByteDance 2021)
Ranking:
[PLE] Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations (Tencent 2020, widely followed by major companies over the past year; multi-task)
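MMoE (listed earlier) and PLE both route shared experts through per-task softmax gates; PLE additionally adds task-specific experts and stacks the extraction layers progressively. A minimal MMoE-style forward pass with hypothetical dimensions and random weights:

```python
import numpy as np

rng = np.random.default_rng(1)
D, E, H, T = 16, 3, 8, 2  # input dim, num experts, expert hidden dim, num tasks

experts = [rng.normal(0, 0.3, (D, H)) for _ in range(E)]
gates = [rng.normal(0, 0.3, (D, E)) for _ in range(T)]  # one gate per task
heads = [rng.normal(0, 0.3, (H, 1)) for _ in range(T)]  # per-task output head

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

x = rng.normal(size=(4, D))  # a batch of 4 samples
# All tasks share the same expert outputs: shape (batch, E, H)
expert_out = np.stack([np.tanh(x @ W) for W in experts], axis=1)

outputs = []
for t in range(T):
    w = softmax(x @ gates[t])                    # per-sample expert weights (batch, E)
    mixed = (w[:, :, None] * expert_out).sum(1)  # task-specific mixture (batch, H)
    outputs.append(mixed @ heads[t])             # task prediction (batch, 1)
```

The "multi-gate" part is that each task computes its own `w`, so tasks can disagree on which experts matter; PLE's refinement is to keep some experts private to each task so negative transfer is further reduced.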
Others:
[KFAtt] Kalman Filtering Attention for User Behavior Modeling in CTR Prediction (JD 2020)
[SIM] Search-based User Interest Modeling with Lifelong Sequential Behavior Data for Click-Through Rate Prediction (Alibaba 2020)
[BST] Behavior Sequence Transformer for E-commerce Recommendation in Alibaba(Alibaba 2019)
[GIN] Graph Intention Network for Click-through Rate Prediction in Sponsored Search(Alibaba 2019)
A Survey of Prompt-Based Task Reformulation in NLP | Thinkwee's Blog
https://github.com/km1994/nlp_paper_study
GitHub - km1994/NLP-Interview-Notes: Study notes and materials for natural language processing (NLP) interview preparation, compiled by the authors from their own interviews and experience; currently includes accumulated interview questions across the NLP subfields.
GitHub - shenweichen/AlgoNotes: Article collection from the WeChat account 浅梦学习笔记, covering ranking & CXR prediction, retrieval/matching, user profiling & feature engineering, recommendation & search, computational advertising, big data, graph algorithms, NLP & CV, and job-interview material.
GitHub - datawhalechina/fun-rec: A recommendation-algorithm tutorial aimed at students with a machine-learning background who are targeting recommendation-algorithm roles; it consists of fundamentals, an introductory competition, a news-recommendation project, and interview write-ups, forming a complete loop from basics to practice to interviews.
Kuaishou recommendation-algorithm internship interview experience — Interview Experiences, Nowcoder (牛客网)
One year into industry: Xiaomi algorithm interview experience (recommendation algorithms) — Interview Experiences, Nowcoder
Meituan recommendation-algorithm summer internship interview experience — Interview Experiences, Nowcoder
Autumn-recruiting recap: a non-ML-major student's long journey to landing an algorithm engineer offer — Interview Experiences, Nowcoder
Autumn-recruiting algorithm interview summary, giving back to Nowcoder — Interview Experiences, Nowcoder
An ordinary candidate's NLP summer internship interview experience — Interview Experiences, Nowcoder
NLP interview experience, giving back — Interview Experiences, Nowcoder
NLP and machine-learning interview experience, giving back to Nowcoder — Interview Experiences, Nowcoder