Documents, papers, and code for NLP, covering topic models, word embeddings, named entity recognition, text classification, text generation, text similarity, machine translation, and more; implements a range of NLP-related algorithms on TensorFlow 2.0.


nlp journey

Your Journey to NLP Starts Here!

Fully embracing TensorFlow 2: all of the code has been rewritten for TensorFlow 2.0.

I. Fundamentals

II. Classic Books (Baidu Cloud; extraction code: txqx)

  1. An introduction to probabilistic graphical models. Book link
  2. Deep Learning. Essential reading for deep learning. Book link
  3. Neural Networks and Deep Learning. Essential introductory reading. Book link
  4. Speech and Language Processing (3rd ed., Stanford). Essential NLP reading. Book link

III. Must-Read Papers

01) Must-Read NLP Papers (a usage sketch follows the list)

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
  2. GPT: Improving Language Understanding by Generative Pre-Training by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
  3. GPT-2: Language Models are Unsupervised Multitask Learners by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
  4. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
  5. XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
  6. XLM: Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau.
  7. RoBERTa: Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
  8. DistilBERT: a distilled version of BERT: smaller, faster, cheaper and lighter by Victor Sanh, Lysandre Debut and Thomas Wolf.
  9. CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
  10. CamemBERT: a Tasty French Language Model by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
  11. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
  12. T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
  13. XLM-RoBERTa: Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
  14. MMBT: Supervised Multimodal Bitransformers for Classifying Images and Text by Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Davide Testuggine.
  15. FlauBERT: Unsupervised Language Model Pre-training for French by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
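
Most of the models above are available with pretrained TensorFlow 2 weights through the Hugging Face transformers library (a third-party package, not part of this repository). A minimal feature-extraction sketch, assuming transformers is installed and the bert-base-uncased checkpoint can be downloaded:

```python
# Minimal sketch: contextual embeddings from a pretrained BERT via the
# Hugging Face transformers library (an assumption; this repository does
# not bundle these weights). Requires: pip install transformers
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")

# Tokenize one sentence and run it through the encoder.
inputs = tokenizer("Your journey to NLP starts here!", return_tensors="tf")
outputs = model(inputs)

# One contextual vector per subword token: (batch, seq_len, hidden) = (1, n, 768).
print(outputs.last_hidden_state.shape)
```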

02) Model Optimization (a tf.keras sketch follows the list)

  1. LSTM (Long Short-Term Memory). Link
  2. Sequence to Sequence Learning with Neural Networks. Link
  3. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. Link
  4. Residual Networks (Deep Residual Learning for Image Recognition). Link
  5. Dropout (Improving neural networks by preventing co-adaptation of feature detectors). Link
  6. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Link
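
These techniques map directly onto tf.keras building blocks. A toy sketch (an illustration, not code from this repository) that wires an LSTM together with dropout, batch normalization, and a residual connection:

```python
import tensorflow as tf

def build_encoder(vocab_size=10000, dim=128, seq_len=64):
    """Toy encoder wiring together the ideas above: an LSTM with
    dropout, batch normalization, and a residual (skip) connection."""
    tokens = tf.keras.Input(shape=(seq_len,), dtype="int32")
    x = tf.keras.layers.Embedding(vocab_size, dim)(tokens)

    # LSTM with dropout on both input and recurrent connections.
    h = tf.keras.layers.LSTM(dim, return_sequences=True,
                             dropout=0.2, recurrent_dropout=0.2)(x)
    h = tf.keras.layers.BatchNormalization()(h)

    # Residual connection: add the embeddings back to the LSTM output.
    out = tf.keras.layers.Add()([x, h])
    return tf.keras.Model(tokens, out)

build_encoder().summary()
```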

03) Survey Papers

  1. An overview of gradient descent optimization algorithms. Link
  2. Analysis Methods in Neural Language Processing: A Survey. Link
  3. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Link
  4. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. Link
  5. A Gentle Introduction to Deep Learning for Graphs. Link

04) Text Pretraining (a skip-gram sketch follows the list)

  1. A Neural Probabilistic Language Model. Link
  2. word2vec Parameter Learning Explained. Link
  3. Language Models are Unsupervised Multitask Learners. Link
  4. An Empirical Study of Smoothing Techniques for Language Modeling. Link
  5. Efficient Estimation of Word Representations in Vector Space. Link
  6. Distributed Representations of Sentences and Documents. Link
  7. Enriching Word Vectors with Subword Information (FastText). Link. Explainer
  8. GloVe: Global Vectors for Word Representation. Website
  9. ELMo (Deep contextualized word representations). Link
  10. Pre-Training with Whole Word Masking for Chinese BERT. Link
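
To make the word2vec papers (items 2 and 5) concrete, here is a minimal skip-gram-with-negative-sampling sketch in tf.keras; the vocabulary size and embedding dimension are placeholders:

```python
import tensorflow as tf

VOCAB, DIM = 10000, 100  # placeholder sizes

# Separate embedding tables for center words and context words.
center_in = tf.keras.Input(shape=(1,), dtype="int32", name="center")
context_in = tf.keras.Input(shape=(1,), dtype="int32", name="context")
center = tf.keras.layers.Embedding(VOCAB, DIM)(center_in)
context = tf.keras.layers.Embedding(VOCAB, DIM)(context_in)

# Score each (center, context) pair by dot product and squash to (0, 1);
# train against label 1 for observed pairs, 0 for negative samples.
dot = tf.keras.layers.Dot(axes=-1)([center, context])
prob = tf.keras.layers.Activation("sigmoid")(tf.keras.layers.Flatten()(dot))

model = tf.keras.Model([center_in, context_in], prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
# (center, context, label) triples can be generated with
# tf.keras.preprocessing.sequence.skipgrams.
```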

05) Text Classification (a TextCNN sketch follows the list)

  1. Bag of Tricks for Efficient Text Classification (FastText). Link
  2. Convolutional Neural Networks for Sentence Classification. Link
  3. Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. Link
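
Item 2 (Kim's TextCNN) is compact enough to sketch directly: parallel convolutions over word embeddings with window sizes 3/4/5, max-over-time pooling, dropout, and a softmax classifier. Sizes other than the filter windows are placeholders:

```python
import tensorflow as tf

def text_cnn(vocab_size=20000, seq_len=100, dim=128, num_classes=2):
    """Sketch of Kim (2014): parallel convolutions with window sizes
    3/4/5 over word embeddings, max-over-time pooling, then softmax."""
    tokens = tf.keras.Input(shape=(seq_len,), dtype="int32")
    x = tf.keras.layers.Embedding(vocab_size, dim)(tokens)

    pooled = []
    for k in (3, 4, 5):  # n-gram window sizes from the paper
        c = tf.keras.layers.Conv1D(100, k, activation="relu")(x)
        pooled.append(tf.keras.layers.GlobalMaxPooling1D()(c))

    h = tf.keras.layers.Concatenate()(pooled)
    h = tf.keras.layers.Dropout(0.5)(h)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(h)
    return tf.keras.Model(tokens, out)

model = text_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```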

06) Text Generation (a decoding sketch follows the list)

  1. A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation. Link
  2. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. Link
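
Whatever the generator (a plain language model or the SeqGAN generator above), decoding usually reduces to repeatedly sampling the next token. A minimal temperature-sampling loop; `lm` here is a hypothetical stand-in for any trained next-token model:

```python
import numpy as np

def sample_ids(lm, seed_ids, steps=20, temperature=1.0):
    """Decode by temperature sampling. `lm` is a hypothetical stand-in
    for a trained model mapping token ids -> next-token probabilities."""
    ids = list(seed_ids)
    for _ in range(steps):
        probs = np.asarray(lm(ids), dtype=np.float64)
        logits = np.log(probs + 1e-9) / temperature  # <1 sharpens, >1 flattens
        p = np.exp(logits - logits.max())
        p /= p.sum()
        ids.append(int(np.random.choice(len(p), p=p)))
    return ids
```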

07) Text Similarity (a Siamese-network sketch follows the list)

  1. Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks. Link
  2. Learning Text Similarity with Siamese Recurrent Networks. Link
  3. A Deep Architecture for Matching Short Texts. Link
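
The core of item 2 (Siamese recurrent networks) is one shared encoder applied to both texts plus a distance on the resulting vectors. A minimal tf.keras sketch; the exponentiated negative L1 distance used here is one common choice, not necessarily the paper's:

```python
import tensorflow as tf

def siamese_lstm(vocab_size=20000, seq_len=50, dim=64):
    """Shared LSTM encoder for both inputs; similarity is the
    exponentiated negative L1 distance between the two encodings."""
    encoder = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, dim),
        tf.keras.layers.LSTM(dim),
    ])
    a = tf.keras.Input(shape=(seq_len,), dtype="int32")
    b = tf.keras.Input(shape=(seq_len,), dtype="int32")
    ha, hb = encoder(a), encoder(b)  # the same weights encode both sides

    # exp(-||ha - hb||_1) lies in (0, 1]; 1 means identical encodings.
    dist = tf.keras.layers.Lambda(
        lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]),
                                        axis=-1, keepdims=True))
    )([ha, hb])
    return tf.keras.Model([a, b], dist)

model = siamese_lstm()
model.compile(optimizer="adam", loss="binary_crossentropy")
```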

08) Question Answering

  1. A Question-Focused Multi-Factor Attention Network for Question Answering. Link
  2. The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. Link
  3. A Knowledge-Grounded Neural Conversation Model. Link
  4. Neural Generative Question Answering. Link
  5. Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots. Link
  6. Modeling Multi-turn Conversation with Deep Utterance Aggregation. Link
  7. Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. Link
  8. Deep Reinforcement Learning For Modeling Chit-Chat Dialog With Discrete Attributes. Link

09) Machine Translation (an attention sketch follows the list)

  1. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. Link
  2. Neural Machine Translation by Jointly Learning to Align and Translate. Link
  3. Transformer (Attention Is All You Need). Link
  4. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. Link
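
The heart of item 3 (Attention Is All You Need) is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, which fits in a few lines of TensorFlow:

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q Kᵀ / sqrt(d_k)) V, as in
    'Attention Is All You Need'. q, k, v: (..., seq_len, depth)."""
    scores = tf.matmul(q, k, transpose_b=True)        # (..., len_q, len_k)
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = scores / tf.math.sqrt(dk)
    if mask is not None:
        scores += mask * -1e9  # push masked positions toward -inf
    weights = tf.nn.softmax(scores, axis=-1)
    return tf.matmul(weights, v), weights

# Toy usage: self-attention over 4 random 8-dimensional vectors.
x = tf.random.normal((1, 4, 8))
out, attn = scaled_dot_product_attention(x, x, x)
print(out.shape, attn.shape)  # (1, 4, 8) (1, 4, 4)
```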

10) Automatic Summarization (a pointer-generator sketch follows the list)

  1. Get To The Point: Summarization with Pointer-Generator Networks. Link
  2. Deep Recurrent Generative Decoder for Abstractive Text Summarization. Link
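
The defining step of item 1 (pointer-generator networks) is mixing the decoder's vocabulary distribution with a copy distribution derived from attention. A minimal sketch of that step (ignoring the paper's extended vocabulary for out-of-source OOV words):

```python
import tensorflow as tf

def pointer_generator_dist(vocab_dist, attn_dist, p_gen, src_ids, vocab_size):
    """Final word distribution from 'Get To The Point':
    P(w) = p_gen * P_vocab(w) + (1 - p_gen) * (attention mass on w).
    Shapes: vocab_dist (batch, vocab), attn_dist (batch, src_len),
    p_gen (batch, 1), src_ids (batch, src_len) int token ids."""
    # Scatter attention mass onto the vocabulary via one-hot source ids.
    copy_dist = tf.reduce_sum(
        tf.one_hot(src_ids, vocab_size) * attn_dist[..., tf.newaxis], axis=1)
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist
```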

11) Relation Extraction

  1. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. Link
  2. Neural Relation Extraction with Multi-lingual Attention. Link
  3. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. Link
  4. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. Link

12) Recommender Systems (a two-tower sketch follows the list)

  1. Deep Neural Networks for YouTube Recommendations. Link
  2. Behavior Sequence Transformer for E-commerce Recommendation in Alibaba. Link
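
At retrieval time, item 1 (the YouTube DNN) behaves like a two-tower model: average the embeddings of watched items into a user vector and score candidates by dot product. A toy sketch with placeholder sizes:

```python
import tensorflow as tf

NUM_ITEMS, DIM, HIST = 50000, 64, 30  # placeholder sizes

history = tf.keras.Input(shape=(HIST,), dtype="int32")  # watched item ids
candidate = tf.keras.Input(shape=(1,), dtype="int32")   # item to score

item_emb = tf.keras.layers.Embedding(NUM_ITEMS, DIM)    # shared item table

# User tower: average the watch-history embeddings, then a small MLP.
u = tf.keras.layers.GlobalAveragePooling1D()(item_emb(history))
u = tf.keras.layers.Dense(DIM, activation="relu")(u)

# Item tower: the candidate item's embedding.
i = tf.keras.layers.Flatten()(item_emb(candidate))

# Relevance score = dot product of the two towers.
score = tf.keras.layers.Dot(axes=-1)([u, i])
model = tf.keras.Model([history, candidate], score)
```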

IV. Must-Read Blog Posts

  1. Applying for a machine learning engineer job? 12 basic interview questions you need to know. Link
  2. How to learn natural language processing (comprehensive edition). Link
  3. The Illustrated Transformer. Link
  4. Attention-based models. Link
  5. Modern Deep Learning Techniques Applied to Natural Language Processing. Link
  6. BERT explained. Link
  7. Unbelievable! LSTM and GRU have never been explained so clearly (animations + video). Link
  8. Optimization methods in deep learning. Link
  9. From language models to Seq2Seq: the Transformer is all about masks. Link
  10. Applying word2vec to Recommenders and Advertising. Link
  11. The big 2019 NLP roundup: papers, blogs, tutorials, and engineering progress. Link

V. Notable Related GitHub Projects

A tutorial

VI. Notable Related Blogs
