GitHub Project: A Curated Collection of Natural Language Processing Resources

Coreference Resolution
Info: Stanford NLP coreference resolution project page (a CoreNLP server sketch follows below): https://nlp.stanford.edu/projects/coref.shtml
Paper: Deep Reinforcement Learning for Mention-Ranking Coreference Models: https://arxiv.org/abs/1609.08667
Paper: Improving Coreference Resolution by Learning Entity-Level Distributed Representations: https://arxiv.org/abs/1606.01323
Challenge: CoNLL 2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes: http://conll.cemantix.org/2012/task-description.html
Challenge: CoNLL 2011 Shared Task: Modeling Unrestricted Coreference in OntoNotes: http://conll.cemantix.org/2011/task-description.html

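The Stanford page above documents the CoreNLP coreference annotators. As a rough illustration (not part of the original list), the sketch below queries a locally running CoreNLP server for coreference chains; the server address, port 9000, and the example sentence are assumptions, and the server must be started separately with coreference support.

```python
import json
import requests  # assumes a CoreNLP server is already running on localhost:9000

text = "Barack Obama was born in Hawaii. He was elected president in 2008."

# Request the coreference annotator together with the annotators it depends on.
props = {
    "annotators": "tokenize,ssplit,pos,lemma,ner,depparse,coref",
    "outputFormat": "json",
}
resp = requests.post(
    "http://localhost:9000",
    params={"properties": json.dumps(props)},
    data=text.encode("utf-8"),
)
ann = resp.json()

# Each coreference chain maps a chain id to a list of mentions.
for chain_id, mentions in ann.get("corefs", {}).items():
    print(chain_id, [m["text"] for m in mentions])
```
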
Grammatical Error Correction
Paper: Neural Network Translation Models for Grammatical Error Correction: https://arxiv.org/abs/1606.00189
Challenge: CoNLL 2013 Shared Task: Grammatical Error Correction: http://www.comp.nus.edu.sg/~nlp/conll13st.html
Challenge: CoNLL 2014 Shared Task: Grammatical Error Correction: http://www.comp.nus.edu.sg/~nlp/conll14st.html
Resource: NUS Non-commercial Research/Trial Corpus License: http://www.comp.nus.edu.sg/~nlp/conll14st/nucle_license.pdf
Resource: Lang-8 Learner Corpora: http://cl.naist.jp/nldata/lang-8/
Resource: Cornell Movie–Dialogs Corpus: http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html
Project: Deep Text Corrector: https://github.com/atpaino/deep-text-corrector
Product: deep grammar: http://deepgrammar.com/

Grapheme-to-Phoneme Conversion
Paper: Grapheme-to-Phoneme Models for (Almost) Any Language: https://pdfs.semanticscholar.org/b9c8/fef9b6f16b92c6859f6106524fdb053e9577.pdf
Paper: Polyglot Neural Language Models: A Case Study in Cross-Lingual Phonetic Representation Learning: https://arxiv.org/pdf/1605.03832.pdf
Paper: Multitask Sequence-to-Sequence Models for Grapheme-to-Phoneme Conversion: https://pdfs.semanticscholar.org/26d0/09959fa2b2e18cddb5783493738a1c1ede2f.pdf
Project: Sequence-to-Sequence G2P Toolkit: https://github.com/cmusphinx/g2p-seq2seq
Resource: Multilingual Pronunciation Data: https://drive.google.com/drive/folders/0B7R_gATfZJ2aWkpSWHpXUklWUmM

Lip Reading
Wikipedia: Lip Reading: https://en.wikipedia.org/wiki/Lip_reading
Paper: Lip Reading Sentences in the Wild: https://arxiv.org/abs/1611.05358
Paper: 3D Convolutional Neural Networks for Cross Audio-Visual Matching Recognition: https://arxiv.org/abs/1706.05739
Project: Lip Reading – Cross Audio-Visual Recognition using 3D Convolutional Neural Networks: https://github.com/astorfi/lip-reading-deeplearning
Resource: The GRID Audiovisual Sentence Corpus: http://spandh.dcs.shef.ac.uk/gridcorpus/

Machine Translation
Paper: Neural Machine Translation by Jointly Learning to Align and Translate: https://arxiv.org/abs/1409.0473
Paper: Neural Machine Translation in Linear Time: https://arxiv.org/abs/1610.10099
Challenge: ACL 2014 Ninth Workshop on Statistical Machine Translation: http://www.statmt.org/wmt14/translation-task.html#download
Resource: OpenSubtitles2016: http://opus.lingfil.uu.se/OpenSubtitles2016.php
Resource: WIT3: Web Inventory of Transcribed and Translated Talks: https://wit3.fbk.eu/
Resource: The QCRI Educational Domain (QED) Corpus: http://alt.qcri.org/resources/qedcorpus/

Paraphrase Detection
Paper: Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.650.7199&rep=rep1&type=pdf
Project: Paralex: Paraphrase-Driven Learning for Open Question Answering: http://knowitall.cs.washington.edu/paralex/
Resource: Microsoft Research Paraphrase Corpus: https://www.microsoft.com/en-us/download/details.aspx?id=52398
Resource: Microsoft Research Video Description Corpus: https://www.microsoft.com/en-us/download/details.aspx?id=52422&from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fdownloads%2F38cf15fd-b8df-477e-a4e4-a4680caa75af%2F
Resource: Pascal Dataset: http://nlp.cs.illinois.edu/HockenmaierGroup/pascal-sentences/index.html
Resource: Flickr Dataset: http://nlp.cs.illinois.edu/HockenmaierGroup/8k-pictures.html
Resource: The SICK Data Set: http://clic.cimec.unitn.it/composes/sick.html
Resource: PPDB: The Paraphrase Database: http://www.cis.upenn.edu/~ccb/ppdb/
Resource: WikiAnswers Paraphrase Corpus: http://knowitall.cs.washington.edu/paralex/wikianswers-paraphrases-1.0.tar.gz

Parsing
Wikipedia: Parsing: https://en.wikipedia.org/wiki/Parsing
Toolkit: The Stanford Parser: A Statistical Parser: https://nlp.stanford.edu/software/lex-parser.shtml
Toolkit: spaCy parser (a minimal usage sketch follows below): https://spacy.io/docs/usage/dependency-parse
Paper: A Fast and Accurate Dependency Parser Using Neural Networks: http://www.aclweb.org/anthology/D14-1082
Challenge: CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies: http://universaldependencies.org/conll17/
Challenge: CoNLL 2016 Shared Task: Multilingual Shallow Discourse Parsing: http://www.cs.brandeis.edu/~clp/conll16st/

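To make the spaCy entry concrete, here is a minimal dependency-parsing sketch. It assumes spaCy is installed together with the small English model en_core_web_sm; the example sentence is arbitrary.

```python
import spacy

# Assumes the small English model has been installed, e.g.:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("The Stanford parser produces constituency and dependency trees.")

# Print each token with its dependency label and syntactic head.
for token in doc:
    print(f"{token.text:<12} {token.dep_:<10} head={token.head.text}")
```
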
Pinyin-to-Chinese Conversion
Paper: Neural Network Language Model for Chinese Pinyin Input Method Engine: http://aclweb.org/anthology/Y15-1052
Project: Neural Chinese Transliterator: https://github.com/Kyubyong/neural_chinese_transliterator

Question Answering
Wikipedia: Question Answering: https://en.wikipedia.org/wiki/Question_answering
Paper: Ask Me Anything: Dynamic Memory Networks for Natural Language Processing: http://www.thespermwhale.com/jaseweston/ram/papers/paper_21.pdf
Paper: Dynamic Memory Networks for Visual and Textual Question Answering: http://proceedings.mlr.press/v48/xiong16.pdf
Challenge: TREC Question Answering Task: http://trec.nist.gov/data/qamain.html
Challenge: SemEval-2017 Task 3: Community Question Answering: http://alt.qcri.org/semeval2017/task3/
Resource: MSMARCO: Microsoft MAchine Reading COmprehension Dataset: http://www.msmarco.org/
Resource: Maluuba NewsQA: https://github.com/Maluuba/newsqa
Resource: SQuAD: 100,000+ Questions for Machine Comprehension of Text (a loading sketch follows below): https://rajpurkar.github.io/SQuAD-explorer/
Resource: Graph Questions: A Characteristic-rich Question Answering Dataset: https://github.com/ysu1989/GraphQuestions
Resource: Story Cloze Test and ROCStories Corpora: http://cs.rochester.edu/nlp/rocstories/
Resource: Microsoft Research WikiQA Corpus: https://www.microsoft.com/en-us/download/details.aspx?id=52419&from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fdownloads%2F4495da01-db8c-4041-a7f6-7984a4f6a905%2Fdefault.aspx
Resource: DeepMind Q&A Dataset: http://cs.nyu.edu/~kcho/DMQA/
Resource: QASent: http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz

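To make the SQuAD entry concrete, the sketch below walks the v1.1 JSON layout (articles, paragraphs, question/answer pairs). The local filename train-v1.1.json is an assumption; the field names follow the published SQuAD format.

```python
import json

# Assumes a local copy of the SQuAD v1.1 training file downloaded
# from the SQuAD explorer page listed above.
with open("train-v1.1.json", encoding="utf-8") as f:
    squad = json.load(f)

# Articles -> paragraphs -> question/answer pairs.
for article in squad["data"][:1]:
    for paragraph in article["paragraphs"][:1]:
        context = paragraph["context"]
        for qa in paragraph["qas"][:3]:
            answer = qa["answers"][0]
            print(qa["question"])
            print("  ->", answer["text"], "(char offset", answer["answer_start"], ")")
```
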
Relationship Extraction
Wikipedia: Relationship Extraction: https://en.wikipedia.org/wiki/Relationship_extraction
Paper: A Deep Learning Approach for Relationship Extraction from Interaction Context in Social Manufacturing Paradigm: http://www.sciencedirect.com/science/article/pii/S0950705116001210

Semantic Role Labeling
Wikipedia: Semantic Role Labeling: https://en.wikipedia.org/wiki/Semantic_role_labeling
Book: Semantic Role Labeling: https://www.amazon.com/Semantic-Labeling-Synthesis-Lectures-Technologies/dp/1598298313/ref=sr_1_1?s=books&ie=UTF8&qid=1507776173&sr=1-1&keywords=Semantic+Role+Labeling
Paper: End-to-end Learning of Semantic Role Labeling Using Recurrent Neural Networks: http://www.aclweb.org/anthology/P/P15/P15-1109.pdf
Paper: Neural Semantic Role Labeling with Dependency Path Embeddings: https://arxiv.org/abs/1605.07515
Challenge: CoNLL-2005 Shared Task: Semantic Role Labeling: http://www.cs.upc.edu/~srlconll/st05/st05.html
Challenge: CoNLL-2004 Shared Task: Semantic Role Labeling: http://www.cs.upc.edu/~srlconll/st04/st04.html
Toolkit: Illinois Semantic Role Labeler (SRL): http://cogcomp.org/page/software_view/SRL
Resource: CoNLL-2005 Shared Task: Semantic Role Labeling: http://www.cs.upc.edu/~srlconll/soft.html

Sentence Boundary Disambiguation
Wikipedia: Sentence Boundary Disambiguation: https://en.wikipedia.org/wiki/Sentence_boundary_disambiguation
Paper: A Quantitative and Qualitative Evaluation of Sentence Boundary Detection for the Clinical Domain: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5001746/
Toolkit: NLTK Tokenizers (a sentence-splitting sketch follows below): http://www.nltk.org/_modules/nltk/tokenize.html
Resource: The British National Corpus: http://www.natcorp.ox.ac.uk/
Resource: Switchboard-1 Telephone Speech Corpus: https://catalog.ldc.upenn.edu/ldc97s62

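To make the NLTK entry concrete, here is a minimal sentence-boundary sketch using the pretrained Punkt tokenizer; the example text is arbitrary.

```python
import nltk
from nltk.tokenize import sent_tokenize

# One-time download of the pretrained Punkt sentence tokenizer model.
nltk.download("punkt")
# On newer NLTK releases the tokenizer tables may live in "punkt_tab" instead:
# nltk.download("punkt_tab")

text = ("Dr. Smith arrived at 9 a.m. He reviewed the results. "
        "The follow-up is scheduled for Jan. 5.")

# Punkt handles many abbreviation-related ambiguities out of the box.
for sentence in sent_tokenize(text):
    print(sentence)
```
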
Source Separation
Wikipedia: Source Separation: https://en.wikipedia.org/wiki/Source_separation
Paper: From Blind to Guided Audio Source Separation: https://hal-univ-rennes1.archives-ouvertes.fr/hal-00922378/document
Paper: Joint Optimization of Masks and Deep Recurrent Neural Networks for Monaural Source Separation: https://arxiv.org/abs/1502.04149
Challenge: Signal Separation Evaluation Campaign: https://sisec.inria.fr/
Challenge: CHiME Speech Separation and Recognition Challenge: http://spandh.dcs.shef.ac.uk/chime_challenge/

Speaker Recognition
Wikipedia: Speaker Recognition: https://en.wikipedia.org/wiki/Speaker_recognition
Paper: A Novel Scheme for Speaker Recognition Using a Phonetically-Aware Deep Neural Network: https://pdfs.semanticscholar.org/204a/ff8e21791c0a4113a3f75d0e6424a003c321.pdf
Paper: Deep Neural Networks for Small Footprint Text-Dependent Speaker Verification: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41939.pdf
Challenge: NIST Speaker Recognition Evaluation: https://www.nist.gov/itl/iad/mig/speaker-recognition

Speech Segmentation
Wikipedia: Speech Segmentation: https://en.wikipedia.org/wiki/Speech_segmentation
Paper: Word Segmentation by 8-Month-Olds: When Speech Cues Count More Than Statistics: http://www.utm.toronto.edu/infant-child-centre/sites/files/infant-child-centre/public/shared/elizabeth-johnson/Johnson_Jusczyk.pdf
Paper: Unsupervised Word Segmentation and Lexicon Discovery Using Acoustic Word Embeddings: https://arxiv.org/abs/1603.02845
Resource: CALLHOME Spanish Speech: https://catalog.ldc.upenn.edu/ldc96s35

Terminology Extraction
Wikipedia: Terminology Extraction: https://en.wikipedia.org/wiki/Terminology_extraction
Paper: Neural Attention Models for Sequence Classification: Analysis and Application to Key Term Extraction and Dialogue Act Detection: https://arxiv.org/pdf/1604.00077.pdf

Text Simplification
Wikipedia: Text Simplification: https://en.wikipedia.org/wiki/Text_simplification
Paper: Aligning Sentences from Standard Wikipedia to Simple Wikipedia: https://ssli.ee.washington.edu/~hannaneh/papers/simplification.pdf
Paper: Problems in Current Text Simplification Research: New Data Can Help: https://pdfs.semanticscholar.org/2b8d/a013966c0c5e020ebc842d49d8ed166c8783.pdf
Resource: Newsela Data: https://newsela.com/data/

Textual Entailment
Wikipedia: Textual Entailment: https://en.wikipedia.org/wiki/Textual_entailment
Project: Textual Entailment with TensorFlow: https://github.com/Steven-Hewitt/Entailment-with-Tensorflow
Competition: SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge: https://www.cs.york.ac.uk/semeval-2013/task7.html

Transliteration
Wikipedia: Transliteration: https://en.wikipedia.org/wiki/Transliteration
Paper: A Deep Learning Approach to Machine Transliteration: https://pdfs.semanticscholar.org/54f1/23122b8dd1f1d3067cf348cfea1276914377.pdf
Project: Neural Japanese Transliteration - can you do better than the SwiftKey™ Keyboard?: https://github.com/Kyubyong/neural_japanese_transliterator

Word Embeddings
Wikipedia: Word Embedding: https://en.wikipedia.org/wiki/Word_embedding
Toolkit: Gensim word2vec (a training sketch follows below): https://radimrehurek.com/gensim/models/word2vec.html
Toolkit: fastText: https://github.com/facebookresearch/fastText
Toolkit: GloVe: Global Vectors for Word Representation: https://nlp.stanford.edu/projects/glove/
Info: Where to get a pretrained model?: https://github.com/3Top/word2vec-api
Project: Pre-trained word vectors of 30+ languages: https://github.com/Kyubyong/wordvectors
Project: Polyglot: Distributed word representations for multilingual NLP: https://sites.google.com/site/rmyeid/projects/polyglot

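To make the Gensim entry concrete, the sketch below trains a tiny word2vec model on a toy corpus and queries it. Parameter names follow Gensim 4.x (earlier releases use size instead of vector_size); the toy sentences are illustration-only assumptions.

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (a real corpus would be far larger).
sentences = [
    ["natural", "language", "processing", "is", "fun"],
    ["word", "embeddings", "capture", "distributional", "similarity"],
    ["language", "models", "predict", "the", "next", "word"],
]

# Train a small skip-gram model (sg=1); vector_size is the embedding dimension.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

# Query the learned vectors.
print(model.wv["language"][:5])                    # first few dimensions of one vector
print(model.wv.most_similar("language", topn=3))   # nearest neighbours in the toy space
```
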
Word Prediction
Info: What is Word Prediction?: http://www2.edc.org/ncip/library/wp/what_is.htm
Paper: The Prediction of Character Based on Recurrent Neural Network Language Model: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7960065
Paper: An Embedded Deep Learning based Word Prediction: https://arxiv.org/abs/1707.01662
Paper: Evaluating Word Prediction: Framing Keystroke Savings: http://aclweb.org/anthology/P08-2066
Resource: An Embedded Deep Learning based Word Prediction: https://github.com/Meinwerk/WordPrediction/master.zip
Project: Word Prediction using Convolutional Neural Networks - can you do better than the iPhone™ Keyboard?: https://github.com/Kyubyong/word_prediction

Word Segmentation
Paper: Neural Word Segmentation Learning for Chinese: https://arxiv.org/abs/1606.04300
Project: Convolutional Neural Network for Chinese Word Segmentation: https://github.com/chqiwang/convseg
Toolkit: Stanford Word Segmenter: https://nlp.stanford.edu/software/segmenter.html
Toolkit: NLTK Tokenizers: http://www.nltk.org/_modules/nltk/tokenize.html

Word Sense Disambiguation
Wikipedia: Word-Sense Disambiguation (a simple Lesk baseline sketch follows below): https://en.wikipedia.org/wiki/Word-sense_disambiguation
Paper: Train-O-Matic: Large-Scale Supervised Word Sense Disambiguation in Multiple Languages without Manual Training Data: http://www.aclweb.org/anthology/D17-1008
Resource: Train-O-Matic Data: http://trainomatic.org/data/train-o-matic-data.zip
Resource: BabelNet: http://babelnet.org/
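
The paper above targets large-scale supervised WSD. As a much simpler baseline (not the Train-O-Matic method), NLTK ships a simplified Lesk algorithm over WordNet; the sketch below assumes NLTK with its wordnet and punkt data, and the example sentence is arbitrary.

```python
import nltk
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

# One-time downloads of the required corpora/models.
nltk.download("wordnet")
nltk.download("punkt")

sentence = "I went to the bank to deposit my money"
tokens = word_tokenize(sentence)

# Simplified Lesk picks the WordNet sense whose gloss overlaps the context most.
sense = lesk(tokens, "bank", pos="n")
print(sense, "-", sense.definition())
```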

Original project: https://github.com/Kyubyong/nlp_tasks#speech-segmentation

This article is a translated compilation; please credit the source when reposting. For more content, follow the WeChat official account atyun_com.

