Learning when to skim and when to read

Do we always need human-level accuracy on real-world data, or can we sometimes make do with less? In this blog post we explore how a fast baseline can decide which sentences are easy and which are difficult. By running expensive classifiers only on the difficult sentences, we can save computation.
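
To make the idea concrete, here is a minimal sketch of this kind of confidence-based routing. It is not the post's exact method: `cheap_model` and `expensive_model` are hypothetical stand-ins, and the threshold is illustrative.

```python
# Confidence-based routing sketch: a cheap model handles sentences it is
# confident about ("skim"); low-confidence sentences are treated as
# difficult and sent to an expensive model ("read").

def cheap_model(sentence):
    # Stand-in for a fast baseline (e.g., a bag-of-words classifier).
    # Returns (predicted_label, confidence in [0, 1]).
    words = sentence.lower().split()
    positive = sum(w in {"great", "good", "love"} for w in words)
    negative = sum(w in {"bad", "awful", "hate"} for w in words)
    score = positive - negative
    label = "pos" if score >= 0 else "neg"
    confidence = min(1.0, 0.5 + 0.25 * abs(score))
    return label, confidence

def expensive_model(sentence):
    # Stand-in for a slow, accurate classifier (e.g., a deep network).
    return "pos"

def classify(sentence, threshold=0.75):
    label, confidence = cheap_model(sentence)
    if confidence >= threshold:
        return label                       # easy sentence: skim
    return expensive_model(sentence)       # difficult sentence: read

print(classify("I love this great movie"))  # handled by the cheap model
print(classify("It was fine, I suppose"))   # routed to the expensive model
```

The saving comes from the routing itself: the expensive model only ever sees the fraction of sentences the baseline is unsure about.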

March 15, 2017


Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning

We introduce a novel adaptive attention encoder-decoder framework, a state-of-the-art image-captioning deep learning model that significantly outperforms all existing systems on the COCO image captioning challenge dataset and on Flickr30K.

November 29, 2016


Multiple Different Natural Language Processing Tasks in a Single Deep Model

We have developed a single deep neural network that can learn five different natural language processing tasks. Our model achieves state-of-the-art results on syntactic chunking, dependency parsing, semantic relatedness, and textual entailment.

November 11, 2016


State-of-the-art deep learning model for question answering

We introduce the Dynamic Coattention Network, a state-of-the-art question-answering deep learning model that significantly outperforms all existing systems on the Stanford Question Answering Dataset (SQuAD).

November 07, 2016


New neural network building block allows faster and more accurate text understanding

We published a new neural network building block, the quasi-recurrent neural network (QRNN), which runs and trains much faster than traditional recurrent models thanks to better parallelism. Although our goal was speed, the new model was also more accurate on every task we tried it on.
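
As a rough sketch of where the parallelism comes from, the toy NumPy example below (a single layer with filter width 2 and random weights; all names and sizes here are illustrative, not the paper's implementation) computes every timestep's gates at once with a convolution, leaving only a cheap element-wise recurrence, the fo-pooling step, to run sequentially.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                      # sequence length, hidden size
X = rng.standard_normal((T, d))  # input sequence

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Convolutional stage (parallel over timesteps). With filter width 2,
# each timestep sees x_{t-1} and x_t; the sequence is left-padded so
# the convolution never looks at future inputs.
W = rng.standard_normal((3, 2 * d, d)) * 0.1   # weights for z, f, o
Xp = np.vstack([np.zeros((1, d)), X])
pairs = np.hstack([Xp[:-1], Xp[1:]])           # (T, 2d): [x_{t-1}; x_t]
Z = np.tanh(pairs @ W[0])                      # candidate values
F = sigmoid(pairs @ W[1])                      # forget gates
O = sigmoid(pairs @ W[2])                      # output gates

# Pooling stage (sequential, but element-wise and very cheap).
c = np.zeros(d)
H = np.zeros((T, d))
for t in range(T):
    c = F[t] * c + (1.0 - F[t]) * Z[t]   # fo-pooling cell update
    H[t] = O[t] * c

print(H.shape)  # (6, 4): one hidden state per timestep
```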

November 07, 2016


Teaching neural networks to point to improve language modeling and translation

If you were a small child who wanted to ask what an object was but didn't have the vocabulary for it, how would you refer to it? By pointing at it! Surprisingly, neural networks can benefit from the same tactic as a five-year-old to improve on a variety of language tasks.
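
Here is a toy sketch of the pointing idea: mix a standard softmax over the vocabulary with a "pointer" distribution over words already seen in the context, weighted by a gate. The gate value, vocabulary, and attention scores below are invented for illustration and are not taken from the paper.

```python
import numpy as np

vocab = ["the", "of", "Fed", "rates", "raised", "<unk>"]
context = ["the", "Fed", "raised", "rates"]   # previously seen words

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

p_vocab = softmax(np.array([2.0, 1.0, 0.1, 0.5, 0.3, 0.2]))  # over vocab
attn = softmax(np.array([0.2, 3.0, 0.5, 1.0]))  # over context positions
g = 0.4  # gate: how much to trust the vocabulary softmax vs. the pointer

# Scatter the pointer probabilities back onto vocabulary words.
p_ptr = np.zeros(len(vocab))
for position, word in enumerate(context):
    p_ptr[vocab.index(word)] += attn[position]

p_final = g * p_vocab + (1.0 - g) * p_ptr
print(dict(zip(vocab, np.round(p_final, 3))))
# Rare words like "Fed" get most of their probability from the pointer.
```

Pointing sidesteps the softmax's weakness on rare and out-of-vocabulary words: the model can simply reuse a word from the recent context instead of having to predict it from the vocabulary.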

October 26, 2016


The WikiText Long-Term Dependency Language Modeling Dataset

As part of our research into pointer sentinel mixture models, we have created and published the WikiText language modeling dataset, built from over 28,000 Wikipedia articles.

September 26, 2016


Dynamic memory networks for visual and textual question answering

Neural network architectures with explicit memory and attention mechanisms can perform the multi-step reasoning that question answering requires.
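
As a bare-bones illustration of attention over a memory, the sketch below scores a set of encoded "facts" against a question over several passes. Everything here (dimensions, scoring, update rule) is an assumption made for illustration; the actual model's gating network and episodic memory updates are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
facts = rng.standard_normal((5, d))   # encoded input sentences
question = rng.standard_normal(d)     # encoded question
memory = question.copy()              # memory starts from the question

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

for _ in range(3):                    # multiple passes = multi-step reasoning
    scores = facts @ (question + memory)   # crude relevance score
    weights = softmax(scores)              # attention over the facts
    episode = weights @ facts              # attended summary of the facts
    memory = np.tanh(memory + episode)     # fold the episode into memory

print(memory.shape)  # (8,): final memory, fed to an answer module
```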

April 04, 2016


New deep learning model understands and answers questions

We published new state-of-the-art results on a variety of natural language processing (NLP) tasks. Our model, which we call the Dynamic Memory Network (DMN), combines two lines of recent work on memory and attention mechanisms in deep learning.

June 25, 2015