Accession Number:

AD1033419

Title:

Implicitly-Defined Neural Networks for Sequence Labeling

Descriptive Note:

Technical Report

Corporate Author:

MASSACHUSETTS INST OF TECH LEXINGTON, Lexington, United States

Personal Author(s):

Report Date:

2016-09-09

Pagination or Media Count:

8

Abstract:

We relax the causality assumption in formulating recurrent neural networks, so that the hidden states of the network are all coupled together. This goes beyond the bidirectional RNN, which consists of two explicit recurrent networks concatenated together. The motivation is to improve performance on long-range dependencies and to improve stability against solution drift in NLP tasks. We choose an implicitly-defined neural network architecture, show that it can be computed reasonably efficiently, and demonstrate it as a proof of concept on the task of part-of-speech tagging.
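
As a concrete illustration of the coupled-hidden-state idea (not the report's implementation), the sketch below defines each state h_t in terms of both its left and right neighbours and solves the whole sequence jointly by fixed-point iteration. The update rule, weight names, dimensions, and iteration scheme are illustrative assumptions only.

import numpy as np

def implicit_rnn_states(X, W, U, V, b, n_iter=50, tol=1e-6):
    """Solve the coupled system h_t = tanh(W x_t + U h_{t-1} + V h_{t+1} + b)
    for all t at once by fixed-point iteration (illustrative sketch)."""
    T = X.shape[0]
    d = b.shape[0]
    H = np.zeros((T, d))
    for _ in range(n_iter):
        H_prev = np.vstack([np.zeros((1, d)), H[:-1]])   # h_{t-1}, zero state at the left boundary
        H_next = np.vstack([H[1:], np.zeros((1, d))])    # h_{t+1}, zero state at the right boundary
        H_new = np.tanh(X @ W.T + H_prev @ U.T + H_next @ V.T + b)
        if np.max(np.abs(H_new - H)) < tol:              # stop once the coupled states settle
            return H_new
        H = H_new
    return H

# Toy usage: a sequence of 5 inputs of dimension 3, hidden dimension 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
W = rng.normal(scale=0.1, size=(4, 3))
U = rng.normal(scale=0.1, size=(4, 4))   # small weights keep the update a contraction
V = rng.normal(scale=0.1, size=(4, 4))
b = np.zeros(4)
H = implicit_rnn_states(X, W, U, V, b)
print(H.shape)  # (5, 4)

Because the right-neighbour term V h_{t+1} enters the same update as the left-neighbour term, information can propagate in both directions through a single set of states, rather than through two separate forward and backward networks as in a bidirectional RNN.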

Subject Categories:

  • Cybernetics

Distribution Statement:

APPROVED FOR PUBLIC RELEASE