Methods for Improving Natural Language Processing Techniques with Linguistic Regularities Extracted from Large Unlabeled Text Corpora

Natural Language Processing methods have become increasingly important for a variety of high- and low-level tasks, including speech recognition, question answering, and automatic language translation. The state-of-the-art performance of these methods is continuously advancing, but reliance on labeled training data often creates an artificial upper bound on performance due to the limited availability of labeled data, especially in settings where annotations by human experts are expensive to acquire. In contrast, unlabeled text data is constantly generated by Internet users around the world, and at scale this data can provide critical insights into human language. This work contributes two novel methods of extracting insights from large unlabeled text corpora in order to improve the performance of machine learning models. The first contribution is an improvement to the decades-old Multinomial Naive Bayes classifier (MNB). Our method utilizes word frequencies over a large unlabeled text corpus to improve MNB’s underlying conditional probability estimates and subsequently achieve state-of-the-art performance in the semi-supervised setting. The second contribution is a novel neural network method capable of simultaneously generating multi-sense word embeddings and performing word sense disambiguation, without relying on a sense-disambiguated training corpus or prior knowledge of word meanings. In both cases, our models illustrate how modern machine learning approaches can benefit from the disciplined integration of large text corpora, which are constantly growing and only becoming cheaper to collect as technology advances.
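The abstract does not specify how the unlabeled word frequencies enter MNB's conditional probability estimates. One plausible reading, sketched below purely for illustration, replaces uniform Laplace smoothing with Dirichlet-style smoothing toward a background unigram distribution estimated from an unlabeled corpus; the function names, the `alpha` parameter, and the formulation itself are assumptions, not the thesis's actual method.

```python
from collections import Counter
import math

def train_mnb(docs, labels, background, alpha=1.0):
    """Train a Multinomial Naive Bayes classifier whose word-likelihood
    smoothing pulls estimates toward background unigram frequencies
    (from an unlabeled corpus) instead of toward a uniform prior.
    This is an illustrative sketch, not the method from the abstract.

    docs:       list of token lists
    labels:     list of class labels, one per document
    background: Counter of word counts over a large unlabeled corpus
    alpha:      smoothing strength (hypothetical hyperparameter)
    """
    total_bg = sum(background.values())
    # Background unigram distribution p_bg(w) from the unlabeled corpus.
    p_bg = {w: c / total_bg for w, c in background.items()}

    classes = sorted(set(labels))
    counts = {c: Counter() for c in classes}
    for doc, y in zip(docs, labels):
        counts[y].update(doc)

    model = {}
    for c in classes:
        total_c = sum(counts[c].values())
        # Dirichlet-style smoothing: P(w|c) = (n(w,c) + alpha*p_bg(w))
        #                                     / (n(c) + alpha)
        model[c] = {
            w: (counts[c][w] + alpha * p_bg[w]) / (total_c + alpha)
            for w in p_bg
        }

    class_counts = Counter(labels)
    priors = {c: class_counts[c] / len(labels) for c in classes}
    return priors, model

def predict(doc, priors, model):
    """Return the class maximizing log P(c) + sum_w log P(w|c)."""
    scores = {}
    for c, likelihood in model.items():
        score = math.log(priors[c])
        for w in doc:
            if w in likelihood:  # ignore words outside the background vocab
                score += math.log(likelihood[w])
        scores[c] = score
    return max(scores, key=scores.get)
```

With only one labeled example per class, the background distribution keeps unseen-word estimates proportional to how common each word is overall, rather than flattening them to a uniform value:

```python
background = Counter({"good": 5, "great": 3, "bad": 4, "awful": 2, "ok": 6})
priors, model = train_mnb(
    [["good", "great"], ["bad", "awful"]], ["pos", "neg"], background
)
predict(["good"], priors, model)  # -> "pos"
```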
