Question-Answering with Structural Analogy

Designing intelligent systems that can answer questions has been an ongoing and active challenge for the artificial intelligence community. In the past, researchers focused on producing specialized language systems for particular domains and datasets. These approaches required substantial expertise to design and often necessitated the expensive manual annotation of datasets with logical forms. Modern methods have since shifted toward deep learning, which has enabled effective and flexible question-answering systems that can be constructed in a more hands-off manner. Both paradigms have advantages and disadvantages. The earlier systems were far more interpretable, as they often involved learning explicit grammar rules, but they performed worse and had to start from scratch for each new domain to which they were applied. Mainstream deep learning-based methods are very effective, easier to train, and exhibit some degree of transferability (largely due to their use of techniques such as word embeddings), but their internal reasoning processes are opaque, and they generally require large amounts of training data to achieve good performance. In this thesis, we approach question-answering from an analogical perspective. In particular, we introduce an approach that uses analogy to adapt an existing general-purpose semantic parser to answer questions in novel domains. The adaptation is learned automatically and performs well when given either natural-language question-answer pairs or questions annotated with logical forms. Incorporating a general-purpose semantic parser means the system need not learn each new domain from scratch; it also simplifies the question-answering task, which improves performance and data efficiency. We demonstrate the effectiveness and generality of our approach by applying it to three different datasets, each requiring a distinct type of reasoning. We show that the method is competitive with modern neural approaches to question-answering while maintaining interpretability and explainability.
