Do Neural Language Representations Learn Physical Commonsense?

*Note: The demo images may not display due to an HTTPS certificate issue with the MS COCO website. They display correctly if you clone the repository and run a local web server from the docs/ directory.*
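For example, one minimal way to serve docs/ locally is shown below (a sketch, assuming Python 3.7 or newer and that it is run from the repository root; running python -m http.server from inside docs/ works as well):

# Minimal sketch (not part of the repository): serve the docs/ directory
# locally so the demo images load over plain HTTP.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve files out of docs/ on http://localhost:8000
handler = partial(SimpleHTTPRequestHandler, directory="docs")
server = HTTPServer(("localhost", 8000), handler)
print("Serving docs/ at http://localhost:8000")
server.serve_forever()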

Humans understand language based on rich background knowledge about how the physical world works, which in turn allows us to reason about the physical world through language. Beyond the properties of objects (e.g., boats require fuel) and their affordances, i.e., the actions that are applicable to them (e.g., boats can be driven), we can also make if–then inferences connecting the two: which properties of an object imply which actions are applicable to it (e.g., if we can drive something, then it likely requires fuel).

In this paper, we investigate the extent to which state-of-the-art neural language representations, trained on a vast amount of natural language text, demonstrate physical commonsense reasoning. While recent advances in neural language models have yielded strong performance on various natural language inference tasks, our study, based on a dataset of over 200k newly collected annotations, suggests that neural language representations still only learn associations that are explicitly written down.

Paper

Poster

Bibtex

@inproceedings{forbes2019neural,
    title={Do Neural Language Representations Learn Physical Commonsense?},
    author={Forbes, Maxwell and Holtzman, Ari and Choi, Yejin},
    year={2019},
    booktitle={Proceedings of the 41st Annual Conference of the Cognitive Science Society},
}

Code and Data

See our GitHub repository for the abstract and situated datasets proposed in the paper, as well as code to reproduce our results.

Authors

Maxwell Forbes
Ari Holtzman
Yejin Choi