Do Language Models Exhibit Human-like Structural Priming Effects?
| Authors | |
|---|---|
| Publication date | 2024 |
| Book title | The 62nd Annual Meeting of the Association for Computational Linguistics: Findings of the Association for Computational Linguistics: ACL 2024 |
| Book subtitle | ACL 2024: August 11-16, 2024 |
| Event | Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 |
| Pages (from-to) | 14727-14742 |
| Number of pages | 16 |
| Publisher | Kerrville, TX: Association for Computational Linguistics |
| Abstract | We explore which linguistic factors, at the sentence and token level, play an important role in influencing language model predictions, and investigate whether these are reflective of results found in humans and human corpora (Gries and Kootstra, 2017). We make use of the structural priming paradigm, where recent exposure to a structure facilitates processing of the same structure. We investigate not only whether priming effects occur, but also where they occur and what factors predict them. We show that these effects can be explained via the inverse frequency effect, known from human priming, whereby rarer elements within a prime increase priming effects, as well as by lexical dependence between prime and target. Our results provide an important piece in the puzzle of understanding how properties within their context affect structural prediction in language models. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.18653/v1/2024.findings-acl.877 |
| Other links | https://www.scopus.com/pages/publications/85205295465 |
| Downloads | 2024.findings-acl.877 (Final published version) |