Search results
Results: 19
Mohebbi, H., Jumelet, J., Hanna, M., Alishahi, A., & Zuidema, W. (2024). Transformer-specific Interpretability. In M. Mesgar & S. Loáiciga (Eds.), Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts (pp. 21-26). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.eacl-tutorials.4
Jumelet, J., Zuidema, W., & Sinclair, A. (2024). Do Language Models Exhibit Human-like Structural Priming Effects? In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024 (pp. 14727-14742). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-acl.877
Weber, L., Jumelet, J., Bruni, E., & Hupkes, D. (2024). Interpretability of Language Models via Task Spaces. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 4522-4538). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.acl-long.248
Langedijk, A., Mohebbi, H., Sarti, G., Zuidema, W., & Jumelet, J. (2024). DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers. In K. Duh, H. Gomez, & S. Bethard (Eds.), Findings of the Association for Computational Linguistics: NAACL 2024 (pp. 4764-4780). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-naacl.296
Patil, A., Jumelet, J., Chiu, Y. Y., Lapastora, A., Shen, P., Wang, L., Willrich, C., & Steinert-Threlkeld, S. (2024). Filtered Corpus Training (FiCT) Shows that Language Models Can Generalize from Indirect Evidence. Transactions of the Association for Computational Linguistics, 12, 1597-1615. https://doi.org/10.1162/tacl_a_00720
Jumelet, J., & Zuidema, W. (2023). Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution. In H. Bouamor, J. Pino, & K. Bali (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 4354-4369). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-emnlp.288
Molnar, A., Jumelet, J., Giulianelli, M., & Sinclair, A. (2023). Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue. In J. Jiang, D. Reitter, & S. Deng (Eds.), Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL 2023) (pp. 254-273). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.conll-1.18
Jumelet, J., Hanna, M., de Heer Kloots, M., Langedijk, A., Pouw, C., & van der Wal, O. (2023). ChapGTP, ILLC's Attempt at Raising a BabyLM: Improving Data Efficiency by Automatic Task Formation. In A. Warstadt, A. Mueller, L. Choshen, E. Wilcox, C. Zhuang, J. Ciro, R. Mosquera, B. Paranjabe, A. Williams, T. Linzen, & R. Cotterell (Eds.), Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (pp. 74-85). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.conll-babylm.6
Jumelet, J., & Zuidema, W. (2023). Feature Interactions Reveal Linguistic Structure in Language Models. In A. Rogers, J. Boyd-Graber, & N. Okazaki (Eds.), Findings of the Association for Computational Linguistics: ACL 2023 (pp. 8697-8712). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.554
Page 1 of 2