Search results

Results: 19
  • Sinclair, A., Jumelet, J., Zuidema, W., & Fernández, R. (2022). Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations. Transactions of the Association for Computational Linguistics, 10, 1031–1050. https://doi.org/10.1162/tacl_a_00504
  • van der Wal, O., Jumelet, J., Schulz, K., & Zuidema, W. (2022). The Birth of Bias: A case study on the evolution of gender bias in an English language model (v1). arXiv. https://doi.org/10.48550/arXiv.2207.10245
  • Srivastava, A., Siro, C., Shutova, E., Jumelet, J., ter Hoeve, M., Giulianelli, M., Lewis, M., Schubert, M., Tong, X., & BIG-bench authors (2022). Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models (v2). arXiv. https://doi.org/10.48550/arXiv.2206.04615
  • Jumelet, J., Denić, M., Szymanik, J., Hupkes, D., & Steinert-Threlkeld, S. (2021). Language models use monotonicity to assess NPI licensing. In C. Zong, F. Xia, W. Li, & R. Navigli (Eds.), Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, August 1–6, 2021 (pp. 4958–4969). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-acl.439
  • Weber, L., Jumelet, J., Bruni, E., & Hupkes, D. (2021). Language Modelling as a Multi-Task Problem. In P. Merlo, J. Tiedemann, & R. Tsarfaty (Eds.), The 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021): Proceedings of the Conference, April 19–23, 2021 (pp. 2049–2060). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.eacl-main.176
  • Kersten, T., Wong, H. M., Jumelet, J., & Hupkes, D. (2021). Attention vs non-attention for a Shapley-based explanation method. In E. Agirre, M. Apidianaki, & I. Vulić (Eds.), Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures: Proceedings of the Workshop, NAACL-HLT 2021, June 10, 2021 (pp. 129–139). Association for Computational Linguistics. https://doi.org/10.48550/arXiv.2104.12424, https://doi.org/10.18653/v1/2021.deelio-1.13
  • Jumelet, J. (2020). diagNNose: A Library for Neural Activation Analysis. In A. Alishahi, Y. Belinkov, G. Chrupała, D. Hupkes, & Y. Pinter (Eds.), Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 342–350). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.blackboxnlp-1.32
  • Jumelet, J., Zuidema, W., & Hupkes, D. (2019). Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment. In M. Bansal & A. Villavicencio (Eds.), Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL 2019), November 3–4, 2019, Hong Kong, China (pp. 1–11). Association for Computational Linguistics. https://doi.org/10.18653/v1/K19-1001
  • Jumelet, J., & Hupkes, D. (2018). Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items. In T. Linzen, G. Chrupała, & A. Alishahi (Eds.), Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, November 1, 2018, Brussels, Belgium (pp. 222–231). Association for Computational Linguistics. https://doi.org/10.18653/v1/W18-5424
Page 2 of 2