Improving Transformer-based Sequential Recommenders through Preference Editing

Authors
Publication date 07-2023
Journal ACM Transactions on Information Systems
Article number 71
Volume 41, Issue 3
Number of pages 24
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
One of the key challenges in sequential recommendation is how to extract and represent user preferences. Traditional methods rely solely on predicting the next item, but user behavior may be driven by multiple, complex preferences. As a result, these methods cannot make accurate recommendations when the available information on user behavior is limited. To explore multiple user preferences, we propose a transformer-based sequential recommendation model, named MrTransformer (Multi-preference Transformer). For training MrTransformer, we devise a preference-editing-based self-supervised learning (SSL) mechanism that explores extra supervision signals based on relations with other sequences. The idea is to force the sequential recommendation model to discriminate between common and unique preferences in different sequences of interactions. By doing so, the sequential recommendation model is able to disentangle user preferences into multiple independent preference representations so as to improve user preference extraction and representation. We carry out extensive experiments on five benchmark datasets. MrTransformer with preference editing significantly outperforms state-of-the-art sequential recommendation methods in terms of Recall, MRR, and NDCG. We find that long interaction sequences, from which user preferences are harder to extract and represent, benefit most from preference editing.
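The preference-editing idea described in the abstract can be illustrated with a toy sketch. The snippet below is not the paper's exact procedure; it only shows the core intuition under assumed conventions: each sequence is represented by a small set of preference vectors, a slot is treated as a "common" preference when it is cosine-similar to a slot in the other sequence, and common slots are swapped between the two sequences. An SSL objective would then require the model to recover each original sequence from the edited representations.

```python
import math

def cosine(u, v):
    """Cosine similarity between two preference vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u)) or 1e-8
    nv = math.sqrt(sum(x * x for x in v)) or 1e-8
    return dot / (nu * nv)

def preference_edit(prefs_a, prefs_b, threshold=0.5):
    """Toy preference editing (illustrative only): for each preference slot
    in sequence A, find its most similar slot in sequence B. If similarity
    passes the threshold, the slot is treated as a *common* preference and
    the two vectors are swapped; otherwise it is *unique* and kept."""
    a = [list(p) for p in prefs_a]
    b = [list(p) for p in prefs_b]
    for i in range(len(a)):
        sims = [cosine(a[i], bj) for bj in b]
        j = max(range(len(b)), key=lambda k: sims[k])
        if sims[j] >= threshold:   # common preference: exchange across sequences
            a[i], b[j] = b[j], a[i]
    return a, b
```

For example, if two users share a strong preference direction but each also has a unique one, only the shared slots are exchanged; the unique slots survive the edit, which is what lets a model learn to tell the two kinds apart.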
Document type Article
Language English
Published at https://doi.org/10.1145/3564282