Privacy concerns in chatbot interactions
| Authors | |
|---|---|
| Publication date | 2020 |
| Host editors | |
| Book title | Chatbot Research and Design |
| Book subtitle | Third International Workshop, CONVERSATIONS 2019, Amsterdam, The Netherlands, November 19–20, 2019: revised selected papers |
| ISBN | |
| ISBN (electronic) | |
| Series | Lecture Notes in Computer Science |
| Event | CONVERSATIONS 2019: an international workshop on chatbot research |
| Pages (from–to) | 34–48 |
| Publisher | Cham: Springer |
| Organisations | |
| Abstract | Chatbots are increasingly used in a commercial context to make product- or service-related recommendations. In doing so, they collect users' personal information, much like other online services. While privacy concerns in an online (website) context are widely studied, research in the context of chatbot interaction is lacking. This study investigates the extent to which chatbots with human-like cues influence perceptions of anthropomorphism (i.e., the attribution of human-like characteristics), privacy concerns, and, consequently, information disclosure, attitudes, and recommendation adherence. Findings show that, compared to a machine-like chatbot, a human-like chatbot leads to more information disclosure and recommendation adherence, mediated by higher perceived anthropomorphism and, subsequently, lower privacy concerns. This result does not hold in comparison to a website: the human-like chatbot and the website were perceived as equally high in anthropomorphism. The results show the importance of both mediating concepts with regard to attitudinal and behavioral outcomes when interacting with chatbots. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.1007/978-3-030-39540-7_3 |
| Downloads | Ischen2020_Chapter_PrivacyConcernsInChatbotIntera (final published version) |
