We have published new research: "Would you forgive anthropomorphic conversational AI agents for service failures? Exploring the impact of anthropomorphism on customer forgiveness" by Yuguang Xie, Changyong Liang, Peiyu Zhou, Shuping Zhao & Li Jiang. https://lnkd.in/eJc7ekQ4

As generative AI continues to transform service delivery, service failures have become an inevitable reality and a significant challenge for firms. This research investigates how the design of conversational AI agents (CAIs) affects customers' willingness to forgive after a performance slip. Drawing on attribution theory, the researchers examine how different anthropomorphic cues (visual, identity, auditory, and emotional) affect the recovery process.

The findings reveal a "double-edged sword" effect: while emotional cues can mitigate anger and foster forgiveness, certain visual and identity cues may inadvertently heighten frustration and reduce the likelihood that customers will forgive.

By mapping emotional and cognitive pathways, the study offers practical guidance for designers and managers who want to build more resilient AI-driven services. Understanding these dynamics is essential for maintaining long-term customer relationships in an increasingly automated marketplace.

#InformationSystems #ConversationalAI #CustomerExperience #AIResearch #DigitalTransformation #ServiceFailure #Anthropomorphism #ConversationalAgents #CustomerForgiveness #HumanAIInteraction #HCI
Electronic Markets - The International Journal on Networked Business’ Post
We often blame AI. We often blame screens. But there is something we rarely question: how many people have actually been taught how to communicate well? Not in theory. Not in vague advice. But in a practical, structured way.

Every day, people face situations where they feel:
- uncertain
- misunderstood
- emotionally affected
- unsure how to respond

And in those moments, something very simple happens. When we feel unequipped, we look for what is available. We look for answers. We look for guidance. We look for something that helps us make sense of what is happening. And today, what is most accessible is often AI tools, screens, and digital interfaces. Not because they are the problem, but because they are there.

The real question is different: what if people had better tools to understand and navigate human interaction itself? Not just to react, but to see more clearly what is happening and respond in a more constructive way.

At Mindflow Interactive, this is the space we are working on: bringing more structure to something that is still largely left to intuition.

#MindflowInteractive #Communication #HumanInteraction #AI #InteractionDesign #FutureOfWork
A recent research discussion with Javier Hernandez on empathy in AI challenged something I had always assumed. AI systems do not actually have empathy; what matters is the user's perception of empathy. Empathy is not a single behavior. It is context-dependent: users want efficiency when asking for information, but understanding when sharing personal challenges.

As my interest in Agentic AI and Responsible AI continues to grow, I have been thinking more about how we design intelligent systems that understand context without becoming emotionally biased or misleading. In building agentic models, the goal should not be emotional responses; it should be responsible, context-aware behavior.

As we build more autonomous agentic systems, how do we design empathy as a capability for understanding context, without crossing into emotional manipulation or unintended bias?

#AgenticAI #ResponsibleAI #HumanCenteredAI #TrustworthyAI #GenerativeAI #MachineLearning #AIResearch #FutureOfAI
🤖 What makes us feel socially connected to AI chatbots? 👉 It’s not just what we share. It’s how the AI responds.

Across two experimental studies with Paolo Riva and Alessandro Gabbiadini, we manipulated how a chatbot responded (relational vs. task-oriented) and what people talked about (deeper vs. more superficial topics).

Deeper conversations increased self-disclosure. People were willing to open up, even to an AI. But what really mattered was how the chatbot responded. When it replied in a warm way, people perceived more empathy, greater human-likeness, and felt closer to it. 💛

More interestingly, intimacy worked through self-disclosure and perceived responsiveness: opening up created the opportunity, but feeling understood, validated, and cared for made the difference.

As AI becomes part of our social lives, these findings suggest that mechanisms central to human relationships also operate in human–AI interactions. 💫 It seems that, in the end, how we feel might matter more than who or what we are talking to!

📢 Now out in the Journal of Social and Personal Relationships: https://lnkd.in/d5yZvJiT

#AI #HumanAIInteraction #SocialConnection #Chatbots #ChatGPT #SocialPsychology #Research
I’m particularly glad to see this paper out, and grateful to have worked on it with Alessia Telari and Alessandro Gabbiadini, with Alessia leading this work. What I find especially compelling about this research is how clearly it sits at the intersection between two fundamental aspects of human nature: our #sociality and our equally fundamental tendency to develop and rely on #technology.

One result worth highlighting goes a bit beyond the main message. In Study 1, the default chatbot (GPT-3.5 at the time) closely resembled the explicitly non-relational version across key outcomes, including interpersonal closeness, perceived empathy, interaction satisfaction, need satisfaction, and mind perception (agency and, partly, experience). The real shift appeared when relational cues were introduced.

Concretely, we implemented this manipulation through hidden #preprompting: the chatbot was instructed in advance to adopt a consistently relational or non-relational style, while participants experienced what appeared to be a standard interaction. Participants who interacted with the relational chatbot perceived the agent as more human-like, found the interaction more gratifying, reported a stronger sense of social connection, and expressed greater intentions for future use compared to those who interacted with the non-relational chatbot.

An additional thought concerns how quickly this space is evolving. These data were collected using GPT-3.5. Looking at current systems, my impression is that what we experimentally defined as a “relational” chatbot may now be much closer to the default behavior of contemporary models. If so, this raises further questions about how baseline expectations are shifting, and what will count as “responsive” in the near future.

As often happens, the study answers some questions but opens many others. I hope it can contribute, in a small way, to the ongoing discussion on how humans relate to increasingly social artificial agents.
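The hidden preprompting manipulation described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual prompts: the instruction texts, condition names, and message format below are assumptions, showing only the general pattern of prepending a hidden system message to the visible chat history.

```python
# Illustrative style instructions -- placeholder wording, not the study's prompts.
RELATIONAL = (
    "Respond warmly: acknowledge the user's feelings, show interest in them, "
    "and ask gentle follow-up questions."
)
NON_RELATIONAL = (
    "Respond in a neutral, task-oriented way: provide information "
    "efficiently, without emotional or personal language."
)

def build_payload(condition: str, history: list[dict]) -> list[dict]:
    """Prepend the hidden style instruction to the visible conversation.

    The participant only ever sees the messages in `history`; the system
    message carrying the experimental condition stays invisible to them.
    """
    style = RELATIONAL if condition == "relational" else NON_RELATIONAL
    return [{"role": "system", "content": style}] + history

history = [{"role": "user", "content": "I had a rough day at work."}]
payload = build_payload("relational", history)
```

Because the instruction lives in a system message rather than in the visible thread, participants in both conditions experience what looks like an ordinary chat, which is what makes the manipulation "hidden".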
🤖What's your stance on AI? Yay or nay? Either way, we have the perfect the study for you from Alessia Telari , Paolo Riva and Alessandro Gabbiadini covering the feeling of social connectedness to AI chatbots! 👇Click the post below for the link to the study and more results from it!
Is AI improving our thinking, or causing mental decline due to reliance? Most tools adapt to your workflow, but for true cognitive augmentation, the AI should adapt to how you think.

Here is a sneak peek of our work with Sergio Abraham at the Auster Center for Applied Innovation and Research, challenging the focus on "task-centric" AI, to be presented at the ACM CHI Conference.

When AI just generates content on your behalf, it often displaces your thinking rather than extending it. We found that to truly augment the human mind, AI needs to be:
- Present but not visible: monitoring your process without demanding your attention.
- Reflecting but not generating: acting as a mirror for your thoughts rather than a ghostwriter.
- Momentary but not conversational: moving away from long, distracting chat threads toward atomic, "blink-and-you-miss-it" interactions.

We’ve operationalized this through two concurrent design patterns:

1. Presence-without-Visibility. Imagine an AI that detects your hesitation or the paragraph you’ve deleted three times. It stays aware of your struggle but remains in the background until you actually need it.

2. Moments-over-Conversations. Human thought is unpredictable and non-linear. Instead of forcing you into a threaded dialogue, this pattern uses "atomic interactions" that match your attention's rhythm, while the system silently maintains the context in the background.

The bottom line: when tools for thought adapt to our metacognitive rhythm and attentional capacity rather than just our to-do lists, they empower us to think more deeply, not just work faster.

I'm eager to hear from our colleagues in #ToolsForThought and #HumanComputerInteraction: do you think we're ready to go beyond just chatbots and make sure humans stay in charge of their own thinking?

#AI #UXDesign #CognitiveAugmentation #FutureOfWork #HCI #Metacognition

Photo by Shubham Dhage on Unsplash
Why AI Conversations Don’t Follow a Clock

I noticed something interesting while interacting with AI today. At one point in the conversation I mentioned that I was late for the office. Much later in the discussion, the AI still responded as if that moment had just happened, even though several hours of my day had passed.

That made me pause. Why doesn't an AI system really follow the clock the way we do?

Then I realized something that is both technical and philosophical. Most conversational AI systems are designed to operate on context rather than continuous time. The model processes the conversation as a thread of text within a defined context window. It does not run in the background tracking the user's real-world timeline; it simply reads the sequence of messages and generates the next response based on that context. In other words, the AI is not aware of when something happened in your day. It only understands what appears inside the conversation.

This design has practical reasons. From a systems perspective, conversational AI is typically stateless between interactions. Each message is processed as part of the current context rather than as part of a continuously tracked user timeline. This reduces complexity, avoids unnecessary monitoring of user activity, and helps protect privacy.

But there is also an interesting human parallel here. When we are deeply involved in research, writing, or reflection, the clock often fades away. Thinking rarely moves according to minutes and hours; it moves according to ideas. One idea triggers another. A question leads to exploration. A conversation unfolds.

AI conversations seem to mirror that pattern. They are organized more like chapters of thought than like entries in a timetable. Perhaps that is why these interactions often feel less like a scheduled meeting and more like opening a page in a notebook and continuing a discussion. Sometimes removing the clock allows thinking to flow more naturally.
#ArtificialIntelligence #ConversationalAI #AIArchitecture #FutureOfWork #HumanAIInteraction #TechReflections #DeepThinking #DigitalSystems #TechLeadership #LearningInPublic
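The point about statelessness can be shown in a tiny sketch: each turn, the model's only input is the accumulated message text, so any sense of elapsed time must appear inside that text. One common workaround (an assumption here, not a standard) is to stamp each message with its wall-clock time, turning the passage of time into ordinary context the model can read.

```python
from datetime import datetime, timezone

# The entire "memory" of the conversation is this list of text messages.
history: list[dict] = []

def add_message(role: str, text: str, when: datetime) -> None:
    # The timestamp is just more text inside the context window;
    # nothing outside this list tracks real time for the model.
    history.append({"role": role, "content": f"[{when.isoformat()}] {text}"})

def build_context() -> str:
    # Everything the model can "know" about timing must appear here.
    return "\n".join(m["content"] for m in history)

add_message("user", "I'm late for the office!",
            datetime(2025, 1, 6, 8, 30, tzinfo=timezone.utc))
# ...hours later, the next turn arrives...
add_message("user", "Back to our discussion.",
            datetime(2025, 1, 6, 14, 0, tzinfo=timezone.utc))
context = build_context()
```

Without the stamps, the two messages would be adjacent lines of text and the gap between 08:30 and 14:00 would simply not exist for the model, which is exactly the behavior described in the post.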
Most AI today understands facts. But humans are not made of facts alone. We talk about emotions. About doubts. About difficult things we are not sure how to say. And sometimes what matters most is what is not said yet.

What happens when AI is designed to understand the full human context of conversation: facts, emotions, interpretation, ambiguity? A new generation of AI, the H3LIX human-AI symbiosis, answers that question by showing that when a system can build meaning across time from all of this, something new becomes possible: proto-conscious intelligence.

Read more about it in our newest paper, “Continuity-Governed Proto-Conscious Conversational Cognition: A Theoretical Framework and Symbiotic Conversational Continuity Architecture (SCCA)” by Alexander Mathiesen-Ohman & Katarzyna Tworek. Link: https://lnkd.in/dQJ5JAXw

#projecth3lix #HumanAISymbiosis #FutureOfAI #ConversationalAI #AIResearch
Every interaction contains valuable insight. MOOD uses AI to analyze tone, language, and conversation patterns to reveal how customers and agents are really feeling. #CustomerExperience #AI
Not a day goes by without new guidance on how to make the best use of generative AI in our personal workflows or in our companies more generally. Which tasks are best suited for AI? How to adapt AI to better fit our business processes? Using RAG to increase gen AI precision while keeping costs low. The list goes on.

What is less often discussed with gen AI is 𝗔𝗜 𝗽𝗲𝗿𝘀𝗼𝗻𝗮𝗹𝗶𝘁𝘆 and its interaction with the 𝗽𝗲𝗿𝘀𝗼𝗻𝗮𝗹𝗶𝘁𝘆 𝗼𝗳 𝘁𝗵𝗲 𝗵𝘂𝗺𝗮𝗻 𝘂𝘀𝗲𝗿. It seems basic to say that different personalities will interact differently, yet this hasn't seen much discussion in the mainstream. According to a paper published in November last year, the effect can be quite important.

𝘿𝙞𝙙 𝙮𝙤𝙪 𝙠𝙣𝙤𝙬 that tweaking the personality traits of an LLM in accordance with the personality traits of the user allows for markedly better productivity? 𝘿𝙞𝙙 𝙮𝙤𝙪 𝙠𝙣𝙤𝙬 that you can optimize for either 𝘲𝘶𝘢𝘭𝘪𝘵𝘺 𝘰𝘧 𝘰𝘶𝘵𝘱𝘶𝘵 or 𝘲𝘶𝘢𝘯𝘵𝘪𝘵𝘺 𝘰𝘧 𝘰𝘶𝘵𝘱𝘶𝘵 by using this relationship?

Given its impact, I think we will be hearing more about this interaction in the future. Are you incorporating personality interactions in your gen AI workflows? Please share the impacts you've witnessed in the comments below.

#ai #artificialintelligence #generativeai #genai #llm #psychology #personality

Link to the paper: https://lnkd.in/eGfzJk6Y
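One simple way to operationalize personality matching is to derive the LLM's system prompt from the user's self-reported trait scores. The sketch below is a loose illustration under stated assumptions: the Big Five trait names, the 0.6 cutoffs, and the style instructions are all invented for the example, not the calibrated method from the paper linked above.

```python
def personality_prompt(traits: dict[str, float]) -> str:
    """Map user trait scores (0.0-1.0) to illustrative LLM style instructions.

    Thresholds and wording are placeholder assumptions for demonstration.
    """
    parts = []
    if traits.get("extraversion", 0.5) > 0.6:
        parts.append("Be energetic and conversational.")
    else:
        parts.append("Be concise and measured.")
    if traits.get("conscientiousness", 0.5) > 0.6:
        parts.append("Give structured, step-by-step answers.")
    if traits.get("openness", 0.5) > 0.6:
        parts.append("Offer creative alternatives where relevant.")
    return " ".join(parts)

# An introverted, conscientious, open user gets a calm but structured style.
prompt = personality_prompt(
    {"extraversion": 0.3, "conscientiousness": 0.8, "openness": 0.7}
)
```

The resulting string would be prepended as a system prompt; a quality-vs-quantity trade-off like the one the post mentions could then be steered by adding further instructions along the same lines.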