Human conversation is interactive. As others speak, you are thinking about what they are saying and identifying the best thread to continue the dialogue. Current LLMs wait for their interlocutor. Getting AI to think during interaction, instead of only when prompted, can make Humans + AI interaction and collaboration more intuitive and engaging. Here are some of the key ideas in the paper "Interacting with Thoughtful AI" from a team at UCLA, including some interesting prototypes.

🧠 AI that continuously thinks enhances interaction. Unlike traditional AI, which waits for user input before responding, Thoughtful AI autonomously generates, refines, and shares its thought process during interactions. This enables real-time cognitive alignment, making AI feel proactive and collaborative rather than merely reactive.

🔄 Moving from turn-based to full-duplex AI. Traditional AI follows a rigid turn-taking model: users ask a question, the AI responds, then it idles. Thoughtful AI introduces a full-duplex process where the AI continuously thinks alongside the user, anticipating needs and evolving its responses dynamically. This shift allows AI to be more adaptive and context-aware.

🚀 AI can initiate actions, not just react. Instead of waiting for prompts, Thoughtful AI has an intrinsic drive to take initiative. It can anticipate user needs, generate ideas independently, and contribute proactively - similar to a human brainstorming partner. This makes AI more useful in tasks requiring ongoing creativity and planning.

🎨 A shared cognitive space between AI and users. Rather than isolated question-answer cycles, Thoughtful AI fosters a collaborative environment where AI and users iteratively build on each other's ideas. This can manifest as interactive thought previews, real-time updates, or AI-generated annotations in digital workspaces.

💬 Example: Conversational AI with "inner thoughts."
A prototype called Inner Thoughts lets the AI internally generate and evaluate potential contributions before speaking. Instead of blindly responding, it decides when to engage based on conversational relevance, making AI interactions feel more natural and meaningful.

📝 Example: Interactive AI-generated thoughts. Another project, Interactive Thoughts, allows users to see and refine the AI's reasoning in real time before a final response is given. This approach reduces miscommunication, enhances trust, and makes AI outputs more useful by aligning them with user intent earlier in the process.

🔮 A shift in human-AI collaboration. If AI continuously thinks and shares thoughts, it may reshape how humans approach problem-solving, creativity, and decision-making. Thoughtful AI could become a cognitive partner rather than just an information provider, changing the way people work and interact with machines.

More to come from the edge of Humans + AI collaboration and potential.
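The inner-thoughts idea boils down to a think-then-decide loop: continuously generate candidate contributions, score their conversational relevance, and speak only when a candidate clears a bar. Here is a minimal sketch of that loop, not the paper's implementation; the thought generator and relevance scorer are placeholder heuristics standing in for LLM calls, and all names are illustrative.

```python
def generate_thoughts(context, n=3):
    # Placeholder for an LLM call that proposes candidate contributions
    # based on the conversation so far (a list of utterance strings).
    return [f"candidate thought {i} about: {context[-1]}" for i in range(n)]

def relevance(thought, context):
    # Toy relevance heuristic: the fraction of the thought's words that
    # already appear in the conversation. A real system might use an
    # LLM judge or embedding similarity instead.
    ctx_words = set(" ".join(context).lower().split())
    words = thought.lower().split()
    return sum(w in ctx_words for w in words) / len(words)

def maybe_speak(context, threshold=0.5):
    # Full-duplex step: keep thinking on every turn, but speak only when
    # the best candidate clears the relevance bar; otherwise stay silent
    # and keep listening.
    scored = [(relevance(t, context), t) for t in generate_thoughts(context)]
    score, best = max(scored)
    return best if score >= threshold else None
```

Run inside the listening loop, a step like this lets the agent decide for itself when a contribution is worth voicing, rather than answering every turn.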
How to Improve Human-Technology Interaction
Explore top LinkedIn content from expert professionals.
Summary
Human-technology interaction refers to the way people and digital systems communicate, collaborate, and share tasks. Improving this relationship means designing technology that supports human needs, builds trust, and encourages thoughtful teamwork between people and machines.
- Prioritize human needs: Focus on understanding where people struggle or need support, and let those insights guide how technology is used or developed.
- Design for collaboration: Create systems where humans and technology work together, allowing AI to handle routine tasks while people remain in control and can connect when needed.
- Build transparency and trust: Clearly show when users are interacting with technology versus a person, and make transitions between the two smooth and respectful to foster confidence and comfort.
AI and automation offer us an incredible opportunity: the chance to free up time, energy, and attention for the human connections that matter most in healthcare. When we're intentional about implementation, we can create systems that are both more efficient and more deeply human - where technology handles the transactional so people can focus on the relational. Here are ten principles for using AI and automation to strengthen human connection:

1. Start with Human Needs, Not Technical Capabilities. Before asking what you can automate, ask what people actually need. Observe where friction exists. Listen to where patients and staff struggle. Let those insights guide your technology decisions.

2. Automate the Transactional to Protect the Relational. Routine scheduling, wayfinding, and basic information transfer are ideal for automation. This frees up your team for moments that truly need human attention - difficult conversations, emotional support, and relationship building.

3. Test with Real People in Real Conditions. What works in an outpatient setting might not work in an inpatient procedural space. Prototype different approaches and observe how people respond in the specific contexts where they'll use these tools.

4. Design for Everyone, Especially the Most Vulnerable. When your automation works for people with varying comfort with technology, different language needs, and different digital access levels, you've created something that expands access rather than creating new barriers.

5. Make Human Interaction Always Available. Give people easy, judgment-free ways to connect with a human whenever they need to. When automation is truly helpful, most people will use it. When they need a person, that option should be readily available.

6. Measure Whether You're Creating Capacity for Connection. The best automation frees staff from routine tasks so they can spend more time on complex care conversations, emotional support, and personalized attention. If your team isn't gaining that capacity, refine your approach.

7. Be Clear About What's Automated and What's Human. People appreciate knowing when they're interacting with AI versus a person. Transparency builds trust and sets appropriate expectations.

8. Design Seamless Handoffs Between Technology and Humans. When someone moves from an automated system to human interaction, the transition should feel smooth. Information should carry forward, staff should have context, and patients shouldn't repeat themselves.

9. Learn and Adapt Continuously. Pay attention to what's actually happening as people use your systems. Where does automation help? Where does it frustrate? Use these insights to keep improving.

10. Let Your Values Guide What Stays Human. Your organizational values should illuminate where human presence is essential. If you value dignity and compassion, those values can guide which moments need human interaction and which can be effectively supported by technology.
-
Most AI implementations can be technically flawless - but fundamentally broken. Here's why:

Consider this scenario: a company implemented a fully automated AI customer service system and reduced ticket resolution time by 40%. What happens to the satisfaction scores? If they drop by 35%, is the reduction in response times worth celebrating?

This exemplifies the trap many leaders fall into - optimizing for efficiency while forgetting that business, at its core, is fundamentally human. Customers don't always just want fast answers; they want to feel heard and understood.

The jar metaphor I often use with leadership teams: ever tried opening a jar with the lid screwed on too tight? No matter how hard you twist, it won't budge. That's exactly what happens when businesses pour resources into technology but forget about the people who need to use it.

The real key to progress isn't choosing between technology OR humanity. It's creating systems where both work together, responsibly. So, here are 3 practical steps for leaders and businesses:

1. Keep customer interactions personal: automation is great, but ensure people can reach humans when it matters.

2. Let technology do the heavy lifting: AI should handle repetitive tasks so your team can focus on strategy, complex problems, and relationships.

3. Lead with heart, not just data (and I’m a data person saying this 🤣): technology streamlines processes, but it can't build trust or inspire people.

So, your action step this week: identify one process where technology and human judgment intersect. Ask yourself:
- Is it clear where AI assistance ends and human decision-making begins?
- Do your knowledge workers feel empowered or threatened by technology?
- Is there clear human accountability for final decisions?

The magic happens at the intersection. Because a strong culture and genuine human connection will always be the foundation of a great organization.

What's your experience balancing tech and humanity in your organization?
-
The best AI products don’t replace human interaction—they enhance it. Yet too often, AI is either too passive, forcing users to do all the work, or too aggressive, automating decisions without enough human input. The real value comes when AI acts as an assistant, handling repetitive tasks while keeping humans in control. AI should provide insights, not just outputs, and adapt to users rather than forcing users to adapt to it. The goal isn’t to remove people from the equation—it’s to make them more effective. How are you thinking about balancing automation and human control in AI?
-
🌟 Navigating the Human-AI Relationship with Levinger’s Relationship Stage Theory 🌟

In the ever-evolving landscape of human-computer interaction, it's useful to draw inspiration from existing relationship theories to guide us in crafting meaningful (and effective!) human-AI collaborations. One such guiding principle is "Levinger's Relationship Stage Theory", traditionally applied to interpersonal relationships but increasingly relevant to our dynamic with AI. At each stage of Levinger's model - acquaintance, buildup, continuation, deterioration, and termination - we can find valuable insights for designing human-AI interactions that are trustworthy, engaging, and ultimately beneficial.

🤝 Acquaintance: Building initial trust by introducing AI in a transparent and accessible manner, enabling users to feel comfortable and curious.

🔗 Buildup: Developing deeper connections through personalized experiences and adaptive learning, as well as ensuring that AI systems align with users' evolving needs and goals.

🌿 Continuation: Maintaining a sustainable relationship by continuously improving AI capabilities and fostering a sense of mutual growth and collaboration.

⚠️ Deterioration: Recognizing signs of diminishing trust or satisfaction, and addressing concerns through feedback loops and responsive updates.

🚪 Termination: Understanding when an AI relationship has reached its end, respecting user autonomy, and ensuring a seamless transition or closure.

I took the model a bit further and mapped it to the "Community Commitment Curve" made popular by Douglas Atkin. Basically, as a human and an AI exchange 'artifacts', the commitment in the relationship increases: they share more "consequential" artifacts, building trust over time. These frameworks provide a robust foundation for creating AI systems that prioritize user experience and trust.
By thoughtfully considering each stage and commitment step, we can design AI interactions that not only meet users' needs but also empower them as collaborators in a shared journey. Hope this helps all the human-computer designers & builders out there!
-
𝐇𝐮𝐦𝐚𝐧-𝐅𝐢𝐫𝐬𝐭 𝐋𝐞𝐚𝐝𝐞𝐫𝐬𝐡𝐢𝐩: 𝐀𝐥𝐢𝐠𝐧𝐢𝐧𝐠 𝐈𝐧𝐧𝐨𝐯𝐚𝐭𝐢𝐨𝐧 𝐰𝐢𝐭𝐡 𝐏𝐞𝐨𝐩𝐥𝐞 𝐚𝐧𝐝 𝐏𝐮𝐫𝐩𝐨𝐬𝐞

“Human-first” means approaching innovation, AI, and enterprise transformation in a way that prioritizes people at the center of every decision. It’s about creating systems and processes that enhance human potential, while ensuring technology serves as an enabler of trust, clarity, and empowerment. By leveraging ACT (𝐀𝐥𝐢𝐠𝐧𝐦𝐞𝐧𝐭, 𝐂𝐥𝐚𝐫𝐢𝐭𝐲, 𝐓𝐫𝐚𝐧𝐬𝐩𝐚𝐫𝐞𝐧𝐜𝐲), this approach ensures that innovation is guided by leadership principles that respect, elevate, and embolden the workforce.

𝐀𝐩𝐩𝐥𝐲𝐢𝐧𝐠 𝐭𝐡𝐞 𝐀𝐂𝐓 𝐌𝐨𝐝𝐞𝐥 𝐭𝐨 𝐚 𝐇𝐮𝐦𝐚𝐧-𝐅𝐢𝐫𝐬𝐭 𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡:

1. 𝐀𝐥𝐢𝐠𝐧𝐦𝐞𝐧𝐭:
• Innovation must align with both individual and organizational goals.
• Ensure AI and automation integrate seamlessly with workflows, enabling employees to do their best work by focusing on higher-value, creative tasks.
• Align ethical and cultural values with technological progress to maintain trust and engagement across teams.

2. 𝐂𝐥𝐚𝐫𝐢𝐭𝐲:
• Simplify the adoption of new technologies by making processes, roles, and AI capabilities clear and accessible.
• Provide employees with clear paths for training and development, enabling them to confidently work alongside AI systems.
• Communicate the “why” behind changes, ensuring everyone understands the vision and purpose of the innovation.

3. 𝐓𝐫𝐚𝐧𝐬𝐩𝐚𝐫𝐞𝐧𝐜𝐲:
• Make AI systems explainable, visible, and accountable, building trust in their outputs and decisions.
• Foster an open culture where employees can give feedback on how technology impacts their roles.
• Create transparency in leadership, ensuring employees see how decisions about technology benefit them and the organization.

𝐄𝐧𝐚𝐛𝐥𝐞, 𝐄𝐦𝐩𝐨𝐰𝐞𝐫, 𝐄𝐦𝐛𝐨𝐥𝐝𝐞𝐧:
• 𝐄𝐧𝐚𝐛𝐥𝐞: Provide employees with the right tools, frameworks, and training to embrace AI and innovation with confidence.
• 𝐄𝐦𝐩𝐨𝐰𝐞𝐫: Let people take ownership of how technology integrates into their work, fostering creativity and innovation.
• 𝐄𝐦𝐛𝐨𝐥𝐝𝐞𝐧: Create a culture where people feel supported and inspired to take risks, explore new ideas, and challenge the status quo. A human-first approach, guided by the ACT model, ensures that introducing new ideas, innovations, and AI systems strengthens the workforce rather than displacing it. It’s about crafting a path forward where leadership and technology serve as partners in empowering individuals and driving enterprise success. 𝗡𝗼𝘁𝗶𝗰𝗲: The views within any of my posts, are not those of my employer. 𝗟𝗶𝗸𝗲 👍 this? Feel free to reshare, repost, and join the conversation. #humanfirst #leadership #people Gartner Peer Experiences Forbes Technology Council Theia Institute™ VOCAL Council InsightJam.com Solutions Review PEX Network IgniteGTM
-
As Generative AI (GenAI) becomes more commonplace, a new human superpower will emerge. There will be those with expert ability at getting quality information from LLMs (large language models), and those without. This post provides simple tips and tricks to help you gain that superpower.

TL;DR: To better interact with specific #GenAI tools, bring focused problems, provide sufficient context, engage in interactive and iterative conversations, and utilize spoken audio for a more natural interaction.

A couple of background notes: I'm an applied linguist by education; historically, a communicator by trade (human-to-human communication); and passionate about responsibly guiding the future of AI at Honeywell. When we announced a pilot program last year to trial the use of LLMs in our daily work, I jumped on the opportunity. The potential for increased productivity and creativity was of course a large draw, but so was the opportunity to explore an area of linguistics I haven't touched in over a decade: human-computer interaction and communication (computational linguistics).

Words are essential elements of effective communication, shaping how messages are perceived, understood, and acted upon. Just as in H2H communication, the words we use in conversation with LLMs largely shape the outcome of the interaction, in both user experience and output quality. A drawback is that we often approach an LLM like a search engine, just looking for answers. Instead, we must approach it like a conversation partner. This will feel like more work for a human, which is often discouraging. ChatGPT has a reputation of being a "magical" tool or solution. When we find out it's not an easy button but actually requires work and input, we're demotivated. But in reality, the AI tool is pulling your best thinking from you.

How to have an effective conversation with AI:

1. Bring a focused problem. Instead of asking, "What recommendations would you make for using ChatGPT?" start with, "I'm writing a blog post and I'd like to give concrete, tangible suggestions to professionals who haven't had much exposure to ChatGPT."

2. Provide good and enough context. Hot tip: ask #ChatGPT to ask you for the context. "I'm writing a LinkedIn post on human-computer interaction. Ask me 3 questions that would help me provide you with sufficient context to assist me with writing this post."

3. Make your conversation interactive and iterative, just as you would with a human. Never accept the first response. (Imagine if we did this in H2H conversation.)

4. Interact via an app versus the web. Some web browsers mimic a search box, which influences how we interact with the tool. Try to use spoken audio. Talk naturally. And try using different models, just as you would speak with different friends for advice.

What tips can you share?

A special shout out to Stanford Graduate School of Business' Think Fast, Talk Smart podcast for some of the input exchanged here. Sapan Shah Laura Kelleher Tena Mueller Adam O'Neill
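Tips 1-3 can be combined into a reusable conversation scaffold. The sketch below is illustrative only: the `{"role", "content"}` message format mirrors common chat APIs, the prompt wording is adapted from the examples above, and `build_conversation` is a hypothetical helper, not any vendor's API. Plug the resulting list into whichever model client you use.

```python
def build_conversation(problem, context_answers):
    """Assemble an iterative chat history following the tips above:
    a focused problem, model-elicited context, then a refinement turn."""
    messages = [
        # Tip 1 + Tip 2: open with a focused problem and ask the model
        # to elicit the context it needs.
        {"role": "user", "content": (
            f"I have a focused problem: {problem}. "
            "Ask me 3 questions that would help me provide you with "
            "sufficient context to assist me."
        )},
    ]
    # Tip 2 (continued): feed back the context the model asked for.
    for answer in context_answers:
        messages.append({"role": "user", "content": answer})
    # Tip 3: never accept the first response; queue a refinement turn.
    messages.append({"role": "user", "content": (
        "Now draft your response, then suggest two ways to improve it."
    )})
    return messages
```

The point of the scaffold is that the human stays in the loop at every step: each element of `context_answers` is a real answer you typed, not something the model filled in for itself.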
-
Humanizing AI Through the Kano Model

In an era where generative AI has become a ubiquitous offering, true differentiation lies not in merely adopting the technology but in integrating human values into its core. Building on my earlier discussion about applying the Kano Model to Gen AI strategy, let’s explore how this framework can refocus development metrics to prioritize ethics and human-centricity. By aligning AI systems with human needs, organizations can shift from functional tools to trusted partners that inspire lasting loyalty.

Traditional metrics such as speed, scalability, and model accuracy have evolved into basic expectations, the “must-haves” of AI. What truly elevates a product today is its ability to embody values like safety, helpfulness, dignity, and harmlessness. These qualities, categorized as “delighters” in the Kano Model, transform AI from a transactional tool into a meaningful collaborator.

Key Human-Centric Differentiators:

Safety: Proactive safeguards must ensure AI systems protect users from risks, whether physical, emotional, or societal. Safety is non-negotiable in building trust.

Helpfulness: Personalized, context-aware interactions demonstrate empathy. AI should anticipate needs and adapt to individual preferences, turning routine tasks into meaningful experiences.

Dignity: Ethical design principles of fairness, transparency, and privacy must underpin AI development. Respecting user autonomy fosters long-term trust and engagement.

Harmlessness: AI outputs and recommendations should prioritize user well-being, avoiding unintended consequences like bias, misinformation, or psychological harm.

This human-centered approach represents a paradigm shift in technology development. While traditional KPIs remain important, they are no longer sufficient to stand out in a crowded market. Organizations that embed human values into their AI systems will not only meet user expectations but exceed them, creating emotional connections that drive loyalty. By applying the Kano Model, businesses can systematically align innovation with ethics, ensuring technology serves humanity rather than the other way around. The future of AI isn’t just about efficiency; it’s about elevating human potential through thoughtful, responsible design.

How is your organization balancing technical excellence with human values?
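For readers who want to operationalize this, the standard Kano questionnaire classifies each attribute from a pair of survey answers: how users feel when the attribute is present ("functional") and when it is absent ("dysfunctional"). Below is a deliberately simplified sketch of that evaluation logic (the full Kano table covers more answer combinations); the example attributes in the comments echo the post's framing of accuracy and speed as must-haves and dignity as a delighter.

```python
# Simplified Kano evaluation. Each product attribute is classified from
# two survey answers, each one of:
# "like", "expect", "neutral", "tolerate", "dislike".

def kano_category(functional, dysfunctional):
    """Classify an attribute from its functional (present) and
    dysfunctional (absent) survey answers."""
    if functional == "like" and dysfunctional == "dislike":
        return "performance"   # more is better, e.g. helpfulness
    if functional == "like":
        return "delighter"     # unexpected value, e.g. dignity, empathy
    if dysfunctional == "dislike":
        return "must-be"       # taken for granted, e.g. accuracy, speed
    if functional == "dislike":
        return "reverse"       # users actively do not want it
    return "indifferent"       # neither presence nor absence matters
```

Running each candidate metric through a classification like this makes the post's argument concrete: if an attribute lands in "must-be", improving it further will not differentiate the product, while "delighters" are where human-centric investment pays off.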