AI in Knowledge Work Productivity

Explore top LinkedIn content from expert professionals.

  • View profile for Ethan Mollick
    Ethan Mollick is an Influencer
    372,948 followers

    In our new paper we ran an experiment at Procter and Gamble with 776 experienced professionals solving real business problems. We found that individuals randomly assigned to use AI did as well as a team of two without AI. And AI-augmented teams produced more exceptional solutions. The teams using AI were happier as well. Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work, and commercial people with AI produced more technical solutions. The standard model of "AI as productivity tool" may be too limiting. Today's AI can function as a kind of teammate, offering better performance, expertise sharing, and even positive emotional experiences. This was a massive team effort, with work led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani along with Hila Lifshitz, Raffaella Sadun, Lilach M., me, and our partners at P&G: Yi Han, Jeff Goldman, Hari Nair and Stewart Taub. Substack about the work here: https://lnkd.in/ehJr8CxM Paper: https://lnkd.in/e-ZGZmW9

  • View profile for Vas Narasimhan
    Vas Narasimhan is an Influencer

    Reimagining medicine as CEO of Novartis

    435,709 followers

    Right now, every CEO is wondering the same thing: "How can artificial intelligence help maximize our impact?"

    Delivering on the promise of AI isn't just good business; it has the potential to help us address some of society's most pressing challenges. So today, I wanted to offer a closer look at how AI is helping us discover new medicines at Novartis.

    The process of identifying a new drug, running patient clinical trials, and bringing it to market takes over a decade. Each new medicine costs on average $2 billion to develop, and we know nearly 9 in 10 of the treatments we work on will fail before they ever reach patients.

    A major early step in that process is identifying individual targets in the body that we want to design a drug for. Once we identify that target, which is most commonly a protein, we look for molecules that might address the target's underlying issue. Ultimately, those molecular structures form the basis for every successful treatment.

    Unlocking the right protein and molecular structures is complex stuff: each step often takes years to get right, and our scientists consider billions of potential chemical structures that might lead to effective and safe drug candidates.

    AI offers us the chance to accelerate that process. Working with partners at Isomorphic Labs, including members of the Google DeepMind team who were awarded the Nobel Prize this year, we're now able to do things like model how a protein folds and interacts with the molecules we design. AI models also make it possible for us to analyze different chemical structures simultaneously. That has the potential to add up to significant time savings for our drug development scientists as they work to predict which molecules might treat specific diseases, better and faster.

    We're just at the beginning of what this technology can do. As we incorporate AI throughout Novartis' work, I'm excited to see all the ways it helps us unlock the mysteries of human biology, so we can deliver better medicines that improve and extend patients' lives.

  • View profile for Pascal BORNET

    #1 Top Voice in AI & Automation | Award-Winning Expert | Best-Selling Author | Recognized Keynote Speaker | Agentic AI Pioneer | Forbes Tech Council | 2M+ Followers ✔️

    1,517,954 followers

    Wikipedia traffic is collapsing, and it's not just because of AI. Wikipedia just reported an 8% drop in human visits in just a few months. The reason? AI systems, the same ones trained on Wikipedia, are now answering questions instead of sending users there. The free encyclopedia is being replaced by the knowledge it taught.

    That irony stopped me cold. I've always seen Wikipedia as the internet's moral compass: messy, human, collaborative. When I was learning about anything new, I didn't go for perfection. I went for context. Now I rarely visit it. AI gives me the answer instantly, but never the understanding that came from scrolling, cross-checking, exploring footnotes. Somewhere along the way, convenience quietly replaced curiosity.

    Here's what's really going on beneath the numbers:
    → AI is not just summarizing information; it's absorbing the audience that once sustained the sources.
    → When answers appear directly on search pages, the human loop of reading, editing, and donating breaks.
    → And as fewer humans visit, fewer volunteers contribute, shrinking the very ecosystem AI depends on.

    It's the classic paradox of automation: AI is killing the teachers it learned from. If knowledge itself is becoming automated, we need to rebuild the habit of participation. Here's what I believe that looks like:
    ✅ Credit and link back to the human sources behind AI summaries.
    ✅ Support open, editable knowledge platforms, not just polished AI outputs.
    ✅ Remember that understanding comes from reading, not just receiving.

    Because if we stop feeding the commons of human knowledge, we won't just lose Wikipedia; we'll lose the curiosity that made the internet worth exploring in the first place.

    #AI #Wikipedia #KnowledgeEconomy #AIEthics #Publishing #InformationFuture #DigitalCulture

  • View profile for Nick Bloom
    Nick Bloom is an Influencer

    Stanford Professor | LinkedIn Top Voice In Remote Work | Co-Founder wfhresearch.com | Speaker on work from home

    72,235 followers

    Just out in Harvard Business Review: a summary of the Hybrid Experiment results and lessons on how to make hybrid succeed.

    Experiment: randomize 1,600 graduate employees in marketing, finance, accounting, and engineering at Trip.com into five days a week in the office, or three days a week in the office and two days a week WFH. Analyzed two years of data.

    Two key results:
    A) Hybrid and fully in-office showed no differences in productivity, performance review grade, promotion, learning, or innovation.
    B) Hybrid had a higher satisfaction rate and 35% lower attrition. Quit-rate reductions were largest for female employees.

    Four managerial lessons:
    1) Hybrid needs a strong performance management system so managers don't need to hover over employees at their desks to check their progress. Trip.com had an extensive performance review process every six months.
    2) Coordinate in-office days at the team or company level. Schedule clarity prevents the frustration of coming to an empty office only to participate in Zoom calls. Trip.com coordinated WFH on Wednesday and Friday.
    3) Having leadership buy-in is critical (as with most management practices). Trip.com's CEO and C-suite all support the hybrid policy.
    4) A/B test new policies (as well as products) if possible. New policies often turn out to be unexpectedly profitable. Trip.com made millions of dollars more in profit from hybrid by cutting expensive turnover.

  • View profile for Usman Sheikh

    I co-found companies with experts ready to own outcomes, not give advice.

    55,996 followers

    Is this the fastest-growing AI startup of 2025? $10M revenue run-rate in 8 weeks. Here's how:

    It turns your ideas into apps, in seconds. But that's just the surface. The real story starts with failure:
    → First launch: Good tech, wrong approach
    → Second launch: Better tech, same problems
    → Third launch: Everything changed

    Most AI coding tools fail the same way:
    → They get exponentially worse with complexity
    → Projects get stuck halfway through
    → Changes cascade into errors
    → Simple demos never become real products

    Lovable cracked the code differently:
    → Built AI that learns from its mistakes
    → Created systematic error detection
    → Focused on one perfect tech stack
    → Turned every failure into improvement

    Think SpaceX for software:
    → Don't aim for perfect first attempts
    → Build systems that learn from failure
    → Iterate faster than anyone else
    → Turn experience into institutional knowledge

    The market felt it immediately:
    → Non-technical founders shipping full products
    → Designers building without engineers
    → Product managers coding their own features
    → Engineers focusing on core innovation

    The knock-on effects exploded. When implementation becomes instant:
    → Ideas can be tested in real time
    → Innovation cycles compress to hours
    → Market feedback becomes immediate
    → Competitive advantage shifts entirely

    This isn't just about software anymore. This is about how value gets created.

    The pattern is becoming clearer:
    → When systems can improve themselves
    → When expertise becomes programmable
    → When iteration becomes instant
    → Everything we know about work transforms

    Look at your industry:
    → Lawyers won't write contracts; they'll architect legal frameworks
    → Strategists won't analyze data; they'll discover hidden opportunities
    → Designers won't draw; they'll craft experiences
    → Analysts won't process data; they'll define what matters

    The new game:
    → Expertise shifts from doing to designing
    → Value moves from doing to orchestrating
    → Competition becomes about system creation
    → Human capability scales through AI partnership

    The winners will:
    → Build systems that learn and improve
    → Design patterns for AI to execute
    → Create feedback loops that compound
    → Scale their impact exponentially

    The losers will:
    → Defend traditional expertise
    → Compete with AI directly
    → Miss the system-level play
    → Stay stuck doing instead of orchestrating

    Lovable isn't just growing fast. They're showing us how work evolves. The question isn't whether this transformation is coming. Lovable proves it's already here.

  • View profile for Shobhit Tankha

    🧿 Gaudium Dei fortitudo mea est

    7,679 followers

    A lot of AI engineers (even sharp ones) get seduced by the cool factor of vector databases. Cosine similarity, ANN search... it all sounds cutting-edge. But when you're building a Retrieval-Augmented Generation (RAG) pipeline, you're not just doing retrieval. You're orchestrating a semantic symphony between memory, context, and reasoning. And that's where many go off the rails.

    ❌ The Mistake: Vector First, Think Later
    Vector DBs are fantastic if:
    • Your knowledge is flat, unstructured, and mostly text
    • You want fast nearest-neighbor search over embeddings
    • You're okay with opaque black-box retrieval
    But the moment your domain knowledge has structure, hierarchies, relationships, or rules that need to be preserved across hops, vector search starts hallucinating. Hard. Because embedding space flattens knowledge. It smears out the sharp logic. It doesn't understand that "Paris is the capital of France and a city in Europe and has museums related to Impressionism." A vector DB just knows "Paris" is semantically close to "Eiffel Tower." Wow. Groundbreaking.

    🧭 What You Should Be Using: Knowledge Graphs
    If your use case has:
    • Ontologies (types, classes, hierarchies)
    • Multi-hop reasoning (A→B→C)
    • Causality or directionality (X leads to Y, not just related to Y)
    • Entity disambiguation (which "Apple" are we talking about?)
    • A need for traceability and explainability (the why behind the answer)
    Then a Knowledge Graph (KG) is your divine weapon. Graphs don't just store facts. They encode logic, preserve causality, and let you do symbolic + neural hybrid search. They let you model the world like the world actually works, not just as a soup of cosine-clustered tokens.

    🧪 Real-World Case:
    Ask a medical LLM powered by a vector DB: "Can ibuprofen be taken with aspirin?" You might get a generic answer scraped from a webpage. Ask the same question of a KG-powered RAG. The graph knows:
    • Ibuprofen is an NSAID.
    • Aspirin is an antiplatelet.
    • There's a potential drug interaction due to increased bleeding risk.
    • This depends on patient profile → age → comorbidities → other meds.
    It can trace a path through nodes and edge types to construct a reasoned answer. This is not just retrieval. This is inference.

    🔮 Where This Is Going
    The future of RAG is hybrid:
    🔸️ Embeddings for semantic breadth
    🔸️ Graphs for logical depth
    You'll embed the leaves of the tree... but you'll walk the branches with graph logic.

    🎯 TLDR for the Impatient:
    Vector DBs are great for fuzzy recall. Knowledge Graphs are necessary for precise reasoning. And most AI engineers forget that precision is not optional in high-stakes domains like medicine, law, or finance. If your system needs to think, not just parrot, start with the graph.

    #database #vector #embeddings #knowledgegraphs #algorithms #computerscience #software #tech #medicine #law #finance #AI #RAG #LLM
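    The ibuprofen/aspirin contrast above can be sketched in a few lines of Python. This is a toy illustration, not a real system: the embedding vectors and the drug facts in the graph are invented for the example. The point is structural: cosine similarity can only report that two drugs are "close", while a typed graph can walk drug → class → interaction → risk and return a traceable answer.

```python
import math

# Toy embedding store: flat vectors can say "related", but never "why".
# Both the vectors AND the drug facts below are illustrative, not real data.
embeddings = {
    "ibuprofen":   [0.9, 0.1, 0.3],
    "aspirin":     [0.8, 0.2, 0.4],
    "paracetamol": [0.7, 0.5, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Vector search learns that "ibuprofen" is near "aspirin" -- and nothing else.
sims = {name: cosine(embeddings["ibuprofen"], vec)
        for name, vec in embeddings.items() if name != "ibuprofen"}

# Knowledge graph: typed, directed edges preserve the logic that
# embedding space flattens away.
graph = {
    ("ibuprofen", "is_a"): ["NSAID"],
    ("aspirin", "is_a"): ["antiplatelet"],
    ("NSAID", "interacts_with"): ["antiplatelet"],
    ("NSAID+antiplatelet", "risk"): ["increased bleeding"],
}

def classes_of(drug):
    return graph.get((drug, "is_a"), [])

def interaction(drug_a, drug_b):
    """Multi-hop walk: drug -> class -> interaction edge -> named risk."""
    for ca in classes_of(drug_a):
        for cb in classes_of(drug_b):
            if cb in graph.get((ca, "interacts_with"), []):
                risk = graph.get((f"{ca}+{cb}", "risk"), ["unspecified"])[0]
                return f"{drug_a} ({ca}) + {drug_b} ({cb}): {risk}"
    return None  # no interaction path found in the graph

print(interaction("ibuprofen", "aspirin"))
# → ibuprofen (NSAID) + aspirin (antiplatelet): increased bleeding
```

    The answer carries its own explanation: every hop in the path is an inspectable edge, which is exactly the traceability a similarity score cannot provide.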

  • View profile for Lenny Rachitsky
    Lenny Rachitsky is an Influencer

    Deeply researched no-nonsense product, growth, and career advice

    341,615 followers

    Is AI delivering real productivity gains? What's the ROI so far? Hot takes abound, but data have been scarce. Noam Segal and I took it upon ourselves to find out what's actually happening on the ground by running one of the largest independent, in-depth surveys on how AI is affecting productivity for tech workers (1,750 respondents). We surveyed product managers, engineers, designers, founders, and others about how they're using AI at work.

    tl;dr: AI is overdelivering.
    1. 55% of respondents say AI has exceeded their expectations, and almost 70% say it's improved the quality of their work.
    2. More than half of respondents said AI is saving them at least half a day per week on their most important tasks. We've never seen a tool deliver a productivity boost like this before.
    3. Founders are getting the most out of AI. Half (49%) report that AI saves them over 6 hours per week, dramatically higher than for any other role. Close to half (45%) also feel that the quality of their work is "much better" thanks to AI.
    4. Designers are seeing the fewest benefits. Only 45% report a positive ROI (compared with 78% of founders), and 31% report that AI has fallen below expectations, triple the rate among founders.
    5. Engineers have accepted AI as a coding partner and now want it to handle the more boring (but necessary) work of building products: documentation, code review, and writing tests.
    6. n8n is currently dominating the agent landscape, though actual adoption of agentic platforms in 2025 has been slow.
    7. A whopping 92.4% of respondents report at least one significant downside to using AI tools. There's definitely room for improvement.

    Here's the full report: https://lnkd.in/gR5G88yA

    Inside:
    - What exactly AI is doing for people, function by function
    - Where the biggest opportunities for AI startups are
    - Which AI tools have product-market fit
    - The downsides of AI productivity
    - Bonus: The state of agentic AI: promise outpaces practice
    - What this all means
    - Appendix: Who took this survey

  • View profile for Alex Wang
    Alex Wang is an Influencer

    Learn AI Together - I share my learning journey into AI & Data Science here, 90% buzzword-free. Follow me and let's grow together!

    1,125,312 followers

    Paper sharing: AI in Science Discovery & Product Innovation

    MIT researchers expanded on applying AI-driven discovery to materials science, which is making discoveries happen faster than ever! What they did was introduce an AI tool (similar to the "AI Scientist" from Sakana AI) for materials discovery to 1,800 scientists in the R&D lab of a large U.S. firm.

    Traditionally, scientific discovery is labor-intensive and manual: a process full of trial and error, where scientists conceptualize various potential structures and then test their properties. Here's how AI tackles it: AI generates ideas, prioritizes promising materials, tests them, and iterates on any false positives, refining until it finds viable options. Once validated, these materials can be patented and commercialized. This entire process runs much faster, and the impact is striking. Researchers with AI assistance discovered 44% more materials, filed 39% more patents, and saw a 17% jump in downstream product innovation.

    Interestingly, the benefits are more unequally divided than we might have assumed. Top researchers nearly doubled their output, while the bottom third saw little improvement. This divide is partly because AI automates 57% of idea-generation work, allowing top scientists to focus on testing rather than preliminary research. Another downside is that 82% of scientists reported feeling less fulfilled, citing reduced creativity and underutilized skills.

    References/Paper: https://lnkd.in/g3sZdAbJ
    __________________
    I share my learning journey here. Join me and let's grow together. For more on AI and learning materials, please check my previous posts. Alex Wang

  • View profile for Yamini Rangan
    Yamini Rangan is an Influencer
    165,437 followers

    How can leaders transform their teams to be AI-first? It starts with mindset.

    An AI-first mindset means:
    - Seeing AI as an opportunity, not a threat.
    - Viewing AI as a tool to augment teams, not just automate tasks.
    - Using AI to reimagine work, not just optimize it.

    As leaders, it's on us to build this mindset within our teams. Here are 5 ways we do this at HubSpot:

    1. Use AI daily: Lead by example; trust grows when teams see leaders embrace AI themselves. I use it every day and share very specific use cases with our company on how I use it. Now every leader is doing the same with their teams. The result is that almost everyone in the company will be using AI daily by the end of the year.
    2. Apply constraints: Give clear, focused challenges. We kept headcount flat in Support while growing the customer base by 20%+. The result: the team innovated with AI and overachieved the target. Smart constraints drive innovation.
    3. Establish tiger teams: Empower small, agile groups to experiment, innovate, and teach the organization. We have AI tiger teams in every function; they share progress in Slack channels, and there is so much energy with small groups experimenting and learning.
    4. Be a learn-it-all: Foster a culture of continuous learning. Share openly about successes and failures alike. We have dedicated two full days this quarter to learning and scaling with AI as a company, with great speakers lined up, ways to experiment, and gamified learning.
    5. Measure progress and share it: Track which teams are completing learning modules and using AI every day, and share that openly. A little healthy competition goes a long way in driving AI fluency.

    AI isn't just a technology shift. It's fundamentally reshaping how work gets done, and that requires shifting our mindset first. Leaders who embrace AI now will unlock creativity, performance, and impact. Are you building an AI-first mindset with your team?

    #Leadership #AI #Innovation #Mindset #FutureOfWork

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect | AI Engineer | Generative AI | Agentic AI

    708,481 followers

    In the world of Generative AI, 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹-𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 (𝗥𝗔𝗚) is a game-changer. By combining the capabilities of LLMs with domain-specific knowledge retrieval, RAG enables smarter, more relevant AI-driven solutions. But to truly leverage its potential, we must follow some essential 𝗯𝗲𝘀𝘁 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀:

    1️⃣ 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝗮 𝗖𝗹𝗲𝗮𝗿 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲
    Define your problem statement. Whether it's building intelligent chatbots, document summarization, or customer support systems, clarity on the goal ensures efficient implementation.

    2️⃣ 𝗖𝗵𝗼𝗼𝘀𝗲 𝘁𝗵𝗲 𝗥𝗶𝗴𝗵𝘁 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗕𝗮𝘀𝗲
    - Ensure your knowledge base is 𝗵𝗶𝗴𝗵-𝗾𝘂𝗮𝗹𝗶𝘁𝘆, 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲𝗱, 𝗮𝗻𝗱 𝘂𝗽-𝘁𝗼-𝗱𝗮𝘁𝗲.
    - Use vector embeddings (e.g., pgvector in PostgreSQL) to represent your data for efficient similarity search.

    3️⃣ 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗲 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹 𝗠𝗲𝗰𝗵𝗮𝗻𝗶𝘀𝗺𝘀
    - Use hybrid search techniques (semantic + keyword search) for better precision.
    - Tools like 𝗽𝗴𝗔𝗜, 𝗪𝗲𝗮𝘃𝗶𝗮𝘁𝗲, or 𝗣𝗶𝗻𝗲𝗰𝗼𝗻𝗲 can enhance retrieval speed and accuracy.

    4️⃣ 𝗙𝗶𝗻𝗲-𝗧𝘂𝗻𝗲 𝗬𝗼𝘂𝗿 𝗟𝗟𝗠 (𝗢𝗽𝘁𝗶𝗼𝗻𝗮𝗹)
    - If your use case demands it, fine-tune the LLM on your domain-specific data for improved contextual understanding.

    5️⃣ 𝗘𝗻𝘀𝘂𝗿𝗲 𝗦𝗰𝗮𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆
    - Architect your solution to scale. Use caching, indexing, and distributed architectures to handle growing data and user demands.

    6️⃣ 𝗠𝗼𝗻𝗶𝘁𝗼𝗿 𝗮𝗻𝗱 𝗜𝘁𝗲𝗿𝗮𝘁𝗲
    - Continuously monitor performance using metrics like retrieval accuracy, response time, and user satisfaction.
    - Incorporate feedback loops to refine your knowledge base and model performance.

    7️⃣ 𝗦𝘁𝗮𝘆 𝗦𝗲𝗰𝘂𝗿𝗲 𝗮𝗻𝗱 𝗖𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝘁
    - Handle sensitive data responsibly with encryption and access controls.
    - Ensure compliance with industry standards (e.g., GDPR, HIPAA).

    With the right practices, you can unlock RAG's full potential to build powerful, domain-specific AI applications. What are your top tips or challenges?
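    The hybrid search idea in point 3️⃣ can be sketched with a self-contained Python toy. This is purely illustrative: the documents, the scoring functions (simple term overlap standing in for a BM25 keyword index, token Jaccard standing in for embedding cosine similarity), and the alpha weighting are all assumptions of the sketch, not any particular tool's API.

```python
from collections import Counter

# Toy corpus; in a real pipeline these would be chunks stored in a vector
# database (e.g. pgvector) alongside a keyword index.
docs = {
    "d1": "reset your password from the account settings page",
    "d2": "billing questions and refund policy for subscriptions",
    "d3": "how to change or reset a forgotten account password",
}

def keyword_score(query, doc):
    """Term-frequency overlap, standing in for a BM25/keyword index."""
    terms, counts = set(query.lower().split()), Counter(doc.lower().split())
    return sum(counts[t] for t in terms)

def semantic_score(query, doc):
    """Token Jaccard overlap, standing in for embedding cosine similarity."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def hybrid_search(query, alpha=0.5):
    """Blend both signals; alpha trades off semantic vs keyword relevance."""
    scores = {
        doc_id: alpha * semantic_score(query, text)
                + (1 - alpha) * keyword_score(query, text)
        for doc_id, text in docs.items()
    }
    # Return doc ids ranked by blended score, best first.
    return sorted(scores, key=scores.get, reverse=True)

print(hybrid_search("reset account password"))
```

    The design point is that the two signals fail differently: keyword search misses paraphrases, semantic search can surface vaguely related text, and blending them (here via a weighted sum; production systems often use reciprocal rank fusion instead) tends to rank the genuinely relevant documents above both failure modes.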
