Productivity

Explore top LinkedIn content from expert professionals.

  • View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    234,778 followers

    𝗗𝗮𝘁𝗮 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗶𝘀 𝗼𝗻𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗺𝗼𝘀𝘁 𝗺𝗶𝘀𝘂𝗻𝗱𝗲𝗿𝘀𝘁𝗼𝗼𝗱 𝘁𝗼𝗽𝗶𝗰𝘀 𝗶𝗻 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲. Because most people explain it from the inside out: policies, councils, standards, stewardship. But the business does not buy any of that. The business buys outcomes:
    → trustworthy KPIs
    → vendor and partner data you can actually use
    → faster financial close
    → fewer reporting escalations
    → smoother M&A integration
    → AI you can deploy without creating risk debt

    Most AI programs fail for boring reasons: nobody owns the data, quality is unknown, access is messy, accountability is missing.

    𝗦𝗼 𝗹𝗲𝘁’𝘀 𝘀𝗶𝗺𝗽𝗹𝗶𝗳𝘆 𝗶𝘁. 𝗗𝗮𝘁𝗮 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗶𝘀 𝗳𝗼𝘂𝗿 𝘁𝗵𝗶𝗻𝗴𝘀:
    → ownership
    → quality
    → access
    → accountability

    𝗔𝗻𝗱 𝗶𝘁 𝗯𝗲𝗰𝗼𝗺𝗲𝘀 𝘃𝗲𝗿𝘆 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝘄𝗵𝗲𝗻 𝘆𝗼𝘂 𝘁𝗵𝗶𝗻𝗸 𝗶𝗻 𝟰 𝗹𝗮𝘆𝗲𝗿𝘀:

    1. Data Products (what the business consumes)
    → a named dataset with an owner and SLA
    → clear definitions + metric logic
    → documented inputs/outputs and intended use
    → discoverable in a catalog
    → versioned so changes don’t break reporting

    2. Data Management (how products stay reliable)
    → quality rules + monitoring (freshness, completeness, accuracy; see the sketch below this post)
    → lineage (where it came from, where it’s used)
    → master/reference data alignment
    → metadata management (business + technical)
    → access controls and retention rules

    3. Data Governance (who decides, who is accountable)
    → data ownership model (domain owners, stewards)
    → decision rights: who can change KPI definitions, thresholds, and sources
    → issue management: triage, escalation paths, resolution SLAs
    → policy enforcement: what’s mandatory vs optional
    → risk and compliance alignment (auditability, approvals)

    4. Data Operating Model (how you scale across the enterprise)
    → domain-based setup (data mesh or not, but clear domains)
    → operating cadence: weekly issue review, monthly KPI governance, quarterly standards
    → stewardship at scale (roles, capacity, incentives)
    → cross-domain decision-making for shared metrics
    → enablement: templates, playbooks, tooling support

    If you want to start fast: Pick the 10 metrics that run the business. Assign an owner. Define decision rights + escalation. Then build the data products around them.

    ↓
    𝗜𝗳 𝘆𝗼𝘂 𝘄𝗮𝗻𝘁 𝘁𝗼 𝘀𝘁𝗮𝘆 𝗮𝗵𝗲𝗮𝗱 𝗮𝘀 𝗔𝗜 𝗿𝗲𝘀𝗵𝗮𝗽𝗲𝘀 𝘄𝗼𝗿𝗸 𝗮𝗻𝗱 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀, 𝘆𝗼𝘂 𝘄𝗶𝗹𝗹 𝗴𝗲𝘁 𝗮 𝗹𝗼𝘁 𝗼𝗳 𝘃𝗮𝗹𝘂𝗲 𝗳𝗿𝗼𝗺 𝗺𝘆 𝗳𝗿𝗲𝗲 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿: https://lnkd.in/dbf74Y9E
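
    To make the "quality rules + monitoring" bullet in layer 2 concrete, here is a minimal sketch of two automated checks for one data product. It assumes a pandas DataFrame; the file path, column names, and 24-hour SLA are invented for illustration, not taken from the post.

      import pandas as pd

      # Hypothetical data product: a daily revenue table with a named owner and SLA.
      df = pd.read_parquet("revenue_daily.parquet")  # placeholder path

      # Freshness check: the newest load must be under 24 hours old (invented SLA).
      latest = pd.to_datetime(df["loaded_at"], utc=True).max()
      age = pd.Timestamp.now(tz="UTC") - latest
      assert age < pd.Timedelta(hours=24), f"freshness SLA breached: data is {age} old"

      # Completeness check: key business columns must not contain nulls.
      for col in ("order_id", "revenue_usd"):
          null_rate = df[col].isna().mean()
          assert null_rate == 0, f"{col} has {null_rate:.1%} nulls"

    In practice, checks like these run on a schedule, and failures route to the data product's owner through the issue-management paths described in layer 3.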

  • View profile for Daniel Pink
    Daniel Pink is an Influencer
    417,061 followers

    The most dangerous time of the day is the afternoon, and science proves it. Your afternoon slump isn’t just about feeling tired. It's way worse than that.

    Research shows that standardized test scores drop in the afternoon. Anesthesia errors are three times more likely at 3 PM than at 9 AM. Doctors find fewer polyps during colonoscopies later in the day. Car accidents spike between 2 PM and 4 PM.

    Here's the thing: your brain just doesn't perform at its best in the afternoon. It's the trough of your day, a biological dip in energy and focus about seven hours after you wake up.

    So how do you beat it? Here are three simple fixes:
    Number one, schedule your most important work in the morning.
    Number two, take a strategic break. Research shows even 10 minutes helps.
    Number three, avoid making big decisions between 2 PM and 4 PM.

    Afternoons are risky, but now you know how to outsmart them.

  • View profile for Brett Mathews
    Brett Mathews is an Influencer

    Editor @ Apparel Insider | Editorial, Copywriting

    45,257 followers

    STUDY FINDS COST PER WEAR INFORMATION SHIFTS SHOPPERS TO QUALITY: A new study published in Psychology & Marketing offers a fascinating look at what drives fashion purchasing decisions. Researchers from the University of Bath and Cambridge University found that simply showing consumers the cost per wear (CPW) of garments (price divided by the number of times an item can be worn) can shift preferences away from cheap, low-quality clothing toward higher-priced, longer-lasting options.

    The findings draw on behavioural psychology to reveal that people respond more to perceived 'economic value' than to abstract sustainability messages. When shoppers could compare CPW between garments, and especially when figures were backed by trusted certification, they were far more likely to choose quality over quantity. The authors suggest CPW could be a powerful tool for brands and policymakers seeking to reframe sustainability as smart spending.

    Full story in comments.
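
    For reference, here is the cost-per-wear arithmetic the study describes, as a tiny Python sketch. The prices and wear counts are invented for illustration and are not figures from the paper.

      # Cost per wear (CPW) = price / number of times the item can be worn.
      def cost_per_wear(price: float, wears: int) -> float:
          return price / wears

      # Invented example figures: the cheap garment costs less up front,
      # but the longer-lasting one is cheaper per wear.
      print(f"Fast fashion: ${cost_per_wear(15.00, 5):.2f}/wear")    # $3.00/wear
      print(f"Quality item: ${cost_per_wear(90.00, 120):.2f}/wear")  # $0.75/wear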

  • View profile for Hemant Batra

    Legal Futurist | Global Corporate & UN Lawyer | LinkedIn '24 Top Voice | Podcaster | Author – The Law Firm Playbook | Global Thought Leader | ALSP/VLF Pioneer | TV Host – Laws That Shaped India (Sansad TV - Parliament)

    34,508 followers

    Law alone is no longer enough. Clients today don’t just want a memo on risk. They want to know how that risk impacts their product launch, their valuation, and their compliance in a world driven by AI and global regulation.

    This is why multidisciplinary legal teams are emerging as winners. Lawyers who collaborate with economists, engineers, coders, and policy experts aren’t sidelined. They lead. They shape strategy, deliver clarity, and redefine value for clients.

    I’ve explored this shift in my latest column, "Multidisciplinary Legal Teams Are Winning: Here’s Why" (see below). Would love to hear your take. Are we ready to break away from traditional models and embrace hybrid teams?

    #Law #LegalInnovation #FutureOfWork #Leadership #LegalProfession #Strategy

  • View profile for Jeff Winter
    Jeff Winter is an Influencer

    Industry 4.0 & Digital Transformation Enthusiast | Business Strategist | Avid Storyteller | Tech Geek | Public Speaker

    170,567 followers

    Everyone wants AI. But what are they actually funding?

    According to Deloitte’s latest survey of 600 manufacturing executives, the answer is clear:
    They’re funding data 𝐟𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧𝐬.
    They’re funding 𝐜𝐨𝐧𝐧𝐞𝐜𝐭𝐢𝐯𝐢𝐭𝐲.
    They’re funding automation 𝐢𝐧𝐟𝐫𝐚𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞.
    They’re not buying the hype—they’re building the 𝐛𝐚𝐜𝐤𝐛𝐨𝐧𝐞.

    • 𝟕𝟖% of manufacturers are spending more than 20% of their improvement budgets on smart manufacturing.
    • 𝟒𝟎% say data analytics is a top investment priority.
    • 𝟐𝟗% are putting cloud and AI next.
    • 𝟑𝟒% are focused on active sensors—the eyes of their factories.

    Why? Because without clean, connected, contextualized data, none of the shiny stuff works.

    This isn’t a pilot phase. This is the build phase—and it’s quietly transforming how factories think, sense, and act.

    Despite all the tech, the lowest maturity score? 𝐇𝐮𝐦𝐚𝐧 𝐜𝐚𝐩𝐢𝐭𝐚𝐥. Manufacturers know the systems are coming online. Now they’re scrambling to bring the people along.

    So if you're a manufacturer still working off spreadsheets and tribal knowledge—know this: Your competitors aren’t just automating. They’re upgrading their operational IQ. And if you’re not investing in your digital foundation today… You’re budgeting for irrelevance tomorrow.

    𝐑𝐞𝐚𝐝 𝐟𝐮𝐥𝐥 𝐫𝐞𝐩𝐨𝐫𝐭: https://lnkd.in/e6_QsJcw

    *******************************************
    • Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
    • Ring the 🔔 for notifications!

  • View profile for Ethan Mollick
    Ethan Mollick is an Influencer
    372,938 followers

    In our new paper we ran an experiment at Procter and Gamble with 776 experienced professionals solving real business problems. We found that individuals randomly assigned to use AI did as well as a team of two without AI. And AI-augmented teams produced more exceptional solutions. The teams using AI were happier as well.

    Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work, and commercial people with AI had more technical solutions.

    The standard model of "AI as productivity tool" may be too limiting. Today’s AI can function as a kind of teammate, offering better performance, expertise sharing, and even positive emotional experiences.

    This was a massive team effort with work led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani along with Hila Lifshitz, Raffaella Sadun, Lilach M., me and our partners at P&G: Yi Han, Jeff Goldman, Hari Nair and Stewart Taub

    Substack about the work here: https://lnkd.in/ehJr8CxM
    Paper: https://lnkd.in/e-ZGZmW9

  • View profile for Zach Wilson
    Zach Wilson is an Influencer

    Founder @ DataExpert.io | ex-Netflix ex-Meta staff engineer | Angel Investor in 6 startups | Featured on Forbes | Dogs

    514,873 followers

    Apache Spark has levels to it:

    - Level 0
    You can run spark-shell or pyspark; it means you can start.

    - Level 1
    You understand the Spark execution model:
    • RDDs vs DataFrames vs Datasets
    • Transformations (map, filter, groupBy, join) vs Actions (collect, count, show)
    • Lazy execution & DAG (Directed Acyclic Graph)
    Master these concepts, and you’ll have a solid foundation.

    - Level 2
    Optimizing Spark Queries
    • Understand the Catalyst Optimizer and how it rewrites queries for efficiency.
    • Master columnar storage and Parquet vs JSON vs CSV.
    • Use broadcast joins to avoid shuffle nightmares.
    • Shuffle operations are expensive. Reduce them with partitioning and good data modeling.
    • Coalesce vs Repartition—know when to use them.
    • Avoid UDFs unless absolutely necessary (they bypass Catalyst optimization).

    - Level 3
    Tuning for Performance at Scale
    • Master spark.sql.autoBroadcastJoinThreshold.
    • Understand how task parallelism works and set spark.sql.shuffle.partitions properly.
    • Skewed data? Use adaptive query execution!
    • Use EXPLAIN and queryExecution.debug to analyze execution plans.

    - Level 4
    Deep Dive into Cluster Resource Management
    • Spark on YARN vs Kubernetes vs Standalone—know the tradeoffs.
    • Understand Executor vs Driver memory—tune spark.executor.memory and spark.driver.memory.
    • Dynamic allocation (spark.dynamicAllocation.enabled=true) can save costs.
    • When to use RDDs over DataFrames (spoiler: almost never).

    What else did I miss for mastering Spark and distributed compute? (A minimal PySpark sketch of a few of these knobs follows below.)
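
    Here is that minimal PySpark sketch of a few Level 2 and Level 3 knobs from the list above, assuming Spark 3.x. The table paths, join key, and partition count are invented for illustration.

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import broadcast

      spark = (
          SparkSession.builder
          .appName("spark-tuning-sketch")
          # Level 3: size shuffle partitions for the cluster instead of the 200 default.
          .config("spark.sql.shuffle.partitions", "64")
          # Level 3: adaptive query execution can split skewed partitions at runtime.
          .config("spark.sql.adaptive.enabled", "true")
          .config("spark.sql.adaptive.skewJoin.enabled", "true")
          .getOrCreate()
      )

      # Hypothetical inputs: a large fact table and a small dimension table,
      # stored as Parquet per the Level 2 advice.
      orders = spark.read.parquet("/data/orders")        # large
      countries = spark.read.parquet("/data/countries")  # small enough to broadcast

      # Level 2: broadcast the small side so the join avoids a full shuffle.
      joined = orders.join(broadcast(countries), "country_code")

      # Level 3: inspect the physical plan before running the job.
      joined.explain(mode="formatted")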

  • View profile for Sander Hofman
    Sander Hofman is an Influencer

    ASML🔹Join 6K+ techies for my newsletter Always Be Curious🔹Reserve Officer Track in Royal Netherlands Navy

    20,164 followers

    🔎 𝗟𝗼𝗼𝗸𝗶𝗻𝗴 𝗶𝗻𝘀𝗶𝗱𝗲 𝗮𝗻 𝗮𝗰𝘁𝘂𝗮𝗹 AMD 𝗰𝗵𝗶𝗽! 😲

    Here's a bit of a Ryzen processor made on TSMC's 7-nanometer node. You can see the web of interconnects, the metal wires that connect the transistors (that bottom layer) on a chip to harness their computing power.

    The image was taken with a new 𝗽𝘁𝘆𝗰𝗵𝗼𝗴𝗿𝗮𝗽𝗵𝗶𝗰 𝗫-𝗿𝗮𝘆 𝗹𝗮𝗺𝗶𝗻𝗼𝗴𝗿𝗮𝗽𝗵𝘆 (𝗣𝘆𝗫𝗟) technique out of the PSI Paul Scherrer Institut, University of Southern California and ETH Zürich. The technique currently has 4-nanometer resolution, and the scientists have a path to 1 nm resolution.

    The cool thing about this technology is its non-destructive imaging power to help find defects in chips. Today’s chips are so complicated that electrical tests alone can no longer pinpoint where a defect is: chipmakers use a mix of optical imaging and other methods to zero in on potential problem areas. They then image such areas with a slow but very high-resolution scanning electron microscope. Finally they might take a slice of a chip for further imaging with a transmission electron microscope (TEM). When they find the flaw, they can then go back and correct their design.

    But with PyXL, they have another tool to pinpoint defects without destroying the chip. ✨

  • View profile for Vitaly Friedman
    Vitaly Friedman is an Influencer

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    222,362 followers

    💎 Accessibility For Designers Checklist (PDF: https://lnkd.in/e9Z2G2kF), a practical set of cards on WCAG accessibility guidelines, covering accessible color, typography, animations, media, layout and development — to kick off accessibility conversations early on. Kindly put together by Geri Reid.

    WCAG for Designers Checklist, by Geri Reid
    Article: https://lnkd.in/ef8-Yy9E
    PDF: https://lnkd.in/e9Z2G2kF
    WCAG 2.2 Guidelines: https://lnkd.in/eYmzrNh7

    Accessibility isn’t about compliance. It’s not about ticking off checkboxes. And it’s not about plugging in accessibility overlays or AI engines either. It’s about *designing* with a wide range of people in mind — from the very start, independent of their skills and preferences.

    In my experience, the most impactful way to embed accessibility in your work is to bring a handful of people with different needs into the design process and usability testing early. Make these test sessions accessible to the entire team, and show the real impact of design and code on real people using a real product.

    Teams usually don’t get time to work on features that don’t have a clear business case. But no manager really wants to be seen publicly ignoring their prospective customers. Visualize accessibility to everyone on the team and make an argument about potential reach and potential income.

    Don’t ask for big commitments: embed accessibility in your work by default. Account for accessibility needs in your estimates. Create accessibility tickets and flag accessibility issues. Don’t mistake smiling and nodding for support — establish timelines, roles, specifics, objectives.

    And most importantly: measure the impact of your work by repeatedly conducting accessibility testing with real people. Build a strong before/after case to show the change the team has enabled and contributed to, and celebrate small and big accessibility wins. It might not sound like much, but it can start changing the culture faster than you think.

    Useful resources:
    Giving A Damn About Accessibility, by Sheri Byrne-Haber (disabled) https://lnkd.in/eCeFutuJ
    Accessibility For Designers: Where Do I Start?, by Stéphanie Walter https://lnkd.in/ecG5qASY
    Web Accessibility In Plain Language (Free Book), by Charlie Triplett https://lnkd.in/e2AMAwyt
    Building Accessibility Research Practices, by Maya Alvarado https://lnkd.in/eq_3zSPJ
    How To Build A Strong Case For Accessibility:
    ↳ https://lnkd.in/ehGivAdY, by 🦞 Todd Libby
    ↳ https://lnkd.in/eC4jehMX, by Yichan Wang

    #ux #accessibility
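
    Since the checklist above leads with accessible color, here is a minimal sketch of the WCAG 2.x contrast-ratio math those color guidelines are checked against. The hex colors are arbitrary examples, not values from the checklist.

      def relative_luminance(hex_color: str) -> float:
          """WCAG 2.x relative luminance of an sRGB color like '#336699'."""
          def linear(c: int) -> float:
              s = c / 255.0
              return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
          r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
          return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b)

      def contrast_ratio(fg: str, bg: str) -> float:
          """(L1 + 0.05) / (L2 + 0.05), with the lighter luminance as L1."""
          l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
          return (l1 + 0.05) / (l2 + 0.05)

      # WCAG AA requires at least 4.5:1 for normal body text.
      print(round(contrast_ratio("#767676", "#FFFFFF"), 2))  # ~4.54, just passes AA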

  • View profile for Andrew Ng
    Andrew Ng is an Influencer

    DeepLearning.AI, AI Fund and AI Aspire

    2,404,630 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

    Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains.

    You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    Here’s code intended for task X: [previously generated code]
    Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements.

    This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions. And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications’ results. If you’re interested in learning more about reflection, I recommend:
    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

    [Original text: https://lnkd.in/g4bTuWtU ]
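
    A minimal sketch of the generate → critique → rewrite loop described above, using the post's own prompts. The llm() helper is a placeholder for whatever chat-completion call you use; it is not a specific API.

      # Reflection: generate, self-critique, rewrite; repeat if useful.
      def llm(prompt: str) -> str:
          raise NotImplementedError("wire this to your model API of choice")

      def reflect(task: str, rounds: int = 2) -> str:
          draft = llm(f"Write code to carry out this task:\n{task}")
          for _ in range(rounds):
              critique = llm(
                  f"Here's code intended for task {task}:\n{draft}\n\n"
                  "Check the code carefully for correctness, style, and efficiency, "
                  "and give constructive criticism for how to improve it."
              )
              draft = llm(
                  f"Task: {task}\n\nCode:\n{draft}\n\nFeedback:\n{critique}\n\n"
                  "Use the feedback to rewrite the code."
              )
          return draft

    The same loop extends naturally to the tool-assisted variant in the post: run unit tests on the draft and feed the failures back in as the critique.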
