Technology

Explore top LinkedIn content from expert professionals.

  • View profile for Ruben Hassid

    Master AI before it masters you.

    779,515 followers

    This is the most underrated way to use Claude: (and it has nothing to do with writing or coding) It's competitive intelligence. Using data that's free, public, and updated every single week. Here's my extract step by step guide: Step 1. Go to claude .ai. Step 2. Select the new Claude "Opus 4.6." Step 3. Turn on "Extended Thinking." Step 4. Pick a competitor. Go to their careers page. Step 5. Copy every open job listing into one doc. (Title. Team name. Location. Full description) Step 6. Save it as one .txt or .docx file. Step 7. Search the company at EDGAR (sec .gov) Step 8. Download its recent 10-K or 10-Q filing. (Official strategy, risks, and financials - all public.) Step 9. Upload both files to Claude Opus 4.6. Step 10. Paste this exact prompt: "You are a competitive intelligence analyst at a rival company. I've uploaded [Company]'s complete current job listings and their most recent SEC filing. Perform a strategic intelligence analysis: → Cluster these roles by what they suggest is being built. Don't use the team names they've listed. Infer the actual product initiatives from the skills, tools, and responsibilities described. → Identify capabilities or teams that appear entirely new — not mentioned anywhere in the SEC filing. These are unreleased bets. → Find roles where seniority is disproportionately high for a new team. This signals executive-level priority. → Cross-reference the SEC filing's Risk Factors and Strategy sections with hiring patterns. Where are they investing against a stated risk? Where did they flag a risk but have zero hiring to address it? → Predict 3 product launches or strategic moves this company will make in the next 6-12 months. State your confidence level and cite specific job titles and filing sections as evidence. Format this as a 1-page competitive intelligence briefing for a CMO." What you'll find: → Products that don't exist yet but will in 6 months. → Priorities that contradict what the CEO said. → Risks they told the SEC but aren't addressing. This is what consulting firms charge $200K for. It took me 10 minutes. I used the new Claude 'Opus 4.6' for a reason: ✦ It read 60 job listing & a 200-page filing together.  ✦ And connects dots across both. ✦ It is superior in thinking and context retrieval. That's why I didn't use ChatGPT for this.

  • View profile for Greg Coquillo
    Greg Coquillo Greg Coquillo is an Influencer

    Product Leader @AWS | Startup Investor | 2X Linkedin Top Voice for AI, Data Science, Tech, and Innovation | Quantum Computing & Web 3.0 | I build software that scales AI/ML Network infrastructure

    224,415 followers

    Software development is quietly undergoing its biggest shift in decades. Not because of new frameworks. Not because of faster cloud. But because agents are entering the SDLC. Traditional development follows a slow, sequential loop: requirements → design → coding → testing → reviews → deployment → monitoring → feedback. Each step depends on human handoffs, manual fixes, delayed feedback, and long iteration cycles—often stretching from weeks to months. Agentic coding changes this entirely. Instead of humans writing everything line-by-line, developers express intent. Agents understand requirements, implement features, generate tests and documentation, deploy changes, monitor production, and even propose fixes. The lifecycle compresses from weeks and months into hours or days. Here’s what actually changes: • Sequential handoffs become continuous agent-driven flows • Humans shift from coding to guiding and reviewing • Documentation is generated inline, not after delivery • Testing happens automatically alongside implementation • Incidents trigger agent-assisted remediation • Monitoring feeds directly back into learning loops • Iteration becomes constant, not episodic In the Agentic SDLC: You describe outcomes. Agents execute workflows. Humans validate critical decisions. Systems learn continuously. The result isn’t just faster delivery. It’s a fundamentally different operating model for engineering—where feedback is immediate, fixes are automated, and improvement never stops. This is how software teams move from manual development pipelines to self-improving delivery systems.

  • View profile for Vinu Varghese

    MS Organizational Psychology | Chartered MCIPD | GPHR® | SHRM-SCP® | Lean Six Sigma Green Belt

    7,637 followers

    𝗧𝗵𝗲 𝗽𝗮𝗿𝗮𝗱𝗼𝘅 𝗼𝗳 𝗺𝗼��𝗲𝗿𝗻 𝗵𝗲𝗮𝗹𝘁𝗵 𝘁𝗲𝗰𝗵: 𝗧𝗵𝗲 𝗺𝗼𝗿𝗲 𝘄𝗲 𝗺𝗼𝗻𝗶𝘁𝗼𝗿, 𝘁𝗵𝗲 𝗺𝗼𝗿𝗲 𝗮𝗻𝘅𝗶𝗼𝘂𝘀 𝘄𝗲 𝗯𝗲𝗰𝗼𝗺𝗲. We track our bodies 24/7. Count every calorie. Measure sleep, HRV, glucose, stress. From Apple Watch. To Oura Ring. To the latest “temple” device. Somewhere along the way, awareness turned into obsession. Here’s the paradox no one talks about: We have the best health-tracking tools in history, and some of the worst health outcomes. Something doesn’t add up. 𝗪𝗵𝗮𝘁 𝘁𝗵𝗲 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝘀𝗵𝗼𝘄𝘀 𝗦𝗹𝗲𝗲𝗽 𝘁𝗿𝗮𝗰𝗸𝗶𝗻𝗴 𝗰𝗮𝗻 𝘄𝗼𝗿𝘀𝗲𝗻 𝘀𝗹𝗲𝗲𝗽 Studies on orthosomnia (an obsession with “perfect” sleep metrics) show that people who fixate on sleep scores experience more sleep anxiety, lighter sleep, and poorer recovery—even when objective sleep doesn’t improve. Trying to optimize sleep can literally break it. 𝗛𝗥𝗩 𝗺𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 𝗶𝗻𝗰𝗿𝗲𝗮𝘀𝗲𝘀 𝘀𝘁𝗿𝗲𝘀𝘀 𝗳𝗼𝗿 𝗺𝗮𝗻𝘆 𝘂𝘀𝗲𝗿𝘀 HRV is a useful trend marker—but daily fluctuations are normal. Research shows that constant HRV checking can heighten health anxiety and perceived stress, especially when users don’t understand variability or context. Ironically, stressing about HRV often lowers HRV. 𝗠𝗼𝗿𝗲 𝗱𝗮𝘁𝗮 ≠ 𝗯𝗲𝘁𝘁𝗲𝗿 𝗵𝗲𝗮𝗹𝘁𝗵 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻𝘀 Behavioral science research consistently finds that excessive self-monitoring leads to hypervigilance, loss of bodily trust, and decision fatigue. When every sensation becomes a data point, people stop listening to internal cues and start deferring to dashboards. In short: 𝗢𝘃𝗲𝗿-𝗺𝗲𝗮𝘀𝘂𝗿𝗲𝗺𝗲𝗻𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗲𝘀 𝗮𝘄𝗮𝗿𝗲𝗻𝗲𝘀𝘀 𝘄𝗶𝘁𝗵 𝗮𝗻𝘅𝗶𝗲𝘁𝘆. So what actually creates health? The same fundamentals that worked 5,000 years ago: • Deep, peaceful sleep • Regular sunlight • Real, nourishing food • Daily movement • Time with people you love These don’t need algorithms. They need presence. Use wearables if they serve you—I do, occasionally. But don’t let them become your master. Your life isn’t an algorithm waiting to be optimized. It’s a system meant to be felt, explored, and course-corrected. The best health coach you’ll ever have is already inside you. Trust it.

  • View profile for Rock Lambros
    Rock Lambros Rock Lambros is an Influencer

    Securing Agentic AI @ Zenity | Cybersecurity | CxO, Startup, PE & VC Advisor | Executive & Board Member | CISO | CAIO | QTE | AIGP | Author | OWASP AI Exchange, GenAI & Agentic AI | Tiki Tribe Founding Member

    19,881 followers

    AI security/securing the use of AI is going to kill me. I use Claude Code almost daily. It's a problem.... Here's what I have to change AGAIN this week. Security researcher Ari Marzuk disclosed 30+ vulnerabilities across AI coding tools. Cursor. GitHub Copilot. Windsurf. Claude Code. All of them. He called it IDEsaster. The attack chain includes prompt injection, hijacking LLM context, and auto-approved tool calls executing without permission. Then, legitimate IDE features are weaponized for data exfiltration and RCE. Your .env files. Your API keys. Your source code. Accessible through features you thought were safe. Most studies I read claim that around 85% of developers now use AI coding tools daily. Most have no idea their IDE treats its own features as inherently trusted. 𝗦𝗼... 𝗮𝗳𝘁𝗲𝗿 𝗿𝗲𝘃𝗶𝗲𝘄𝗶𝗻𝗴 𝗔𝗿𝗶'𝘀 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵, 𝗵𝗲𝗿𝗲'𝘀 𝗜 𝘄𝗶𝗹𝗹 𝗯𝗲 𝗱𝗼𝗶𝗻𝗴... Be warned: All this is SO much easier said than done! Audit every MCP server connection. Checked for tool poisoning vectors where legitimate tools might parse attacker-controlled input from GitHub PRs or web content. Removed servers I couldn't verify. Disabled auto-approve for file writes. The attack chains weaponize configuration files and project instructions like .claude/settings.json and CLAUDE.md. One malicious write to these files can alter agent behavior or achieve code execution without additional user interaction. Move all credentials to a secrets manager. No .gitignored .env files in agent-accessible directories. API keys live in 1Password CLI. Environment variables inject at runtime through a wrapper script the LLM never sees. Start running Claude Code in isolated containers. Mounted volumes limited to specific project directories. No access to ~/.ssh, ~/.aws, or ~/.config. If the agent gets compromised, blast radius stays contained. Enable all security warnings. Claude Code added explicit warnings for JSON schema exfiltration and settings file modifications. These exist because Anthropic knows the attack surface. Add pre-commit hooks for hidden characters. Prompt injections hide in pasted URLs, READMEs, and file names using invisible Unicode. Flag non-ASCII characters in any file the agent might ingest. The fix isn't to stop using AI coding tools. The fix is to stop trusting them implicitly. What controls do you have for AI tools with write access to your codebase? 👉 Follow for more AI and cybersecurity insights with the occasional rant #AISecurity #DevSecOps

  • View profile for Satya Nadella
    Satya Nadella Satya Nadella is an Influencer

    Chairman and CEO at Microsoft

    11,876,148 followers

    Today in Cell, we published new research showing how AI can help accelerate cancer discovery. With GigaTIME, we can now simulate spatial proteomics from routine pathology slides, enabling population-scale analysis of tumor microenvironments across dozens of cancer types and hundreds of subtypes.   Developed in partnership with Providence and the University of Washington, our hope is that this work helps scientists move faster from data to insight, revealing new links between genetic mutations, immune activity, and clinical outcomes, and ultimately improving health for people everywhere. https://lnkd.in/dSpPdtzz

  • View profile for Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona •

    776,360 followers

    🌍 The future of mobility is taking a bold step forward—literally. What do you think about this concept? Imagine a four-legged robot you can ride like a horse, guided not by wheels or handlebars, but by your natural body movements. This new form of intelligent mobility merges biomechanics, robotics, and AI, enabling seamless interaction between human and machine. 🚶♂️ Adaptive mobility: Quadruped locomotion handles terrain where wheels fail—stairs, rocky paths, and uneven ground. 🧠 Natural control: Instead of learning complex controls, riders simply shift weight or posture. The robot responds instantly, like a living partner. 📈 Market momentum: The global personal mobility market is projected to reach $170B by 2030, with robotics and AI-driven solutions growing at >25% CAGR. ⚡ Applications: From urban commuting and outdoor exploration, to accessibility for elderly users, to logistics in defense and industry. This isn’t just transportation—it’s a new species of intelligent machines designed to work in harmony with us. The age of rideable quadrupeds is no longer science fiction—it’s the next chapter in how we move, explore, and connect with our world. #Mobility #AI #Robotics #Innovation #FutureOfTransport #IntelligentSystems #HumanMachineInteraction #Accessibility #PersonalMobility

  • View profile for Steve Suarez®

    Chief Executive Officer | Entrepreneur | Board Member | Senior Advisor McKinsey | Harvard & MIT Alumnus | Ex-HSBC | Ex-Bain

    49,372 followers

    A milestone in quantum physics — rooted in a student project What began as a student's undergraduate thesis at Caltech — later continued as a graduate student at MIT — has grown into a collaborative experiment between researchers from MIT, Caltech, Harvard, Fermilab, and Google Quantum AI. Using Google’s Sycamore quantum processor, the team simulated traversable wormhole dynamics — a quantum system that behaves analogously to how certain wormholes are predicted to work in theoretical physics. Here’s what they did: Implemented two coupled SYK-like quantum systems on the processor that represent black holes in a holographic model. Sent a quantum state into one system. Applied an effective “negative energy” pulse to make the simulated wormhole traversable. Observed the state emerge on the other side — consistent with quantum teleportation. This wasn’t just classical computer modeling — it ran on real qubits, using 164 two-qubit quantum gates across nine qubits. Why it matters: The results are consistent with the ER=EPR conjecture, which suggests a deep link between quantum entanglement and spacetime geometry. In the holographic picture, patterns of entanglement can be interpreted as wormhole-like “bridges.” This experiment shows how quantum processors can begin to probe aspects of quantum gravity in a laboratory setting, complementing astrophysical observations and theoretical work. While no physical wormhole was created, this is a step toward using quantum computers to explore some of the most fundamental questions in physics. What breakthrough in science excites you most? Share your thoughts below — and let’s discuss how quantum computing is reshaping our understanding of reality. ♻️ Repost to help people in your network. And follow me for more posts like this. CC: thebrighterside

  • View profile for Arvind Jain
    Arvind Jain Arvind Jain is an Influencer
    71,418 followers

    Two strikingly similar headlines surfaced this past week that should make every leader pause: • “Companies Are Pouring Billions Into A.I. It Has Yet to Pay Off.” — New York Times • “Companies Are Pouring Billions Into AI. Here’s Why They’re Not Seeing Returns” — Forbes The NYT points to the human side: employees resist tools they don’t trust. Forbes focuses on the technical side: most AI still can’t understand the context of work. Both are true, and they’re related. When AI lacks context, employees lose trust. It can’t tell the latest doc from last year’s draft. It summarizes a customer conversation but drops the follow-ups buried in the thread. It pulls a response from Slack while ignoring the context in Google Drive. Employees realize it creates more work than it saves, and stop using it. Pilots stall, deployments fade, and projects slide into the “trough of disillusionment" as the NYT describes. Unfortunately, that's the reality for many organizations. At Glean, we work hard to make sure AI understands the enterprise context the way a human does. If a subject matter expert says something, I trust it more. If something’s old, I double-check it. That’s how people think, and it’s how AI should work too. Yet every enterprise has its own documentation culture and quirks, so sometimes we struggle at first. But we persist and co-develop with customers until the system reaches the quality they need. Then we take those learnings to make it work automatically for the next customer. We’ve seen this approach deliver measurable impact for customers: • Booking.com: Glean Agents give teams faster access to customer insights, cutting video production time by 75% and doubling monthly output. • Confluent: Glean’s AI-powered search saves 15,000+ hours/month, boosts support satisfaction by 13%, and cuts ticket investigation time by 10 minutes. • Fortune 100 telecom company: Glean surfaces instant knowledge during support calls, reducing call resolution time by 17 seconds across 800+ agents. • Leading global consultancy: Glean Agents automate RFP workflows, cutting consulting project proposals from 4 weeks to a few hours (97% faster). • Wealthsimple: Glean gives employees instant access to policies and knowledge, driving $1M+ in annual productivity gains. When AI understands the real context of work—across people, tools, and workflows— employees trust it and use it. Instead of falling into the trough of disillusionment, companies climb a slope toward productivity gains and real ROI.

  • View profile for Vineet Agrawal
    Vineet Agrawal Vineet Agrawal is an Influencer

    Helping Early Healthtech Startups Raise $1-3M Funding | Award Winning Serial Entrepreneur | Best-Selling Author

    54,326 followers

    AI just helped a couple get pregnant - after 19 years and 15 failed IVF cycles. The breakthrough came with an AI tool built by a team at Columbia University. It’s called STAR - the world’s first AI system trained to find sperm that embryologists can’t. The husband had azoospermia - a condition where no sperm is visible under the microscope. Dozens of attempts, surgeries, and even overseas experts had failed. But the team at Columbia didn’t give up. They spent 5 years building STAR (Sperm Track and Recovery). The system scans 8 million images per hour using a chip and computer vision, then gently isolates viable sperm missed by even the most experienced lab techs. And it worked. ▶︎ STAR found 44 sperm in a sample that had been manually searched for two full days. ▶︎ That one breakthrough led to a pregnancy that had felt impossible for nearly two decades. ▶︎ And it did so without chemicals, donor samples, or invasive extraction methods. For millions of couples dealing with infertility, this is a glimpse of what AI-assisted reproductive medicine could unlock. But more importantly - this shows us what AI in healthtech should be aiming for: Not just more data. Not just smarter models. But real clinical results that change lives. And as a healthtech investor, this is what I look for in AI-driven care: → A clear pain point → A targeted intervention → And a story no one can ignore What’s your take - could AI reshape fertility care the way it’s starting to reshape diagnostics and mental health? #entrepreneurship #healthtech #innovation

  • View profile for Kelly Jones

    Chief People Officer at Cisco

    28,172 followers

    We’ve all heard about AI’s potential to boost productivity. But what truly matters to me is whether it’s making work better for the people who show up every day. At Cisco, our People Intelligence team, in collaboration with IT, has been exploring this very topic, and the findings are fascinating. Here are five key insights from our research that leaders should take seriously: 1. Leaders are key to adoption. At Cisco, employees are 2x more likely to use AI if their direct leader uses it. 2. Generic AI training doesn’t work. Role-specific, practical training accelerates AI use. 3. Confidence gaps exist among senior leaders. Directors at Cisco often feel less confident with AI than mid-level employees, underscoring the need for tailored support at all levels. 4. Employee autonomy fuels adoption. Hybrid work environments are powerful accelerators for AI adoption, while mandates can hinder it. Employees who voluntarily go to the office are more likely to use AI, while those who are required to work on-site have lower adoption. 5. AI use is linked to employee well-being, but the relationship is complex, with both benefits and trade-offs that require thoughtful navigation. This is just the beginning. Next, we’re looking at how AI is transforming the way teams operate. For now, one thing is clear, employees who use AI aren’t just more productive. They’re also more engaged, better aligned with company strategy, and empowered to focus on meaningful work. #AIAdoption #EmployeeExperience #FutureOfWork

Explore categories