Integrating AI In Engineering Solutions

Explore top LinkedIn content from expert professionals.

  • Brij kishore Pandey

    AI Architect | AI Engineer | Generative AI | Agentic AI

    708,452 followers

    Cloud Native technologies have long been at the heart of scalable applications. But now, with AI and agentic systems, the game is changing.

    Unlike traditional AI automation, agentic AI can make decisions, execute workflows, and adapt dynamically to system changes without constant human oversight. That means self-healing, self-optimizing, and autonomous cloud-native infrastructure.

    Here's how agentic AI can transform each layer of cloud-native skills:

    1. Linux & AI-Optimized OS
    - AI-powered package managers automatically resolve compatibility issues.
    - Agentic AI monitors system logs, predicts failures, and patches vulnerabilities autonomously.

    2. Networking & AI-Driven Observability
    - AI-driven network forensics uses self-learning algorithms to detect anomalies.
    - Agent-based routing optimizations keep traffic flowing even under congestion.

    3. Cloud Services & AI-Augmented Workflows
    - Agentic AI predicts cloud workload demand and pre-allocates resources in AWS, Azure, and GCP.
    - Autonomous cost optimization adjusts instance types, storage, and compute in real time.

    4. Security & AI Cyberdefense Agents
    - Self-learning security agents detect and mitigate cyber threats before they escalate.
    - Generative AI-powered penetration-testing agents simulate evolving attack patterns.

    5. Containers & Agentic AI Orchestration
    - Autonomous Kubernetes controllers scale clusters ahead of demand spikes.
    - Agentic AI continuously optimizes pod scheduling, reducing cold starts and resource waste.

    6. Infrastructure as Code + AI Copilots
    - AI-driven infrastructure agents automatically refactor Terraform, Ansible, and Puppet scripts.
    - Self-adaptive IaC, where AI updates configurations based on usage patterns and compliance policies.

    7. Observability & AI-Driven Incident Response
    - AI-powered anomaly detection in Grafana and Prometheus flags issues before failures.
    - Agentic AI handles incident response, running diagnostics and executing pre-approved fixes.

    8. CI/CD & Autonomous Pipelines
    - Agentic AI writes, tests, and deploys code autonomously, reducing developer toil.
    - Self-optimizing pipelines rerun failed tests, debug, and retry deployments automatically.

    The future: fully autonomous cloud-native systems. DevOps automation → AI-powered observability → agentic AI-driven cloud infrastructure. The result? Zero-touch, self-managing environments where AI agents handle failures, optimize costs, and secure systems in real time.

    What's the most exciting AI-driven cloud innovation you've seen recently?
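The "pre-approved fixes" pattern in point 7 can be sketched in a few lines of Python. Everything here is hypothetical (the metric names, thresholds, and fix table are invented for illustration); it only shows the shape of an agent that diagnoses an incident and applies only vetted remediations, escalating anything else to a human:

```python
# Toy incident-response agent: diagnose, then apply a pre-approved fix or escalate.
# All metric names, thresholds, and fixes are hypothetical illustrations.

PRE_APPROVED_FIXES = {
    "high_memory": "restart_service",
    "disk_full": "rotate_logs",
}

def diagnose(metrics):
    """Map raw metrics to a known incident type using toy thresholds."""
    if metrics.get("memory_pct", 0) > 90:
        return "high_memory"
    if metrics.get("disk_pct", 0) > 95:
        return "disk_full"
    return None

def respond(metrics):
    """Return the fix applied, 'ok' if healthy, or escalate to a human."""
    incident = diagnose(metrics)
    if incident is None:
        return "ok"
    return PRE_APPROVED_FIXES.get(incident, "escalate_to_human")
```

The key design point, echoed in the post, is that the agent's action space is a fixed allow-list; anything outside it routes to a person.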

  • Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona •

    776,354 followers

    This isn't just a design trend. It's a data-driven shift in how homes are created. How practical is this design? Here's what AI is changing in residential design, backed by numbers:

    • AI-assisted design tools can reduce concept iteration time by 60–80%
    • Early-stage AI simulations cut construction change orders by up to 30%
    • Material optimization reduces waste by 10–20%, improving sustainability and cost control
    • Lighting and spatial simulations increase perceived space efficiency by up to 25%
    • Personalized design increases homeowner satisfaction and resale appeal: premium homes with unique architectural features often command 5–15% higher value

    These pebble stone stairs are a great example. AI helped:
    – Optimize stone size and layout for anti-slip safety
    – Simulate light reflection across textures at different times of day
    – Balance luxury aesthetics with long-term durability
    – Integrate the stairs seamlessly into the overall spatial flow

    The key insight: AI doesn't replace architects or designers. It augments creativity with computation. Humans define taste, emotion, and vision. AI accelerates testing, optimization, and decision-making. The result:

    • Better design decisions
    • Fewer costly mistakes
    • More sustainable builds
    • Truly personalized luxury

    AI is no longer just transforming software and semiconductors. It's transforming how we design, build, and live.

    #AI #Architecture via @diycraftstvofficial #DesignInnovation #LuxuryDesign #SmartHomes #PropTech #FutureOfLiving #SustainableDesign

  • Greg Coquillo

    Product Leader @AWS | Startup Investor | 2X Linkedin Top Voice for AI, Data Science, Tech, and Innovation | Quantum Computing & Web 3.0 | I build software that scales AI/ML Network infrastructure

    224,410 followers

    AI models like ChatGPT and Claude are powerful, but they aren't perfect. They can produce inaccurate, biased, or misleading answers due to issues with data quality, training methods, prompt handling, context management, and system deployment. These problems arise from the complex interaction between model design, user input, and infrastructure. Here are the main factors behind incorrect outputs:

    1. Model Training Limitations
    AI relies on the data it is trained on. Gaps, outdated information, or insufficient coverage of niche topics lead to shallow reasoning, overfitting to common patterns, and poor handling of rare scenarios.

    2. Bias & Hallucination Issues
    Models can reflect social biases or produce "hallucinations": confident but false details. This leads to made-up facts, skewed statistics, or misleading narratives.

    3. External Integration & Tooling Issues
    When AI connects to APIs, tools, or data pipelines, miscommunication, outdated integrations, or parsing errors can result in incorrect outputs or failed workflows.

    4. Prompt Engineering Mistakes
    Ambiguous, vague, or overloaded prompts confuse the model. Without clear, refined instructions, outputs may drift off-task or omit key details.

    5. Context Window Constraints
    AI has a limited memory span. Long inputs can cause it to forget earlier details, compress context poorly, or misinterpret references, resulting in incomplete responses.

    6. Lack of Domain Adaptation
    General-purpose models struggle in specialized fields. Without fine-tuning, they provide generic insights, misuse terminology, or overlook expert-level knowledge.

    7. Infrastructure & Deployment Challenges
    Performance relies on reliable infrastructure. Problems with GPU allocation, latency, scaling, or compliance can lower accuracy and system stability.

    Wrong outputs don't mean AI is "broken." They show the challenge of balancing data quality, engineering, context management, and infrastructure. Tackling these issues makes AI systems stronger, more dependable, and ready for business use. #LLM
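Point 5 (context window constraints) is easy to demonstrate with a toy truncation routine: when the conversation exceeds the window, the oldest messages drop first, which is exactly why models "forget earlier details." A minimal sketch, using a whitespace word count as a stand-in for a real tokenizer:

```python
def fit_context(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages that fit the token budget.

    Walks the history newest-first; once the budget is exhausted, all
    older messages are silently dropped - the model never sees them.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

With a 4-token budget, `fit_context(["a b", "c d e", "f"], 4)` keeps only the two newest messages; the earliest one is gone before the model ever reasons about it.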

  • David J. Malan

    I teach CS50

    507,529 followers

    A look at how CS50 has incorporated artificial intelligence (AI), including its new-and-improved rubber duck debugger, and how it has impacted the course already. 🦆 https://lnkd.in/eb-8SAiw In Summer 2023, we developed and integrated a suite of AI-based software tools into CS50 at Harvard University. These tools were initially available to approximately 70 summer students, then to thousands of students online, and finally to several hundred on campus during Fall 2023. Per the course's own policy, we encouraged students to use these course-specific tools and limited the use of commercial AI software such as ChatGPT, GitHub Copilot, and the new Bing. Our goal was to approximate a 1:1 teacher-to-student ratio through software, thereby equipping students with a pedagogically-minded subject-matter expert by their side at all times, designed to guide students toward solutions rather than offer them outright. The tools were received positively by students, who noted that they felt like they had "a personal tutor." Our findings suggest that integrating AI thoughtfully into educational settings enhances the learning experience by providing continuous, customized support and enabling human educators to address more complex pedagogical issues. In this paper, we detail how AI tools have augmented teaching and learning in CS50, specifically in explaining code snippets, improving code style, and accurately responding to curricular and administrative queries on the course's discussion forum. Additionally, we present our methodological approach, implementation details, and guidance for those considering using these tools or AI generally in education. Paper at https://lnkd.in/eZF4JeiG. Slides at https://lnkd.in/eDunMSyx. #education #community #ai #duck

  • Zach Wilson

    Founder @ DataExpert.io | ex-Netflix ex-Meta staff engineer | Angel Investor in 6 startups | Featured on Forbes | Dogs

    514,873 followers

    AI Engineering has four levels to it!

    Level 1: Using AI
    Start by mastering the fundamentals:
    -- Prompt engineering (zero-shot, few-shot, chain-of-thought)
    -- Calling APIs (OpenAI, Anthropic, Cohere, Hugging Face)
    -- Understanding tokens, context windows, and parameters (temperature, top-p)
    With just these basics, you can already solve real problems.

    Level 2: Integrating AI
    Move from using AI to building with it:
    -- Retrieval Augmented Generation (RAG) with vector databases (Pinecone, FAISS, Weaviate, Milvus)
    -- Embeddings and similarity search (cosine, Euclidean, dot product)
    -- Caching and batching for cost and latency improvements
    -- Agents and tool use (safe function calling, API orchestration)
    This is the foundation of most modern AI products.

    Level 3: Engineering AI Systems
    Level up from prototypes to production-ready systems:
    -- Fine-tuning vs instruction-tuning vs RLHF (know when each applies)
    -- Guardrails for safety and compliance (filters, validators, adversarial testing)
    -- Multi-model architectures (LLMs + smaller specialized models)
    -- Evaluation frameworks (BLEU, ROUGE, perplexity, win-rates, human evals)
    Here's where you shift from "it works" to "it works reliably."

    Level 4: Optimizing AI at Scale
    Finally, learn how to run AI systems efficiently and responsibly:
    -- Distributed inference (vLLM, Ray Serve, Hugging Face TGI)
    -- Managing context length and memory (chunking, summarization, attention strategies)
    -- Balancing cost vs performance (open-source vs proprietary tradeoffs)
    -- Privacy, compliance, and governance (PII redaction, SOC2, HIPAA, GDPR)
    At this stage, you're not just building AI; you're designing systems that scale in the real world.

    What else would you add? Subscribe to my free blog for more learning: blog.dataexpert.io
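The Level 2 similarity-search bullet can be made concrete without any vector database. A minimal sketch of cosine-ranked retrieval over in-memory vectors (the tiny 2-D vectors below stand in for real embedding vectors, which typically have hundreds or thousands of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def top_k(query_vec, doc_vecs, k=2):
    """Return the names of the k documents most similar to the query."""
    ranked = sorted(doc_vecs.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

A RAG pipeline does exactly this (at scale, with an approximate index): embed the query, rank stored chunk embeddings by similarity, and stuff the top hits into the prompt.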

  • Aishwarya Srinivasan
    613,453 followers

    If you are building AI agents or learning about them, then you should keep these best practices in mind 👇

    Building agentic systems isn't just about chaining prompts anymore; it's about designing robust, interpretable, production-grade systems that interact with tools, humans, and other agents in complex environments. Here are 10 essential design principles you need to know:

    ➡️ Modular Architectures
    Separate planning, reasoning, perception, and actuation. This makes your agents more interpretable and easier to debug. Think planner-executor separation in LangGraph or CogAgent-style designs.

    ➡️ Tool-Use APIs via MCP or Open Function Calling
    Adopt the Model Context Protocol (MCP) or OpenAI's Function Calling to interface safely with external tools. These standard interfaces provide strong typing, parameter validation, and consistent execution behavior.

    ➡️ Long-Term & Working Memory
    Memory is non-optional for non-trivial agents. Use hybrid memory stacks: vector search tools like MemGPT or Marqo for retrieval, combined with structured memory systems like LlamaIndex agents for factual consistency.

    ➡️ Reflection & Self-Critique Loops
    Implement agent self-evaluation using ReAct, Reflexion, or emerging techniques like Voyager-style curriculum refinement. Reflection improves reasoning and helps correct hallucinated chains of thought.

    ➡️ Planning with Hierarchies
    Use hierarchical planning: a high-level planner for task decomposition and a low-level executor to interact with tools. This improves reusability and modularity, especially in multi-step or multi-modal workflows.

    ➡️ Multi-Agent Collaboration
    Use protocols like AutoGen, A2A, or ChatDev to support agent-to-agent negotiation, subtask allocation, and cooperative planning. This is foundational for open-ended workflows and enterprise-scale orchestration.

    ➡️ Simulation + Eval Harnesses
    Always test in simulation. Use benchmarks like ToolBench, SWE-agent, or AgentBoard to validate agent performance before production. This minimizes surprises and surfaces regressions early.

    ➡️ Safety & Alignment Layers
    Don't ship agents without guardrails. Use tools like Llama Guard v4, Prompt Shield, and role-based access controls. Add structured rate-limiting to prevent overuse or sensitive tool invocation.

    ➡️ Cost-Aware Agent Execution
    Implement token budgeting, step-count tracking, and execution metrics. Especially in multi-agent settings, costs can grow rapidly if unbounded.

    ➡️ Human-in-the-Loop Orchestration
    Always have an escalation path. Add override triggers, fallback LLMs, or routing to a human for edge cases and critical decision points. This protects quality and trust.

    PS: If you are interested in learning more about AI agents and MCP, join the hands-on workshop I am hosting on 31st May: https://lnkd.in/dWyiN89z

    If you found this insightful, share it with your network ♻️ Follow me (Aishwarya Srinivasan) for more AI insights and educational content.
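The cost-aware execution principle (token budgeting plus step-count tracking) can be sketched as a small guard object that every agent step must pass through. This is an illustrative pattern, not any particular framework's API:

```python
class TokenBudget:
    """Track per-run token spend and refuse steps that exceed the budget.

    Illustrative sketch: a real implementation would also emit metrics
    and distinguish budget exhaustion from step exhaustion.
    """

    def __init__(self, max_tokens, max_steps):
        self.max_tokens = max_tokens
        self.max_steps = max_steps
        self.tokens_used = 0
        self.steps = 0

    def charge(self, tokens):
        """Record one agent step; return False when the run must stop."""
        if self.steps + 1 > self.max_steps:
            return False
        if self.tokens_used + tokens > self.max_tokens:
            return False
        self.steps += 1
        self.tokens_used += tokens
        return True
```

In a multi-agent setting, each sub-agent would draw from a shared budget so that delegation cannot silently multiply spend.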

  • Montgomery Singman

    Managing Partner @ Radiance Strategic Solutions | xSony, xElectronic Arts, xCapcom, xAtari

    27,222 followers

    In a world where computer chips power everything from smartphones to smart cities, engineers at Princeton have built an AI that designs wireless chips with remarkable efficiency, showing that machines can innovate in ways humans never imagined 🚀. This article, written by Popular Mechanics contributing editor Caroline Delbert, explores research from Princeton University's Sengupta Lab, where AI is reshaping the future of chip design. Published in Nature Communications, the work blends neural networks with human ingenuity to push the boundaries of wireless technology.

    Five Key Insights

    🧠 AI as a Co-Pilot for Innovation
    The Princeton team's convolutional neural network (CNN) doesn't just optimize chip layouts; it invents new design paradigms. By analyzing desired electromagnetic properties and working backward, the AI generates "chaotic, blobby" structures that defy human intuition yet outperform traditional templates.

    🔄 Inverse Design: Backward Engineering for Forward Progress
    Unlike human engineers, who build chips piece by piece, the AI employs inverse design: starting with the end goal and reverse-engineering the components. This approach eliminates reliance on existing templates, unlocking geometries that would take engineers years to conceptualize.

    🎨 From Order to (Controlled) Chaos
    Human-designed chips follow neat, grid-like patterns, but the AI's creations resemble abstract art. These "folded and twisted" layouts maximize efficiency by exploiting electromagnetic interactions in ways our linearly trained brains struggle to grasp.

    🤖 The Hallucination Problem: Why Humans Still Matter
    Despite its prowess, the AI occasionally suggests impossible designs or "hallucinates" impractical solutions. As lead researcher Kaushik Sengupta notes, human oversight remains crucial to filter out noise and refine the AI's raw creativity into manufacturable blueprints.

    📖 Open Science in an Age of AI Secrecy
    In a field dominated by proprietary algorithms, Sengupta's decision to publish openly in Nature Communications is notable. By democratizing access to this tool, the team aims to spark collaborative breakthroughs while maintaining transparency, a rarity in AI-driven hardware research.

    This fusion of machine learning and chip design hints at a future where AI accelerates discovery, but as Delbert underscores, human ingenuity remains irreplaceable. The true breakthrough lies not in replacing engineers, but in freeing them to focus on big-picture innovation 🌟.

    #AIChipDesign #InverseEngineering #MachineLearning #WirelessTechnology #FutureOfComputing #TechInnovation #ElectromagneticEngineering #NeuralNetworks #ComputerScience #PopularMechanics

  • Gus Bartholomew

    On-demand sustainability expertise for teams under delivery pressure | Co-Founder @ Leafr

    45,046 followers

    AI has no place in sustainability.

    There's a familiar stance I hear a lot in sustainability circles: AI uses a lot of energy, so using it for sustainability sounds… contradictory. But that argument misses the bigger picture. AI isn't just consuming energy. It's helping us use less of it too.

    Used well, AI is already solving real sustainability problems. Not hypotheticals. Not R&D lab demos. Live, operational tools that help businesses reduce emissions, speed up reporting, and make better decisions. Here's what that looks like in practice:

    1. Energy grid optimisation
    In the UK, the National Grid is using AI to forecast solar energy production by analysing satellite images and weather data. If clouds are expected to lower solar output in, say, Cornwall 30 minutes from now, the grid can prep alternative sources in advance. That means fewer blackouts and lower emissions from fossil backup plants. DeepMind did something similar for wind power: their AI predicted wind farm output 36 hours in advance, which increased the commercial value of wind energy by around 20 percent, because energy providers could schedule when to send power to the grid with more certainty.

    2. Streamlined carbon accounting
    AI tools now scan invoices, utility bills and PDF reports to pull out emissions data automatically. They match spend categories to emissions factors and calculate Scope 1, 2 and 3 outputs in seconds. That turns carbon accounting from a once-a-year headache into a real-time management tool.

    3. Transparent supply chains
    Unilever has tested AI platforms that combine satellite imagery with supply data to flag illegal deforestation in palm oil regions. If a patch of rainforest is cleared where it shouldn't be, AI catches it fast and alerts their team. No need to wait for an audit or third-party tipoff.

    4. Faster climate simulations
    Traditional climate models take weeks or months to run. New AI-driven models can simulate complex climate scenarios up to 25 times faster. That unlocks planning tools for city councils, small businesses and insurers who can't wait months to model flood risks or heat exposure.

    Yes, AI needs energy to run. But if it helps avoid ten times more emissions than it creates, the trade-off makes sense. So the question isn't whether AI belongs in sustainability. It's whether we're serious about using every tool we have to solve the problems in front of us.

    At Leafr, we've seen consultants use AI to cut time and cost on energy audits, validate supplier claims, and surface risks early. When paired with the right human expertise, AI becomes a multiplier. Because the planet doesn't care if a human or a machine found the emissions. It just cares that they're found and cut.

    Follow Gus Bartholomew (Leafr 🌿) for more, and repost if you found this useful. Use Leafr to find the sustainability specialists you need to support your AI efforts.
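The spend-based carbon accounting described in point 2 boils down to mapping extracted spend categories to emissions factors. A toy sketch (the factors below are invented for illustration; real ones come from published datasets such as the UK government's DEFRA conversion factors):

```python
# Hypothetical emissions factors in kg CO2e per currency unit spent.
# Real pipelines load these from an authoritative dataset, keyed by
# category, region, and year.
EMISSIONS_FACTORS = {
    "electricity": 0.5,
    "air_travel": 1.2,
    "cloud_compute": 0.3,
}

def spend_to_emissions(line_items):
    """Convert (category, spend) pairs extracted from invoices into
    an estimated total in kg CO2e. Unknown categories count as zero,
    which a real system would flag for human review instead."""
    total = 0.0
    for category, spend in line_items:
        total += spend * EMISSIONS_FACTORS.get(category, 0.0)
    return round(total, 2)
```

The AI part of such a tool is the document extraction and category matching; the arithmetic above is the simple core it feeds.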

  • Pina Schlombs

    Sustainability Lead & Industrial AI Thought Leader @ Siemens | Adjunct Professor | Startup Advisor | Speaker

    6,074 followers

    We just crossed the threshold where AI stops being "experimental" for sustainability.

    I'm buzzing after seeing our latest research with #Reuters. After years implementing #IndustrialAI for sustainability, watching this shift happen in real time feels significant.

    63% of organizations have moved beyond pilots. Implementation rates jumped from 13% to over 50% in a single year. Organizations deploying industrial AI are seeing:
    ⚪️ 65% achieving energy savings of 23% on average
    ⚪️ 59% cutting CO2 emissions by 24%

    But there's something even more fascinating underneath.

    We're Entering Uncharted Territory
    I'm watching AI evolve from imitation learning (copying how humans solve problems) to exploration learning. AI is now taking us beyond what human knowledge alone could achieve. This isn't incremental improvement. We're talking radical innovation. AI can simulate entirely new designs that were previously impossible to conceive. When you're juggling decarbonization, circularity, and societal changes simultaneously while navigating a "tsunami of regulations", this capability becomes transformative.

    What Keeps Me Up at Night (In a Good Way)
    81% of organizations believe future sustainability innovation will be AI-driven. Not "AI will help," but that AI will drive innovation. From what I'm seeing? They're right. We're using AI to capture regulations that would otherwise require hundreds of experts. We're building multi-agent teams developing entirely new features. We're optimizing complete process flows. The technical barriers are dissolving faster than expected. The real question now is how quickly we scale what's already working.

    Why This Moment Feels Different
    For years: "Can AI really deliver on sustainability?" Now: "How fast can we deploy this across our operation?" That shift, from skepticism to urgency, tells me we've hit critical mass. The business cases are proven. 71% of leaders expect significant impact on the energy transition.

    This is the convergence moment I've been waiting for. What's your take? Are you seeing this shift in your work?

  • Tom Head

    Operational efficiency through AI. Deployed in weeks | Co-founder @G3NR8

    51,345 followers

    Let's be honest about AI and sustainability. It's messy. It's complicated. And nobody's getting it 100% right yet. Including us.

    The dirty secret of the "green AI" movement is that most of these systems are massive energy hogs. We're all trying to save the planet with technology that's part of the problem. That's why we're partnering with EdenLab: not because we've got it all figured out, but because we're determined to do better.

    Together we're launching the Greenwash Guardian, an AI tool that helps companies avoid misleading sustainability claims. It's the first in a series of intelligent, agentic solutions that help brands move from intention to action, and from risk to opportunity.

    Alongside this, we're also working towards sustainable AI practices, which include:
    • Smaller, more efficient models: Using smaller foundation models to reduce compute energy intensity.
    • Carbon-aware scheduling: Running energy-intensive tasks during periods of renewable grid abundance.
    • Green hosting: Using data centres powered by 100% renewable energy with transparent carbon reporting.
    • Embedded sustainability metrics: Measuring and reporting model emissions as a core design principle.

    We're not 100% there yet, but we are making progress. Because the sustainability journey needs fewer grand promises and more pragmatic solutions.

    What are your thoughts on AI and its environmental impact? ♻️ Repost if you think we all need more honest conversations about how we manage the environmental impact of our tech.
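Carbon-aware scheduling, as listed above, amounts to picking the job window with the lowest forecast grid carbon intensity. A minimal sketch over an hourly forecast (the intensity values are illustrative gCO2/kWh, not real grid data):

```python
def pick_greenest_window(forecast, duration):
    """Return the start index (hour) that minimizes average grid carbon
    intensity over a job lasting `duration` hours.

    `forecast` is a list of hourly gCO2/kWh values; real schedulers pull
    this from a grid-intensity API and re-plan as forecasts update.
    """
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - duration + 1):
        avg = sum(forecast[start:start + duration]) / duration
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start
```

For a forecast like `[300, 250, 100, 120, 280]` and a 2-hour training job, the scheduler defers the job to the midday renewable dip rather than running it immediately.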
