AI Applications In Engineering

Explore top LinkedIn content from expert professionals.

  • View profile for Folake Soetan

    CEO, Ikeja Electric | Transforming the power sector by building high-performance teams and future-ready leaders | Business Transformation | Leadership | Women & Youth Empowerment

    114,732 followers

    The power sector is changing fast, and AI is at the center of this transformation. From predicting outages before they happen to improving energy distribution, AI is making electricity more reliable, efficient, and sustainable. But how exactly is AI reshaping the industry?

    1. Predicting failures before they happen. Power outages can be costly and disruptive. AI-powered predictive maintenance helps utilities identify potential failures in transformers, power lines, and substations before they occur. By analyzing data from sensors and historical trends, AI reduces downtime and ensures a more stable power supply.

    2. Smarter energy distribution. Electricity demand fluctuates throughout the day. AI helps balance supply and demand in real time, ensuring power is distributed where it's needed most. This minimizes waste, lowers costs, and improves overall grid efficiency.

    3. Optimizing renewable energy. Output from renewable sources like solar and wind is variable. AI helps by analyzing weather patterns and adjusting energy production accordingly, enabling more stable integration of renewables into the grid.

    While AI is transforming the power sector, technology alone isn't enough. The biggest challenge is adoption: getting companies, governments, and individuals to embrace these changes. For digital transformation to succeed, the industry needs:
    → Skilled talent
    → Better infrastructure
    → A willingness to rethink traditional ways of managing power

    AI is here to stay, and its impact on energy is growing. The question is: are we ready to maximize its potential?
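The predictive-maintenance idea in point 1 boils down to spotting sensor readings that fall outside the historical distribution. A minimal sketch of that idea, assuming simple z-score anomaly detection; the function name, threshold, and temperature values are illustrative, not any utility's actual system:

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.5):
    """Return indices of readings whose z-score exceeds the threshold.

    A reading far outside the historical distribution can signal a
    developing fault, e.g. an overheating transformer winding.
    """
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:  # all readings identical: nothing stands out
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Hourly winding temperatures (degrees C); the spike suggests a fault.
temps = [62, 63, 61, 64, 62, 63, 95, 62, 61, 63]
print(flag_anomalies(temps))  # flags the 95 °C reading
```

Real deployments use richer models (trend analysis, multivariate sensors), but the principle is the same: learn the normal operating envelope, then alert on departures from it.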

  • View profile for Jeff Winter

    Industry 4.0 & Digital Transformation Enthusiast | Business Strategist | Avid Storyteller | Tech Geek | Public Speaker

    170,567 followers

    The best decisions are made with clarity, not guesswork.

    Digital twins take the guesswork out of decision-making by creating a virtual model of your operations that reflects reality in stunning detail. From improving design to reducing downtime, they transform the unknown into actionable intelligence. To simplify the broad range of potential digital twin applications, a classification approach I like to use is called the "5 Ps". This model is easy to remember and covers nearly all use cases of industrial digital twins:

    • Part Digital Twin: Digital representation of individual components or parts, typically to understand the physical, mechanical, and electrical characteristics of the part. This allows companies to monitor, analyze, and predict the performance and health of that particular part, optimizing maintenance schedules and extending its lifecycle.
    • Product Digital Twin: Digital representation of the interoperability of components or parts as they work together as part of a product. This enables companies to simulate and test product behavior under various conditions, improving design, ensuring quality, and speeding up time to market.
    • Plant Digital Twin: Digital representation of a plant, facility, or system to understand how assets work together at an operational level. This allows businesses to enhance operational efficiency, reduce downtime, and optimize production processes through real-time insights and predictive analytics.
    • Process Digital Twin: Digital representation of a specific process or workflow within a system or facility. This helps companies refine and optimize processes, identify inefficiencies, and ensure smoother and more cost-effective operations.
    • Person Digital Twin: Digital representation of a person that captures their movements, habits, interactions, skills, knowledge, and preferences. This helps companies gain insights into workflow patterns, fatigue, and safety concerns, increasing productivity and reducing workplace-related injuries.

    How does a Digital Thread relate to a Digital Twin? A digital thread is a continuous flow of data and information that integrates processes, systems, and devices throughout the product lifecycle. It serves as the foundation for a digital twin, which is a virtual representation of a physical product or system, leveraging data from the digital thread to simulate, predict, and optimize its performance.

    For a high-resolution image and to read the full version: https://lnkd.in/ezmPkSag

    *******************************************
    • Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
    • Ring the 🔔 for notifications!
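The 5 Ps taxonomy above can be captured in a small data model. A minimal sketch, assuming a Python enum for the five scopes; the class names, fields, and the pump example are hypothetical, not from any digital-twin product:

```python
from dataclasses import dataclass, field
from enum import Enum

class TwinScope(Enum):
    """The '5 Ps' classification of industrial digital twins."""
    PART = "individual component characteristics"
    PRODUCT = "components interoperating as a product"
    PLANT = "assets working together at facility level"
    PROCESS = "a specific workflow within a system"
    PERSON = "a person's movements, skills, and habits"

@dataclass
class DigitalTwin:
    name: str
    scope: TwinScope
    data_sources: list = field(default_factory=list)  # sensors, CAD, logs

# Tagging a twin by scope makes its intended use cases explicit.
pump_twin = DigitalTwin("pump-impeller", TwinScope.PART,
                        ["vibration sensor", "thermal camera"])
print(pump_twin.scope.value)
```

Classifying twins this way before building them clarifies which data sources and fidelity level each one actually needs.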

  • View profile for Mike Wang

    Builder & Engineering Leader

    2,275 followers

    90% of engineers using AI coding tools are doing it wrong. They're treating AI like a code monkey: fire prompt → get code → accept all changes → ship. That's why we see 128k-line AI pull requests that became memes (look this up, it's a fun read).

    After spending quite a bit of time using AI dev tools, I discovered the real game isn't about generating more code faster. It's about rapid engineering while managing cognitive load. My workflow now:

    1. Start with AI-generated system diagrams
    2. Ask questions until I understand the architecture
    3. Create detailed change plans
    4. Break down into AI-manageable chunks
    5. Maintain context throughout

    This isn't coding. It's orchestration. The best engineers aren't typing anymore. They're conducting symphonies of AI agents, each handling specific complexity while the human maintains the vision.

    Think about it → we're moving from IDEs to "Cognitive Load Managers": tools that auto-generate documentation, visualize dependencies in real time, and explain impact before you commit. The future isn't AI writing code. It's AI helping you understand what code to write.

    The billion-dollar opportunity? Build the tool that turns every engineer into a systems architect who happens to code. We're not being replaced. We're being promoted. Who else sees this shift?

    #AI #SoftwareEngineering #DevTools #FutureOfCoding #TechLeadership
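Steps 3 and 4 of the workflow above (detailed change plans, AI-manageable chunks) can be made concrete as a data structure. A minimal sketch under stated assumptions; the class, field names, and example tasks are hypothetical illustrations of the idea, not any real tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class ChangePlan:
    """A change broken into chunks small enough to review individually."""
    goal: str
    chunks: list = field(default_factory=list)
    context: dict = field(default_factory=dict)  # carried across chunks

    def add_chunk(self, description, max_files=3):
        # Keep each AI-delegated task narrow: small diffs are easier for
        # the human orchestrator to verify than one giant pull request.
        self.chunks.append({"task": description,
                            "max_files": max_files,
                            "status": "planned"})

plan = ChangePlan(goal="extract billing into its own service")
plan.add_chunk("generate a system diagram of current billing paths")
plan.add_chunk("draft the new service interface")
print(len(plan.chunks))
```

The point of the structure is the constraint: the human owns the plan and the context, while each chunk is scoped tightly enough for an AI agent to handle without drifting.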

  • View profile for Marily Nika, Ph.D

    Gen AI Product @ Google | AI builder & Educator | Get certified as an AI PM with my Bootcamp | O’Reilly Best Selling Author | Fortune 40u40 | aiproduct.com

    127,668 followers

    Introducing RICE-A, a prioritization framework for AI products.

    Traditional frameworks like RICE excel at helping teams evaluate feature ideas based on Reach, Impact, Confidence, and Effort. However, AI products bring unique challenges around data collection, model training, and deployment that require a more nuanced approach. I see product managers sometimes folding these challenges into 'Effort', but I don't believe that is the right approach... That's why I am proposing RICE-A, an enhanced prioritization framework tailored specifically for AI-driven features. RICE-A will help product managers make data-informed decisions, balancing innovation with execution feasibility.

    ✨ What Is RICE-A?
    RICE-A builds on the RICE framework by introducing a fifth factor: AI Complexity (A). This additional layer captures the unique effort required by the AI lifecycle to design, train, and deploy AI models, ensuring AI-specific challenges are weighted appropriately.

    ✨ The RICE-A Formula (look at the image)
    Each component evaluates a specific aspect of the feature's feasibility and potential:
    → Reach: What percentage of your target audience will benefit from this feature?
    → Impact: How significant is the impact for the target user?
    → Confidence: How certain are you about the accuracy of your assumptions and your ability to deliver?
    → Effort: What is the engineering effort needed to implement the feature?
    → AI Complexity (A), the new part: What are the data and computational demands for collecting the right dataset, training a robust model, and ensuring scalability?

    ✨ Why Add "AI Complexity"?
    AI features present unique challenges that aren't captured by traditional effort metrics. For example:
    - Data challenges: Collecting, cleaning, and labeling high-quality datasets is often a monumental task.
    - Training costs: Model training requires substantial computational resources, hyperparameter tuning, and infrastructure setup.
    - Deployment & monitoring: AI systems demand post-deployment monitoring, retraining, and bias detection to ensure sustained performance.

    I expand on this in the first link in the comments, where I also included 11 AI Product Management jobs I would apply to if I were looking, for anyone interested.

    <><><><><><><><><><><><><><><><>
    Follow Marily Nika, Ph.D for the #1 AI Product Management certification. Best way to support my work is to like & share 🔄 my content.
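The post shows the exact RICE-A formula only in an image, so the arithmetic below is an assumption: it follows the classic RICE convention (Reach × Impact × Confidence over cost) and treats AI Complexity as an additional cost added to Effort in the denominator. The function name and the example numbers are illustrative:

```python
def rice_a_score(reach, impact, confidence, effort, ai_complexity):
    """RICE-A priority score, under one plausible reading of the framework.

    Assumed formula (not confirmed by the post's image):
        (Reach * Impact * Confidence) / (Effort + AI Complexity)
    Higher is better; AI-heavy features are penalized alongside
    plain engineering effort rather than hidden inside it.
    """
    return (reach * impact * confidence) / (effort + ai_complexity)

# A feature reaching 80% of users, impact 3 (high), 70% confidence,
# 4 person-weeks of engineering plus 6 "weeks" of AI-specific work.
print(rice_a_score(80, 3, 0.7, 4, 6))
```

Whatever the exact weighting, the key change from RICE is structural: AI Complexity is scored separately, so two features with equal engineering effort can still rank very differently once data and training costs are counted.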

  • View profile for Markus J. Buehler

    McAfee Professor of Engineering at MIT

    28,983 followers

    How do materials fail, and how can we design stronger, tougher, and more resilient ones?

    Published in #PNAS, our physics-aware AI model integrates advanced reasoning, rational thinking, and strategic planning capabilities with the ability to write and execute code, perform atomistic simulations to solicit new physics data from "first principles", and conduct visual analysis of graphed results and molecular mechanisms. Through a multiagent strategy, these capabilities are combined into an intelligent system designed to solve complex scientific analysis and design tasks, applied here to alloy design and discovery.

    This is significant because our model overcomes the limitations of traditional data-driven approaches by integrating diverse AI capabilities (reasoning, simulations, and multimodal analysis) into a collaborative system, enabling autonomous, adaptive, and efficient solutions to complex, multiobjective materials design problems that were previously slow, expert-dependent, and domain-specific. Wonderful work by my postdoc Alireza Ghafarollahi!

    Background: The design of new alloys is a multiscale problem that requires a holistic approach: retrieving relevant knowledge, applying advanced computational methods, conducting experimental validations, and analyzing the results, a process that is typically slow and reserved for human experts. Machine learning can help accelerate this process, for instance through deep surrogate models that connect structural and chemical features to material properties, or vice versa. However, existing data-driven models often target specific material objectives, offering limited flexibility to integrate out-of-domain knowledge, and cannot adapt to new, unforeseen challenges. Our model overcomes these limitations by leveraging the distinct capabilities of multiple AI agents that collaborate autonomously within a dynamic environment to solve complex materials design tasks.

    The proposed physics-aware generative AI platform, AtomAgents, synergizes the intelligence of LLMs with dynamic collaboration among AI agents that have expertise in various domains, including knowledge retrieval, multimodal data integration, physics-based simulations, and comprehensive results analysis across modalities. The concerted effort of the multiagent system makes it possible to address complex materials design problems, as demonstrated by examples that include autonomously designing metallic alloys with enhanced properties compared to their pure counterparts. We demonstrate accurate prediction of key characteristics across alloys and highlight the crucial role of solid-solution alloying in steering the development of alloys.

    Paper: https://lnkd.in/enusweMf
    Code: https://lnkd.in/eWv2eKwS

    MIT Schwarzman College of Computing | MIT Civil and Environmental Engineering | MIT Department of Mechanical Engineering (MechE) | MIT Industrial Liaison Program | MIT School of Engineering
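The multiagent pattern described here (a planner delegating to specialized agents that share results) can be sketched generically. This is not the AtomAgents code, which is linked above; the agent roles, task strings, and the returned values below are entirely hypothetical:

```python
def planner(task):
    """Decompose a design task into steps for specialized agents.

    A real planner would be an LLM; here the decomposition is fixed
    for illustration.
    """
    return ["retrieve alloy knowledge",
            "run atomistic simulation",
            "analyze results"]

def run_pipeline(task, workers):
    """Run each planned step, passing accumulated results along."""
    results = {}
    for step in planner(task):
        # Each agent sees what earlier agents produced via `results`,
        # mirroring the shared context of a collaborating agent team.
        results[step] = workers[step](results)
    return results

# Hypothetical stand-ins for knowledge-retrieval, simulation, and
# analysis agents; real agents would call an LLM or a physics code.
workers = {
    "retrieve alloy knowledge": lambda ctx: "Cu-Ni solid solution data",
    "run atomistic simulation": lambda ctx: {"yield_strength_MPa": 310},
    "analyze results": lambda ctx: "alloy outperforms pure Cu",
}
out = run_pipeline("design a stronger copper alloy", workers)
print(out["analyze results"])
```

The value of the pattern is that each agent's output (retrieved knowledge, fresh simulation data, visual analysis) becomes shared context for the next, which is what lets the system adapt beyond a fixed training distribution.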

  • View profile for Ross Dawson

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    34,778 followers

    The potential of Humans + AI decision-making is superior decisions, and outcomes, across the board. Yet we still do not have decision architectures that clearly integrate the strengths of humans (context, experience, judgment, intuition) and AI (rich data, pattern recognition, scenario analysis).

    A starting point is that any AI inputs to decisions are explainable. Black-box recommendations can only be accepted or rejected. Only when inputs, rationales, and logic are presented can AI outputs be meshed with human cognition. Yet humans are generally not good at incorporating external recommendations or rationales into their own cognitive structures. They tend to interpret AI inputs through existing biases, override them, or simply ignore them.

    One of the most interesting approaches is Evaluative AI, proposed by Tim Miller. Evaluative AI does not provide recommendations; it helps human decision-makers generate hypotheses and assess them by providing evidence for or against. The decision-maker stays in control of the process and the choice of hypotheses. This is how to put it into practice:

    1️⃣ Define the decision and frame the case. State exactly what decision must be made, why it matters, and any constraints, then gather the key facts or events so the situation is explicit before you evaluate options.
    2️⃣ Surface options. List viable options yourself and let the tool add or filter to a manageable set, avoiding a single persuasive recommendation.
    3️⃣ Select a hypothesis to test. Choose one option to examine now, keeping control of the sequence and scope of what gets explored.
    4️⃣ Gather evidence for and against, including confidence levels. Ask for balanced reasons supporting and refuting the active hypothesis, including degree of uncertainty, so you can calibrate confidence.
    5️⃣ Compare trade-offs across options. Place two or more options side by side on the same criteria to reveal where each is strong, weak, and in tension.
    6️⃣ Decide, log, and revisit as facts change. Make the call, record your rationale and rejected alternatives, and re-run the evaluation when new information arrives.

    This can be implemented using standard LLMs or embedded in a tool. I'll be sharing more detailed structures on high-performance Humans + AI decisions and work coming up.
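The six steps above can be sketched as a decision log that keeps the human in control: hypotheses carry evidence for and against plus a confidence level, and the final call is recorded with its rationale so it can be revisited. The class names, fields, and the vendor example are hypothetical, not from Tim Miller's work or any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One candidate option under evaluation (steps 3-4)."""
    option: str
    evidence_for: list = field(default_factory=list)
    evidence_against: list = field(default_factory=list)
    confidence: float = 0.5  # updated as evidence accumulates

@dataclass
class DecisionLog:
    """Frames the decision (step 1) and records the outcome (step 6)."""
    decision: str
    hypotheses: list = field(default_factory=list)
    chosen: str = ""
    rationale: str = ""

    def decide(self, option, rationale):
        # Step 6: record the call and why, so the evaluation can be
        # re-run when new facts arrive.
        self.chosen = option
        self.rationale = rationale

log = DecisionLog(decision="choose a vendor for grid analytics")
log.hypotheses.append(Hypothesis(
    "Vendor A",
    evidence_for=["proven at similar scale"],
    evidence_against=["weak on-prem support"],
    confidence=0.7))
log.decide("Vendor A", "scale evidence outweighs support risk")
print(log.chosen)
```

Note what the structure deliberately lacks: a recommendation field. The AI's role is confined to supplying the evidence lists; the `decide` call is the human's alone, which is the core of the Evaluative AI stance.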

  • View profile for Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona •

    776,354 followers

    AI didn't assist engineers here. It designed the rocket engine. What do you think?

    LEAP 71 just proved something big for engineering and AI:
    • A liquid rocket engine was autonomously designed by a physics-based AI system (Noyron)
    • 3D-printed as a single copper part
    • Hot-fired successfully on the very first test
    • No traditional CAD, no manual iteration loops

    This wasn't trial and error. It was pure physics + computation + manufacturing constraints encoded in software. Once the model exists, new engine variants can be generated in minutes, not months.

    Why this matters: rocket engines are among the hardest machines humans build:
    • ~3,000°C combustion temperatures
    • Cryogenic propellants
    • Extreme pressure, vibration, and thermal stress

    And yet… the first design worked. This isn't "AI will replace engineers." This is engineering moving from drawing to defining intent, and letting computation do the rest. The same shift is happening in:
    • Semiconductors
    • AI infrastructure
    • Advanced manufacturing
    • Robotics & simulation

    Design is becoming software. Testing is becoming data. Iteration speed is becoming the real advantage. The future of engineering just fired on a test stand 🚀

    #AI via @codeintellectus and Joel Gomes #Engineering #Aerospace #ComputationalDesign #AdvancedManufacturing #3DPrinting #DeepTech #Innovation

  • View profile for Kara H. Hurst

    Chief Sustainability Officer, Amazon

    52,131 followers

    AI is a game-changer for sustainability at work. At Amazon, our culture is rooted in innovation and speed. AI can enable both, and we're using it in ways big and small to make progress. Here are just a few examples:

    📦 The Package Decision Engine: we created this AI model to make sure items arrive on your doorstep safely, in the most efficient packaging possible. It makes decisions using deep machine learning, natural language processing, and computer vision. What does this mean for sustainability? So far, along with other packaging innovations, the Package Decision Engine has helped us avoid over 2 million tons of packaging material worldwide.

    🏢 AI Tools for Buildings: you may be surprised to hear that buildings and their construction account for 40% of the world's greenhouse gas emissions. We're using a suite of AI tools to help manage energy and water use in more than 100 of our buildings. One example: a tool built by Amazon Web Services (AWS) called FlowMS led engineers at a logistics facility to an underground leak, and fixing it helped prevent the loss of over 9 million gallons of water per year. Other AI tools help us monitor our HVAC systems, refrigeration units, and dock doors. These seemingly simple solutions add up, and we're making meaningful progress in saving energy.

    🤖 Maximo: arguably one of the coolest-looking examples, Maximo is an AI-powered robot developed by The AES Corporation that helps build solar farms, including projects backed by Amazon. It uses computer vision to lift heavy panels, makes decisions with real-time construction intelligence, and helps construction crews avoid dangerous heat. All told, Maximo can reduce solar construction timelines and costs by as much as 50%.

    This is just the beginning, and I'm excited about all the ways AI can help us reach our goals. If you'd like to dive deeper into how we're using it in our buildings, you'll find more details here: https://lnkd.in/gU_UmWbq

  • View profile for Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    58,789 followers

    Let's go Europe! A CERN for AI! We are actually doing it!

    The European Union has just announced InvestAI, an ambitious €200 billion initiative designed to supercharge AI innovation and infrastructure across the bloc. The centrepiece? AI gigafactories, large-scale computing hubs intended to power the next generation of artificial intelligence. This initiative is the EU's boldest move yet to cement itself as a global AI leader, but it also highlights stark differences between European and American approaches to AI development.

    At the heart of this divergence is a fundamental question: should AI innovation be shaped by government-led investment and regulation, or should it be driven by market forces and private capital? With InvestAI, Europe is betting on a highly coordinated, centrally funded model, one that stands in contrast to the U.S.'s private-sector dominance and China's state-controlled AI push.

    The scale of InvestAI is unprecedented. The EU is committing €20 billion to AI gigafactories alone, facilities that will provide compute power for training the most advanced AI models. President Ursula von der Leyen compared this effort to CERN (as another observer recently did…), the European Organisation for Nuclear Research, which has played a pivotal role in global physics research. The idea is to create a publicly accessible AI ecosystem that fosters collaborative innovation, allowing startups, universities, and smaller companies access to computing power that would otherwise be dominated by tech giants.

    A case in point is Stargate, a U.S. initiative that aims to create a national AI supercomputing network. Unlike the EU's publicly funded gigafactories, Stargate will function as a federated system of AI resources, combining government, academia, and private-sector computing power. The key difference? Access to these resources will likely be limited to select institutions rather than being widely available to startups and small businesses, as envisioned in InvestAI.

    Another critical distinction is how the EU and the U.S. regulate AI. The EU AI Act imposes strict obligations on AI developers, particularly those creating high-risk AI systems. The U.S., on the other hand, has taken a lighter regulatory approach, focusing on voluntary AI safety commitments rather than sweeping legislation. This difference in regulatory philosophy could influence where companies choose to base their AI operations. While InvestAI seeks to boost European AI competitiveness, there are concerns that overregulation could stifle innovation, driving talent and investment to more flexible regulatory environments like the U.S. or even the UK.

    If InvestAI succeeds, Europe could establish itself as a third AI superpower, offering a model that balances AI safety with competitiveness. If it fails, it risks falling further behind in the global AI race, leaving the field to U.S. and Chinese AI giants.
