Digital Twin Racetrack: How Synopsys and NVIDIA are Redefining STEM Education

Synopsys and NVIDIA have introduced a digital twin racetrack for the global STEM Racing (F1 in Schools) championship. Engineering education is shifting gears: what was once limited to pro motorsport teams and research labs is now entering classrooms and school competitions. The premiere in Singapore showcased how students can test and visualize aerodynamics in real time.

What was introduced
Teams work with CFD in Ansys Discovery, analyzing lift, drag, and vortices. Results are exported into NVIDIA Omniverse, where they overlay onto a digital track. This pipeline creates an interactive 3D scene of car behavior. From 2025–2026, “demo days” will let schools upload their own car geometries, tweak designs, and instantly validate performance inside the digital twin racetrack.

Why it matters
• Practice over abstraction: students see real airflow patterns instead of formulas alone
• End-to-end pipeline: simulation → visualization → insights, just like industry workflows
• Career pathway: free access to Discovery and Omniverse builds baseline literacy in digital twins
• Fast iteration: geometry-to-results takes hours, not weeks, matching current trends in ML-accelerated CFD

Experts note that this is the same tech stack used in professional motorsport, making the student experience closer to real industry practice.

Takeaway
The Synopsys × NVIDIA project shows how educational digital twins can become entry points into professional ecosystems. For schools, it means hands-on learning with advanced tools. For industry, it cultivates a generation of engineers fluent in modern pipelines. Beyond racing, the “analyze → simulate → explain” model can scale to architecture, climate modeling, or industrial pilots. Digital twins are no longer exotic; they’re becoming the new normal, even in classrooms.

Do you think digital twins in education will stay niche, or become a core skillset for tomorrow’s engineers?
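To make the "lift and drag" step concrete, here is a back-of-envelope calculation of the kind of dimensionless coefficients a CFD run reports. All numeric values (speed, frontal area, forces, air density) are illustrative assumptions, not data from the actual competition cars or the Singapore event.

```python
# Back-of-envelope aerodynamic coefficients of the kind a CFD run reports.
# All numbers below are illustrative, not from the actual competition cars.

def aero_coefficient(force_n: float, rho: float, v: float, area_m2: float) -> float:
    """Dimensionless coefficient: C = 2F / (rho * v^2 * A)."""
    return 2.0 * force_n / (rho * v ** 2 * area_m2)

rho = 1.225          # air density at sea level, kg/m^3
v = 20.0             # car speed, m/s (assumed)
area = 0.003         # frontal area, m^2 (assumed for a miniature car)

drag_force = 0.35    # N, e.g. as reported by the solver (assumed)
lift_force = -0.10   # N, negative meaning downforce (assumed)

cd = aero_coefficient(drag_force, rho, v, area)
cl = aero_coefficient(lift_force, rho, v, area)
print(f"Cd = {cd:.2f}, Cl = {cl:.2f}")
```

Students comparing two car geometries can run this on each solver report and see immediately which design trades drag for downforce.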
#DigitalTwin #STEMEducation #CFD #NVIDIAOmniverse #Synopsys #FutureSkills #EngineeringEducation
More Relevant Posts
How can you accelerate CAE simulations, turning hours or days of computation into real-time design insights? By training surrogate models on traditional simulation data, engineers can rapidly explore more design options, run predictions in seconds instead of days, and visualize results in NVIDIA Omniverse for real-time feedback. This approach doesn’t replace traditional solvers; it complements them. https://lnkd.in/eh8kQuzM
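The surrogate idea above can be sketched in a few lines: run the expensive solver on a small sample of designs, fit a cheap model, then query it interactively. The "solver" below is a synthetic function and the polynomial fit is a stand-in for the neural surrogates used in practice, so treat this only as a shape of the workflow.

```python
import numpy as np

# Toy surrogate workflow: fit a cheap model to a handful of expensive
# "solver" runs, then query it in milliseconds. The solver here is a
# synthetic function, not a real CAE code.

rng = np.random.default_rng(0)

def expensive_solver(x):
    """Pretend this takes hours: drag vs. a single design parameter."""
    return 0.3 + 0.05 * x + 0.02 * x ** 2

# 1. Sample a small design-of-experiments set with the real solver.
x_train = np.linspace(0.0, 5.0, 8)
y_train = expensive_solver(x_train)

# 2. Train the surrogate (here a quadratic least-squares fit).
coeffs = np.polyfit(x_train, y_train, deg=2)
surrogate = np.poly1d(coeffs)

# 3. Explore thousands of candidate designs almost instantly.
x_query = rng.uniform(0.0, 5.0, 10_000)
preds = surrogate(x_query)
best = x_query[np.argmin(preds)]
print(f"lowest predicted drag near x = {best:.3f}")
```

The split matters: step 1 is the only place the real solver runs, so the cost of design exploration in step 3 no longer scales with solver time.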
Excited to share how NVIDIA’s latest tools are advancing robotic cloth manipulation! In a recent blog post, NVIDIA dives into how its open-source physics engine, Newton, integrated with Isaac Lab, enables a rigid robot arm to manipulate deformable cloth with exceptional fidelity.

What’s cool? The demo uses a GPU-based Vertex-Block Descent (VBD) cloth solver to simulate cloth dynamics in real time, running at ~30 FPS on an RTX 4090. In the workflow, one simulation loop couples a solid-body robot solver with the cloth solver: the robot moves, collisions with the cloth are detected, and then the cloth solver steps accordingly. Because Newton supports multi-physics coupling, we see a rigid arm (the robot) interacting realistically with a deformable object (the cloth), bridging a key gap in simulation-to-real workflows.

Why it matters
For educational robotics ventures like ours, this is a milestone: it means tasks such as garment folding, textile handling, or flexible-material manipulation are becoming feasible in simulation, enabling faster prototyping, closed-loop learning, and eventual real-world deployment.

In practice
• Demonstrating to students and industrial partners how fabric, cloth, and soft materials can be manipulated with precision via simulation-trained policies.
• Leveraging GPU-accelerated simulation to compress training times, accelerate research, and scale demonstrations to industry-grade experiments.

If you’re exploring robotics in textile automation, soft-material manipulation, or training policies that handle non-rigid objects, this is worth a look.

#Robotics #Simulation #ClothManipulation #DeformableObjects #NVIDIA #IsaacLab #Newton #AIIBOTICS
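The rigid-then-cloth-then-collision loop described above can be sketched schematically. The toy below uses a naive explicit integrator and position projection in NumPy; it is emphatically not Newton's VBD solver or its API, just the structure of one coupled simulation step repeated over frames, with a rising sphere standing in for the robot.

```python
import numpy as np

# Schematic coupling loop: rigid body steps first, then the cloth solver
# steps and resolves collisions against it. Toy integrator only -- not
# Newton / Isaac Lab code.

n = 10                                   # cloth is an n x n particle grid
pos = np.stack(np.meshgrid(np.linspace(-1, 1, n),
                           np.linspace(-1, 1, n)), -1).reshape(-1, 2)
pos = np.hstack([pos, np.full((n * n, 1), 1.0)])   # start 1 m above origin
vel = np.zeros_like(pos)
dt, gravity = 1.0 / 30.0, np.array([0.0, 0.0, -9.81])

sphere_center = np.array([0.0, 0.0, -0.5])         # the "robot" body
sphere_radius = 0.5

for step in range(60):
    # 1. Rigid-body solver: here the sphere simply rises each frame.
    sphere_center = sphere_center + np.array([0.0, 0.0, 0.01])

    # 2. Cloth solver: explicit integration under gravity.
    vel += gravity * dt
    pos += vel * dt

    # 3. Collision coupling: project penetrating particles to the surface.
    d = pos - sphere_center
    dist = np.linalg.norm(d, axis=1)
    hit = dist < sphere_radius
    pos[hit] = sphere_center + d[hit] / dist[hit, None] * sphere_radius
    vel[hit] = 0.0                        # crude inelastic response

print("particles in contact on final frame:", int(hit.sum()))
```

A production solver replaces step 2 with an implicit or VBD solve and step 3 with impulse-based contact, but the ordering of the phases inside one loop is the point being illustrated.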
The Invisible Made Visible

Physics happens at scales we can't see: molecules, magnetic fields, airflow, heat. Engineers waste weeks interpreting simulations. Students can't visualize abstract concepts. Complex 3D phenomena stay trapped in 2D screens.

PhysicsSight: an AR/VR platform that turns physics into a place you can walk through.
* Network engineers see WiFi signals overlaid on buildings
* Aerospace teams step inside airflow around prototypes
* Students explore molecular bonds in 3D
* Surgeons plan operations with patient-specific flow visualization

Why now? AR/VR hardware has matured. Cloud physics simulations are affordable. The $280B education market and $1.2T in industrial R&D need better tools.

The play: start with one killer use case (EM field visualization for telecom). Expand to every physics domain. Become the universal interface layer: like Figma for design, but for physics.

If you're at the intersection of physics simulation and spatial computing, let's talk.

#deeptech #AR #VR #startup #physics #entrepreneur
Human–Robot Interaction: A New Era of Collaboration Begins

Robotics and Artificial Intelligence are no longer confined to labs; they're entering daily life, workplaces, and global discussions. The rise of socially intelligent robots marks a turning point in how humans and machines cooperate, communicate, and coexist. Researchers and engineers are combining tools like ROS, SolidWorks, MATLAB (MathWorks), Arduino, ESP32, and AI frameworks to develop robots that understand gestures, respond intelligently, and assist humans in education, healthcare, and industry. As innovation accelerates, the boundary between human empathy and robotic intelligence continues to blur, shaping a future where collaboration replaces command.
I am excited to announce that for the ELECOMP Capstone Design Program at the URI College of Engineering, I will be working alongside Draper (the Charles Stark Draper Laboratory) in Boston, MA, which has sponsored this capstone design project. The team consists of Electrical Engineer Ben Gulezian and me, a Computer Engineer.

Our Technical Directors, Rick Wang and Stephen Lawrence, provide the leadership for our project, “Edge Computing on GPUs for Ground Robotics”. Stephen has been instrumental in supporting our electronics and integration work, while Rick has guided us on autonomy software design and system architecture. Together, they have given us the technical direction needed to grow this project, while also challenging us to think like professional engineers.

Our anticipated best outcome is an edge compute infrastructure that enables a swarm of compact ground robots to collaboratively explore and map complex indoor environments. Key milestones include developing 2D SLAM, centralized map fusion, and visual marker detection, with stretch goals of multi-robot teaming, 3D map generation, and CUDA acceleration for real-time perception tasks.

We are also fortunate to have Mike D. Smith as our Consulting Technical Director. Mike has been a consistent guide throughout this process, serving as the person we regularly check in with to ensure that we are on track and aligned with our project goals. His feedback has helped us clarify our direction, stay organized, and make steady progress toward our milestones.

Personally, my technical contributions will focus on embedded software development, autonomy algorithms such as SLAM (Simultaneous Localization and Mapping) and path planning, and edge computing techniques on NVIDIA Jetson hardware. I will also contribute to real-time image and depth processing using CUDA (Compute Unified Device Architecture) and OpenCV, while helping design the system architecture that allows our robots to communicate and operate as a coordinated swarm.

This project is an opportunity to apply both my software and hardware background to a real-world engineering challenge. We are passionate about it because it has the potential to advance scalable autonomous systems that improve safety and efficiency in critical environments. By demonstrating robust edge computing capabilities, we hope to contribute to technologies that reduce risk for humans and expand the possibilities for autonomous robotics.

A special thank-you to our Program Director, Harish Sunak, for making this collaboration possible and for giving us the opportunity to gain hands-on design experience with Draper.

The Symposium on December 16th will showcase the mid-year results of our project; details: https://lnkd.in/e7BrxCED
Our project details: https://lnkd.in/e4qTJ43S
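Centralized map fusion, one of the milestones named above, has a compact core idea: each robot reports a local occupancy grid in log-odds form, and the central node sums aligned grids (treating observations as independent evidence). The sketch below uses synthetic grids and is not the team's actual implementation, only an illustration of the fusion step.

```python
import numpy as np

# Minimal sketch of centralized occupancy-grid fusion. Each robot's local
# map is a log-odds grid (0 = unknown, >0 = occupied evidence, <0 = free
# evidence); fusion is elementwise addition of aligned grids.

def fuse_log_odds(grids: list) -> np.ndarray:
    """Fuse aligned local maps by summing log-odds evidence."""
    return np.sum(grids, axis=0)

def to_probability(log_odds: np.ndarray) -> np.ndarray:
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-log_odds))

# Two robots observed overlapping 4x4 patches (synthetic values).
robot_a = np.zeros((4, 4))
robot_a[1, 1] = 2.0       # robot A saw an obstacle here
robot_a[2, :] = -1.0      # robot A swept this row as free
robot_b = np.zeros((4, 4))
robot_b[1, 1] = 1.5       # robot B confirms the obstacle
robot_b[0, 0] = -0.5      # robot B saw this corner as free

fused = fuse_log_odds([robot_a, robot_b])
prob = to_probability(fused)
print("occupied cell probability:", round(float(prob[1, 1]), 3))
```

Because agreement adds in log-odds space, the cell both robots marked occupied ends up far more confident than either robot's map alone; a real pipeline would first align the grids with the robots' estimated poses.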
I think we have reached the age where a lot of online courses have peaked. If you are serious about learning in this age, pick a good book to get your fundamentals right. Then go through the documentation, read the architecture diagrams, read the source code, and find good company blogs that publish their experience with the technology. Get hands-on with it, and you will learn more than any course will ever teach you. I followed exactly these steps to learn NVIDIA Dynamo, and you can ask me anything about Dynamo before any course instructor even publishes on what NVIDIA Dynamo is all about. I run inference on Qwen3 0.6B, Qwen3 Coder 30B, and Qwen3 Coder 480B on my GKE cluster with 2 nodes, each with 8x NVIDIA H100 (16 GPUs in total) for the disaggregated components. #artificialintelligence #deeplearning #machinelearning #llm #innovation #technology #startups #ai #cloudengineering #cloudcomputing
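For readers unfamiliar with the "disaggregated components" mentioned above: the idea behind disaggregated serving is to split inference into a compute-bound prefill phase (processing the prompt, building the KV cache) and a memory-bandwidth-bound decode phase (generating tokens one at a time), run on separate GPU pools. The sketch below illustrates only that split; every name and data shape in it is made up, and it is not Dynamo's API.

```python
from dataclasses import dataclass

# Conceptual sketch of disaggregated LLM serving: prefill workers build the
# KV cache for a prompt; decode workers then stream tokens from it. All
# names and shapes are illustrative stand-ins, not Dynamo's API.

@dataclass
class KVCache:
    prompt: str
    entries: list          # stand-in for per-token key/value tensors

def prefill_worker(prompt: str) -> KVCache:
    """Compute-bound phase: process the whole prompt in one pass."""
    return KVCache(prompt=prompt,
                   entries=[f"kv({tok})" for tok in prompt.split()])

def decode_worker(cache: KVCache, max_new_tokens: int) -> list:
    """Bandwidth-bound phase: generate one token at a time, extending
    the cache that was transferred from the prefill pool."""
    out = []
    for i in range(max_new_tokens):
        token = f"tok{i}"       # a real decoder would sample from the model
        cache.entries.append(f"kv({token})")
        out.append(token)
    return out

cache = prefill_worker("explain disaggregated inference")  # prefill GPU pool
tokens = decode_worker(cache, max_new_tokens=4)            # decode GPU pool
print(len(cache.entries), tokens)
```

The payoff of the split is that each pool can be sized and scheduled for its own bottleneck instead of one GPU alternating between the two regimes.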
🚀 JIO ROBOTICS joins the NVIDIA Inception Program! We’re thrilled to share that JIO ROBOTICS has officially joined NVIDIA Inception, the global accelerator empowering startups shaping the future of #AI and #robotics. This milestone strengthens both our #innovation pillars: 🤖 JIO ROBOTICS (Humanoid & Autonomous Systems) — advancing intelligent, real-world robots powered by NVIDIA #GPU technologies for manufacturing, mobility, and human-robot collaboration. 🎓 JIO ROBOTICS Academy — building the next generation of AI-driven robotic engineers through hands-on education, simulation, and cloud-based training with NVIDIA for Startups’s cutting-edge AI ecosystem. Together, these initiatives mark a new chapter in our mission to integrate AI, education, and automation — bridging innovation from classroom to industry. Thank you, NVIDIA Inception, for accelerating our journey toward a smarter, more connected robotic future. #ai #ml #dl #robotics #humanoidrobots #mechanical #electrical #industrial #manufacturing #commercialworker #aieducation #roboticseducation NVIDIA AI NVIDIA Omniverse NVIDIA Robotics NVIDIA DRIVE NVIDIA Data Center NVIDIA Healthcare NVIDIA GTC NVIDIA for Startups NVIDIA Networking JIO ROBOTICS JIO ROBOTICS Academy JIO ROBOTICS 3D Printing and Design
🌍 Bringing AI Power to the Edge: NVIDIA Jetson Sets a New Standard

As artificial intelligence expands beyond data centers, one platform continues to redefine what’s possible at the edge: NVIDIA Jetson.

According to NVIDIA: “The NVIDIA Jetson Orin Nano™ Super Developer Kit is a compact, yet powerful computer that redefines generative AI for small edge devices. It delivers up to 67 TOPS of AI performance—a 1.7X improvement over its predecessor—to seamlessly run the most popular generative AI models, like vision transformers, large language models, vision-language models, and more.” — NVIDIA Official Product Page (https://lnkd.in/dPwA6AXe)

At just $249, this kit gives developers, students, and makers one of the most affordable and accessible platforms for building real-world AI solutions, backed by NVIDIA’s extensive AI software ecosystem.

🔍 Key Highlights
- Up to ≈ 67 TOPS of INT8 AI performance (up from 40 TOPS in earlier Nano modules)
- 102 GB/s memory bandwidth for faster data handling and model throughput
- Ampere GPU architecture with 1,024 CUDA cores and 32 Tensor Cores, paired with a 6-core Arm Cortex-A78AE CPU
- 8 GB LPDDR5 memory (also available in a 4 GB configuration)
- Power envelope of roughly 7 W to 25 W depending on workload

⚙️ What This Enables
- Local AI inference at the edge, supporting advanced vision, multimodal, and compact language models (SLMs/LLMs) without relying on the cloud
- Ultra-low-latency processing for robotics, automation, and industrial IoT
- Enhanced data privacy and reliability: sensitive data stays on device, ensuring autonomy even in low-connectivity environments
- A scalable ecosystem through the NVIDIA JetPack SDK, TensorRT, and CUDA libraries for accelerated AI development

Developers and researchers have already demonstrated running quantized and efficient language models (e.g., LLaMA 2 variants and Phi-class models) on Jetson hardware, powered by NVIDIA’s TensorRT-LLM optimizations.

💡 Why It Matters
The Jetson line continues to advance the idea that AI doesn’t have to live in the cloud. With the Orin Nano Super, NVIDIA brings data-center-grade intelligence to edge systems, enabling faster, more private, and more autonomous AI applications.

A special acknowledgment to NVIDIA for engineering this remarkable platform and for continuously empowering developers to push the limits of what’s possible in embedded and edge AI.

#NVIDIA #Jetson #EdgeAI #EmbeddedAI #AIInference #Robotics #Innovation #GenerativeAI #Technology
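The 102 GB/s bandwidth figure above sets a useful rule of thumb: in single-stream LLM decoding, every generated token streams roughly the full set of weights from memory, so tokens/sec is bounded by bandwidth divided by model size. The model sizes and quantization levels below are illustrative assumptions, and real throughput will be lower than this ceiling.

```python
# Back-of-envelope decode ceiling for a bandwidth-bound LLM on a device
# with 102 GB/s memory bandwidth (Jetson Orin Nano Super figure from the
# post). Model sizes below are illustrative assumptions.

def max_tokens_per_sec(bandwidth_gb_s: float, params_billion: float,
                       bytes_per_param: float) -> float:
    """Rough ceiling: every token reads all weights once from memory."""
    model_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_gb

bw = 102.0   # GB/s

for name, params, bpp in [("7B @ FP16", 7.0, 2.0),
                          ("7B @ INT4", 7.0, 0.5),
                          ("2.7B @ INT4", 2.7, 0.5)]:
    print(f"{name}: ~{max_tokens_per_sec(bw, params, bpp):.0f} tok/s ceiling")
```

The arithmetic also explains why quantization matters so much at the edge: halving bytes per parameter doubles the bandwidth-bound ceiling on the same hardware.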
Explore related topics
- Digital Twin Analytics in Engineering
- Digital Twins for Simulation and Modeling
- Digital Twin in Mechanical Systems
- Digital Twins in R&D
- How Digital Twins Change Industry Operations
- Digital Twins in Oil Field Operations
- Digital Twin Technologies in Construction
- How Digital Twins Improve Decision-Making
- Future Applications of Digital Twins in Business
- Digital Twin Technologies for Business Simulation