The most interesting paper I read last week shows promising work on optical generative models that offload part of the compute from electrons to photons. UCLA researchers demonstrated generating images by physical light propagation rather than digital computation. The approach maps diffusion-model concepts to free-space optics: a shallow digital encoder converts random noise into phase patterns, then a trained diffractive surface processes the light to generate images, with the optical computation happening in under a nanosecond (overall speed is limited by the SLM refresh rate). This cross-pollination between fields (GenAI → Optics → GenAI) strengthens the entire research ecosystem. Moving from bits to light as the medium opens new possibilities for energy-efficient and significantly faster inference. Paper: https://lnkd.in/gC4By9Vv
UCLA researchers use light to generate images, faster and more energy-efficient than digital computation.
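As a toy illustration of the physics (my own sketch, not the authors' system): a phase-only pattern modulates a coherent beam, and free-space propagation to the far field, a single Fourier transform under the Fraunhofer approximation, produces the output intensity image. The random "encoder" below is an untrained stand-in for the paper's shallow digital encoder, and one FFT stands in for the trained diffractive surface:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": map a random noise vector to a phase pattern. This is an
# untrained random linear map, standing in for the paper's shallow digital
# encoder; phases are wrapped into [0, 2*pi).
N = 64                                   # modulator resolution (toy scale)
noise = rng.standard_normal(16)          # random noise input
W = rng.standard_normal((N * N, 16))     # untrained linear "encoder"
phase = ((W @ noise) % (2 * np.pi)).reshape(N, N)

# Optical field leaving a phase-only modulator: unit amplitude, encoded phase.
field = np.exp(1j * phase)

# Far-field (Fraunhofer) propagation of the aperture field is a Fourier
# transform; the camera records intensity only.
far_field = np.fft.fftshift(np.fft.fft2(field, norm="ortho"))
image = np.abs(far_field) ** 2

# Lossless propagation conserves energy (Parseval) and intensities are
# non-negative -- two quick sanity checks on the toy model.
print(image.shape, image.min() >= 0.0, abs(image.sum() - N * N) < 1e-6)
```

In the real system the diffractive surface is optimized so this intensity matches samples of a target distribution; with the untrained random mask above, the output is just a speckle pattern.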
Happy to share the latest version of our paper on differentiable turbulence closure modeling!

In this new version, we extend our FEM (FEniCSx)-based differentiable physics framework from two-dimensional flows to three-dimensional turbulence, enabling a-posteriori learning of subgrid-scale (SGS) closures. We employ Graph Neural Networks (GNNs) within a PDE-constrained differentiable framework to learn closures that generalize to unseen geometries, including flows with different step heights.

Interestingly, the model also performs strongly under sparse training conditions: when trained only on downstream flow-field data, it still captures universal turbulent dynamics, showing its potential for real engineering applications. Moreover, the learned closure preserves complex multiscale turbulent structures, maintaining coherence across a wide range of flow scales.

To address the GPU memory bottlenecks of GNNs in 3D simulations, we introduce several key strategies:
- Reduced rollout length during backpropagation through time
- Graph construction using first-order Lagrangian elements to limit node counts
- Multi-GPU parallelization with MPI, distributing domains across GPUs for scalable training

The results demonstrate that our learned closure reproduces physically consistent turbulence statistics (mean profiles, Reynolds stresses, and higher-order correlations) while maintaining numerical stability and multiscale physical fidelity over long horizons. This work represents a step toward generalizable, differentiable physics-based turbulence modeling for realistic 3D flow configurations.

Huge thanks to my co-author Romit Maulik! [Link to preprint: https://lnkd.in/gJXcw6qM]
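The reduced-rollout trick can be illustrated on a scalar toy problem (my own minimal sketch, not the paper's FEniCSx/GNN code): a coarse solver with decay rate a = 0.8 learns a closure coefficient that recovers the reference rate 0.9, and the sensitivity used for backpropagation through time is reset every K steps, just as truncating the rollout caps GPU memory for a GNN closure.

```python
import numpy as np

# Toy a-posteriori closure learning with truncated backpropagation through
# time (BPTT). The reference dynamics decay with rate 0.9; the coarse solver
# only knows a = 0.8, and a scalar closure theta * x must learn the missing
# 0.1. This stands in for the GNN closure inside a differentiable solver.
a, theta = 0.8, 0.0
T, K, lr = 20, 5, 1e-3            # rollout length, truncation window, step size
ref = 0.9 ** np.arange(T + 1)     # reference trajectory ("training data")

for _ in range(5000):
    x, s, grad = 1.0, 0.0, 0.0    # state, d x / d theta, loss gradient
    for t in range(T):
        if t % K == 0:
            s = 0.0               # truncate BPTT: drop sensitivity to older steps
        s = (a + theta) * s + x   # sensitivity of the next state w.r.t. theta
        x = (a + theta) * x       # solver step plus learned closure
        grad += 2.0 * (x - ref[t + 1]) * s
    theta -= lr * grad

print(round(a + theta, 3))        # learned effective rate, close to 0.9
```

Truncation biases the gradient but not (here) the optimum: once the rollout matches the reference everywhere, the gradient vanishes regardless of how short the truncation window is.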
The Anatomy of Multiscale Discovery: Our new paper in the Materials Research Society's MRS Bulletin shows how GNNs plus reasoning-driven, massively agentic AI can autonomously design alloys - linking atomic vibrations, dislocation physics, and macroscopic strength. Beyond prediction, this is an important step toward AI that discovers new laws of matter. The system is composed of three organs, working in concert:

1️⃣ Sensing: an in-situ trained graph neural network that perceives the structure of matter, translating atoms and dislocations into patterns of energy and motion that form a relational world model. It acts as the sensory layer, compressing the raw world of atoms into meaningful variables like Peierls barriers and solute-dislocation interactions - the "language of physics".

2️⃣ Reasoning: an architecture of planners and interpreters. These components coordinate like a cognitive cortex: posing questions, designing experiments, interpreting anomalies, and critiquing their own logic. They embody selective imperfection; not perfect prediction, but purposeful deviation in the pursuit of new structure.

3️⃣ Integration: a physics model that ties these threads into law, linking atomic behavior to macroscopic strength.

What this enables: The system explored hundreds of NbMoTa alloy compositions across the full ternary space in seconds - work that would require months of atomistic simulations. It achieved R² = 0.97 accuracy for Peierls barrier predictions and autonomously identified composition-property relationships, including predicting temperature-dependent yield stress validated against experimental data. The system reconstructs the generative rules that make those properties emerge.

Together, these layers form a self-referential loop of perception, reasoning, and synthesis. Each cycle begins in difference - variations among atoms, data, or ideas - and ends in organization. And then it begins again.
This work exemplifies how discovery grows not by erasing differences, but by organizing them. This system can see structure, reason over it, and re-compose it into law - echoing the deepest function of science and art alike! Great work by my postdoc Alireza Ghafarollahi! Citation and link to paper in comment.
When I was a grad student, my work focused on physics simulations using classical numerical methods like finite elements and finite differences. I specialized in magnetohydrodynamics (MHD), the coupling between electromagnetism and fluid flow.

One of the hardest parts of MHD is that the coupling is nonlinear and time-dependent. For a computational scientist, that means you must perform time integration and solve a nonlinear system at every time step. Each nonlinear solve requires multiple preconditioned linear system solves, and these cannot be parallelized across time, since every step depends on the previous one. Want to go stochastic? Wrap the whole thing in a massive Monte Carlo simulation. Modeling just one second of physical time can mean hundreds of sequential preconditioned solves. It's brutal...

Back then, I believed that real-time simulation for these systems was decades away. But after entering industry and working with machine learning, I realized the technology is already here. With Physics-Informed Neural Networks (PINNs), the differential equations themselves become the loss function. You can train a model that learns the physical solution directly, even without data. Or better yet, combine data and PDEs to accelerate convergence and improve accuracy.

Yes, training is computationally expensive. But once trained, the model becomes an ultra-fast solver, capable of inference in milliseconds. The implications are massive, from rocket design, plasma physics, and semiconductor manufacturing to space weather modeling.

I feel lucky to have stumbled onto a solution to a problem I once thought impossible.

#MachineLearning #PhysicsInformedML #PDEs #Simulation #PINNs #ComputationalScience #AIEngineering https://lnkd.in/eCTWqiyD
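The "equations become the loss function" idea fits in a few lines. Here is a dependency-free sketch for the 1D Poisson problem u'' = f with exact solution sin(πx), using a small sine basis in place of a neural network (a real PINN would use a network plus automatic differentiation, but the core idea is identical: minimize the PDE residual, no solution data required):

```python
import numpy as np

# Minimal "physics-as-loss" demo for the 1D Poisson problem
#   u''(x) = f(x),  u(0) = u(1) = 0,  f(x) = -pi**2 * sin(pi*x),
# whose exact solution is u(x) = sin(pi*x).
K, N, lr = 3, 50, 1e-4
x = (np.arange(N) + 0.5) / N                  # collocation points in (0, 1)
k = np.arange(1, K + 1)
c = np.zeros(K)                               # trainable coefficients

# u(x) = sum_k c_k sin(k*pi*x) satisfies the boundary conditions exactly,
# and u''(x) = sum_k -(k*pi)**2 * c_k * sin(k*pi*x).
phi = np.sin(np.pi * np.outer(x, k))          # basis values, shape (N, K)
d2phi = -(np.pi * k) ** 2 * phi               # second derivatives of the basis
f = -np.pi ** 2 * np.sin(np.pi * x)

for _ in range(20000):
    residual = d2phi @ c - f                  # PDE residual at collocation points
    c -= lr * (2.0 / N) * d2phi.T @ residual  # gradient descent on mean residual**2
print(np.round(c, 3))                         # converges to u = sin(pi*x), i.e. c ~ [1, 0, 0]
```

No data anywhere in the loop: the only supervision is how badly the candidate solution violates the differential equation at the collocation points.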
🔥 From 𝗦𝘁𝗿𝗼𝗻𝗴 to 𝗪𝗲𝗮𝗸! Proud to share our latest publication in IEEE Transactions on Magnetics:

📘 "Weak Formulation for Physics-Informed Neural Networks in the Resolution of Analysis Problems in Electromagnetics"
🔗 Link: https://lnkd.in/eXupApg9

🤖 We introduce a weak (variational) formulation PINN, where the physical laws are enforced through integral residuals rather than pointwise constraints:
📉 reducing derivative order,
⚡ improving convergence, and
🎯 enhancing overall accuracy.

In electromagnetics, solving for the potential is just the first step. All key quantities, such as the fields, come from its derivatives. ⚠️ Even small potential errors can be amplified downstream, degrading field accuracy.

✅ By reformulating PINNs in their weak (variational) form, we showed that minimizing the integral residual reduces not only the potential error but also the field errors, making this approach highly robust for complex EM problems involving multiple interdependent parameters.

⚙️ This work bridges the rigor of Finite Element Methods with the flexibility of Deep Learning, pushing forward the frontier of physics-consistent neural solvers.

🌟 Honored to work alongside Prof. Sami Barmada and Prof. Alessandro Formisano, whose vision and mentorship continually drive innovation at the intersection of electromagnetics and AI.

#PhysicsInformedNeuralNetworks #Electromagnetics #DeepLearning #MachineLearning #Research #IEEE #WeakFormulation #VariationalMethods #ComputationalPhysics
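For readers less familiar with the weak formulation, the derivative-order reduction comes from integration by parts. Here is the standard derivation for a model scalar problem (a generic sketch; the paper's electromagnetic formulation differs in detail):

```latex
% Model problem: Poisson-type equation for a potential u
-\nabla\cdot\big(\varepsilon\,\nabla u\big) = \rho \quad \text{in } \Omega,
\qquad u = 0 \quad \text{on } \partial\Omega .

% Strong-form PINN loss: pointwise residuals need SECOND derivatives of u_\theta
\mathcal{L}_{\mathrm{strong}}(\theta)
  = \frac{1}{N}\sum_{i=1}^{N}
    \Big( \nabla\cdot\big(\varepsilon\,\nabla u_\theta\big)(x_i) + \rho(x_i) \Big)^2 .

% Multiply by a test function v \in H^1_0(\Omega) and integrate by parts:
\int_\Omega \varepsilon\,\nabla u_\theta \cdot \nabla v \,\mathrm{d}\Omega
  = \int_\Omega \rho\, v \,\mathrm{d}\Omega
  \qquad \forall\, v .

% Weak-form PINN loss over a finite set of test functions v_j:
% only FIRST derivatives of the network appear
\mathcal{L}_{\mathrm{weak}}(\theta)
  = \sum_{j}
    \Big( \int_\Omega \varepsilon\,\nabla u_\theta \cdot \nabla v_j \,\mathrm{d}\Omega
        - \int_\Omega \rho\, v_j \,\mathrm{d}\Omega \Big)^2 .
```

One derivative moves from the trial function onto the test function, which is exactly the mechanism finite element methods use; the network only ever needs first derivatives.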
Continuous-variable Photonic Quantum Extreme Learning Machines Enable Fast Collider-data Selection and Analysis

Researchers demonstrate a photonic processor that rapidly identifies particles from collider experiments, achieving accuracy comparable to that of complex machine learning models while training significantly faster and operating with minimal power consumption. #quantum #quantumcomputing #technology https://lnkd.in/enA47BYk
Article Spotlight: Flow Matching Meets PDEs: A Unified Framework for Physics-Constrained Generation

Why bridging physics and generative AI matters
Generative models such as diffusion and flow matching have reshaped how we represent complex systems, yet their use in physics and engineering still faces a key obstacle. These models learn patterns from data, not the governing laws that define physical reality. In systems described by partial differential equations (PDEs), quantities such as mass, energy, and momentum must be conserved for predictions to be valid. Many data-driven approaches ignore these principles, leading to results that may appear correct but violate physics. Achieving data-driven generation that also respects physical laws has therefore become one of the most important goals in scientific machine learning.

What this paper contributes
Researchers from Politecnico di Milano and the Technical University of Munich introduce Physics-Based Flow Matching (PBFM), a generative modeling framework that integrates physical constraints directly into the flow matching objective. PBFM aligns the optimization of the physics and data terms so that both objectives improve together. The method also adds temporal unrolling during training, refining noise-free samples over multiple integration steps to enhance accuracy and stability. Across benchmarks such as Darcy flow, Kolmogorov turbulence, and dynamic stall around an airfoil, PBFM reduces physical residual errors by up to eight times compared to existing diffusion and flow-based models, while maintaining high distributional accuracy.

How this advances physics-informed AI
By embedding physics into the heart of generative training, PBFM enables the creation of surrogate models that are both accurate and physically consistent. The framework removes the need for manual weighting between data fidelity and physical residuals, which has long hindered multi-objective learning. It also scales efficiently to high-dimensional simulations, generating results that obey governing equations without additional inference cost. This unified approach positions PBFM as a foundation for reliable AI-driven modeling, design, and control in fluid dynamics, materials science, and engineering systems.

📄 Reference: Giacomo Baldan, Qiang Liu, Alberto Guardone, and Nils Thuerey. "Flow Matching Meets PDEs: A Unified Framework for Physics-Constrained Generation." arXiv preprint arXiv:2506.08604v1 (June 2025). Link in comments.
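Schematically, a physics-constrained flow matching objective can be written as follows. This is a simplified sketch with a fixed weight λ; a key point of PBFM is precisely removing that manual weighting by aligning the gradients of the two terms:

```latex
% Linear interpolation path between noise x_0 and data x_1, with t \in [0,1]
x_t = (1-t)\,x_0 + t\,x_1 .

% Standard flow matching: regress the velocity field onto the path direction
\mathcal{L}_{\mathrm{FM}}(\theta)
  = \mathbb{E}_{x_0,\,x_1,\,t}
    \big\| v_\theta(x_t, t) - (x_1 - x_0) \big\|^2 .

% One-step noise-free estimate implied by the current velocity field
\hat{x}_1 = x_t + (1-t)\, v_\theta(x_t, t) .

% Physics-constrained objective: also penalize the PDE residual R of the estimate
\mathcal{L}(\theta)
  = \mathcal{L}_{\mathrm{FM}}(\theta)
  + \lambda\, \mathbb{E}\big\| R(\hat{x}_1) \big\|^2 .
```

The temporal unrolling described above refines \(\hat{x}_1\) over several integration steps before evaluating the residual, rather than relying on this single-step estimate.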
MIT Researchers Develop Brain-Inspired Neuromorphic Computing for Energy-Efficient AI

Researchers at the Massachusetts Institute of Technology (MIT) are advancing brain-inspired computing technologies to make artificial intelligence systems more energy efficient and environmentally sustainable. Their work focuses on electrochemical neuromorphic computing, a novel approach that integrates memory and processing in a single device, mimicking the function of neurons and synapses in the human brain. #MIT #NeuromorphicComputing
More exciting news (2/#): I am hosting the Neural Image Processing Tutorial at ICCV 2025 📷🌴🌊 We will discuss the AI techniques used in smartphone cameras:
- Datasets
- RAW processing
- Neural Image Signal Processors
- Controllable image enhancement
- VLMs and more!

My colleagues Feiran Li and Jiacheng Li from SonyAI will introduce a new dataset for RAW denoising and new techniques for RAW processing. Thanks to Tom Bishop, Jingxi Li and Shivansh Rao from GLASS Imaging; looking forward to learning more about modern optics and neural ISPs -- let's see that magical ultra zoom.

We welcome everyone working on computational photography, imaging, and low-level computer vision!

About ICCV: The International Conference on Computer Vision is the world's premier biennial research conference covering advances in AI and computer vision. #ICCV25 #ComputerVision
ExpertSim: Accelerating discovery through generative simulation

At ECAI 2025, Solvd's research team is presenting a new approach that redefines how particle physics simulations are performed. ExpertSim uses a Mixture-of-Generative-Experts to simulate complex particle collisions faster and with greater precision. By selecting specialized neural modules to process diverse data, it improves performance while cutting computational costs. The proposed approach improves on previous state-of-the-art solutions by 15%.

Explore the research presented at ECAI 2025 and see how Solvd's generative AI innovation is driving breakthroughs in scientific computing: https://hubs.la/Q03QJ-BT0
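The mixture-of-generative-experts idea, in miniature: a gating function routes each conditioning input to one specialized generator, so no single model has to cover the whole data distribution. This is an illustrative sketch of that routing pattern only, not Solvd's ExpertSim architecture; the threshold rule and the two Gaussian "experts" below are hypothetical stand-ins for learned components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical specialist generators, each tuned to a different data regime.
def expert_low(cond):   # specialist for low-energy events (toy stand-in)
    return rng.normal(loc=cond, scale=0.1)

def expert_high(cond):  # specialist for high-energy events (toy stand-in)
    return rng.normal(loc=cond, scale=1.0)

experts = [expert_low, expert_high]

def gate(cond):
    # Hard routing rule standing in for a learned gating network.
    return 0 if cond < 5.0 else 1

def generate(cond):
    # Dispatch to the selected expert; only one module runs per sample,
    # which is where the computational saving comes from.
    return experts[gate(cond)](cond)

samples = [generate(c) for c in (1.0, 2.0, 8.0)]
print(len(samples), gate(1.0), gate(8.0))  # 3 0 1
```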
Thank you for sharing this, Nick. Promising demonstration of physical AI.