Solving the many-electron Schrödinger equation with Transformers

Every material property, in principle, comes from solving the many-electron Schrödinger equation. But the math is brutal: the Hilbert space grows exponentially, and even the best methods—DFT, coupled-cluster, DMRG—hit hard limits when strong electron correlation or large active spaces appear.

Honghui Shang and coauthors present QiankunNet, a neural-network quantum state inspired by large language models. At its core is a Transformer wavefunction ansatz, where attention captures long-range electron correlations directly. Instead of slow Markov chains, it uses autoregressive sampling—generating uncorrelated electron configurations one by one, guided by Monte Carlo tree search. Physics-informed initialization from truncated CI keeps the model close to physical reality from the start.

The result is striking: QiankunNet recovers 99.9% of FCI correlation energy for molecules up to 30 spin orbitals, handles N₂/cc-pVDZ (56 qubits, 14 e⁻) within 3.3 mHa of a DMRG reference, and even tackles the Fenton reaction with a CAS(46e,26o) active space—capturing complex multi-reference chemistry around Fe(II)/Fe(III) oxidation. Compared to previous NNQS, it is both faster (∼10× at 30 orbitals) and more accurate.

This points toward a future where attention models don’t just process words, but represent quantum wavefunctions—bringing LLM-inspired architectures into the heart of quantum chemistry.

Paper: https://lnkd.in/disnvEVi

#QuantumChemistry #ArtificialIntelligence #MachineLearning #DeepLearning #Transformers #NeuralNetworks #QuantumPhysics #ComputationalChemistry #QuantumMaterials #AIforScience #QuantumComputing #Physics #Chemistry #SchrodingerEquation #ScientificInnovation
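To make the autoregressive idea concrete, here is a minimal sketch of sampling electron configurations orbital by orbital. It is not the QiankunNet implementation: the placeholder conditional_prob stands in for the Transformer, and the orbital and electron counts are arbitrary. The point is that each configuration is drawn in a single left-to-right pass, with the particle-number constraint enforced on the fly, so samples are independent and no Markov-chain burn-in is needed.

```python
# Minimal sketch of autoregressive sampling of electron configurations,
# in the spirit of (but not taken from) the QiankunNet paper. Any network
# that maps a partial occupation string to p(n_k = 1 | n_1 ... n_{k-1})
# would slot in where conditional_prob sits.
import numpy as np

rng = np.random.default_rng(0)

N_ORBITALS = 8   # spin orbitals (arbitrary toy value)
N_ELECTRONS = 4  # particle-number constraint enforced during sampling

def conditional_prob(prefix):
    """Placeholder for a Transformer returning p(next orbital occupied).

    A real NNQS would run attention over the prefix; here we use a fixed
    value so the example stays self-contained and runnable.
    """
    return 0.5

def sample_configuration():
    """Draw one configuration orbital-by-orbital (autoregressively).

    Each sample comes from exact left-to-right conditionals, so draws
    are uncorrelated and need no Markov-chain equilibration.
    """
    occ = []
    for k in range(N_ORBITALS):
        remaining_orbitals = N_ORBITALS - k
        remaining_electrons = N_ELECTRONS - sum(occ)
        # Enforce the electron count: forbid impossible branches.
        if remaining_electrons == 0:
            p = 0.0
        elif remaining_electrons == remaining_orbitals:
            p = 1.0
        else:
            p = conditional_prob(occ)
        occ.append(int(rng.random() < p))
    return occ

for s in (sample_configuration() for _ in range(5)):
    print(s, "n_elec =", sum(s))
```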
Solving Quantum Dynamics Challenges in Research
Summary
Solving quantum dynamics challenges in research means finding new ways to understand and simulate how quantum systems change and interact over time, which is key for breakthroughs in physics, chemistry, and computing. Researchers are tackling tough problems like scaling quantum computers, isolating delicate quantum states, and modeling complex particle behavior using math, innovative algorithms, and powerful quantum hardware.
- Apply hybrid approaches: Combine classical and quantum methods, such as tensor networks mapped to quantum circuits, to simplify and solve large-scale computational problems previously out of reach.
- Harness advanced frameworks: Use simulation tools like ARQUIN to model distributed quantum systems, enabling more accurate scaling and resource management for quantum computers.
- Refine mathematical boundaries: Employ exact mathematical frameworks and novel error bounds to precisely measure and decouple quantum states, improving system performance and resilience to environmental noise.
-
Quantum Scaling Recipe: ARQUIN Provides Framework for Simulating Distributed Quantum Computing Systems

Key Insights:
• Researchers from 14 institutions collaborated under the Co-design Center for Quantum Advantage (C2QA) to develop ARQUIN, a framework for simulating large-scale distributed quantum computers across different layers.
• The ARQUIN framework was created to address the “challenge of scale”—one of the biggest hurdles in building practical, large-scale quantum computers.
• The results of this research were published in the ACM Transactions on Quantum Computing, marking a significant step forward in quantum computing scalability research.

The Multi-Node Quantum System Approach:
• The research, led by Michael DeMarco from Brookhaven National Laboratory and MIT, draws inspiration from classical computing strategies that combine multiple computing nodes into a single unified framework.
• In theory, distributing quantum computations across multiple interconnected nodes can enable the scaling of quantum computers beyond the physical constraints of single-chip architectures.
• However, superconducting quantum systems face a unique challenge: qubits must remain at extremely low temperatures, typically achieved using dilution refrigerators.

The Cryogenic Scaling Challenge:
• Dilution refrigerators are currently limited in size and capacity, making it difficult to scale a quantum chip beyond certain physical dimensions.
• The ARQUIN framework introduces a strategy to simulate and optimize distributed quantum systems, allowing quantum processors located in separate cryogenic environments to interact effectively.
• This simulation framework models how quantum information flows between nodes, ensuring coherence and minimizing errors during inter-node communication.

Implications of ARQUIN:
• Scalability: ARQUIN offers a roadmap for scaling quantum systems by distributing computations across multiple quantum nodes while preserving quantum coherence.
• Optimized Resource Allocation: The framework helps determine the optimal allocation of qubits and operations across multiple interconnected systems.
• Improved Error Management: Distributed systems modeled by ARQUIN can better manage and mitigate errors, a critical requirement for fault-tolerant quantum computing.

Future Outlook:
• ARQUIN provides a simulation-based foundation for designing and testing large-scale distributed quantum systems before they are physically built.
• This framework lays the groundwork for next-generation modular quantum architectures, where interconnected nodes collaborate seamlessly to solve complex problems.
• Future research will likely focus on enhancing inter-node quantum communication protocols and refining the ARQUIN models to handle larger and more complex quantum systems.
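One way to see why inter-node links are the bottleneck such a framework has to model: compare how intra-node gate errors and inter-node link errors compound. The sketch below is a toy cost model of my own, not part of ARQUIN, and every number in it is an illustrative assumption.

```python
# Toy cost model for a distributed circuit: fidelity decays with every
# operation, and inter-node links are typically far noisier than on-chip
# gates. Illustrative assumptions only -- neither the numbers nor this
# model come from the ARQUIN paper.
p_gate = 1e-3      # assumed on-chip two-qubit gate error
p_link = 5e-2      # assumed inter-node (cryogenic link) operation error

def circuit_fidelity(n_gates, n_links):
    """Crude proxy: probability that no operation fails, assuming
    independent errors."""
    return (1 - p_gate) ** n_gates * (1 - p_link) ** n_links

# Same logical workload, single node vs. split across nodes (extra links).
print(circuit_fidelity(n_gates=5000, n_links=0))    # ~0.007
print(circuit_fidelity(n_gates=5000, n_links=50))   # ~0.0005
```

Even a handful of noisy links dominates the budget, which is why modeling and optimizing inter-node communication is the crux of distributed designs.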
-
Isolating fragile quantum states relies on specific mathematical boundaries. Scaling quantum hardware involves eliminating correlations between a local system and its surrounding environment. When a bipartite quantum state undergoes a unitary operation followed by a decoupling map, the objective is to make the resulting system independent of environmental noise. Past approaches to calculating decoupling error limits relied on approximations and smoothing techniques.

A joint research initiative between RWTH Aachen University and National Taiwan University introduces a one-shot decoupling theorem. This study defines the decoupling error bound through exact mathematical structures rather than general estimations. The research was conducted by Mario Berta, Yongsheng Yao, and Hao-Chung Cheng.

Consider the technical parameters of this published theorem:
→ It utilizes the quantum relative entropy distance instead of the standard trace distance criteria.
→ It provides a precise characterisation of the one-shot decoupling error without using smoothing techniques or additive terms.
→ It delivers a single-letter expression for exact error exponents in quantum state merging.
→ It outlines achievability bounds for entanglement distillation assisted by local operations and classical communication.

These mathematical limits apply directly to system performance. For coding rates below the first-order asymptotic capacity, the error decays exponentially in the blocklength. This provides a large-deviation characterisation that is mathematically stronger than conventional first-order approaches.

Relative entropy operates as the primary metric for defining the capacity of these operational tasks. The bounds formulated under relative entropy convert directly into purified distance statements via standard entropy-fidelity inequalities. This establishes a strict performance criterion for applications like quantum channel simulation and secure channel coding.

The current theorem primarily addresses scenarios involving independent and identically distributed (i.i.d.) quantum states. The subsequent phase of research requires applying these refined entropy bounds to complex systems featuring correlated noise and memory. This research supplies experimental physicists with a defined mathematical framework for future quantum architecture.

How do you evaluate the transition from theoretical limits to functional quantum hardware? Reply in the comments.
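For readers who want the setup in symbols: the standard decoupling criterion (in its usual textbook form, with my conventions, not the paper's exact statement) asks that after a random unitary on A and a map from A to B, the output be close to a fixed state in product with the environment. The conversion from relative-entropy bounds to purified-distance statements mentioned above uses the standard inequality linking relative entropy and fidelity.

```latex
% Decoupling criterion (standard form; conventions assumed, not the
% paper's exact statement): for a state \rho_{AE}, a random unitary U_A,
% and a CPTP map \mathcal{T}_{A\to B},
\mathbb{E}_{U}\,\Big\|\,
  \mathcal{T}_{A\to B}\big(U_A\,\rho_{AE}\,U_A^{\dagger}\big)
  \;-\; \tau_B \otimes \rho_E \,\Big\|_1 \;\le\; \varepsilon .

% Entropy--fidelity bridge used to turn relative-entropy bounds into
% purified-distance bounds:
D(\rho\,\|\,\sigma) \;\ge\; -2\log F(\rho,\sigma),
\qquad
P(\rho,\sigma) \;=\; \sqrt{1 - F(\rho,\sigma)^2}.
```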
-
In an international collaboration, researchers from BasQ, CERN, UAM–CSIC, the Wigner Research Centre for Physics, and IBM have simulated the real-time dynamics of confining strings in a (2+1)-dimensional Z2-Higgs gauge theory with dynamical matter, leveraging a superconducting quantum processor with up to 144 qubits and 192 two-qubit layers (totaling 7,872 two-qubit gates).

This work tackles a longstanding challenge in high-energy physics: understanding the real-time dynamics of confinement in gauge theories with dynamical matter—a crucial aspect of non-perturbative quantum field theory, including quantum chromodynamics (QCD). Classical methods face fundamental limitations in simulating these dynamics, often requiring indirect approaches such as asymptotic in-out probes in collider experiments. Quantum processors, by contrast, now offer the opportunity to observe the microscopic evolution of confining strings directly, opening new pathways for studying these complex phenomena in real time.

To accomplish this, matter and gauge fields were encoded into superconducting qubits through an optimized mapping onto IBM’s heavy-hex architecture. By exploiting local gauge symmetries, the team applied a robust combination of error suppression, mitigation, and correction techniques—including novel methods such as gauge dynamical decoupling (GDD) and Gauss sector correction (GSC)—enabling high-fidelity observations of string dynamics, supported by 600,000 measurement shots per time step.

The results reveal both longitudinal and transverse string dynamics—including yo-yo oscillations and endpoint bending—as well as more complex processes such as string fragmentation and recombination, which are essential to understanding hadronization and rotational meson spectra from first principles. To predict large-scale real-time behavior and benchmark the experimental results, the study integrates state-of-the-art tensor network simulations using the basis update and Galerkin methods.

Altogether, this paper marks a significant milestone in the quantum simulation of non-perturbative gauge dynamics, showcasing how current quantum hardware can be used to explore real-time phenomena in fundamental physics.

Paper: https://lnkd.in/eD89BKqi
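For intuition about what such circuits look like, here is a heavily simplified sketch of one Trotter step for a (1+1)-dimensional Z2 gauge theory with matter, written in Qiskit. This is not the paper's (2+1)D circuit or its error-suppression stack; the Hamiltonian terms, couplings, and interleaved qubit layout are assumptions chosen for brevity. It only shows the basic pattern: single-qubit rotations for the electric-field and matter terms, plus a three-qubit ZZZ rotation for the gauge-invariant matter-link-matter coupling.

```python
# Highly simplified (1+1)D Z2 gauge + matter Trotter step in Qiskit.
# An illustrative sketch, NOT the (2+1)D circuit from the paper; the
# Hamiltonian terms, couplings, and qubit layout are assumptions.
from qiskit import QuantumCircuit

n_sites = 4                  # matter qubits, on even wire indices
n_links = n_sites - 1        # gauge qubits, interleaved on odd indices
qc = QuantumCircuit(n_sites + n_links)
dt, h_elec, m_mass, lam = 0.1, 1.0, 0.5, 0.8

def zzz_rotation(circ, a, b, c, angle):
    """exp(-i*angle/2 * Z_a Z_b Z_c) via a CNOT ladder onto qubit c."""
    circ.cx(a, b)
    circ.cx(b, c)
    circ.rz(angle, c)
    circ.cx(b, c)
    circ.cx(a, b)

# Electric-field term on each gauge link: exp(-i dt h X_link)
for link in range(1, 2 * n_sites - 2, 2):
    qc.rx(2 * dt * h_elec, link)
# Matter term: exp(-i dt m X_site) (a convention chosen for brevity)
for site in range(0, 2 * n_sites - 1, 2):
    qc.rx(2 * dt * m_mass, site)
# Gauge-invariant coupling Z_site Z_link Z_site across each link
for link in range(1, 2 * n_sites - 2, 2):
    zzz_rotation(qc, link - 1, link, link + 1, 2 * dt * lam)

print(qc.draw())
```

Repeating this step, with error suppression layered on top, is the backbone of digitized real-time evolution on hardware.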
-
#QuantumTuesday

What if the key to unlocking quantum computing's full potential lies not in brute force but in elegant simplicity?

As the GoTo Fractional Quantum Chief Intellectual Property Officer, I constantly explore the intersection of innovation, strategy, and disruptive technologies. Today, I’m thrilled to share insights from an extraordinary paper: "Tensor Quantum Programming" by A. Termanova et al. This work brilliantly merges tensor networks (TNs) and quantum computing, opening doors to solving some of the most complex computational problems of our time.

Imagine tackling partial differential equations, quantum chemistry simulations, or machine learning models not with overwhelming computational resources but by leveraging tensor efficiency and the unique strengths of quantum circuits. This hybrid approach - classical for simplicity, quantum for complexity - redefines the rules of computation.

Key takeaways from this breakthrough:

🔑 Efficiency Redefined: TNs are mapped to quantum circuits, creating a paradigm where high-dimensional problems scale linearly in complexity. Yes, you read that right - linear scalability in quantum circuits for problems that traditionally overwhelmed classical systems.

🔑 Applications Everywhere:
- Simulating Hamiltonians for quantum systems.
- Optimizing black-box functions with precision.
- Revolutionizing quantum chemistry, from molecular dynamics to electron correlations.
- Enhancing machine learning models by encoding TN architectures directly onto quantum platforms.

🔑 The Future Is Here: By bridging the gap between classical and quantum resources, Tensor Quantum Programming paves the way for solving real-world problems, from innovation-driven industries to fundamental research.

This paper highlights an important truth: quantum computing isn't about doing more of the same; it’s about doing what was previously impossible. For those of us in the business of strategy and intellectual property, such breakthroughs represent not just scientific progress but entirely new frontiers for value creation. As an IP Alchemist, this inspires me to think about how we can protect and leverage these innovations to shape industries and fuel growth.

How do we ensure that the architectures we build today are not just protected but optimized for tomorrow’s quantum future? What are your thoughts on the role of hybrid approaches like this in quantum computing? Let’s connect and dive into the possibilities. 🚀

#QuantumComputing #TensorNetworks #InnovationStrategy #IPManagement #DeepTechDisruption

Terra Quantum AG Markus Pflitsch Artem Melnikov Aleksandr Berezutskii Roman Ellerbrock Michael Perelshtein
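For readers who want to see the core object behind "TNs mapped to quantum circuits": a matrix product state, the simplest tensor network, built by sequential SVD with a capped bond dimension. This toy decomposition is standard textbook material, not code from the Termanova et al. paper; the function name and truncation choice are mine.

```python
# Minimal matrix-product-state (MPS) decomposition by sequential SVD --
# the basic tensor-network object that circuit mappings build on.
# Toy textbook code, not taken from the paper.
import numpy as np

def state_to_mps(psi, n_qubits, max_bond=4):
    """Split an n-qubit state vector into MPS cores of shape
    (bond_in, 2, bond_out), truncating each bond to max_bond."""
    cores, bond = [], 1
    rest = psi.reshape(1, -1)
    for _ in range(n_qubits - 1):
        rest = rest.reshape(bond * 2, -1)
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(max_bond, len(s))   # bond truncation = compression
        cores.append(u[:, :keep].reshape(bond, 2, keep))
        rest = np.diag(s[:keep]) @ vh[:keep]
        bond = keep
    cores.append(rest.reshape(bond, 2, 1))
    return cores

n = 6
rng = np.random.default_rng(1)
psi = rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
print([core.shape for core in state_to_mps(psi, n)])
```

The capped bond dimension is exactly where the classical-quantum trade-off lives: small bonds keep things classically cheap, and the residual entanglement is what one would delegate to the quantum circuit.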
-
By driving a quantum processor with laser pulses arranged according to the Fibonacci sequence, physicists observed the emergence of an entirely new phase of matter—one that displays extraordinary stability in a domain where fragility is the norm.

Quantum computers operate using qubits, which differ radically from classical bits. A qubit can exist in superposition, occupying multiple states at once, and can become entangled with others across space. These properties enable immense computational power, but they come with a cost: quantum states are notoriously short-lived. Environmental noise, microscopic imperfections, and edge effects rapidly degrade coherence, limiting how long quantum information can survive.

Seeking a new way to protect fragile quantum states, scientists at the Flatiron Institute took an unusual approach: instead of applying laser pulses at regular intervals, they used a rhythm governed by the Fibonacci sequence—an ordered but non-repeating pattern long known to appear in biological growth, crystal structures, and wave interference. The experiment was carried out on a chain of ten trapped-ion qubits, driven by precisely timed laser pulses.

The result was the formation of what is described as a time quasicrystal. Unlike ordinary crystals, which repeat periodically in space, a time quasicrystal exhibits structure in time without repeating in a simple cycle. The Fibonacci-based driving created a temporal order that resisted disruption, allowing the quantum system to remain coherent far longer than expected.

The improvement was significant. Under standard conditions, the quantum state persisted for roughly 1.5 seconds. When driven by the Fibonacci pulse sequence, coherence times stretched to approximately 5.5 seconds—more than a threefold increase.

Even more intriguing was the system’s temporal behavior. Measurements indicated that the quantum dynamics unfolded as if time itself possessed two independent structural directions. This does not imply time flowing backward, but rather that the system’s evolution followed two intertwined temporal pathways—an emergent property arising purely from the Fibonacci drive.

The researchers propose that the non-repeating structure of the Fibonacci sequence suppresses errors that typically accumulate at the boundaries of quantum systems. By distributing disturbances in a highly ordered yet aperiodic way, the sequence stabilizes the collective behavior of the qubits. In effect, a mathematical pattern found throughout nature acts as a self-organizing error-management protocol.

The findings suggest a powerful new strategy for quantum control. Rather than fighting noise solely with complex correction algorithms, future quantum technologies may harness structured patterns—drawn from mathematics and natural order—to achieve resilience at a fundamental level.

https://lnkd.in/dVxp7R8J
https://lnkd.in/dDVNRsPk
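The pulse ordering itself is easy to reproduce. The standard construction is the Fibonacci word, generated by the substitution rule A -> AB, B -> A; the labels "A" and "B" here simply stand in for the two pulse types, and this snippet is an illustration rather than the team's control code.

```python
# The drive schedule in miniature: pulses ordered by the Fibonacci word,
# which is perfectly ordered yet never repeats periodically.
def fibonacci_word(generations):
    """Build the Fibonacci word by the substitution A -> AB, B -> A."""
    word = "A"
    for _ in range(generations):
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word

seq = fibonacci_word(7)
print(seq)                              # ABAABABAABAAB... (34 pulses)
print(seq.count("A"), seq.count("B"))   # ratio approaches the golden ratio
```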
-
Last week, we shared exciting new results studying operator dynamics on structured circuits designed by our collaborators at Algorithmiq. Our experiments on up to 70-qubit, high-fidelity, heavy-hex layouts, with heuristic error-mitigation methods, produced accurate results at short depths that were verified with classical simulation. At larger circuit depths (up to 1,872 CZ gates), the circuits were seen to be challenging for belief-propagation-based tensor-network methods in the Schrödinger picture, even at fairly large bond dimensions, while the experiments produced data points that were within theoretical bounds.

These experiments were enabled, in part, by a 10x reduction in median 2Q error rates from the utility experiment — now at 0.101% in simultaneous operation across the layout! Thanks to our collaborators at Algorithmiq and the Simons Foundation Flatiron Institute.

We shared these results in the new open community Quantum advantage tracker (https://lnkd.in/eG6Ue3sg), which includes the theoretical background for the experiment, classical simulation and experimental details, run-times, open-source code, etc. This tracks progress towards observable estimation with rigorous error bounds, ground-state problems with variational solutions, and problems with efficient classical verification, and also invites proposals for new advantage candidates!

Looking forward to sharing upcoming results from experiments and simulations, as they roll in, in this new open "lab notebook". I hope this accelerates the feedback loop between quantum experiments and classical simulation, without boundaries, and ultimately advances the pace of scientific discovery.
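A quick back-of-the-envelope using the numbers in the post shows why error mitigation carries so much weight here. Assuming (unrealistically) independent errors and counting only two-qubit gates:

```python
# Back-of-the-envelope from the post's own numbers: with a median 2Q
# error of 0.101% and 1,872 CZ gates, what fraction of shots would be
# error-free if errors were independent? A crude proxy, not the actual
# error-mitigation analysis behind the experiment.
p_2q = 0.00101
n_cz = 1872
print((1 - p_2q) ** n_cz)   # ~0.15
```

Only about 15% of shots error-free at full depth is exactly why heuristic mitigation plus classical verification does the heavy lifting at these scales.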
-
Quantum annealing's biggest obstacle is not theoretical—it's reliability. 🤝

Together with the team at eleQtron GmbH, and in close collaboration with colleagues from the Universität Siegen and the Università di Trento, we recently wrote a paper addressing exactly this challenge, now published in Physical Review A.

At its core, quantum annealing is about solving optimization problems where classical approaches run into complexity limits. In practice, however, performance is constrained by noise in real quantum hardware. ⚙️

In our work, we show that the impact of this noise can be significantly reduced during the annealing process using a simple, experimentally realistic control technique based on dynamical decoupling—without adding hardware complexity. 🦾

What I find particularly encouraging is that this result is not tied to a specific platform. While we use trapped-ion systems as a reference, the underlying idea is largely platform-independent and directly relevant for near-term quantum technologies. ⚡

For me, this is an important step toward the question that really matters: How do we turn quantum annealing into a dependable tool for real-world optimization problems? 🌍

#ZukunftofQuantum #eleQtron
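Dynamical decoupling in its most stripped-down form is just a spin echo: a well-timed pulse that refocuses slowly varying noise. The paper's scheme is tailored to the annealing Hamiltonian and trapped-ion hardware, which this toy deliberately ignores; the snippet below only demonstrates the underlying refocusing effect, with all parameters invented for illustration.

```python
# Dynamical decoupling at its simplest: a Hahn echo cancelling
# quasi-static dephasing. The paper's scheme is tailored to the
# annealing process; this toy only shows why inserting pulses
# suppresses noise at all. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
T = 1.0        # total evolution time (arbitrary units)
shots = 2000   # noise realizations to average over

def coherence(echo):
    """Average phase factor <cos(phi)> over random static detunings."""
    acc = 0.0
    for _ in range(shots):
        delta = rng.normal(0.0, 5.0)   # unknown static frequency offset
        if echo:
            # Pi pulse at T/2 inverts the qubit, so phase accumulated in
            # the second half cancels the first half for static noise.
            phi = delta * (T / 2) - delta * (T / 2)
        else:
            phi = delta * T
        acc += np.cos(phi)
    return acc / shots

print("no DD :", coherence(echo=False))   # ~0 (fully dephased)
print("echo  :", coherence(echo=True))    # ~1 (refocused)
```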
-
Exploring Optimization for Chemistry and Materials Science: a cross-platform study across gate-based quantum computing, quantum annealing, and classical approaches!

I’m excited to share insights from a project I conceived and directed before leaving the National Quantum Computing Centre (NQCC) last year. We focused on the configurational analysis of defective graphene structures, specifically aiming to find the lowest-energy configuration using Quadratic Unconstrained Binary Optimization (QUBO) formulations, led by Kieran and Theo. I'll post the link in the comments.

Key Findings:

1. Performance Comparison: Our results showed that simulated annealing (classical) was the standout performer, excelling in both solution quality and runtime. It effectively tackled large problems with hundreds of variables, while quantum methods faced significant scaling challenges.

2. Quantum Annealing Insights: Although quantum annealing demonstrated potential for solving QUBOs with up to 72 variables, it struggled with convergence and error accumulation as problem sizes increased. This emphasizes the need for ongoing research to optimize quantum algorithms for real-world applications.

3. Brute Force as a Benchmark: We found that classical brute-force methods provide guaranteed optimal solutions for smaller problem sizes, serving as a reliable benchmark. However, their exponential growth in runtime makes them impractical for larger instances.

4. Future Directions: Our study highlights the importance of identifying problem instances that are challenging for classical techniques but well-suited for quantum algorithms. Techniques like warm starts, parameter concentration, and error mitigation could significantly enhance the performance of both quantum and classical methods.

5. Framework for Analysis: We developed a systematic approach for analyzing the performance of various algorithms, which can be applied to a wide range of optimization problems beyond just QUBO.

I want to express my gratitude to my talented colleagues who contributed to this project: Kieran McDowall, who conducted the investigation; Theo Kapourniotis, who played a key role in analyzing and interpreting the results; and Chris Oliver for their invaluable support throughout the process. I also appreciate Konstantinos Georgopoulos for providing overall supervision.

Of course, research and science progress due to the contributions of many in the field. I would like to thank Bruno Camino, Stefan Woerner, Andrew King and Robert Cumming for many useful discussions surrounding this work, and also Vincent Graves for proofreading the manuscript!

As quantum hardware continues to advance, the search for effective optimization solutions remains a dynamic and exciting area of research. I look forward to seeing how these insights will influence future developments in both classical and quantum computing!

#Optimization #QuantumComputing #SimulatedAnnealing #QUBO #ResearchInsights #MaterialsScience
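Since QUBO plus simulated annealing is the workhorse here, a minimal self-contained version is easy to show. The random instance below is a stand-in, not the defective-graphene QUBO from the study, and the cooling schedule is a common default rather than the team's settings.

```python
# Minimal QUBO + simulated annealing, the classical baseline the study
# found strongest. Toy random instance -- not the graphene QUBO.
import numpy as np

rng = np.random.default_rng(3)
n = 20
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                          # symmetric QUBO matrix

def energy(x):
    return x @ Q @ x                       # E(x) = x^T Q x, x in {0,1}^n

def simulated_annealing(steps=20000, t0=2.0, t1=0.01):
    x = rng.integers(0, 2, n)
    e = energy(x)
    best_x, best_e = x.copy(), e
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)   # geometric cooling
        i = rng.integers(n)
        x[i] ^= 1                          # propose a single-bit flip
        e_new = energy(x)
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / t):
            e = e_new                      # accept (Metropolis rule)
            if e < best_e:
                best_x, best_e = x.copy(), e
        else:
            x[i] ^= 1                      # reject: flip back
    return best_x, best_e

x_best, e_best = simulated_annealing()
print("best energy found:", e_best)
```

Swapping the random Q for a physically motivated one (for instance, pairwise interaction energies between candidate defect sites) is all it takes to turn this into a configurational-energy search.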
-
Quantum Experiments Shrunk to a Palm-Sized Chip

A team at the University of California, Santa Barbara has managed to compress an entire physics laboratory into the size of a microchip. Experiments with cold atoms — once spread across rooms filled with optical tables — now fit on compact silicon nitride chips.

Cold atoms form the basis of the most precise measurements in the universe. Atoms are trapped with lasers, cooled almost to absolute zero, and their quantum properties are used to measure time with billionth-of-a-second precision, detect gravitational anomalies, and search for dark matter. The problem: traditional setups occupy entire rooms with optical tables, racks of lasers, and vibration-isolation systems.

The breakthrough came in 2023. Daniel Blumenthal’s team created PICMOT — a photonic integrated 3D magneto-optical trap. Silicon nitride waveguides deliver laser beams into a vacuum chamber filled with rubidium vapor. Three beams cross the atoms, reflect off mirrors, and return, forming an intersection region. Magnetic coils complete the trap. The system captured a million atoms and cooled them to just above absolute zero (about –273 °C).

Then came the next challenge: why not fit the entire optical table on a chip? Lasers, mirrors, modulators, stabilizers, frequency shifters — everything that manipulates light.

In 2024, the team solved the problem of noisy lasers. Commercial lasers have broad, unstable linewidths — useless for quantum precision. They took an ordinary Fabry-Perot diode laser worth a few dollars and passed its light through on-chip resonators and waveguides. The result: a stable single-frequency light source comparable to lab-grade systems. Moreover, the compact geometry provides faster feedback, reducing noise and improving stability.

The potential applications extend far beyond the lab. Portable cold-atom systems could measure sea-level rise with centimeter accuracy, detect underground structures, and track glacier movement. Earthquakes might be detectable hundreds of kilometers away by sensing shifts in the gravitational field.

The vacuum chamber and atom source remain bulky for now — miniaturizing them while maintaining large atom counts is still a challenge. But the team is working on it. Their goal: a palm-sized device capable of replacing an entire quantum laboratory.
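How cold is "almost absolute zero" here? For laser cooling of rubidium, the textbook Doppler limit gives a concrete number. This calculation uses standard physical constants and the well-known Rb-87 D2 linewidth; the figures are not taken from the article.

```python
# Why "almost absolute zero" is nearly literal: the Doppler cooling
# limit for the rubidium D2 line, T_D = hbar * Gamma / (2 * k_B).
# Standard textbook values; not numbers from the article.
import math

hbar = 1.054571817e-34         # J*s, reduced Planck constant
k_B = 1.380649e-23             # J/K, Boltzmann constant
gamma = 2 * math.pi * 6.07e6   # rad/s, natural linewidth of Rb-87 D2

T_doppler = hbar * gamma / (2 * k_B)
print(f"{T_doppler * 1e6:.0f} microkelvin")   # ~146 uK above absolute zero
```

Laser cooling alone parks the atoms about 146 millionths of a degree above absolute zero, which is why the shorthand "about −273 °C" is fair.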