Before the Next Breakthrough in AI: Rethinking Electronics and Moore’s Law
What if AI’s limitations are not primarily algorithmic?
For decades, progress has followed a familiar rule: more transistors → more computation → more capability.
Moore’s Law 1.0 delivered extraordinary gains. But it also quietly shaped how we think about intelligence: as something executed by algorithms rather than something that emerges from physical dynamics.
AI systems today are undeniably powerful. Yet they remain curiously unlike the biological intelligence they aim to replicate:
• Vastly scalable, yet structurally rigid
• Computationally dense, yet physically disembodied
• Impressive, yet dependent on brute-force scaling
This raises an uncomfortable possibility: Are we scaling the wrong abstraction?
If intelligence is fundamentally emergent, arising from continuous interaction, feedback, and spatiotemporal signal dynamics, then simply increasing computation may never be sufficient.
The real bottleneck may not be model size. It may be the assumptions embedded in modern electronic architecture.
Moore’s Law 2.0 proposes a different direction:
• Not faster computation, but richer physical participation.
• Not deeper pipelines, but deeper feedback.
• Not static execution, but adaptive substrates.
This is not an incremental upgrade. It challenges some of our most stable intuitions, much as quantum mechanics did a century ago (see Figure 2).
If intelligence is not merely computed but grown through interaction, what would that imply for how we design hardware, systems, and AI itself?
Transcending Reductionism and Dualism: Philosophical Critique of Electronics and a Vision for Brain-Mimicking AI under Moore’s Law 2.0
A new article in SPM by Liming Xiu is available here:
https://lnkd.in/e2Nqevty
Curious to hear your perspectives …
IEEE Signal Processing Society