Reversible Computing Explained (2026): The Only Path Beyond Moore’s Law Energy Limits

Reversible computing could break fundamental energy limits in chips; here’s the latest research, breakthroughs, and why it matters.

Green and Black Circuit Board - Photo by Jeremy Waterhouse on Pexels.

Technical Research Report | April 2026

The Current State of Research and Potential Future Applications of Reversible Computing

Executive summary

Reversible computing, the design of computational systems that avoid the thermodynamically irreversible erasure of information, represents the only known path beyond the fundamental energy-efficiency limits now confronting conventional semiconductor technology. Rooted in Landauer’s 1961 principle that erasing each bit of information dissipates a minimum of kT ln 2 of energy (~2.87 zeptojoules at room temperature), the field has progressed from theoretical curiosity to early-stage hardware demonstration. In March 2025, London-based startup Vaire Computing taped out the first chip to achieve net energy recovery in a commercial CMOS process, validating decades of theory in 22nm silicon. Superconducting adiabatic quantum flux-parametron (AQFP) circuits at Yokohama National University have demonstrated energy dissipation 10,000–100,000× below advanced CMOS, and in March 2026 Nature Electronics reported a quantum computer controlled entirely by superconducting digital electronics at millikelvin temperatures.

These advances arrive at a moment of acute need. Global data center electricity demand is projected to reach 945–1,587 TWh by 2030, roughly doubling from 2025 levels, with growth driven overwhelmingly by AI workloads. The IEEE International Roadmap for Devices and Systems (IRDS) projects that conventional digital logic energy efficiency will plateau by the late 2020s, with gate-level power efficiency improving by only ~2× over the next dozen years. Even speculative beyond-CMOS switching devices face the same thermodynamic wall. Reversible computing is explicitly identified by the IRDS as the only paradigm capable of breaking through it.

Yet the field suffers from a catastrophic funding mismatch: total commercial investment in reversible computing stands at roughly $10 million, compared to tens of billions flowing into quantum computing. No dedicated U.S. or EU federal research program targets reversible computing specifically. This report assesses the scientific foundations, current research progress, technical barriers, application horizons, and strategic implications of this emerging field, and argues that reversible computing merits urgent, sustained public and private investment commensurate with its potential to reshape the economics of computation.


Close-Up of a Computer Chip - Photo by Tima Miroshnichenko on Pexels

1. Thermodynamic foundations and the physics of irreversible computation

The intellectual origins of reversible computing lie at the intersection of thermodynamics and information theory. In 1961, IBM physicist Rolf Landauer demonstrated that any logically irreversible computational operation (one that maps multiple input states to a single output state, such as an AND or OR gate) must dissipate a minimum of kT ln 2 ≈ 2.87 × 10⁻²¹ J per bit erased at room temperature (300 K). This thermodynamic floor, known as the Landauer limit, arises because erasing information increases the entropy of the environment by at least k ln 2 per bit, a consequence of the second law of thermodynamics.
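The room-temperature figure follows directly from the formula; a minimal Python sketch, using the exact SI value of Boltzmann's constant, reproduces it:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI redefinition)

def landauer_limit(T):
    """Minimum energy dissipated per erased bit at temperature T (kelvin)."""
    return k_B * T * math.log(2)

# At room temperature (300 K) this gives ~2.87e-21 J, i.e. ~2.87 zeptojoules.
print(f"{landauer_limit(300.0):.3e} J per bit erased")
```

The same function shows why cryogenic operation lowers the floor further: at 4.2 K (liquid helium) the bound drops by roughly a factor of 70.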

Landauer’s principle remained contentious for decades, but a series of experimental verifications have placed it on firm empirical ground. Bérut et al. (2012, Nature) provided the first direct confirmation using a colloidal silica bead in an optical trap at ENS Lyon. Hong, Lambson, Dhuey, and Bokor at UC Berkeley (2016, Science Advances) achieved a measurement of (1.45 ± 0.35) kT per bit erasure using nanomagnetic memory, within two standard deviations of the theoretical limit. Yan et al. (2018, Physical Review Letters) verified the principle in a fully quantum system using a trapped calcium ion at the Chinese Academy of Sciences (中国科学院). Most recently, Aimet, Tajik, Eisert, and Schmiedmayer (2025, Nature Physics) extended the principle to the quantum many-body regime using ultracold rubidium Bose gases at TU Wien and Freie Universität Berlin.

The critical insight enabling reversible computing came from Charles Bennett at IBM Research in 1973. Bennett proved that any computation can be performed in a logically reversible manner, avoiding bit erasure entirely, by constructing a reversible Turing machine that executes a three-phase “compute-copy-uncompute” protocol. The forward computation proceeds while saving all intermediate states; the desired output is copied to a clean register; and the computation is then run in reverse to restore the workspace to its initial state. This eliminates “garbage” bits and, in principle, permits computation with arbitrarily low energy dissipation. Bennett’s 1982 review paper further connected these results to Maxwell’s demon paradox, showing that the demon’s thermodynamic cost arises not from measurement but from the irreversible erasure of its memory, elegantly unifying information theory, computation, and statistical mechanics.
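The three-phase protocol can be illustrated with a toy emulation; the sketch below is a hypothetical example for intuition, not code from any reversible-computing toolkit:

```python
# Sketch of Bennett's compute-copy-uncompute protocol (1973).
# An irreversible pipeline of steps is simulated reversibly by logging every
# intermediate state, copying the result out, then unwinding the log so the
# workspace returns to its initial state with no "garbage" left behind.

def bennett(steps, x):
    history = []                 # phase 1: compute, saving intermediates
    for step in steps:
        history.append(x)
        x = step(x)
    output = x                   # phase 2: copy result to a clean register
    for _ in steps:              # phase 3: uncompute by unwinding the log
        x = history.pop()
    assert not history           # workspace fully restored, nothing erased
    return x, output             # x is back to the original input

start, result = bennett([lambda v: v + 3, lambda v: v * 2], 5)
print(start, result)             # 5 16
```

The space overhead is visible here: `history` grows with the number of steps, which is exactly the time-proportional memory cost of Bennett's construction noted later in this report.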

The hardware primitives for reversible computation were formalized by Tommaso Toffoli (1980) and Edward Fredkin (1982) at MIT. The Toffoli gate (controlled-controlled NOT, 3-input/3-output) is universal for classical reversible Boolean logic. The Fredkin gate (controlled-SWAP) provides an alternative universal primitive. Both gates preserve information by maintaining a one-to-one mapping between inputs and outputs, and both have direct quantum mechanical implementations, a fact of profound significance for the relationship between reversible and quantum computing.
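The one-to-one property of both gates is easy to verify exhaustively; a short sketch with the gates expressed as Python functions over bits:

```python
from itertools import product

def toffoli(a, b, c):
    # CCNOT: flip the target c iff both controls a and b are 1
    return a, b, c ^ (a & b)

def fredkin(c, x, y):
    # Controlled-SWAP: exchange x and y iff the control c is 1
    return (c, y, x) if c else (c, x, y)

# One-to-one mapping: all 8 three-bit inputs map to 8 distinct outputs,
# so neither gate erases information.
for gate in (toffoli, fredkin):
    outputs = {gate(*bits) for bits in product((0, 1), repeat=3)}
    assert len(outputs) == 8, gate.__name__

# With the target fixed to 0, Toffoli computes AND reversibly.
print(toffoli(1, 1, 0))  # (1, 1, 1)
```

Because each gate is its own inverse, applying it twice returns any input unchanged, which is the circuit-level face of logical reversibility.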


Close-up Photo of a Printed Circuit Board - Photo by Tima Miroshnichenko on Pexels

2. Where the research stands in 2025–2026

2.1 The first reversible chip leaves the foundry

The most consequential development in reversible computing’s six-decade history occurred in March 2025, when Vaire Computing, a London- and Cambridge-based startup founded in 2021 by Rodolfo Rosini (CEO) and Dr. Hannah Earley (CTO), taped out its “Ice River” test chip in a commercial 22nm planar CMOS process. The chip demonstrated an energy recovery factor of 1.77× for a capacitor array and 1.41× for a shift register/adder relative to conventional square-wave-driven circuits, operating at a data frequency of 500 MHz. The on-chip resonator achieved approximately 50% average energy recycling. This marked the first time net energy recovery had been demonstrated in a commercial foundry process for a complete reversible computing system.

Vaire’s approach uses adiabatic switching, ramping control voltages gradually through LC resonant circuits rather than applying the abrupt square-wave transitions of conventional CMOS, to recapture the charge energy that standard logic dissipates as heat. The company recruited Michael P. Frank, arguably the field’s leading practitioner after 30 years at Sandia National Laboratories, as senior scientist in July 2024. Vaire’s roadmap targets a 1 GHz second-generation chip by 2026, a commercially competitive AI inference processor by 2027, and an ultimate efficiency gain of 4,000× over conventional CMOS within 10–15 years, contingent on the integration of high-quality MEMS resonators capable of 99.97% energy recovery.

2.2 Superconducting reversible logic achieves extraordinary efficiency

At Yokohama National University, the group led by Prof. Nobuyuki Yoshikawa and Dr. Naoki Takeuchi has developed the most energy-efficient computing technology ever demonstrated. Their adiabatic quantum-flux-parametron (AQFP) circuits, superconducting loops with Josephson junctions driven by AC excitation, achieve power dissipation of approximately 7 pW per junction, roughly 10,000–100,000× more efficient than advanced CMOS. An 8-bit AQFP carry-look-ahead adder demonstrated energy dissipation of 24 kT per junction, approaching the Landauer limit. The group has fabricated benchmark chips with over 10,000 AQFP gates and demonstrated a prototype deep-learning accelerator.
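A quick back-of-envelope calculation puts the adder's 24 kT figure in context against the kT ln 2 floor; because both quantities scale with kT, the ratio is dimensionless and temperature cancels:

```python
import math

# Dissipation of the 8-bit AQFP adder, expressed in units of kT per junction,
# compared with the Landauer floor of kT * ln(2) per erased bit.
dissipation_kT = 24.0
landauer_kT = math.log(2)          # ~0.693 kT

ratio = dissipation_kT / landauer_kT
print(f"~{ratio:.0f}x above the Landauer limit")
```

So "approaching the Landauer limit" here means within roughly a factor of 35, an enormous improvement over CMOS gates that operate many orders of magnitude above the floor.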

The reversible quantum-flux-parametron (RQFP), a logically and physically reversible variant, has achieved energy dissipation below the Landauer bound at low frequencies, possible because RQFP circuits avoid bit erasure entirely. Yamae, Takeuchi, and Yoshikawa (2024, Journal of Applied Physics) demonstrated a reversible flip-flop, confirming that both combinational and sequential logic can be performed without energy dissipation in the quasi-static limit. In June 2024, Takeuchi et al. published an AQFP multiplexed qubit controller dissipating only 81.8 pW per qubit in npj Quantum Information, demonstrating that superconducting reversible logic can operate within the stringent thermal budget of dilution refrigerators used in quantum computing.

2.3 Academic and institutional landscape

Research activity spans a modest but growing constellation of institutions. Prof. Himanshu Thapliyal at Southern Methodist University (formerly University of Tennessee) has published over 200 papers on reversible logic gates, quantum-dot cellular automata, and reversible arithmetic circuits, earning an h-index of 54. Prof. Robert Wille at the Technical University of Munich leads the most prolific group at the quantum-reversible synthesis intersection, developing the Munich Quantum Toolkit and publishing extensively on RQFP circuit automation. The University of Copenhagen hosts work on reversible programming languages by Torben Mogensen and Robert Glück, while Prof. Giovanni De Micheli at EPFL has published on AQFP technology optimization.

The International Conference on Reversible Computation (RC), held annually since 2009, remains the field’s primary venue; its 17th edition convened in Odense, Denmark in July 2025. The IEEE International Conference on Rebooting Computing (ICRC) also covers reversible approaches. DARPA’s completed Reversible Quantum Machine Learning and Simulation (RQMLS) program explored fundamental limits of reversible quantum annealers, while the Air Force Research Laboratory has funded small-scale adiabatic/reversible logic test chip STTRs at Notre Dame and the University of Kentucky.


Computer Chip - Photo by Tima Miroshnichenko on Pexels

3. Technical barriers that must be overcome

The energy-speed tradeoff is fundamental. Adiabatic switching achieves energy savings precisely because voltage transitions are slow. Energy dissipated per operation scales as E_diss ∝ (RC/T) × CV², where T is the transition time. Slower operation reduces dissipation but constrains throughput. Vaire argues this tradeoff is manageable (modern transistor switching times are already kept relatively slow to limit heat), but today’s 500 MHz operating frequency remains well below the multi-GHz clocks of cutting-edge conventional processors. Whether reversible architectures can achieve competitive throughput through massive parallelism (GPU-like designs where slower clock speeds matter less) remains an open engineering question.
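The scaling relation can be made concrete with illustrative component values; the R, C, and V figures below are assumptions for the sketch, not measurements from any real process:

```python
# Adiabatic vs. conventional switching energy, per the scaling in the text:
#   conventional: E ~ (1/2) C V^2 dissipated per abrupt transition
#   adiabatic:    E ~ (RC/T) C V^2, vanishing as the ramp time T grows
# Component values are illustrative assumptions, not process-accurate.

C = 1e-15   # 1 fF load capacitance
V = 0.8     # supply voltage, volts
R = 1e4     # 10 kohm effective channel resistance  ->  RC = 10 ps

def e_conventional():
    return 0.5 * C * V**2

def e_adiabatic(T):
    return (R * C / T) * C * V**2

# The slower the ramp, the larger the savings over conventional switching.
for T in (1e-10, 1e-9, 1e-8):   # ramp times: 100 ps, 1 ns, 10 ns
    print(f"T = {T:.0e} s  savings = {e_conventional() / e_adiabatic(T):.0f}x")
```

The loop makes the tradeoff explicit: a 10× slower ramp buys a 10× energy saving, which is why throughput must be recovered through parallelism rather than clock speed.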

Noise sensitivity poses a scaling risk. Reversible circuits are inherently more sensitive to noise than conventional designs. Thermal noise leaking into the system propagates through the computation rather than being absorbed at each irreversible gate. Theoretical results on “Limitations of Noisy Reversible Computation” prove that any noisy reversible circuit must have size exponential in its depth to compute a function with high probability, a potentially severe constraint. Frank estimates a practical signal-energy floor of ~100 kT per bit to suppress thermally induced errors, well above the Landauer limit but still far below current CMOS operating points.

Area overhead and fabrication complexity are significant. Reversible gates require equal numbers of inputs and outputs, necessitating “garbage” outputs and ancilla bits that increase transistor count and chip area. Dual-rail encoding, common in adiabatic logic, doubles wire count. The critical challenge identified by Frank is heterogeneous integration, combining high-quality-factor resonator circuits (LC or MEMS) with logic transistors on a single integrated product. No commercial foundry process currently supports this. Standard EDA tools, designed for conventional CMOS, require substantial modification; Vaire and academic groups are developing reversible-aware design automation from scratch.

The programming model is still emerging. Writing software for reversible computers requires every operation to be invertible, a constraint that fundamentally alters programming paradigms. The reversible language Janus (originating at Caltech in the 1980s) supports reversible assignments, conditionals with paired test/assertion clauses, and forward/backward procedure calls. But the ecosystem is minimal: the first optimizing compiler for reversible code was only recently developed, and measured energy overhead of dereversibilized programs ranges from 6% to 240% over conventional C implementations. Bennett’s compute-copy-uncompute construction adds space overhead proportional to computation time. Frank has conjectured that achieving maximum performance requires explicit reversibility at all levels (devices, circuits, architectures, languages, and algorithms), implying a full-stack redesign of computing infrastructure.
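The flavor of Janus-style reversibility can be sketched in a few lines; the tiny interpreter below is a hypothetical illustration of invertible assignment emulated in Python, not actual Janus syntax:

```python
# Janus-style reversible updates: "x += e" is invertible because its exact
# inverse is "x -= e". Running a program backwards means reversing the step
# order and inverting each operator, with no saved state required.

def run(program, store, reverse=False):
    """Execute a list of (op, var, amount) steps forward or backward."""
    inverse = {"+=": "-=", "-=": "+="}
    steps = reversed(program) if reverse else program
    for op, var, amount in steps:
        if reverse:
            op = inverse[op]
        store[var] = store[var] + amount if op == "+=" else store[var] - amount
    return store

prog = [("+=", "x", 5), ("-=", "y", 2), ("+=", "x", 1)]
s = run(prog, {"x": 0, "y": 10})
print(s)                           # {'x': 6, 'y': 8}
print(run(prog, s, reverse=True))  # original store restored: {'x': 0, 'y': 10}
```

Note what is absent from a real language along these lines: destructive assignment (`x = e`) is forbidden, since it erases the old value of `x`, which is precisely the operation Landauer's principle taxes.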


Computer Chip - Photo by Tima Miroshnichenko on Pexels

4. Near-term applications within a five-year horizon

The most immediate commercial target for reversible computing is AI inference acceleration in data centers. Vaire Computing’s roadmap explicitly targets a GPU-like, massively parallel AI inference processor by 2027. The rationale is compelling: inference workloads, which account for up to 90% of a trained model’s lifecycle energy consumption, are highly parallel and tolerant of moderate clock-speed reductions, precisely the regime where adiabatic circuits offer the strongest energy-per-operation advantage. Even a 2–3× energy reduction (consistent with demonstrated adiabatic logic savings at 90nm nodes) applied to the AI inference segment alone could save tens of TWh annually by 2030.

Edge computing and IoT represent a second near-term opportunity. Battery-powered or energy-harvesting devices at the network edge face hard power budgets. Adiabatic circuits operating at reduced frequencies could enable always-on inference capabilities, such as voice recognition or anomaly detection, at power levels inaccessible to conventional CMOS. Research on adiabatic spiking neuron circuits (2024, npj Unconventional Computing) demonstrated >90% energy recovery efficiency and 9× energy reduction per spiking operation, suggesting neuromorphic-reversible hybrids as a promising near-term architecture.

Superconducting AQFP circuits offer the most immediate path to deployment in a specific high-value niche: cryogenic control electronics for quantum computers. The 81.8 pW per-qubit power dissipation demonstrated by Yokohama National University’s AQFP multiplexed qubit controller fits within the thermal budget of dilution refrigerators, eliminating the need for thousands of coaxial cables between room-temperature electronics and millikelvin qubits. The March 2026 Nature Electronics publication reporting a quantum computer controlled entirely by superconducting digital electronics at millikelvin temperatures represents a proof of concept for this application pathway.


Printed Circuit Board - Photo by Nic Wood on Pexels

5. Long-term horizons from 2030 through 2045

The strategic significance of reversible computing crystallizes against the backdrop of approaching physical limits in conventional semiconductor scaling. The IRDS projects that gate-level energy efficiency in CMOS will improve by only ~2× over the next twelve years. Dennard scaling broke down around 2006; Koomey’s Law (computing energy efficiency doubling every ~1.57 years) is decelerating. Even speculative beyond-CMOS switching devices (tunnel FETs, spintronic logic, negative-capacitance transistors) face the same Landauer-imposed thermodynamic floor. The IRDS explicitly states that “the fundamental Landauer limit on energy efficiency can only be avoided in deterministic computational processes composed from local primitive operations if they have the property of logical reversibility.”

In the 2030–2035 timeframe, should Vaire’s roadmap materialize, reversible CMOS processors could achieve 10–100× energy efficiency gains over conventional architectures, sufficient to materially alter data center economics. At this efficiency level, a hyperscale data center consuming 100 MW today could deliver equivalent compute at 1–10 MW, or equivalently, deliver 10–100× more compute within the same power envelope. The implications for AI training and scientific simulation at the post-exascale frontier are substantial: the DOE’s Exascale Computing Project spent $1.8 billion and required intensive co-design with six chip vendors to bring Frontier (Oak Ridge National Laboratory) under its 20 MW target per exaflop. Reversible architectures could make post-exascale computing feasible without requiring dedicated power-plant-scale electricity infrastructure.

In the 2035–2045 horizon, the integration of MEMS-based high-Q resonators (targeting 99.97% energy recovery) with reversible CMOS logic could approach the theoretical 4,000× efficiency ceiling. At this level, the energy cost of computation becomes negligible relative to communication and memory access, fundamentally restructuring computer architecture around data movement rather than logic switching. Superconducting RQFP circuits, already demonstrated below the Landauer bound, point toward a post-CMOS computing paradigm where classical reversible processors operate alongside quantum systems in shared cryogenic environments, with AQFP circuits serving as the native interface layer. The synthesis of quantum circuits from reversible classical primitives, an active research area at TUM, Michigan, and Bremen, would become a practical engineering discipline rather than an academic exercise.

Computer Code - Photo by Simon Petereit on Pexels

6. Strategic and policy implications

The energy arithmetic demands attention

Global data center electricity consumption is projected to reach 945–1,587 TWh by 2030, roughly equivalent to Japan’s total electricity consumption. In the United States alone, Lawrence Berkeley National Laboratory projects data centers could consume 325–580 TWh by 2030 (6.7%–12% of total U.S. electricity), with McKinsey estimating AI data centers alone could reach 11–12% of U.S. electricity. Utilities are already strained: American Electric Power has reported 24 GW in customer interconnection commitments, five times its current system capacity. Reversible computing is the only technology with a credible theoretical pathway to reduce computing’s energy intensity by orders of magnitude rather than incremental percentages.

The funding gap is indefensible

The disparity between investment in reversible computing and its potential impact is stark. Total commercial investment in reversible computing stands at approximately $10 million, the sum raised by Vaire Computing, the field’s sole commercial venture. By contrast, global private investment in quantum computing reached $3.77 billion in the first three quarters of 2025 alone, with cumulative government commitments exceeding $10 billion across the U.S., EU, China, and allied nations. Reversible computing is closer to practical deployment than fault-tolerant quantum computing, is applicable to all computing workloads (not specialized problem classes), and is a prerequisite enabling technology for scalable quantum systems. No dedicated U.S. or EU federal program targets reversible computing; as Michael Frank has noted,

“there has not yet been any major U.S. Federal research initiative that has focused on this field.”

A Computing Community Consortium (CCC) workshop in October 2020 was explicitly organized to build the case for such an initiative, but no program has materialized.

Recommendations for policymakers and R&D strategists

Establish a dedicated federal research program for reversible and adiabatic computing, analogous to the National Quantum Initiative, with initial funding of $200–500 million over five years. Priority areas should include high-Q resonator integration, reversible EDA tool development, and adiabatic circuit fabrication at advanced nodes.

Incorporate reversible computing into semiconductor roadmap planning. The CHIPS Act’s National Semiconductor Technology Center (NSTC) and CHIPS R&D Office should explicitly include reversible/adiabatic computing in their beyond-CMOS research portfolios.

Fund workforce development in reversible programming paradigms, circuit design, and thermodynamic computing, disciplines with essentially zero university curricula today despite growing industrial relevance.

Support quantum-reversible convergence by directing a portion of quantum computing R&D funding toward AQFP/RQFP cryogenic control electronics and reversible circuit synthesis tools, which directly benefit quantum system scalability.


Solar and Wind Energy - Photo by Kindel Media on Pexels

7. Conclusion

Reversible computing occupies a unique position in the landscape of emerging technologies: it is grounded in experimentally verified physics, addresses a problem of escalating urgency, and has recently achieved its first hardware proof of concept, yet it remains almost entirely unfunded relative to its potential. The field’s trajectory over the next five years will be shaped by whether Vaire Computing’s second-generation chips achieve competitive performance at meaningful energy savings, whether AQFP-based cryogenic controllers are adopted by the quantum computing industry, and whether policymakers recognize reversible computing as critical infrastructure technology rather than a speculative academic pursuit.

The core insight is not new; Bennett established it in 1973. But its practical relevance has become acute. Conventional computing is approaching a thermodynamic wall that no amount of transistor scaling can breach. The IRDS has said so explicitly. The only question is whether the transition to reversible architectures will be driven by strategic foresight or crisis. The physics is settled. The first silicon works. What remains is the engineering, the investment, and the institutional will to pursue the only known path to sustainable computing at civilizational scale.


This report was prepared as an independent technical assessment. The analysis reflects publicly available research, publications, and industry data current through April 2026.

Technical Review Provided by The Means Initiative