Moore’s Target

Foreword

This won’t be a rigorous discussion; if I had enough time, I could make it so. It is written from the perspective of a computational physicist, which has become my native perspective over time.

With the invention of the transistor, we found a general-purpose device for performing computation. It’s amazing how long we were able to ride the “transistor scaling train”. But for the past few decades, computational performance has been held back by three things:

  1. Transistor switching energy scaling

  2. Interconnect bandwidth

  3. Memory access latency and memory capacity (also tied to transistor scaling)

While these seem like nearly orthogonal topics, they are in fact overlapping and directly linked to two concepts: heat and density. Transistor energy scaling limits interconnect bandwidth through heat. Transistor density scaling limits memory capacity directly, through density. I would argue that heat is the more fundamental problem: transistor density can keep growing through 3D stacking up to the point where you run into a heat-density problem.

Seth Lloyd (a member of my PhD thesis committee) famously described the universe as a quantum computer. Physical systems everywhere are doing computation. There are many possible manifestations of computers and it will be mankind’s task to determine which ones can be scaled and used to solve useful problems. I’ve made my bet on electromagnetic radiation in the 190 terahertz range (photonics).

Nonlinearity is powerful

Commercial computing devices to this point conditionally steer electron flow based on the presence of charge. This conditionality is very powerful; it is a “nonlinearity”. I remember first encountering that word and thinking to myself that it was too general to be meaningful; there are so many nonlinear functions describing the world around us. Having spent my career studying computing, though, the concept has become increasingly clear to me. To a physicist or computer scientist, nonlinearity is synonymous with conditionality. Conditionality is synonymous with logic, and therefore with computing. Nonlinearity is powerful.

Boolean logic can be directly implemented in any interacting medium. Streams of water colliding, electrons scattering off each other like billiard balls, atoms scattering, and so on. If you have two objects that can scatter off of one another and a way to detect the scattering paths, you can build a digital computer with that.
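To make the scattering picture concrete, here is a toy truth table in Python (the path names and framing are mine, loosely following the Fredkin and Toffoli billiard-ball idea) showing how a single collision yields AND:

```python
# A minimal sketch of "collision logic": two particles enter on paths A and B.
# If both are present they collide and are deflected onto new paths; otherwise
# each passes straight through. Reading out which paths end up occupied gives
# Boolean functions of the inputs. Path names here are illustrative.

def collision_gate(a: bool, b: bool) -> dict:
    """Return which output paths are occupied for inputs a and b."""
    collide = a and b  # both particles present -> they scatter off each other
    return {
        "A_straight": a and not collide,   # A passed through undeflected: A AND (NOT B)
        "B_straight": b and not collide,   # B passed through undeflected: B AND (NOT A)
        "deflected":  collide,             # occupied only after a collision: A AND B
    }

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(a, b, collision_gate(a, b))
```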

The most important thing about nonlinearity in the context of computing is its ability to limit error. By forcing a noisy result back to either a ‘0’ or a ‘1’, errors are prevented from propagating and growing with the depth of the circuit.
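Here is a tiny numerical sketch of that point (the noise model and the numbers are purely illustrative choices of mine): a signal passed through many noisy stages drifts further from its ideal value as the chain gets deeper, while the same chain with a restoring threshold after each stage stays put.

```python
# Each "stage" adds noise to the signal. In the analog chain the deviation from
# the ideal value grows with circuit depth; in the regenerated chain the signal
# is snapped back to 0 or 1 after every stage, so small per-stage noise never
# accumulates.

import random

def run_chain(depth: int, noise: float, regenerate: bool) -> float:
    x = 1.0  # ideal logical '1'
    for _ in range(depth):
        x += random.gauss(0.0, noise)      # noise injected by this stage
        if regenerate:
            x = 1.0 if x > 0.5 else 0.0    # restoring nonlinearity (threshold)
    return abs(x - 1.0)                    # final deviation from the ideal value

if __name__ == "__main__":
    random.seed(0)
    for depth in (10, 100, 1000):
        analog  = sum(run_chain(depth, 0.02, regenerate=False) for _ in range(200)) / 200
        digital = sum(run_chain(depth, 0.02, regenerate=True)  for _ in range(200)) / 200
        print(f"depth={depth:5d}  analog error ~{analog:.3f}  regenerated error ~{digital:.3f}")
```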

Linearity (and superposition) are powerful

In this context, it is perhaps surprising that quantum computers are simultaneously linear and powerful. Quantum computations are described by unitary matrix operators, an important subset of linear operators that conserve vector length and, more crucially, energy. Unitary operators are invertible; some call computation under unitary linear transformations reversible computing. Measuring the quantum state produced by a quantum computer, however, is a nonlinear operation.
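A minimal numerical sketch of both halves of that claim (my own two-level example, not tied to any particular hardware): applying a unitary is a linear, length-preserving, invertible step, while extracting a measurement outcome is not.

```python
# A unitary acting on a state vector is linear and preserves the vector's
# length (total probability), and it can be undone by its conjugate transpose.
# Measurement, by contrast, squares amplitudes and samples an outcome: a
# nonlinear, irreversible step.

import numpy as np

U = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)      # a Hadamard-like 2x2 unitary

psi = np.array([0.6, 0.8j])               # a normalized state, |psi| = 1
out = U @ psi                             # linear, reversible evolution

print("norm before:", np.linalg.norm(psi))   # 1.0
print("norm after: ", np.linalg.norm(out))   # still 1.0: length preserved
print("recovered:  ", U.conj().T @ out)      # inverse = conjugate transpose

# Measurement: probabilities are |amplitude|^2, a nonlinear function of the
# state, and the result is a sample rather than a linear transformation.
probs = np.abs(out) ** 2
outcome = np.random.default_rng(0).choice([0, 1], p=probs)
print("measurement probabilities:", probs, "sampled outcome:", outcome)
```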

Linear computations are quite useful. Maxwell’s equations are described by linear operators and govern all of electromagnetics. As we just discussed, quantum computing consists of linear operators (and Google is commercializing its matrix processors, TPUs, for quantum computing simulations). Most nonlinear optimization problems are recast into sets of locally linear operations. I could go on and on. The class of problems described by linear operators is vast and commercially important.
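As one concrete instance of recasting a nonlinear problem into locally linear operations (a toy example of my own), Newton’s method replaces a nonlinear system at each iterate with the linear solve J(x) dx = -f(x):

```python
# Newton's method for f(x) = 0: linearize f around the current iterate and
# solve the linear system J(x) dx = -f(x) at every step. The nonlinear problem
# becomes a short sequence of linear solves.

import numpy as np

def f(x):
    # a small nonlinear system: x0^2 + x1^2 = 4 and x0 * x1 = 1
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0]*x[1] - 1.0])

def jacobian(x):
    return np.array([[2*x[0], 2*x[1]],
                     [x[1],   x[0]]])

x = np.array([2.0, 0.5])                        # initial guess
for _ in range(10):
    dx = np.linalg.solve(jacobian(x), -f(x))    # the locally linear step
    x = x + dx
    if np.linalg.norm(dx) < 1e-12:
        break

print("solution:", x, "residual:", f(x))
```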

Where to next

Computation mediated by charge flow and voltages has been extremely successful. Current is typically carried in electrical conductors like metals, which introduces resistance, capacitive coupling between wires, and inductance (RLC). Together, RLC effects give rise to time constants, dispersion, and power dissipation. This limits the clock rate of computers and contributes significantly to power dissipation, which in turn limits how many transistors can concurrently do useful work. I see two important frontiers in computing: interconnect technologies (on chip and off) and new computational unit cells. Interconnect technologies should address the challenges of RLC, and computational unit cells should break the energy-scaling problem.
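To put a rough number on the time-constant point (the R and C values below are hypothetical, chosen only for illustration), a wire modeled as a simple RC low-pass filter has a 3 dB bandwidth of 1/(2πRC):

```python
# Purely illustrative numbers, not measurements of any real process node:
# a wire modeled as a simple RC low-pass filter has a 3 dB bandwidth of
# 1 / (2 * pi * R * C), one way RLC parasitics cap signaling rates.

import math

R = 100.0   # ohms   (hypothetical wire resistance)
C = 1e-12   # farads (hypothetical wire + load capacitance, 1 pF)

tau = R * C                        # time constant, seconds
f_3db = 1.0 / (2.0 * math.pi * tau)

print(f"tau = {tau*1e12:.1f} ps, 3 dB bandwidth = {f_3db/1e9:.2f} GHz")
```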

Superconductivity leveraged for computing, e.g. using Josephson junctions (JJs), is an interesting approach that eliminates dissipation (R), but the cryostats used to induce superconductivity in metals and ceramics are incredibly energy hungry, bulky, and unreliable…not to mention noisy. Anyone who has worked in a quantum computing lab knows how annoying cryostats are. High clock speeds are certainly an attractive feature of the technology. In this paradigm, JJs are the new computational unit cell.

Photonic waveguides act as a “wire for light” and do not suffer from RLC effects. Dispersion is certainly still present, but it is engineerable. The relevant frequency scale for a photonic wire is the optical carrier itself, above 100 terahertz. In this paradigm, there are a number of candidate computational unit cells, including Mach-Zehnder interferometers and resonators.
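As a sketch of why the Mach-Zehnder interferometer (MZI) is a natural unit cell (matrix conventions vary between papers; this is one common choice), its transfer function is a programmable 2x2 unitary built from beamsplitters and phase shifters, and meshes of MZIs compose into larger unitary matrix operations:

```python
# An MZI as a programmable 2x2 unitary on two optical modes: two 50/50
# beamsplitters with tunable phase shifters between and after them. The
# product of these unitary pieces is itself unitary (lossless and linear).

import numpy as np

def phase(theta):
    # phase shifter on the top arm only
    return np.diag([np.exp(1j * theta), 1.0])

BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)     # 50/50 beamsplitter

def mzi(theta, phi):
    # beamsplitter -> internal phase -> beamsplitter -> external phase
    return phase(phi) @ BS @ phase(theta) @ BS

U = mzi(theta=0.7, phi=1.3)
print("unitary?", np.allclose(U.conj().T @ U, np.eye(2)))   # True
print(U)
```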

There is an entire zoo of computational platforms that need to be explored to take up the gauntlet thrown down by Moore’s Target (a better framing than Moore’s Law). I’ve outlined the guiding principles for evaluating them: (1) metal wires are becoming the limiting factor for interconnect, on chip and off, and (2) we need a computational unit cell that doesn’t share the transistor’s energy-scaling problem (energy dissipation at a given computational throughput).

Building new kinds of computers is important. Computers are having an increasingly substantial impact on our environment, and they are closely tied to progress across many fields that create both economic and inspirational value.
