Back to analog computing: merging analog and digital computing on a single chip

By Linda Crane
December 04, 2016

Digital computers have had a remarkable run, giving rise to the modern computing and communications age and perhaps ushering in a new one of artificial intelligence and the Internet of Things. But speed bumps are starting to appear. As digital chips get smaller and hotter, limits on future gains in speed and performance come into view, and researchers are looking for alternative computing methods. In any case, the discrete, step-by-step methodology of digital computing was never a good fit for dynamic or continuous-time problems—modeling plasmas or running neural networks—or for robots and other systems that react in real time to real-world inputs. A better approach may be analog computing, which directly solves the ordinary differential equations at the heart of continuous-time problems. Essentially mothballed since the 1970s for being inaccurate and hard to program, analog computing is being revived by Columbia researchers; by merging analog and digital on a single chip, they hope to gain the advantages of analog while bypassing its problems.

What is a computer? The answer would seem obvious: an electronic device performing calculations and logical operations on binary data. Simple discrete operations carried out at lightning speed one after another—adding two numbers, comparing two others, fetching a third from memory—are responsible for the modern computing age, an impressive achievement considering that the world is not digital.

The world is analog; to accommodate the continuous time of the real world, digital computers subdivide time into smaller and smaller discrete steps, calculating at each step the values that describe the world at that instant before advancing to the next step.
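
To make the digital approach concrete, here is a minimal sketch (our illustration, not code from the researchers) of time-stepping applied to a simple equation; the exponential-decay example and the forward-Euler method are assumptions chosen for simplicity.

```python
# Minimal sketch: a digital computer approximating the continuous
# equation dy/dt = -y (exact solution y(t) = e^(-t)) by subdividing
# time into discrete steps. More steps buy accuracy at computational cost.

def euler(f, y0, t_end, n_steps):
    """Advance y' = f(t, y) from y(0) = y0 to t = t_end in n_steps steps."""
    dt = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += dt * f(t, y)   # value describing the world at the next instant
        t += dt
    return y

# Shrinking the step size reduces the error but multiplies the work.
for n in (10, 100, 1000):
    approx = euler(lambda t, y: -y, 1.0, 1.0, n)
    print(f"{n:5d} steps: y(1) ~ {approx:.6f}")
print("exact:       y(1) = 0.367879...")
```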

This discretization of time has computational costs, but each new chip generation—faster and more powerful than the last—could be expected to make up for the inefficiencies. However, as chips shrink in size and it becomes harder to dissipate the resulting heat, continued improvements are less and less guaranteed.

Some problems, especially those presented by highly dynamic systems, never fit snugly into the discretization model, no matter how small the time steps. Plasmas are one example. At a basic level a plasma is modeled like air or water, but it is also electrically charged: its atoms have separated, and electrons and nuclei float free, all of them responding to electromagnetic forces. With so many variables in play at the same time, modeling plasmas is intrinsically difficult.

Neural networks, perhaps key to real artificial intelligence, are also highly dynamic, relying on a huge number of simultaneous calculations before converging on an answer.

The analog alternative

The first efforts to address these issues by reviving analog computers at Columbia go back to 2005, when then-Ph.D. student Glenn Cowan, working under the supervision of Yannis Tsividis, designed and demonstrated the first general-purpose scientific-computation silicon chip and described its use in solving a variety of computational problems significantly faster than corresponding digital computers, at the same accuracy.

This work resumed five years ago, in a project funded by the National Science Foundation, with Profs. Tsividis, Seok, and Sethumadhavan as principal investigators. Electrical engineering Ph.D. student Ning Guo, working under the supervision of Prof. Tsividis, concentrated on the circuit design of a new silicon chip; computer science student Yipeng Huang, working with Prof. Sethumadhavan, worked on the instruction set architecture. Both researchers immersed themselves in the literature of the 1950s. Says Guo, “Reading those old books, when we could even find them, helped us get inside how people used analog computing, the kinds of simulation they were able to run, and how powerful analog computing was 50 years ago.”

“At the core of these types of continuous problems are ordinary differential equations. Some number changes in time; an equation describes this change mathematically, using calculus,” explains Yipeng Huang. “Computers in the 1950s were designed to solve ordinary differential equations. People took these equations that explained physical phenomena and built computer circuits to describe them.”
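
As an illustration of the idea, consider the damped oscillator, a textbook analog-computer example (ours, not one taken from the article). On an analog machine, a summing junction forms the highest derivative and integrators in a feedback loop recover the lower ones; the sketch below steps through that same signal flow numerically.

```python
# Our illustration of the classic analog-computer patch for a damped
# oscillator, x'' = -2*zeta*w*x' - w*w*x. On an analog machine, a summing
# junction forms x'' and two integrators in a feedback loop recover
# x' and x continuously; here the same signal flow is stepped numerically.

import math

w, zeta = 2 * math.pi, 0.1   # natural frequency (1 Hz) and damping ratio
dt, t_end = 1e-4, 2.0
x, v = 1.0, 0.0              # initial conditions, set as integrator states

t = 0.0
while t < t_end:
    a = -2 * zeta * w * v - w * w * x   # summing junction: forms x''
    v += a * dt                         # integrator 1: x'' -> x'
    x += v * dt                         # integrator 2: x' -> x
    t += dt

print(f"x({t_end}) ~ {x:.4f}")  # a decaying oscillation
```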

“Analog” derives from “analogy,” for the way the early computer pioneers approached computing: rather than decompose a problem into discrete steps, they set up a small-scale version of a physical problem and studied it that way. A wind tunnel is a (non-electronic) analog computer. An automobile or plane is placed inside, the wind tunnel is turned on, and researchers observe how the wind flows around the object, writing equations to describe what they see. By adjusting wind direction or speed, they can draw analogies from what happens in the tunnel to what would happen in the real system being simulated.

Electronic analog computers are the abstraction of this process, with circuits embodying an equation that the computer solves directly and in real time. With no notion of a step, no need to translate between analog and digital, and no power-hungry clock to keep everything in sync, analog is also much more energy efficient than digital.

This energy efficiency opens possibilities in several fields in which inputs and outputs are both analog; translating from analog to digital and back again becomes only so much overhead. This is true, for example, in cyber-physical systems—such as the smart grid, autonomous automobile systems, and the emerging Internet of Things—where sensors continuously take readings of the physical environment.

Analog computing would seem to make sense, but analog computers have major downsides; after all, there’s a reason they fell out of favor. Prone to error, analog computers are not accurate, and they are difficult to program. However, many systems do not actually require high accuracy, and those systems can benefit from an analog solution.

[Chart: analog vs. digital computing compared]

Analog in a digital world

While most people picture analog computers as they were in the 1950s—big and heavy, difficult to use, and less powerful than an iPhone—Guo and Huang’s analog computer is nothing like that. It is a silicon chip measuring 4 mm by 4 mm, built with 65 nm transistor features, and containing four copies of a design that can be replicated over and over. It follows conventional architectures and has digital building blocks; the materials and the VLSI design are thoroughly up to date.

[Photo: the hybrid analog-digital board]

The co-designed analog-digital hybrid chip combines the high performance and efficiency of analog with the flexibility and accuracy of digital computing.

The architecture itself is conceived as a digital host with an analog accelerator: computations that are more efficiently done in analog get handed off by the host to the accelerator.
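
A hypothetical sketch of that division of labor follows; the class names and the solve_ode interface are inventions for illustration, not the actual chip’s API.

```python
# Hypothetical sketch of a digital host handing work to an analog
# accelerator. Names and interfaces are illustrative only; on the real
# chip, "solving" means configuring integrators, not looping in software.

class AnalogAccelerator:
    """Stand-in for the analog unit: fast, continuous, low accuracy."""
    def solve_ode(self, f, y0, t_end):
        # Mimic an analog pass with a deliberately coarse computation.
        y, dt = y0, t_end / 50
        for i in range(50):
            y += dt * f(i * dt, y)
        return y

class DigitalHost:
    """Decides which computations are worth handing off to analog."""
    def __init__(self, accelerator):
        self.accel = accelerator

    def run(self, f, y0, t_end):
        estimate = self.accel.solve_ode(f, y0, t_end)  # offload to analog
        return estimate  # the host could now refine or verify digitally

host = DigitalHost(AnalogAccelerator())
print(host.run(lambda t, y: -y, 1.0, 1.0))  # ~0.364, vs. exact 0.367879
```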

The idea is to interleave analog and digital processing within a single problem, applying each method to what it does best. Analog processing, for instance, might be used to quickly produce an initial estimate, which is then fed to a digital processing component that iteratively refines it. The speed of the analog computation makes the initial iterations unnecessary, while the incremental digital approach zeroes in on the most accurate answer.
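
As a sketch of the interleaving idea (ours, not the papers’ algorithm), a coarse estimate standing in for the analog pass lets a digital refinement loop, here Newton’s method, skip its early iterations.

```python
# Sketch of interleaved analog/digital processing: a cheap, low-accuracy
# estimate (the analog stage's role) seeds an iterative digital refinement
# (Newton's method), which then needs far fewer iterations to converge.

def newton(f, df, x, tol=1e-12):
    """Iteratively refine x until f(x) is within tol of zero."""
    iterations = 0
    while abs(f(x)) > tol:
        x -= f(x) / df(x)
        iterations += 1
    return x, iterations

f  = lambda x: x**3 - 2.0   # solve x^3 = 2; the root is 2^(1/3) = 1.2599...
df = lambda x: 3 * x**2

# A cold start versus a rough "analog" estimate of the root.
for label, guess in (("cold start     ", 10.0), ("analog estimate", 1.3)):
    root, n = newton(f, df, guess)
    print(f"{label}: {n:2d} iterations -> {root:.12f}")
```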

[Graphs: digital processing refining analog results]

At this point, it’s all research and prototyping. Says Huang, “We’re just trying to show that analog computing works. True integration with digital is many steps ahead. It has to scale, it has to be more accurate, but the immediate goal is to give concrete examples of problems solvable by analog computing, categorized by type and with all pitfalls spelled out. Many modern problems didn’t exist when analog computers were widespread and so haven’t yet been looked at on analog computers; we’re the first to do it.”

Focused on the future of computing, both Huang and Guo are quietly confident that analog computing integrated with digital will someday provide advantages over pure digital computing. Says Guo, “Robotics, artificial intelligence, ubiquitous sensing—whether for health monitoring, weather and climate data-gathering, or electric and smart grids—may require low-power signal processing and computing with real-time data, and they will operate more smoothly and efficiently if computing is lightweight. Increasingly, these future applications may be the ones we come to rely on.”

Technical details of the chip and its use are contained in two papers: “Energy-Efficient Hybrid Analog/Digital Approximate Computation in Continuous Time,” which appeared in the July issue of the IEEE Journal of Solid-State Circuits, and “Evaluation of an Analog Accelerator for Linear Algebra.”

About the researchers:

Ning Guo

Ning Guo is finishing his Ph.D. in electrical engineering under the guidance of Yannis Tsividis. He received a bachelor’s degree in IC design from Dalian University of Technology (Dalian, Liaoning, China) in 2010, before attending Columbia University, where he received first an M.S. in electrical engineering (2011) followed by an M.Phil. (2015). His research interests include ultra-low-power analog/mixed-signal circuit design, analog/hybrid computation, and analog acceleration for numerical methods. He is also developing an interest in wearable technology and devices.

Yipeng Huang

Yipeng Huang received a B.S. degree in computer engineering in 2011, and M.S. and M.Phil. degrees in computer science in 2013 and 2015, respectively, all from Columbia University. He previously worked at Boeing, in the area of computational fluid dynamics and engineering geometry. In addition to analog computing applications, he researches performance and efficiency benchmarking of robotic systems. Advised by Simha Sethumadhavan, Huang expects to complete his Ph.D. in 2017.
