Introduction

This is an original tape-out project proposed, implemented, and validated in the Columbia EE6350 course, offered by Prof. Peter Kinget and sponsored by Apple Inc. The chip was fabricated in TSMC 65 nm technology.

We designed a Neural Processing Unit (NPU) to accelerate inference for AI applications. In other words, the proposed NPU is a hardware accelerator for matrix multiplication, which is the most common operation in AI workloads. [1-3]

We went through the entire chip design cycle from the ground up, using a typical digital design methodology. We successfully demonstrated all the functional blocks of the chip in two demo setups, using our NPU to run inference on a Multi-Layer Perceptron (MLP) neural network for an image recognition task.
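As a minimal sketch (not our actual demo or firmware code, and with hypothetical layer sizes), the snippet below shows why a matrix-multiply accelerator speeds up MLP inference: each layer's forward pass is essentially one matrix multiplication followed by a cheap element-wise non-linearity.

```python
import numpy as np

# Minimal illustration: a 2-layer MLP forward pass is dominated by
# two matrix multiplications, the workload an NPU accelerates.
rng = np.random.default_rng(0)

# Hypothetical dimensions: 784-pixel flattened image, 128 hidden units, 10 classes
x  = rng.standard_normal(784)          # input image, flattened
W1 = rng.standard_normal((128, 784))   # layer-1 weights
W2 = rng.standard_normal((10, 128))    # layer-2 weights

h = np.maximum(W1 @ x, 0.0)            # matrix multiply + ReLU
logits = W2 @ h                        # matrix multiply
prediction = int(np.argmax(logits))    # predicted class index
print(prediction)
```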

Below is the die photo of our chip; there are three fun logos on it. These are actual patterns fabricated on the top metal layer. Enjoy our "serious justification" of these logos. :)

The one on the left is a pineapple, a fruit we chose to pay tribute to our sponsor "Apple" while avoiding copyright issues. You will also see this "mascot" fruit on our PCB and on the 3D-printed enclosure for the demo. The one on the right is a dragon fruit, another of our favorite tropical fruits, with its unique shape. The bottom logo, "A18 Bionic", is our team name: at the time we were designing our chip, Apple's most advanced phone SoC was the A16, so our chip is two generations ahead ;-).


Figure 1. NPU die photo


