Kyle Schurman
May 22, 2012
Featured

Work faster and make more mistakes: the future of microprocessing?

In terms of speed, energy consumption and size, inexact computer chips like this prototype are about 15 times more efficient than today's microchips. Credit: Avinash Lingamneni/Rice University/CSEM

The process of building computer chips over the past several decades has been driven by a focus on Moore's Law, which calls for roughly doubling the number of transistors on each chip every couple of years. To keep up, chip makers have to find ways to shrink transistors and improve clock speeds, all while avoiding errors.

Running at faster speeds generates more heat, and shrinking transistors leads to more errors, both of which negate the advantages of increasing speed and power in the first place. Those problems have led some computing experts to predict that the days of Moore’s Law are numbered.

But some computing researchers embrace the idea of putting those chip errors to work for the benefit of the chip's speed. Their research, performed at the University of California, Berkeley; Rice University in Houston; Nanyang Technological University in Singapore; and the Center for Electronics and Microtechnology in Switzerland, claims chips that are as much as 15 times more efficient than technologies in use today.

The research phase for this process, called "approximate computation," began nearly a decade ago. The testing phase began only recently and shows great promise. As long as the number of errors is kept to a minimum, the overall result stays at an acceptable quality. In the image below, from a Rice University press release, the photo on the left has perfect pixels. The photo in the middle has an error rate of about 0.5%, which is in the expected range for approximate computation. On the far right, the error rate is almost 8%, which results in a poor-quality image.

This comparison shows frames produced with video-processing software on traditional processing elements (left), inexact processing hardware with a relative error of 0.54 percent (middle) and with a relative error of 7.58 percent (right). Credit: Rice University/CSEM/NTU
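To get a feel for what those relative-error figures mean, here is a rough, hypothetical sketch (not the researchers' actual pipeline) that injects random per-pixel errors into a stand-in grayscale frame and measures the resulting relative error at roughly the 0.54 percent and 7.58 percent levels from the caption:

```python
import numpy as np

def inject_errors(frame, target_relative_error, rng):
    """Perturb a fraction of pixels so the frame's overall relative error
    lands roughly at the requested level (illustrative heuristic only)."""
    noisy = frame.copy()
    n_pixels = frame.size
    # The factor of 10 is a rough heuristic tuned to the perturbation range below.
    n_bad = int(n_pixels * target_relative_error * 10)
    idx = rng.choice(n_pixels, size=n_bad, replace=False)
    noisy.flat[idx] += rng.uniform(-25, 25, size=n_bad)
    return np.clip(noisy, 0, 255)

def relative_error(original, approximate):
    """Total pixel deviation divided by total pixel magnitude."""
    return np.abs(approximate - original).sum() / np.abs(original).sum()

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640)).astype(float)  # stand-in video frame

for target in (0.0054, 0.0758):  # the 0.54% and 7.58% cases from the caption
    noisy = inject_errors(frame, target, rng)
    print(f"target ~{target:.2%}, measured {relative_error(frame, noisy):.2%}")
```

At the lower level the perturbed frame is visually almost indistinguishable from the original; at the higher level the degradation is obvious, which is exactly the difference the side-by-side frames above are meant to show.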

So there is a "sweet spot" where the number of errors is acceptable for most applications, and the key to making approximate computation work is keeping the number of errors inside that range. The researchers manage this by figuring out where errors are most likely to occur. The system then limits the calculations that are likely to cause errors, ensuring that the total number of errors remains within an acceptable range.
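The researchers' hardware techniques aren't detailed here, but the flavor of the trade-off can be sketched in software. A common approximate-computing trick, shown below as a hypothetical model rather than the team's actual design, is an adder that simply ignores its lowest-order bits: it needs less circuitry, and the error it introduces is small and strictly bounded.

```python
def inexact_add(a, b, dropped_bits=4):
    """Model of an inexact integer adder that ignores its lowest-order bits.

    Dropping bits stands in for the kind of hardware simplification that
    trades a small, bounded error for less circuitry and power.
    """
    mask = ~((1 << dropped_bits) - 1)  # zero out the low-order bits
    return (a & mask) + (b & mask)

# Worst-case error is bounded: each operand can lose at most 2**dropped_bits - 1,
# so the sum is off by less than 2**(dropped_bits + 1).
exact = 12345 + 6789
approx = inexact_add(12345, 6789)
print(exact, approx, abs(exact - approx) / exact)  # relative error well under 1%
```

Because the worst-case error is known in advance, a designer can pick how many bits to drop so the result stays inside the sweet spot described above.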

Approximate computation has the added advantage that chips use much less electrical power than today's chips. Because the system is less concerned with errors, the processor can run at a faster speed and complete tasks sooner, which translates into less power consumed overall.

This is an interesting concept, as it would appear to go against an idea that drives the world of computing: perfection is the norm. After all, computers don't make mistakes; humans do. Those who work with computing systems may have to be choosy about where they apply approximate computation, as certain types of work will still require precise results. With video and images, however, a few errors are not going to noticeably affect quality, and approximate computation could work well in these applications.

Additionally, allowing a few mistakes as a trade-off for additional speed has worked well in other areas of computing, so it should work well here, too. For example, when data is sent across the Internet, some packets arrive with errors. The receiving system checks each packet and, if it finds an error, requests that the packet be sent again. We accept that some packets will be corrupted in transit because detecting and resending them is an acceptable price for the speed that makes the Internet practical.
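As a simplified, hypothetical illustration of that error handling (not an actual network stack), the sketch below attaches a checksum to each packet; the receiver verifies it and asks for a resend only when the check fails:

```python
import zlib

def make_packet(seq, payload: bytes):
    """Attach a CRC32 checksum so the receiver can detect corruption."""
    return {"seq": seq, "payload": payload, "checksum": zlib.crc32(payload)}

def receive(packet):
    """Return the payload if the checksum matches, else signal a resend."""
    if zlib.crc32(packet["payload"]) == packet["checksum"]:
        return packet["payload"], None
    return None, packet["seq"]  # ask the sender to retransmit this packet

# A clean packet and a corrupted copy of it.
good = make_packet(1, b"hello, world")
bad = dict(good, payload=b"hellx, world")  # simulate a byte flipped in transit

for pkt in (good, bad):
    payload, resend = receive(pkt)
    if resend is not None:
        print(f"packet {resend} corrupted, requesting retransmission")
    else:
        print(f"packet {pkt['seq']} accepted: {payload!r}")
```

The point of the analogy is that errors are expected and handled cheaply, rather than prevented at any cost.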

In the end, this idea borrows from real-life situations. After all, very few aspects of everyday life are perfect every time. Transcriptionists are allowed a certain number of errors per page. Manufacturers have an expected level of quality, but it's rarely 100%. Take a look at the FDA's guidelines for the number of insect parts allowed in a jar of peanut butter, and you'll understand that perfection isn't everything.