Microchips as micro Internets: Clock speeds make a quantum leap over cooling boundaries
Some of the best ideas in technology borrow from other technologies. The idea of the graphical Windows interface, for instance, may have been based on Apple’s Macintosh interface (which may have been based on a graphical Xerox interface, depending on which story you believe). USB technology has become a part of almost every mobile device, providing power and data transfer, even though it started as primarily a method of connecting printers, keyboards and mice to desktop computers.
MIT researchers have developed a chip that moves data internally in much the same way data moves along the Internet, using packets. This new design could greatly improve how chips move data internally, leading to chips that work faster and more power-efficiently without generating additional heat.
One constant through the first couple of decades of CPU, or central processing unit, development was that each new generation of chips ran at a higher clock speed. The CPU is the chip that handles nearly all of the general-purpose processing work in a computer, such as executing sets of instructions.
Early chips, such as the 4004, the first microprocessor, developed by Intel in 1971, ran at 740KHz. Over the ensuing decades, chips ran at ever increasing clock speeds, reaching 10MHz, then 100MHz and finally 1GHz (1,000MHz). However, when chips began to hit clock speeds of just over 3GHz, further increases led to unwanted heat generation.
Excessive heat can lead to processing errors and physical problems, eventually destroying the chip. The cost of the cooling hardware and the power required to run it made further speed increases impractical for average computing systems. Certainly, chips can run faster than 3GHz or 4GHz, but they require special cooling techniques that cost more than most computer users are willing to spend.
In an effort to improve the processing power of new chips without having to bump up the clock speed, processor manufacturers began adding multiple cores to their chips. These cores can work on multiple problems simultaneously, resulting in better overall performance. And, to save power, some cores can be shut down when they aren’t needed.
The limitation for these multiple core CPUs has become the processor bus, which carries data back and forth between cores and to the rest of the computing system. That's where MIT's research enters the picture. The researchers have come up with a way of creating a network between the cores, in effect building a network on the chip itself.
Keep in mind that the Internet is simply an extremely large network, and its nodes pass data back and forth using packets. Essentially, the router at each Internet node breaks data into small chunks and passes them along the large network that is the Internet. The destination router may receive the packets in a different order than they were sent, because the network may send them along different routes. However, the destination router can use sequencing information placed in the packets to reassemble them in the right order, checking for errors and requesting a retransmission if needed.
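The idea can be sketched in a few lines of code. This is a minimal illustration, not how any real router or the MIT chip actually works: each packet carries a sequence number and a checksum, so the receiver can reorder packets that arrive out of order and detect corruption that would trigger a retransmission request.

```python
import random
import zlib

def packetize(data: bytes, size: int = 4):
    """Split data into (sequence number, checksum, chunk) packets."""
    return [(seq, zlib.crc32(data[seq:seq + size]), data[seq:seq + size])
            for seq in range(0, len(data), size)]

def reassemble(packets):
    """Verify each packet's checksum, then reorder by sequence number."""
    for seq, crc, chunk in packets:
        if zlib.crc32(chunk) != crc:
            raise ValueError(f"packet {seq} corrupted; retransmission needed")
    return b"".join(chunk for _, _, chunk in sorted(packets))

msg = b"network on a chip"
pkts = packetize(msg)
random.shuffle(pkts)              # packets arrive in arbitrary order
assert reassemble(pkts) == msg    # yet the message comes back intact
```

The key point is that correctness no longer depends on the order of arrival, which is what frees the network to send different packets along different routes.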
The MIT researchers have adapted this technology to multiple core chips. Each core would have its own router, and could pass data to other cores through multiple paths. Ideally, this would speed up the ability of the chip to perform its computational tasks, as data bottlenecks in the bus would no longer hold back the chip from doing its work.
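One common way a network-on-chip forwards packets hop by hop is dimension-ordered (X-then-Y) routing on a 2D mesh of cores. The sketch below illustrates that general technique; it is an assumption for illustration, and the article does not say this is the specific routing scheme the MIT researchers use.

```python
def xy_route(src, dst):
    """Hop-by-hop path from src to dst on a 2D mesh of cores,
    using dimension-ordered routing: move along X first, then Y."""
    x, y = src
    dx, dy = dst
    path = [src]
    while x != dx:                       # step toward the destination column
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                       # then step toward the destination row
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# A packet from core (0, 0) to core (2, 1) hops through intermediate
# routers instead of contending for a single shared bus.
print(xy_route((0, 0), (2, 1)))
# → [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Because each core's router only handles traffic passing through its own neighborhood, many packets can be in flight at once, which is exactly the bottleneck relief the paragraph above describes.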
For an industry that has always relied on buses to pass data, the idea of changing to a network on a chip solution may be met with some early skepticism. However, this idea shows tremendous promise.
Eventually, the primary advantage we'll see from this type of innovation is the ability to add even more cores to these chips. With current bus technologies, general-purpose CPUs begin to struggle with data transfers once the chip exceeds eight cores. Transferring data in packets could easily allow a dozen or more cores per chip almost immediately. Down the road, however, those eight- and twelve-core chips could look as antiquated as a 740KHz clock speed does today. Thousands of cores per chip are absolutely possible in the future, as long as the cores can receive the data they need fast enough. An Internet-on-a-chip design looks like a strong contender to provide that required speed.