No matter how fast computer processors get, their ability to get work done is limited by the speed of the other components they must communicate with. RAM chips, the components that provide working storage to the processor, are one of these bottlenecks: they are nowhere near as fast as processor chips.
Even the smallest delays in storing and retrieving information on the fly mean that the fastest processing doesn't get users very far. Processors spin their wheels waiting for access to data.
Conventionally, a practical way to speed up memory retrieval has been to place small blocks of fast DRAM cache right on the processor chip. Such memory speeds up data availability by retrieving data from RAM ahead of time. Yet the improvements it offers are limited by practical restrictions on DRAM size and by the difficulty of predicting what data will be required.
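To make the idea concrete, here is a minimal sketch of why fetching data ahead of time pays off. Everything in it is illustrative rather than drawn from any real chip: the 8-byte blocks, the 64-block capacity, the arbitrary eviction policy, and the next-block-guessing prefetcher are all assumptions chosen for brevity.

```python
# A toy sketch of prefetching. Every number here (8-byte blocks,
# 64-block capacity, arbitrary eviction) is an assumption for brevity,
# not a description of any real processor cache.

def hit_rate(accesses, capacity=64, prefetch=False):
    cache = set()
    hits = 0
    for addr in accesses:
        block = addr // 8                # map the address to a cache block
        if block in cache:
            hits += 1
        else:
            cache.add(block)             # miss: fetch the block from RAM
        if prefetch:
            cache.add(block + 1)         # guess the next block will be wanted
        while len(cache) > capacity:
            cache.pop()                  # evict an arbitrary block (toy policy)
    return hits / len(accesses)

sequential = list(range(0, 4096, 4))     # a predictable, sequential scan
print("without prefetch:", hit_rate(sequential))                  # -> 0.5
print("with prefetch:   ", hit_rate(sequential, prefetch=True))   # -> close to 1.0
```

On the predictable, sequential pattern above, the prefetching version hits almost every time, while the plain version misses on every new block. On an unpredictable pattern, the guess would often be wasted, which is exactly the prediction difficulty just mentioned.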
Cache improvements translate to speed
A major advance today comes from a breakthrough in cache storage technology. In the near future, chips will come with caches that can compress data for greater storage capacity. They will also learn, over time, the patterns in which data already in the cache is requested. Called dense-footprint cache, the technology translates directly into an improved user experience, offering real-world performance gains as great as 9.5%.
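The two ideas combine naturally, and a loose software analogy can show how. The sketch below is not the actual hardware design, which lives inside the memory system itself; it is a toy Python cache, with an invented TinyCompressedCache class and an arbitrary byte budget, that compresses entries so more of them fit and tracks request counts so the least-wanted entry is the one evicted.

```python
import zlib
from collections import Counter

class TinyCompressedCache:
    """Toy analogy only: compressed entries under a fixed byte budget,
    with the least-requested entry evicted first."""

    def __init__(self, budget_bytes=4096):
        self.budget = budget_bytes
        self.store = {}        # key -> compressed bytes
        self.uses = Counter()  # key -> observed request count

    def put(self, key, value):
        self.store[key] = zlib.compress(value)   # compression stretches the budget
        # Evict the least-requested entries until everything fits the budget.
        while sum(len(b) for b in self.store.values()) > self.budget:
            coldest = min(self.store, key=lambda k: self.uses[k])
            del self.store[coldest]

    def get(self, key):
        if key in self.store:
            self.uses[key] += 1                  # learn the access pattern
            return zlib.decompress(self.store[key])
        return None                              # miss: caller goes back to RAM

cache = TinyCompressedCache(budget_bytes=512)
cache.put("hot", b"requested constantly " * 50)  # ~1 KB raw, compresses well
print(cache.get("hot") is not None)              # -> True
```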
To speed-sensitive users such as gamers, who tend to invest in equipment such as high-tech gaming keyboards for even incremental improvements in speed, a 9.5% bump can be huge.
Silicon Valley turns into Carbon Valley
Moore's Law, the informal observation by Intel co-founder Gordon Moore that computing power doubles each year, has been breaking down lately. Doublings now come not every year, but once every two years. Computer scientists are beginning to bump up against the limits of silicon-based transistor technology.
Many major breakthroughs in computing speed come from innovative ways to shrink the chip. An obvious question: why would size matter when chips are already tiny? The answer is that, tiny as chips are, the microscopic copper pathways traced on those silicon wafers add up to great lengths. The longer they are, the greater the electrical resistance those low-voltage signals encounter, and the slower the signals become.
As quickly as electrical signals travel, there are billions of bits of data to shuffle around, and the infinitesimal delays, multiplied billions of times, add up to slowness that users can perceive.
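A back-of-the-envelope calculation shows how that happens. The numbers here are assumptions picked for round arithmetic, not measurements: one nanosecond of extra delay per memory access, and five billion accesses over the course of a task.

```python
# Back-of-the-envelope arithmetic with assumed, round numbers,
# not measurements from any real system.

delay_per_access = 1e-9      # assume 1 ns of extra wire delay per memory access
accesses = 5_000_000_000     # assume five billion accesses over a task

print(delay_per_access * accesses, "seconds of accumulated waiting")  # -> 5.0
```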
Shortening those microscopic traces is a good way to help electrical signals get around more quickly, and improvements announced since 2013 take a revolutionary path to doing it. The chips of tomorrow will weave memory and processors together in three-dimensional form, rather than laying them out in the flat, planar way currently favored. Silicon wafers do not lend themselves to three-dimensional transistor layouts, however. It takes a new material to achieve this: carbon nanotubes.
The move to carbon is a huge shift. Carbon nanotubes, or CNTs, have many of the properties of the silicon used today, but lend themselves to denser architectures. These chips are still in the lab, but it won't be long before they reach the real world. Today, CNT-based computers are at an early stage, comparable to where silicon chips were perhaps a half-century ago: compared with the billions of transistors on current processors, today's CNT processors have no more than a couple of hundred. Improvements should begin to arrive within a few years. The computers of tomorrow could be hundreds of times as fast as today's best, and the apps that run on them will gain abilities unknown today.
Cameron Parkinson works at a computer store and is very passionate and knowledgeable about all aspects of technology. He is a contributor to a variety of tech, computing and gaming blogs.