Computers in the Future

Recently, I have seen many articles about a new type of computer memory. The stories read like an IBM press release, but they have appeared all over the news services. The new type of memory (called “racetrack” memory) will combine the best attributes of flash memory and hard drives. It will be non-volatile, large-capacity, inexpensive, and it won’t wear out. It will be years before memory of this type is produced, but the idea sounds promising. I tend to be skeptical of promised future technology, but it seems plausible that racetrack memory, or something like it, could eventually provide memory capacities of multiple terabytes.

People often make the mistake of thinking that the future will look just like today, only more so. It’s easy to assume that in the future we will all be running updated versions of current operating systems, just with faster processors, more memory, and more disk space. But I doubt the structure of current operating systems could even scale to those levels.

In 1954, IBM introduced the first mass-produced computer, the IBM 650. It was controlled from the front panel and could load programs from a card reader. The IBM 650 used drum memory to store programs while they executed.

By modern standards, drum memory was incredibly slow. It was a large, rotating metal cylinder that stored information magnetically, but data could be read or written only as it passed directly under a head. This meant that referencing a memory location might require waiting for the drum to rotate into the correct position, a delay of up to several milliseconds for a single memory access.
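To get a feel for that delay, here is a small sketch of rotational latency as a function of drum position. The figures used (a 50-word track spinning at 12,500 rpm) are ones commonly cited for the IBM 650’s drum, but treat them as illustrative assumptions rather than exact specifications.

```python
# Rough model of drum rotational latency. Track size and rotation speed are
# assumptions for illustration (figures often cited for the IBM 650's drum).

WORDS_PER_TRACK = 50        # addressable words around one track (assumed)
RPM = 12_500                # drum rotation speed (assumed)
MS_PER_REV = 60_000 / RPM   # one full revolution = 4.8 ms at 12,500 rpm

def rotational_delay_ms(under_head: int, target: int) -> float:
    """Milliseconds spent waiting for `target` to rotate under the read head,
    given that `under_head` is the word passing the head right now."""
    words_to_wait = (target - under_head) % WORDS_PER_TRACK
    return words_to_wait / WORDS_PER_TRACK * MS_PER_REV

print(rotational_delay_ms(0, 1))   # best case: the word is about to arrive (~0.1 ms)
print(rotational_delay_ms(2, 1))   # worst case: it just went by (~4.7 ms)
```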

If you structured a program the logical way (one instruction followed by the next), it would execute very slowly. Instructions took time to execute, so by the time one instruction finished, the drum would have already rotated past the next one, forcing a wait of nearly a full revolution before it came around again.

For this reason, each IBM 650 instruction contained the address of the next instruction to execute. By carefully positioning instructions on the drum, you could greatly increase performance. This was often done by hand, but there were optimizing assemblers, such as IBM’s SOAP (Symbolic Optimal Assembly Program), to help with it.
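As a toy illustration of why placement mattered, the sketch below compares a naive sequential layout with one that places each next instruction just far enough around the drum to arrive as the previous one finishes. The track size and per-instruction execution time are invented for illustration, not actual IBM 650 timings, and real placement also had to account for data operands, which this ignores.

```python
# Toy comparison of instruction placement on a drum: sequential layout vs.
# placing each next instruction "execution time ahead" of the previous one.
# Times are in word-times; all figures are illustrative assumptions.

WORDS_PER_TRACK = 50
EXEC_TIME = 6    # assumed word-times each instruction takes to execute

def run(positions):
    """Total word-times to execute a chain of instructions stored at the given
    drum positions, counting the rotational wait before each fetch."""
    total, head = 0, 0                          # head = word currently under the read head
    for pos in positions:
        wait = (pos - head) % WORDS_PER_TRACK   # wait for the instruction to come around
        total += wait + EXEC_TIME
        head = (pos + EXEC_TIME) % WORDS_PER_TRACK  # the drum keeps turning during execution
    return total

n = 20   # instructions in the chain
sequential = [i % WORDS_PER_TRACK for i in range(n)]                    # one word after the next
optimized  = [i * (EXEC_TIME + 1) % WORDS_PER_TRACK for i in range(n)]  # spaced to arrive just in time

print(run(sequential))   # every step waits most of a revolution
print(run(optimized))    # every next instruction is nearly under the head already
```

In this toy model the sequential layout takes roughly seven times as many word-times as the optimized one, which is the penalty that careful placement (and tools like SOAP) existed to avoid.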

Later computers used magnetic core memory, and the lessons of drum memory have faded into history. But what might an observer of the IBM 650 have concluded about the future of computers? Might not that observer have envisioned drum memory that rotated incredibly fast, compilers that made instruction arranging easier, or more reliable card readers?

In recent history (with a few notable exceptions), computers have always had more disk storage than memory. What will happen in ten years if computers contain multiple terabytes of memory? Is it even possible to predict how this will affect programming and the nature of computers themselves?
