Hard core
From its introduction in the early 1950s, so-called 'core' memory quickly became the dominant memory technology. Non-volatile and cheap to make, it survived well into the late '70s – even beyond the introduction of DRAM chips.
In a core memory, ferromagnetic rings just a few millimetres across are threaded onto wires running vertically, horizontally and diagonally to form a large mesh. The horizontal and vertical wires are called X and Y respectively, and are used to address individual bits for both reading and writing. The diagonal wires are called 'sense wires': when an addressed core flips its magnetic state, it induces a current pulse in the sense wire, and that pulse reveals the bit the core was holding.
Flipping a core's magnetisation requires a certain minimum current. By driving just half that current along one X wire and half along one Y wire, only the core at their intersection receives enough to switch; every other core on those two wires sees only half the current and remains unchanged. The snag is that the core being addressed is also overwritten when its bit is read. After reading the bit's value in such a system, it must be restored by circuitry that repeats the addressing procedure and returns the core to its original magnetic state.
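To make that concrete, here's a minimal sketch of the coincident-current scheme in Python. It's a toy model, not a description of any real controller: the grid size, current values and all the names are illustrative assumptions.

```python
# Toy model of coincident-current addressing in core memory.
# The grid size, current units and names are illustrative assumptions.

THRESHOLD = 1.0  # minimum current needed to flip a core's magnetisation
HALF = 0.5       # each selected X and Y wire carries half that amount

class CoreMemory:
    def __init__(self, rows, cols):
        self.cores = [[0] * cols for _ in range(rows)]  # one bit per ring

    def write(self, x, y, bit):
        # Every core on the selected X or Y wire receives some current,
        # but only the core at the intersection sees the full amount.
        for row in range(len(self.cores)):
            for col in range(len(self.cores[0])):
                current = (HALF if row == x else 0) + (HALF if col == y else 0)
                if current >= THRESHOLD:
                    self.cores[row][col] = bit  # only (x, y) ever flips

    def read(self, x, y):
        # Reading drives the addressed core to 0. If it held a 1, the flip
        # induces a pulse on the sense wire - that pulse is the read-out.
        sensed = self.cores[x][y]
        self.write(x, y, 0)       # destructive read
        self.write(x, y, sensed)  # rewrite cycle restores the original state
        return sensed

mem = CoreMemory(8, 8)
mem.write(3, 5, 1)
assert mem.read(3, 5) == 1  # the bit survives thanks to the rewrite
assert mem.read(3, 5) == 1
```

The two asserts at the end show why the rewrite step matters: without the second write() call inside read(), the first read would leave a 0 behind.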
Despite this complicated method of storing and retrieving data, early core memory had a read/write cycle time of just six microseconds. A Java applet hosted by the US National High Magnetic Field Laboratory in Florida enables you to see the magnetic core memory process in motion: head over to www.tinyurl.com/5w4dcj to take a look at it.
By the time it was finally usurped as the prime RAM technology, core memory access times were down to nearly a single microsecond – just 1.2µs. However, core memory's time was very nearly up. A team at Bell Labs had been developing a revolutionary technology – the transistor – since 1948. Some 20 years later, that technology would change the face of computing almost overnight.
The first RAM chip
The first commercial transistors were small, cheap and, above all, reliable. Transistors soon took over from bulky and unreliable glass valves as the main component of the logic gates and registers in the CPUs of the early 1950s.
They could switch at far higher frequencies than valves – which made for CPUs that could go faster – but took a fraction of the power. However, partly because of the amount of work required to create large memories from umpteen identical transistor-based circuits, it wasn't until 1970 that core memory saw its position as the dominant form of RAM seriously threatened.
It was then that Intel released the first general-purpose commercial DRAM chip: the model 1103. It held just 1,024 bits, but its physical size (about 25mm in length), low power consumption and reliability changed computing as much as core memory had done in the 1950s. With each bit stored as a tiny electric charge in a cell built from a handful of microscopic transistors – thousands of identical cells fabricated onto a single silicon die – the 1103 was as simple to make as a microprocessor.
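The principle that earned DRAM its 'dynamic' label is that the charge representing each bit leaks away, so every cell must be periodically read and rewritten. Here's a toy model of such a leaky cell and its refresh cycle; the leak rate, sense threshold and refresh interval are illustrative assumptions, not figures from the 1103's datasheet.

```python
# Toy model of a DRAM cell: a capacitor whose charge leaks away over time.
# LEAK_RATE, SENSE_THRESHOLD and the refresh interval are illustrative
# assumptions, not Intel 1103 specifications.

LEAK_RATE = 0.05       # fraction of stored charge lost per time step
SENSE_THRESHOLD = 0.5  # charge above this level reads as a 1

class DramCell:
    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        # Without intervention, a stored 1 slowly decays towards 0.
        self.charge *= (1.0 - LEAK_RATE)

    def read(self):
        bit = 1 if self.charge > SENSE_THRESHOLD else 0
        self.write(bit)  # reading rewrites the full value: a refresh
        return bit

cell = DramCell()
cell.write(1)
for step in range(100):
    cell.tick()
    if step % 10 == 9:
        cell.read()  # refreshing every 10 ticks keeps the bit alive
assert cell.read() == 1
```

In this model a stored 1 decays below the sense threshold after about 14 ticks, so skipping the periodic read() calls loses the bit – which is why every DRAM design has had to budget time for refresh.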
By 1974, the combination of increasingly capacious DRAM chips and low-cost microprocessors made the first mass-produced home computers possible. Yet again, storage had led the way in increasing global computing power.
Paper memories
The development of main memory was paralleled by a growing need to store programs, data and results permanently for easy access. Even by the late 1940s, entering all of this data by hand was becoming a serious bottleneck, limiting the amount of work that each new computer could do – even when working 24 hours a day.
As scientists and industrialists began to realise what computers could do for them, problems ranging from calculating an entire payroll on time with 100 per cent accuracy to the fiendishly complex calculations required to make hydrogen weapons all found solutions – but too slowly. After all, computers were growing up during the height of the Cold War.