This post does not reflect the views of current, past, or future employers. The opinions in this article are my own.
Sure, we all know that computer memory is important, but have you ever stopped to think about how it really works? For my part, I really haven’t, outside of being a user of it. I’d like to rectify that, so let’s talk about computer memory. I’m talkin’ low level stuff, like the fundamental physics and engineering that goes into building computer memory. The plan is to take a look at the various technologies, or media, of computer memory with respect to attributes, use cases, benefits, and disadvantages.
What is computer memory?
Before we do a deep dive into computer memory, we need to define the term. A textbook definition I found is:
Computer memory stores information, such as data and programs, for immediate use in the computer; instructions fetched by the computer, and data fetched and stored by those instructions, are located in computer memory.
That’s nice, but I’m going to add a few of my own qualifiers:
- For this discussion, we’ll focus on computer memory media: the physical devices and technologies used to store, retrieve, and retain digital data, ranging from high-speed, volatile RAM to persistent, long-term storage like HDDs and SSDs. Common types include solid-state flash (SSDs, USB drives), magnetic disks (hard drives), and optical discs (CDs/DVDs).
- Computer memory stores digital as opposed to analog data. Often the elementary datum of computer memory is a bit, i.e. 0 or 1, but there’s no rule saying data has to be binary; in fact, when we talk about qubits it’s probably more apropos to say that memory stores states.
- There are two fundamental operations on computer memory: reading data and writing data. We assume the reader of computer memory is always a computer (electrical or mechanical), and the writers are mostly computers (later I’ll talk about a case where the writer is a human).
- We include computer memory that is manifested as a device with physical media attached to a computer. In particular, we’ll defer discussing network-attached memory technologies like RDMA.
- Computer memory is addressable (i.e. that’s how we find the data we’re looking for). The smallest addressable unit of memory we’ll call a word. Words are often a sequence of bits; the most common word size today is a byte (eight bits).
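To make the read/write and addressing ideas concrete, here’s a minimal sketch of a byte-addressable memory. The class and method names are my own invention for illustration, not any real API:

```python
# A toy byte-addressable memory: each address maps to one word (here, a byte).
class ToyMemory:
    def __init__(self, size_in_bytes):
        self.cells = bytearray(size_in_bytes)  # every cell starts as 0x00

    def write(self, address, value):
        # A word here is one byte, so values must fit in 8 bits.
        if not 0 <= value <= 0xFF:
            raise ValueError("value must fit in one byte")
        self.cells[address] = value

    def read(self, address):
        return self.cells[address]

mem = ToyMemory(1024)        # 1 KB of memory
mem.write(0x10, 0b10101010)  # write a bit pattern to address 0x10
print(mem.read(0x10))        # → 170
```

Real memory is of course not a Python object, but the contract is the same: give the device an address, and it hands back (or stores) the word living there.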
Memory hierarchy
It’s convenient to think of computer memory as a hierarchy. I’ve presented the diagram below before, but have added tape as the lowest level (it’s a popular form of so-called cold storage since it’s not continually online).
Memory cache hierarchy. A cache hierarchy can be thought of as a pyramid. At the top, close to the CPU, access times are the fastest. Moving down the pyramid, access times increase, but capacity increases and cost decreases.
The memory hierarchy is enlightening especially with regards to three important characteristics of access time, cost per bit, and capacity. We’ll consider those and a few others in our discussions.
(A note on units of memory capacity: we’ll use PB for petabytes, Pb for petabits, GB for gigabytes, Gb for gigabits, MB for megabytes, Mb for megabits, KB for kilobytes, and Kb for kilobits.)
Access time
Memory access time is the total duration a processor takes to request, locate, and retrieve data from the memory subsystem, typically measured in nanoseconds (ns) or clock cycles for fast memory, and in milliseconds (ms) or even seconds for storage. For our discussion we are considering the time to access data on the particular memory technology. In particular, we are interested in the access time of the memory medium itself and will ignore the overhead of accessing memory over the bus for now (as one goes down the memory hierarchy, the access time of the medium becomes dominant).
Cost per bit
The cost per bit is the monetary cost (i.e. in dollars and cents) per unit of memory. As we’ll see, cost is a big issue. For instance, cost explains why even in 2026 people are still using the relatively ancient technology of hard drives as opposed to SSDs: SSDs cost about $100 per TB, but a hard drive is less than half that cost.
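Using the ballpark figures above, we can turn a price per terabyte into an actual cost per bit. The $100/TB SSD figure comes from the paragraph above; the $45/TB HDD figure is my own rough assumption for “less than half that cost”:

```python
# Convert a price per terabyte into a cost per bit.
BITS_PER_TB = 8 * 10**12  # 1 TB = 10^12 bytes = 8 * 10^12 bits

def cost_per_bit(dollars_per_tb):
    return dollars_per_tb / BITS_PER_TB

ssd = cost_per_bit(100)  # ~$100/TB for SSD
hdd = cost_per_bit(45)   # assumed ~$45/TB for HDD (less than half the SSD price)
print(f"SSD: ${ssd:.2e} per bit")  # → SSD: $1.25e-11 per bit
print(f"HDD: ${hdd:.2e} per bit")
```

Fractions of a trillionth of a dollar per bit sounds absurdly cheap, until you multiply by the trillions of bits a data center stores; at that scale, the factor-of-two gap between SSD and HDD is real money.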
Capacity
Memory capacity refers to the maximum amount of information a memory device can store; in modern systems it ranges from gigabytes to terabytes. Usually, capacity refers to how much data can be stored given some form factor of the memory device. For instance, a DIMM is a specific form factor of a memory stick, and someone might buy a DIMM with a capacity of 4 GB, 16 GB, 64 GB, etc.
Density
Memory density refers to the amount of data (measured in Mb, Gb, MB, or GB) that can be stored within a specific physical space, such as on a chip or a module, with higher density allowing more information in a smaller area. Density is related to capacity in that the capacity of a memory device can increase without changing the form factor if memory density increases. Higher density also trends toward lower-cost manufacturing, since fewer, more advanced chips can achieve the same or greater memory capacity.
Volatile vs. non-volatile
A key characteristic of a memory medium is whether it’s volatile or non-volatile. Volatile memory requires power to maintain information, losing data instantly when turned off, and is used for high-speed, temporary tasks like RAM. Non-volatile memory retains data without power, making it ideal for long-term storage like hard drives and ROM. Volatile is faster and temporary; non-volatile is slower and permanent.
Bandwidth
Memory bandwidth is the maximum rate at which data can be read from or written to a storage medium by a processor (CPU/GPU). It’s typically measured in Gigabytes per second (GB/s) or Terabytes per second (TB/s). Higher bandwidth enables faster data transfer, reducing bottlenecks in data-intensive tasks like AI, machine learning, and rendering.
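As a quick sanity check on what bandwidth means in practice, here’s a back-of-the-envelope calculation. The figures (a 10 GB payload, 50 GB/s of bandwidth) are hypothetical, chosen just to illustrate the units:

```python
# Time to move a chunk of data at a given memory bandwidth.
def transfer_seconds(num_bytes, bandwidth_bytes_per_s):
    return num_bytes / bandwidth_bytes_per_s

GB = 10**9
# e.g. moving a hypothetical 10 GB dataset at 50 GB/s of memory bandwidth:
t = transfer_seconds(10 * GB, 50 * GB)
print(f"{t:.1f} s")  # → 0.2 s
```

The same 10 GB over a hard drive’s ~200 MB/s sustained rate would take nearly a minute, which is the bandwidth gap the memory hierarchy is built around.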
Dimensions
Physical memory media can be thought of as one, two, or three dimensional. For memory density, we’ll express the number in units appropriate for the dimension: for example, GB/mm, GB/mm², or GB/mm³.
Power consumption
Memory power consumption is relatively low compared to other computer components like CPUs and GPUs, but it can still be significant in large systems, with idle power being a major factor, especially in data centers. More RAM uses more power, but larger single sticks are often more efficient than multiple smaller ones, and newer standards reduce voltage, saving energy.
Errors
Computer memory errors are a fact of life; pretty much any memory medium is susceptible to them. By an error we mean that data may be corrupted: when data is read back, it is not the same as what was written. Common causes include overheating, power surges, faulty RAM modules, incorrect installation, and cosmic radiation. There are two facets to memory errors: 1) detecting errors, and 2) correcting errors. In the best case, an error is detected and corrected by the system and there are no ill effects. In the worst case, an error is neither detected nor corrected, resulting in the dreaded silent data corruption (we really don’t like that!).
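The simplest detection scheme is a parity bit: store one extra bit per word so that any single flipped bit becomes detectable (though not correctable; real systems use ECC schemes such as SECDED Hamming codes when correction is needed). A minimal sketch:

```python
# Even-parity error detection for an 8-bit word.
def parity(word):
    # Number of 1 bits modulo 2; storing this alongside the word makes
    # the total count of 1s (data + parity bit) always even.
    return bin(word).count("1") % 2

def store(word):
    return (word, parity(word))  # data plus its parity bit

def check(word, stored_parity):
    # True if the word is still consistent with its parity bit.
    return parity(word) == stored_parity

data, p = store(0b10110010)
assert check(data, p)            # read back unchanged: OK
corrupted = data ^ 0b00000100    # a single bit flips (e.g. a cosmic ray hit)
assert not check(corrupted, p)   # the flip is detected
# Note: two flipped bits cancel out -- parity alone can't catch that,
# which is exactly why stronger ECC codes exist.
```

One extra bit per byte is a 12.5% overhead, and it only buys detection; the detect-versus-correct trade-off is an engineering choice we’ll see throughout the tour.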
Memory media
Here are the areas of memory media we’ll visit in our grand tour:
- Mechanical memory (paper)
- Non-volatile memory with moving parts
- Non-volatile memory without moving parts
- Volatile memory
- Quantum memory
- Human memory (bonus)
The list is roughly in historical order (except for human memory, which pre-dates the other technologies by a few hundred thousand years!). The first four, which we can call classical technologies, are based on classical physics (i.e. mechanics, electromagnetism, and optics). Quantum memory of course gets us into the world of quantum mechanics. I also want to talk about human memory to compare it against computer memory (spoiler alert: the capabilities of human memory put computer memory to shame!).
Before we jump into our first technology, I’d like to give out a couple of honorable mentions: the phonograph and the camera. It might be more appropriate to give these lifetime achievement awards, since while they aren’t technically computer memory, they’re the precursors to computer memory and many of their design principles are still relevant today.
The phonograph
The earliest known “memory device” is the phonograph, invented by Thomas Edison in 1877. Edison’s phonograph could record and play back sound (i.e. it could read and write analog data). Édouard-Léon Scott de Martinville invented the phonautograph in 1857, which recorded sound waves onto soot-blackened paper but couldn’t play them back.
The phonograph and gramophone. On the left is a picture of Thomas Edison with his phonograph, the middle image shows the wax cylinder media used by the phonograph, and on the right is a picture of the gramophone.
The phonograph worked by converting sound waves into physical vibrations that engraved grooves onto a rotating cylinder. Sound spoken into a horn vibrated a diaphragm connected to a stylus, which cut a pattern into tinfoil or wax. Playback reversed this, with a needle tracing the grooves to reproduce sound. By the way, the earliest recording Edison made was thought lost until 2012, when it was reconstructed; you can listen to Edison reciting “Mary Had a Little Lamb” from nearly 150 years ago!
In 1887, Emile Berliner patented the gramophone, marking a pivotal shift from Edison’s cylinder phonograph to flat, mass-producible zinc and later shellac discs. This invention used lateral recording (sideways grooves) and allowed for the first commercial, mass-produced audio recordings. The gramophone evolved into the record player in the 1940s, which launched the recording and music industry.
The use of cylinders versus discs is an interesting design decision. We’ll see it come up again in the early 1940s with the introduction of magnetic storage.
The common design element in either the cylinder or disc design is the stylus. A record player uses a specialized, tiny needle (i.e. the stylus), usually made of diamond or sapphire, to ride in the grooves of a vinyl record. It acts as a transducer, converting the mechanical vibrations from the grooves into an electrical signal that is amplified to produce sound.
A record player. The top image shows the system design, and the bottom zooms in on the stylus and transducer. A turntable rotates a disc, or record, at some fixed RPM. The stylus follows a groove in the record, basically a big spiral from the outside to the inside. The stylus vibrates up and down following the surface of the disc. The stylus is attached to a magnet and the magnet induces an electrical current in a coil (by Faraday’s Law). The coil is attached to an amplifier that amplifies the signal being induced. The amplified signal is sent to a speaker that converts the electrical signal into sound. People listening perceive the sounds as music.
The camera
The camera has an interesting history that predates the phonograph. It’s not computer memory per our definition, because the data it saves is analog and the “memory” (i.e. a photograph) isn’t read by a computer.
Camera Obscura
Camera obscura (translated as ‘the dark room’) represents the first step in seeing the world as a series of images. It is a dark room or box having a small hole (aka pinhole camera) or lens on one side that lets the light get through and projects an image on the opposite wall. It was used around the middle of the 16th century as a way of indirectly looking at eclipses or studying astronomical phenomena, as an aid in drawing and painting, and for entertainment. Camera obscura was a first attempt at understanding and making use of light’s reflection and refraction properties.
Heliography
Camera obscura wasn’t really a camera until 1816 when Joseph Nicéphore Niépce used it to create the first-ever photograph. Niépce used a light-sensitive material called “Bitumen of Judea” or “Asphalt of Syria,” a semi-solid oil, and coated it onto a pewter plate. The result was a permanent image that would survive after the camera obscura was closed. He named his method heliography, which translates as “sun drawing.”
The first surviving photograph. This is the “View from the Window at Le Gras,” captured by French inventor Joseph Nicéphore Niépce in 1826 or 1827.
Daguerreotype
Louis Daguerre improved upon Niépce’s process, and in 1839 he announced a new photographic process, self-named the daguerreotype. The daguerreotype requires a silver-plated copper plate with a mirror finish, treated with substances that make its surface sensitive to light. The plate is exposed to light in a black box for a given period of time, which can be as short as a few seconds. Then it is fumed with mercury vapor, chemically treated to become insensitive to light, rinsed, dried, and sealed in a protective glass enclosure. Unlike heliography, the daguerreotype produces a much better image quality, requires less exposure time, and is portable.
The daguerreotype. An early version of a modern camera.
Modern times
They say that “a picture is worth a thousand words.” I believe this is a vast understatement. Consider that the grainy “View from the Window at Le Gras” photograph was about 6.4 inches × 8.0 inches, and it’s commonly rendered at 2 megapixels. Not bad for two-hundred-year-old technology! But that technology was just a precursor to a huge technological revolution.
The daguerreotype was successful and launched the world of photography. In the 1890s, motion pictures were developed based on these concepts, launching the movie industry. In the 1920s, television was developed, another watershed moment in human history. Following that came computer displays, LED screens, and eventually the display on the smartphone you’re using to read this article!