Secondary storage vs. primary storage
Secondary storage commonly refers to nonvolatile storage devices, such as hard disk drives (HDDs) and solid-state drives (SSDs), that protect data for disaster recovery or long-term retention. Optical media, backup tapes and remote archives are common secondary storage technologies.
Secondary storage sits below a company's primary storage tier and is not under the direct control of a computer's central processing unit (CPU). Unlike primary storage, secondary storage devices do not interact directly with applications.
The purpose of secondary storage is to provide a high-capacity tier, although the data stored is not immediately accessible. For example, a backup server is capable of storing a vast amount of data, but getting access to it requires dedicated backup software. Similarly, optical disks and backup tapes must first be mounted before they can be read.
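The mount-before-read requirement can be sketched in a few lines of Python. The `SecondaryVolume` class below is purely illustrative, an assumption for this sketch rather than any real backup or tape-library API:

```python
# Hypothetical sketch: secondary storage media (e.g., a tape or
# optical disk) must be mounted before its contents can be read.
# Class and method names are illustrative, not a real API.

class SecondaryVolume:
    """Models a removable secondary storage volume."""

    def __init__(self, contents: dict):
        self._contents = contents
        self.mounted = False

    def mount(self) -> None:
        # In reality this step may involve a robotic tape library
        # or an operator loading physical media into a drive.
        self.mounted = True

    def read(self, name: str) -> bytes:
        if not self.mounted:
            raise RuntimeError("volume must be mounted before reading")
        return self._contents[name]


tape = SecondaryVolume({"backup.img": b"archived data"})
try:
    tape.read("backup.img")       # fails: media not yet mounted
except RuntimeError as err:
    print(err)
tape.mount()
print(tape.read("backup.img"))    # succeeds once mounted
```

The extra `mount()` step is the point: primary storage is addressable immediately, while secondary media imposes an access step before any data is available.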
A backup storage device is a type of secondary storage. Organizations often install multiple physical backup appliances in at least two locations to ensure data is redundant. The emergence of the public cloud as a storage tier has allowed some companies to reduce, if not eliminate, the need for such backup hardware.
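Keeping redundant copies in at least two locations can be sketched as a simple replication loop. In this sketch, local directories stand in for physical sites; the function name and paths are assumptions for illustration only:

```python
# Minimal sketch of replicating a backup to multiple locations,
# with local directories standing in for two physical sites.
import shutil
from pathlib import Path


def replicate_backup(source: Path, sites: list) -> None:
    """Copy a backup file to every configured site directory."""
    for site in sites:
        site.mkdir(parents=True, exist_ok=True)
        # copy2 preserves file metadata along with the contents
        shutil.copy2(source, site / source.name)


# Hypothetical usage:
# replicate_backup(Path("db.bak"), [Path("/backups/site-a"),
#                                   Path("/backups/site-b")])
```

A real deployment would replicate to separate appliances or a cloud bucket rather than local directories, but the pattern is the same: every backup lands in more than one failure domain.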
In the enterprise storage market, primary storage refers to local disks installed inside a server's chassis, or to disks in an external storage array. In computer architecture, however, primary storage typically refers to random access memory (RAM) located near a computer's CPU. This placement reduces the time needed to move data between storage and the CPU.
Because RAM is volatile, it holds active data sets as long as the computer is connected to a power source. Secondary storage, by contrast, uses nonvolatile storage devices, such as HDDs and SSDs, which retain their contents even without power. Nonvolatile storage media is also less expensive than RAM on a cost-per-gigabyte basis.
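The cost-per-gigabyte gap can be made concrete with back-of-the-envelope arithmetic. The prices below are assumed round numbers for illustration, not current market figures:

```python
# Illustrative cost-per-gigabyte comparison; prices are assumed
# example figures, not real market data.
ram_cost = 250.0   # assumed: $250 for a 64 GB memory kit
ram_gb = 64
hdd_cost = 120.0   # assumed: $120 for an 8 TB hard disk drive
hdd_gb = 8000

ram_per_gb = ram_cost / ram_gb   # roughly $3.91 per GB
hdd_per_gb = hdd_cost / hdd_gb   # roughly $0.015 per GB
print(f"RAM: ${ram_per_gb:.2f}/GB, HDD: ${hdd_per_gb:.3f}/GB")
```

Even with these rough figures, the nonvolatile tier comes in around two orders of magnitude cheaper per gigabyte, which is why high-capacity secondary storage is built on HDDs, tape and similar media rather than RAM.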