Digital Definition

Digital information is stored using a series of ones and zeros. Computers are digital machines because they can only read information as on or off: 1 or 0. This method of computation, also known as the binary system, may seem rather simplistic, but it can be used to represent incredible amounts of data. CDs and DVDs can store and play back high-quality sound and video even though they consist entirely of ones and zeros.
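To see how ordinary data reduces to ones and zeros, the short Python sketch below (an illustrative example only, with values chosen arbitrarily) prints the binary form of a number and of a single text character.

```python
# Illustrative sketch: any number or character can be written as bits (ones and zeros).

number = 42
print(f"{number} in binary: {number:08b}")                # 42 -> 00101010

character = "A"
print(f"'{character}' in binary: {ord(character):08b}")   # 'A' (code 65) -> 01000001
```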

Unlike computers, humans perceive information in analog. We capture auditory and visual signals as a continuous stream. Digital devices, on the other hand, estimate this information using ones and zeros. The rate of this estimation, called the “sampling rate,” combined with how much information is included in each sample (the bit depth), determines how accurate the digital estimation is.
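As a rough illustration of this idea, the Python sketch below uses hypothetical values (an 8-sample-per-second rate and a 4-bit depth, chosen only to keep the output short) to sample a continuous sine wave and round each sample to a limited number of levels, which is essentially how an analog signal is estimated digitally.

```python
import math

# Hypothetical parameters chosen for illustration only.
sampling_rate = 8        # samples taken per second of signal time
bit_depth = 4            # bits per sample -> 2**4 = 16 possible levels
levels = 2 ** bit_depth

def analog_signal(t):
    """A continuous 1 Hz sine wave standing in for an analog source."""
    return math.sin(2 * math.pi * t)

# Take one second of samples and quantize each one to the nearest level.
for n in range(sampling_rate):
    t = n / sampling_rate
    value = analog_signal(t)                            # continuous value in [-1, 1]
    quantized = round((value + 1) / 2 * (levels - 1))   # integer in [0, 15]
    print(f"t={t:.3f}s  analog={value:+.3f}  stored as {quantized:04b}")
```

Raising the sampling rate adds more rows per second, and raising the bit depth adds more possible levels per row, so the stored numbers track the original wave more closely.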

For example, a typical CD audio track is sampled at 44.1 kHz (44,100 samples per second) with a bit depth of 16 bits. This provides a high-quality estimation of an analog audio signal that sounds realistic to the human ear. However, a higher-quality audio format, such as a DVD-Audio disc, may be sampled at 96 kHz and have a bit depth of 24 bits. The same song played on both discs will sound smoother and more dynamic on the DVD-Audio disc.
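These figures translate directly into how much data a player reads each second. The Python sketch below works out the uncompressed bit rate for both formats, assuming stereo (two-channel) audio, which is the common case for both discs.

```python
# Uncompressed bit rate = samples per second * bits per sample * number of channels.
def bit_rate(sampling_rate_hz, bit_depth, channels=2):
    return sampling_rate_hz * bit_depth * channels

cd  = bit_rate(44_100, 16)   # CD audio: 44.1 kHz, 16-bit
dvd = bit_rate(96_000, 24)   # DVD-Audio: 96 kHz, 24-bit (one common configuration)

print(f"CD audio:  {cd:,} bits per second (~{cd / 1_000_000:.2f} Mbps)")
print(f"DVD-Audio: {dvd:,} bits per second (~{dvd / 1_000_000:.2f} Mbps)")
```

The result (roughly 1.41 Mbps for CD audio versus about 4.61 Mbps for this DVD-Audio configuration) shows why the higher sampling rate and bit depth require considerably more storage space.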

Since digital information only estimates analog data, an analog signal is actually more accurate than a digital signal. However, computers only work with digital information, so data must be stored digitally for a computer to process it. Unlike analog data, digital information can also be copied, edited, and moved without losing any quality. Because of these benefits, digital information has become the most common way of storing and reading data.

For more information on analog and digital technology, view the Help Center article.