In our daily lives, especially when we’re shopping for the best computer devices out there, we run into a lot of values measured in bits and bytes – every file has a size, every hard drive has a capacity and a transfer speed, and so on. That’s why it’s important to understand the different units that data amounts and transfer speeds can be measured in, and the first step is understanding the difference between bits and bytes.

The Science Behind Data

Before we get into units of measurement, we have to look at how computer data works. Pretty much every piece of computing technology these days is built from transistor electronics. Long story short, a transistor is a microscopic (nowadays) electrical switch, where, instead of mechanical force (like pressing a light switch or pulling a lever), an electrical signal is used to turn the switch on or off. Put billions of these tiny things together in specific circuits, throw in some diodes and capacitors, and you’ll have yourself a functioning processor, memory unit, storage device, and pretty much everything else your computer uses.

Although a transistor has more of a middle ground between fully “on” and “off” than a light switch does, it’s much simpler to treat it as a basic switch – there is either a high signal (a set voltage passing through) or no signal at all (0 volts, the switch is shut). It would be possible to build systems with, say, four states – 0, 3, 6, and 9 volts – but in real life measuring those signal levels precisely is very difficult and leads to significant errors. That’s why, ever since the earliest electronic computers, the binary system of 1s (high signal) and 0s (no signal) has been used.

In our everyday lives, we use the decimal counting system – every ten steps, the count rolls over into the next place value. For example, counting one past 9 resets the “ones” back to zero and increases the “tens” by one – and a “10” is born. The binary counting system works exactly the same way, except its base is 2 instead of 10: after 0 and 1 you roll over into the next place, so binary “10” represents the decimal number 2. This system maps directly onto computer hardware, since every incoming high or low signal simply becomes a 1 or a 0.

Binary vs Decimal counting, byte prefixes
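To make the two counting systems concrete, here’s a minimal sketch (in Python, used here purely as an illustration – the article itself doesn’t reference any particular language) that counts in both bases and converts between them:

```python
# Count 0-8 in decimal and binary side by side.
for n in range(9):
    print(n, format(n, "b"))   # e.g. 2 -> "10", 8 -> "1000"

# Going the other way: read the string "10" as a base-2 number.
print(int("10", 2))            # 2
```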

 

What Is a Bit?

A bit is the basic unit of information in computing and digital communications – it is a single binary digit. Basically, a bit is a placeholder for one digital signal state (0 or 1). Bits can be sent and received over large distances through cables or wireless connections (as a series of high and low signals changing at a certain frequency), they can be saved in storage devices (as a high or low charge), and they can be used to measure how much data a device can process at a time. The official symbol for the bit is “bit”, but a lowercase “b” is also used (not to be confused with a capital “B”, which stands for a byte). The word “bit” itself is simply a contraction of “binary digit”.

Nowadays, our devices work with so many bits that it’s easier to use numeral prefixes like kilo-, mega-, and giga-. In the decimal world these prefixes represent a thousand, a million, and a billion units. In computing, however, where everyone and everything uses binary data, the 1,000-fold (10³, decimal) step between prefixes got replaced by a 1,024-fold (2¹⁰, binary) step (1 kb = 1,024 bits, and so on). According to what is basically the galactic council of units and measurements (the ISO and IEC), such usage is incorrect, but the JEDEC Solid State Technology Association kept its own interpretation of the prefixes in its own field. This causes a gap between the advertised and actual performance and capacity of computer devices, because manufacturers advertise in the easier-to-understand decimal units, while computers report the actual figures using binary prefixes.

JEDEC bit prefixes
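The size of that gap is easy to put into numbers. Here’s a small illustrative sketch comparing the decimal and binary values of the first three prefixes:

```python
# Decimal prefixes step by 1000, binary (JEDEC-style) prefixes step by 1024.
for i, name in enumerate(["kilo", "mega", "giga"], start=1):
    decimal = 1000 ** i
    binary = 1024 ** i
    gap = (binary - decimal) / decimal * 100
    print(f"1 {name}bit = {decimal:,} bits (decimal) vs {binary:,} bits (binary), "
          f"a {gap:.1f}% difference")

# The gap grows with every prefix: roughly 2.4%, 4.9% and 7.4%.
```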

 

What Is a Byte?

A byte is the smallest addressable amount of data a computer can actually use. Historically, it’s the amount of data needed to encode a single character of text (for example, a processor interprets the binary pattern “01001000” as “H”). Bytes are basically just more convenient packs of bits. Nowadays 1 byte = 8 bits in virtually every computer architecture, but before computing technologies became more unified, the size of one byte ranged from 1 to 48 bits, depending on the system’s architecture. The symbol for a byte is a capital “B”, not to be confused with the lowercase “b” assigned to bits. The word “byte” itself started as a playful respelling of “bite” – a bite-sized chunk of data – spelled with a “y” so it wouldn’t be mistaken for “bit”.

bits and bytes explained
Source: http://www.teach-ict.com/gcse_computing/ocr/214_representing_data/units/miniweb/pg2.php
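The “01001000” → “H” example above takes only a couple of lines to reproduce. A minimal, purely illustrative Python sketch:

```python
letter = "H"
code = ord(letter)             # 72 — the character's numeric code
bits = format(code, "08b")     # "01001000" — the same value as an 8-bit pattern

print(code, bits)              # 72 01001000
print(chr(int(bits, 2)))       # H — turning the 8 bits back into a symbol
```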

Bytes suffer from the same mismatch of binary and decimal prefixes as bits: 1 kB = 1,024 bytes. The IEC introduced its own binary prefixes – kibi-, mebi-, gibi- and so on – to keep the two numbering systems clearly apart, but, needless to say, the goofy-sounding names never really caught on.

IEC’s special binary prefixes

Instead, computers use the JEDEC system, which creates confusion in byte measurements as well. The most notable case is advertised hard drive capacities being higher than what your computer actually reports.

JEDEC byte numbering system
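Here’s a rough sketch of that hard drive case, assuming a drive advertised as 1 TB (the figure is illustrative, not taken from any particular product):

```python
# The box counts in decimal: 1 TB = 1,000,000,000,000 bytes.
advertised_bytes = 1 * 1000**4

# The operating system divides by binary prefixes instead.
reported_gb = advertised_bytes / 1024**3   # the "GB" your OS displays
reported_tb = advertised_bytes / 1024**4

print(f"{reported_gb:.0f} GB, i.e. {reported_tb:.2f} TB reported")  # ~931 GB, ~0.91 TB
```

Nothing is actually missing from the drive – the same number of bytes is just being divided by 1,024s instead of 1,000s.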

 

Different Uses of Bits and Bytes

Now that we know what bits and bytes are, and that there’s an eight-fold difference between the two units of measurement, let’s look at where both units are used in real life.

Bits are mainly used to describe data interface speeds, like internet and port speeds. Bits per second (bps, or b/s) are used to describe how much data an interface can transfer over a period of time, and can have the same prefixes as regular bits. Here are some examples of data speeds being measured in bits:

While you can always convert bits per second to bytes per second, bits are the more accurate unit here, since an interface doesn’t send whole bytes – it streams individual bits. In a given time slice only 6 of a byte’s 8 bits might have been transferred, for example, so counting bits avoids that rounding error, making it technically the better choice.
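As a quick illustration of the conversion itself (the 300 Mbps connection below is a made-up example, not a quoted spec):

```python
def bits_to_bytes_per_second(bits_per_second: float) -> float:
    """Eight bits make one byte, so divide the rate by 8."""
    return bits_per_second / 8

connection = 300_000_000                  # a 300 Mbps internet connection
megabytes = bits_to_bytes_per_second(connection) / 1_000_000
print(megabytes, "MB/s")                  # 37.5 MB/s
```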

 

Bytes are used pretty much everywhere else – in memory performance figures, storage capacity and transfer speeds, and the file sizes you see on your computer. While interfaces are just bridges between devices and systems, the devices themselves work with whole bytes (the smallest practically useful amount of data). In this context there’s no point in splitting bytes down into bits, since you’d only have to put them back together again to make tangible information.

SanDisk SSD Specs, sequential bytes per second

Specifications like a hard drive’s sequential read/write speeds are usually given in bytes per second (B/s), which you may want to compare against the speed of the port the drive is connected to – often quoted in bits per second. So it’s important to make sure you’re comparing the same units, not just “megs” or “gigs” – pay attention to which exact unit each spec uses so you don’t get fooled by marketing tricks.
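For example, here’s a hedged back-of-the-envelope comparison of a SATA SSD against its port. The 550 MB/s drive figure is made up for illustration (it is not from the SanDisk sheet above), and the usable-bandwidth estimate assumes SATA’s 8b/10b line encoding:

```python
drive_read_mb_s = 550                    # advertised sequential read, in MB/s

port_gbit_s = 6                          # SATA III link rate, quoted in gigabits per second
raw_mb_s = port_gbit_s * 1000 / 8        # 750 MB/s if every bit carried data
usable_mb_s = port_gbit_s * 1000 / 10    # ~600 MB/s after 8b/10b encoding overhead

print(f"Drive: {drive_read_mb_s} MB/s vs port: ~{usable_mb_s:.0f} MB/s usable")
# 550 MB/s fits under ~600 MB/s, so the SATA port isn't the bottleneck here.
```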

Overall, choosing bits over bytes or bytes over bits only matters if you’re digging into the technicalities of computing systems and want to be 100% technically correct. The main thing to keep an eye on is which exact unit is being used, and not to mix the two up, because the eight-fold difference between bits and bytes is no laughing matter.

 
