Nowadays, data storage has become an integral part of our lives, and the need for more space to store files grows every day. This is why huge strides in storage technology are needed to sustain our ever-growing need to take a picture of every meal we have and save every cat video we find. But can we expect any such technological leaps in the near future?
The answer is yes, in even more futuristic ways than one might expect! These are not only improvements to existing systems, but also entirely new, innovative ways of storing and accessing data. While this may seem promising, it is still necessary to evaluate each of these projects in terms of their strengths and weaknesses, their potential to change the storage device market, and how soon such technologies might reach the customer.
The Near Future of Data Storage
Helium Hard Drives (HDDs)
This technology, commercially pioneered by Western Digital’s HGST brand, works by hermetically sealing the inside of an HDD and filling it with helium. The lower density of helium (as opposed to air) lessens the aerodynamic effects acting on the fast-spinning disk platters inside a hard disk drive. Mainly, turbulence forces on the platters are reduced, which allows more platters to be packed closer together in the same space. This is why 7 platters (as opposed to the usual maximum of 5) can be used in some helium HDDs, like HGST’s 12TB Ultrastar drive, and in turn, more data can be stored within the same enclosure.
Another benefit of such a system is reduced power usage, a result of there being less drag on the moving parts of the drive. The lessened stress on moving parts also reduces the chance of failure over time. These advantages are why helium drives are aimed primarily at data centers, where capacity per unit of space, power usage, and failure rate are important variables.
As for the future of this technology, there isn’t much room for extraordinary discoveries. While the technology has only been around for a couple of years, Western Digital has already announced an even bigger, 14TB drive coming in the near future.
HAMR – Heat-Assisted Magnetic Recording
A hard drive technology with a much brighter future may be Heat-Assisted Magnetic Recording (HAMR). This technology allows the grains of the magnetic surface within the drive platters to become even smaller by temporarily changing their magnetic properties during writing, thus increasing the data density of the drive.
Information in hard disk drives is stored by changing the magnetic alignment of microscopic grains on the magnetized surface. To increase information density, smaller grains need to be packed into the same surface area. However, when the grains of the material get too small, the state of a single grain can flip randomly due to thermal fluctuations, a phenomenon called superparamagnetism. To avoid this, the magnetic surface material has to have a higher coercivity.
To simplify, coercivity is the ability of a material to retain its magnetization in the presence of surrounding, external magnetic fields. The downside of high coercivity is that writing data to such a drive platter would require an incredibly strong magnetic field, one that cannot be created with current write-head technology.
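The trade-off between grain size and stability can be made concrete with the Néel–Arrhenius relaxation law, a standard result in magnetism (not part of the original article):

```latex
\tau = \tau_0 \exp\!\left(\frac{K V}{k_B T}\right)
```

Here \(\tau\) is the average time before a grain’s magnetization flips spontaneously, \(\tau_0\) is an attempt time on the order of nanoseconds, \(K\) is the magnetic anisotropy constant (closely tied to coercivity), \(V\) is the grain volume, \(k_B\) is Boltzmann’s constant, and \(T\) is the temperature. Shrinking \(V\) shortens \(\tau\) exponentially (superparamagnetism), raising \(K\) restores stability, and heating the grain during a HAMR write raises \(T\) while lowering \(K\), so the barrier can be overcome deliberately.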
The solution is to change the magnetic properties of the grains only during the writing process, and this is where HAMR comes in. HAMR read/write heads contain a Near-Field Transducer (NFT), which is essentially a microscopic laser-based heater. During writing, the NFT heats the magnetic grains on the platter to near the Curie temperature, temporarily lowering the magnetic field strength required to change their state.
The Curie temperature is the temperature at which a magnetic material loses its permanent magnetic properties and becomes easily influenced by applied magnetic fields. After the writing process, the grains that were written to cool off almost instantly because of their small size, and they retain the stored information safely thanks to the material’s high coercivity at room temperature.
All of this seems very complicated, even more so when you consider that the technology has been in development since the 1990s. However, if the estimates of Seagate, the leading company in this field, are anything to go by, HAMR technology alone could increase HDD capacities to 20TB by 2020. Further out, HAMR could be combined with other HDD technologies, like Shingled Magnetic Recording (SMR), Bit-Patterned Magnetic Recording (BPMR), and Two-Dimensional Magnetic Recording (TDMR), each of which has a less pronounced effect on HDD capacities on its own.
With these technological combinations, 100TB capacities are projected by 2025–2030, but this should be taken with a grain of salt, as predictions that far into the future are closer to guesses than estimates.
As for price, some estimate that HAMR drives will not be much more expensive than current HDDs, though as with any new technology, expecting low prices at launch would be optimistic. Either way, the potential of the technology is very high, and combining it with the others mentioned above might just keep HDDs around for a few more years.
3D XPoint
As previously mentioned, development of floating-gate-transistor-based SSDs is constant, and they will most likely stay ahead of HDDs in terms of performance in the near future. However, the first signs of a fully functional storage system called 3D XPoint (pronounced “three-D cross-point”), developed by Intel and Micron, show that it has a pretty decent shot at overtaking traditional transistor-based storage.
Conventional SSDs use floating gate transistors, which are designed specifically for storing data and are thus non-volatile (the data does not disappear after power is turned off). DRAM transistors, on the other hand, are designed for huge numbers of state changes over time; this type of memory allows much faster data access but is volatile and loses any stored information when power is lost. 3D XPoint technology aims to put the best of both worlds into one device architecture.
While not much is known about the specifics of XPoint technology, the gist of it is that each memory cell is accessed via a grid of interconnecting wires, much as locations on the planet are determined by latitude and longitude. While conventional NAND addressing works similarly, it’s the 3D part of 3D XPoint that makes Intel’s approach to improving data storage special. 3D NAND technology has been in development for a while now; its main idea is to stack existing layers of memory cells vertically in order to squeeze more of them into the same space. An analogy would be files laid out on a table versus files stacked neatly in a filing cabinet. XPoint takes the stacking idea further: picture boxes stacked in a storage room, where you can reach even the boxes that are blocked off by other boxes around them.
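The latitude/longitude analogy can be sketched in code. This is purely an illustrative model of crosspoint addressing, not Intel’s actual design: each cell sits at the crossing of a word line (row) and a bit line (column), and stacking layers adds a third coordinate.

```python
# Illustrative sketch of crosspoint addressing (not Intel's actual design):
# a cell is selected by the intersection of one word line and one bit line,
# much like a location is pinned down by latitude and longitude.

class CrosspointLayer:
    def __init__(self, rows: int, cols: int):
        self.cells = [[0] * cols for _ in range(rows)]

    def write(self, row: int, col: int, value: int) -> None:
        # Energizing one word line (row) and one bit line (col)
        # uniquely addresses the single cell at their crossing.
        self.cells[row][col] = value

    def read(self, row: int, col: int) -> int:
        return self.cells[row][col]


class CrosspointStack:
    """The '3D' part: several crosspoint layers, adding a layer index."""

    def __init__(self, layers: int, rows: int, cols: int):
        self.layers = [CrosspointLayer(rows, cols) for _ in range(layers)]

    def write(self, layer: int, row: int, col: int, value: int) -> None:
        self.layers[layer].write(row, col, value)

    def read(self, layer: int, row: int, col: int) -> int:
        return self.layers[layer].read(row, col)
```

In this toy model any cell in any layer is reachable directly by its three coordinates, which is the “boxes you can reach even when surrounded” idea from the analogy above.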
Such a system promises data access speeds closer to DRAM without compromising storage capacity in the process. Unfortunately, leaked specifications of Intel’s Optane P4800X series drive (the first 3D XPoint drive) show almost no increase in sequential read/write performance over drives already on the market, such as Samsung’s 960 EVO (Optane’s 2400/2000 MB/s vs. the 960 EVO’s 3200/1500 MB/s read/write).
However, random read/write performance is where this drive shines, at 550/500 kIOPS, around 1.5 times faster than its closest competitor. Price-wise, the drives are estimated to be relatively affordable (up to half the cost of DRAM). Overall, there is a lot of potential in this technology, as well as in other 3D NAND technologies, all of which will keep SSDs on the market for a long time to come.
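To put those random-access figures in perspective, kIOPS can be roughly converted to throughput. The 4 KiB block size below is an assumption (a common benchmark size; the leaked figures do not state one):

```python
# Rough conversion from random I/O rate to throughput,
# assuming 4 KiB operations (an assumed, typical benchmark block size).

def iops_to_mib_per_s(kiops: float, block_kib: int = 4) -> float:
    """Convert thousands of I/O operations per second to MiB/s."""
    return kiops * 1000 * block_kib / 1024

random_read = iops_to_mib_per_s(550)   # ~2148 MiB/s of 4 KiB random reads
random_write = iops_to_mib_per_s(500)  # ~1953 MiB/s of 4 KiB random writes
```

Under that assumption, random throughput lands in the same ballpark as the drive’s sequential numbers, which is precisely what makes the random-access figures impressive.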
The Far Future of Data Storage
While the previous examples have fully fledged models already being sold or developed, the following technologies have a long way to go before they reach our computers, so making any reliable estimates is close to impossible. But regardless of current progress, these technologies have great potential to completely change the way we use and store information.
Quantum data storage
You may have heard about quantum computers a couple of years ago, but lately there has not been much news about their development. So is the dream of ungodly fast computers gone? Not at all: the first prototypes of quantum computers are being developed as we speak, alongside the first steps toward reliable quantum data storage.
The idea of quantum data storage is to store information in the spin state of a single atom’s electron, similar to how information is stored in the charge state of a material within transistors. The way quantum particles behave (superposition, entanglement) is unusual, to say the least: while an electron’s spin can be up or down (0 or 1), it can also exist in a superposition of both states at once. This is why quantum data is measured in qubits instead of bits, since a qubit can take more than just two forms.
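The superposition idea has a standard mathematical form (textbook notation, not part of the original article). A qubit’s state is written as:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Here \(|0\rangle\) and \(|1\rangle\) correspond to the spin-down and spin-up states, and the complex amplitudes \(\alpha\) and \(\beta\) give the probabilities \(|\alpha|^2\) and \(|\beta|^2\) of measuring each outcome. A classical bit is the special case where one amplitude is 1 and the other is 0; everything in between is what lets a qubit carry more than two distinct states.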
While putting tangible information into single particles may seem completely impossible, it has already been done by scientists like Andrea Morello, who managed to store information in a single atom for about 80 milliseconds. That does not seem like much, but it is one of the first steps toward creating the first quantum storage device. The next hurdles in quantum storage are ensuring that the data stays intact for prolonged periods of time, building a system that can interact with this technology, and making such devices feasible to mass-manufacture. We’re a long way from having a supercomputer in our pockets, but the journey has surely begun.
Data Storage in DNA
Amongst the cold semiconductors and metals there may arise a storage medium that is much more familiar and organic – literally organic. Deoxyribonucleic acid, or DNA, is what stores the information our cells use to function. The way this information is stored is somewhat similar to how modern storage drives do it: the binary 0-or-1, charge-or-no-charge approach corresponds to the way the pairs of nucleobases A (adenine) & T (thymine) and C (cytosine) & G (guanine) store information within strands of DNA.
“Writing” data to DNA is a somewhat misleading term, as the DNA strands are actually synthesized from their basic components. The exact sequence of each strand is determined by remapping the binary data into the previously mentioned nucleobases, with the addition of error-correcting and optimizing protocols. Once a strand is created, the information can be read back via DNA sequencing technology, which has been used to “read” DNA for scientific purposes for quite a while now. DNA has a very high data storage density: books, movies, and even a small operating system have already been encoded into strands of DNA. As for the longevity of the data, it is unmatched by anything we have created so far, since DNA storage promises to hold information without corruption for thousands to millions of years.
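The remapping step can be sketched in a few lines. This is a toy mapping of 2 bits per base; real encoding schemes add the error correction and sequence-optimization protocols mentioned above (for example, avoiding long runs of the same base), which this sketch omits:

```python
# Toy sketch of binary-to-nucleobase remapping: 2 bits per base.
# Real DNA storage schemes add error correction and avoid problematic
# sequences; this mapping is purely illustrative.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map every 2 bits of input to one nucleobase."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: 4 bases back into 1 byte."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

For example, `encode(b"Hi")` yields the strand `"CAGACGGC"`, and decoding it returns the original bytes. At 2 bits per base, density comes from the base spacing along the strand being measured in fractions of a nanometer.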
The only real problem here is that the current ways of writing and reading DNA data are very expensive and impractical, with the lowest cost estimates for data synthesis since the beginning of DNA storage research being around $500/MB. That, of course, is far too much for the technology to reach the computing market anytime soon.
However, better data encoding and retrieval processes are being researched as we speak, so a major breakthrough may be close. Some even estimate that a decade will be enough for us to begin storing our cat videos within the same building blocks that make our bodies function.