Like any other new technology, SDS has given rise to a couple of amusing misconceptions. So let us begin with the market perception of the new terminology:
The answer to the first question goes something like this – if you really want to understand a new concept, start with the literal meaning of the term. It is Software DEFINED Storage – that means the definition of the storage is done by software. It is not that the software is storing things; the software is defining where to store. So the bottom line is: the hardware remains the same, but the controlling and processing of that physical storage is done by software.
And the second one is rather simpler, coming from this tech generation: SDS is not “CLOUD” storage. A cloud storage company may well be using SDS as its own backend technology, but these are two entirely different things. Cloud is the concept wherein you have intelligent end points that can compute, but to keep them lightweight, we don’t attach large localized storage to the end point; we keep the data in the cloud, enabling the end point to fetch it as needed. Thus our smartphones and tablets have limited local space, while the same or even more space is allocated to them in the cloud. It’s like using somebody else’s datacenter services, wherein they allow you to host your data.
Now no one raises an eyebrow when we talk of professional Data Centre services, as everyone is using them today. But just about a decade back, people would ask: why should I keep my data with you? How can I keep my confidential data outside my premises? That’s because the complete ecosystem was yet to evolve. Today, the situation is different. No one wants the unnecessary hassle of running 24×7 servers for mail, for applications and for data, so people get their servers hosted with a professional Data Centre company and focus on what they do best – their own business.
Just as we got away from the hassle of putting so many servers in place, and then maintaining the infrastructure and manpower to keep them running 24×7, people now want to walk away from the hassles of upgrading storage. And that’s how the need for SDS arose.
What we need in a storage system is a lot of storage capacity, a processor, an operating system and network connectivity. As we know, data needs have galloped over the last decade or so and will continue to grow in leaps and bounds, so storage expansion will remain a never-ending process. But the issue is: when you need to expand, you need to expand the whole system, not just the storage capacity. Along with the capacity, the processor, the OS and the network all need to be upgraded. And that costs a lot of money.
The simple, logical solution was to separate the hardware storage from the software intelligence, so that hardware at the data plane keeps getting added while the control plane is managed by software. This saves us from a complete system overhaul, which was time-consuming, tedious, expensive and came with downtime, migration challenges and plenty of other hassle.
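The separation described above can be sketched in a few lines of Python. This is a purely illustrative toy, not any real SDS product’s API: the class and disk names are made up, and the placement policy is deliberately naive. The point it shows is that the data plane is just dumb, growable capacity, while the control plane is software that decides placement.

```python
# Hypothetical sketch: the control plane (placement logic) is pure software,
# while the data plane is just a pool of capacity that can grow freely.
# All names here are illustrative, not from any real SDS product.

class DataPlane:
    """Commodity capacity: disks can be added at any time, from any vendor."""
    def __init__(self):
        self.disks = {}          # disk_id -> free gigabytes

    def add_disk(self, disk_id, capacity_gb):
        self.disks[disk_id] = capacity_gb


class ControlPlane:
    """Software that *defines* where data goes; it owns no hardware."""
    def __init__(self, data_plane):
        self.data_plane = data_plane

    def place(self, volume_gb):
        # Naive policy: pick the disk with the most free space.
        disk_id = max(self.data_plane.disks,
                      key=self.data_plane.disks.get)
        if self.data_plane.disks[disk_id] < volume_gb:
            raise RuntimeError("pool exhausted - just add another disk")
        self.data_plane.disks[disk_id] -= volume_gb
        return disk_id


pool = DataPlane()
pool.add_disk("vendorA-disk1", 100)
pool.add_disk("vendorB-disk1", 500)   # expansion = adding capacity only

ctl = ControlPlane(pool)
print(ctl.place(200))                 # placed on the emptiest disk
```

Note that growing the pool never touches the control plane: `add_disk` is the whole upgrade.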
By the year 2008, the concept of OpenFlow® from Martin Casado had found a good audience and response. By February 2011, OpenFlow v1.1 had been formulated, the Open Networking Foundation was established in 2011 itself, and it was the same year that the concept of SDS evolved.
It’s not DAS (Direct Attached Storage), SAN (Storage Area Network) or NAS (Network Attached Storage), so don’t get confused. It’s not a JBOD (Just a Bunch Of Disks) that comes with a SAN or NAS either, nor is it RAID (Redundant Array of Inexpensive Disks). These are all conventional technologies. Okay, let’s understand the evolution of storage first.
The need for storage outside a machine, as a separate system, started way back with the popularization of networking. In 1983, Novell developed the NetWare Core Protocol (NCP), a network protocol used in some Novell products, usually associated with the client-server operating system Novell NetWare, which at the time primarily supported MS-DOS client stations.
Following this, Sun Microsystems released the Network File System (NFS) in 1984, a distributed file system protocol allowing a user on a client computer to access files over a computer network much like local storage. Then there was the combined 3Com and Microsoft LAN Manager, and so on. Everyone was interested in network attached storage by then, and by the year 2000, while there was a buzz around the Y2K problem, the other buzz was around NAS (Network Attached Storage). We saw a series of tech startups jumping into storage solutions. To name a few: Spinnaker Networks (acquired by NetApp in February 2004), PolyServe (acquired by HP in 2007), ONStor (acquired by LSI in 2009), Exanet (acquired by Dell in February 2010), Isilon (acquired by EMC in November 2010), Gluster (acquired by Red Hat in 2011), and so on.
The second school of thought was the SAN (Storage Area Network), which was a more professional way of managing storage, but often far more expensive than NAS. Here there was no file system, just storage blocks. Unlike the NAS concept, where the entire NAS appeared to the client machines as a file server, here each disk appeared as a block device on which data could be stored. Though we could add HDDs to a SAN or NAS, there was still a cap on how many disks, and of what make and model, could be attached. So before adding any more disks, we had to check the disk size supported by the system, the disk type acceptable to it, disk make compatibility and so on. Basically, a proprietary feature set for expansion had to be taken care of.
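The file-versus-block distinction above can be made concrete with a small, self-contained sketch. This is not a real SAN or NAS API: a temporary file stands in for a block device, and the block size and helper names are invented for illustration. It shows that NAS-style access goes through file paths, while SAN-style access addresses raw numbered blocks with no file system in sight.

```python
# Illustrative sketch (not any real SAN/NAS API): NAS exposes files,
# SAN exposes raw blocks addressed by logical block address (LBA).
# A temporary file stands in for the block device so this is runnable.
import os
import tempfile

BLOCK_SIZE = 512  # illustrative block size

# --- NAS-style access: the client sees a file system path ---
nas_dir = tempfile.mkdtemp()
with open(os.path.join(nas_dir, "report.txt"), "w") as f:
    f.write("quarterly numbers")

# --- SAN-style access: the client sees numbered blocks, no files ---
device = tempfile.NamedTemporaryFile(delete=False)
device.write(b"\x00" * BLOCK_SIZE * 8)   # an 8-block "disk"
device.close()

def block_write(path, lba, data):
    """Write one logical block at address `lba`."""
    with open(path, "r+b") as dev:
        dev.seek(lba * BLOCK_SIZE)
        dev.write(data.ljust(BLOCK_SIZE, b"\x00"))

def block_read(path, lba):
    """Read one logical block at address `lba`."""
    with open(path, "rb") as dev:
        dev.seek(lba * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE)

block_write(device.name, 3, b"raw bytes, no file system here")
print(block_read(device.name, 3).rstrip(b"\x00"))
```

On a real SAN, the client’s operating system would typically lay its own file system on top of those raw blocks; on a NAS, the file system lives on the appliance itself.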
And as every proprietary system has a limited life span, so it happened to storage as well. From British colonial rule, to the Czar in Russia, to Japanese rule over China – all proprietary systems eventually come to an end, because they are not popular: they serve the interest of one at a loss to everyone else. That’s why dictatorships ended and that’s why colonies ended. By the end of the Second World War, the world had a new political map; the same happened to the storage world with the advent of SDS.
As explained above, the concept was to save the world from the expansion hassles of storage and make it open to all, instead of proprietary expansion arm-twisting. Thus the key features of SDS could be summed up as –
Thus, to give SDS a definition, we can say: “SDS is the latest technology offering in the data storage domain that separates the control plane from the data plane, like SDN, and gives the ease of programmability, migration, virtualization and automation of the storage system. This enables the user to build a vendor-agnostic storage system on an open platform and to orchestrate it as and when desired.”
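The “programmability and orchestration” part of the definition can be sketched as declarative reconciliation: the desired storage state is described as data, and software works out the actions needed to get there. Everything below is hypothetical – the volume names, the spec fields and the action tuples are invented for illustration, not taken from any real SDS platform.

```python
# Hypothetical sketch of SDS-style orchestration: desired state is
# declared as data, and software reconciles the system toward it.
# Names and fields are illustrative only, not a real SDS API.

desired = {
    "volumes": {
        "vm-images": {"size_gb": 200, "replicas": 2},
        "backups":   {"size_gb": 500, "replicas": 1},
    }
}

def reconcile(current, desired):
    """Return the list of actions needed to move `current` to `desired`."""
    actions = []
    for name, spec in desired["volumes"].items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("resize", name, spec))
    for name in current:
        if name not in desired["volumes"]:
            actions.append(("delete", name))
    return actions

# The pool currently has one undersized volume and is missing another.
current = {"backups": {"size_gb": 300, "replicas": 1}}
for action in reconcile(current, desired):
    print(action)
```

The user edits only the desired state; the software derives the orchestration steps, which is what makes the storage programmable rather than hand-administered.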
APPLICATION & USAGE
As we’ve understood the concept, it is very important to understand the applications and usability too, as technology is just an enabler – a means, not the purpose. But there’s a difference between a blacksmith and a goldsmith: both need different tools and different pressures that aren’t interchangeable.
Since it is a storage technology, it is obviously of use to those who need a lot of storage;
To sum it up: just as SDS is hardware-vendor agnostic, the need for SDS is industry-vertical independent. Every industry is running on data, and the need for storage expansion will continue to rise until we develop a yogic science, explore the human brain’s potential to the fullest and become data agnostic. And if you still want to know more… reach us; we’d be happy to solve your queries, and feel obliged too!