IBM Storage Head Talks Virtualization
Last week, IBM revealed that its storage virtualization software had surpassed 1,000 customers, and used the milestone to taunt storage market leader EMC, which has yet to release a virtualization product of its own.
IBM boasts that storage virtualization is the "game changer" that will turn the storage market on its head by lowering costs, consolidating information into a single managed "pool," and dissolving the complexities of managing heterogeneous storage.
BetaNews took the opportunity to sit down with IBM Global Storage Software general manager Jens Tiedemann to find out more about storage virtualization and why IBM feels that it will level the playing field.
BetaNews: IBM has crossed the thousand-customer threshold for its virtualization software. Why is the enterprise investing in virtualization?
Jens Tiedemann: We all know that the amount of data that people are storing is going through the roof in all aspects of the business. What is less known, but equally interesting to businesses, is that the percentage of the IT budget being spent on storage is also growing. Even though we as vendors are coming out with cheaper and cheaper storage, it is really not enough to offset the growth.
People are spending more and more of their overall IT budget on storage and that obviously is something that cannot go on indefinitely. In 1997 about 11% of IT budgets were spent on storage, last year it was about 17%, and storage is currently projected to go to 22% of the IT budget. So that is something that is worrying our customers and they need to do something about it.
And it's not just the money spent on hardware; it is all of the costs associated with storage hardware and software, and the management and resources needed to do all of this work, that are expanding this spending. So something needs to happen, and one of the technologies you can deploy to disrupt that picture is virtualization.
BetaNews: You have said that virtualization eases storage growing pains. Aside from that, what are some of the other value propositions and benefits for customers?
Jens Tiedemann: With virtualization you can move the data around while the application server is working, because the SAN Volume Controller keeps track of the data at all times. The servers are always up and running. That's the first value proposition.
The second value proposition is the allocation and the utilization of storage. Without virtualization, when you allocate a disk to a server you typically over-allocate the space, because the next time the disk is full you need to take the application server down. So you give it enough space to run for the next 14 days or months or whatever your window is, and that leads to a lot of under-utilization of the storage infrastructure.

You can imagine that when you can do that much more granularly and on the fly, you don't need to allocate more space than the application needs at the time.
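To make the contrast concrete, here is a minimal sketch of on-demand allocation, with illustrative names rather than anything from IBM's products: the volume grows by small extents as data arrives, instead of reserving months of projected growth up front.

```python
# A toy thin-allocation model; all names here are hypothetical.

EXTENT_MB = 16  # grow in small extents instead of one big up-front chunk

class ThinVolume:
    """A virtual volume that takes physical extents only as data arrives."""

    def __init__(self, pool_mb):
        self.pool_free_mb = pool_mb   # shared physical pool
        self.allocated_mb = 0         # physical space actually consumed
        self.used_mb = 0              # space the application has written

    def write(self, mb):
        # Grow on the fly, with no need to take the server down.
        while self.used_mb + mb > self.allocated_mb:
            if self.pool_free_mb < EXTENT_MB:
                raise RuntimeError("physical pool exhausted")
            self.pool_free_mb -= EXTENT_MB
            self.allocated_mb += EXTENT_MB
        self.used_mb += mb

vol = ThinVolume(pool_mb=1024)
vol.write(40)
print(vol.allocated_mb, "MB allocated for", vol.used_mb, "MB of data")
# Prints: 48 MB allocated for 40 MB of data -- not a 14-day safety margin.
```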
BN: How will these efficiencies affect information workers?
JT: The second element is the people cost. Now that you can do all of this and tinker and mess with the storage environment without impacting the application servers, the value proposition is that you make the storage administrator's job a 2PM job rather than a 2AM job.
BN: How does storage virtualization differ from traditional disk management techniques?
JT: When you have external storage you typically allocate disk space to an application server, and you still do that with the SAN Volume Controller in between. The only difference is that the disks that the application server is seeing and accessing are virtual disks.
So from the application server point of view there is no change, it just does what it always has done. But below the virtualization layer you can actually move the disks around.
You can have a mapping from one virtual disk to one physical disk but let's say you need more capacity. In the old days, you needed to take the application server down, go and find some space in your storage environment, format all of that space, move all of the data over, and then get the application server up and running again.
With the virtualization layer you can do all of that while the application server is still running. While it is finding the new space, formatting the disk and moving the data over, the application server just goes on doing transactions. The virtualization layer keeps track of where the data currently resides, whether it is on the old disk, on its way to the new disk, or already on the new disk.
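A toy illustration of that bookkeeping, under assumed names rather than actual SAN Volume Controller internals: each read consults a per-block table recording how far the copy has progressed, so the application never notices the move.

```python
# Sketch of migration-aware read routing; purely illustrative.

class MigratingVirtualDisk:
    """Serves reads while blocks are copied from an old disk to a new one."""

    def __init__(self, old_disk, new_disk, num_blocks):
        self.old, self.new = old_disk, new_disk
        self.copied = [False] * num_blocks  # per-block migration progress

    def migrate_block(self, i):
        # Background copy: move one block, then flip its table entry.
        self.new[i] = self.old[i]
        self.copied[i] = True

    def read(self, i):
        # The application always asks the virtual disk; the table decides
        # whether the block currently lives on the old or the new disk.
        return (self.new if self.copied[i] else self.old)[i]

old = ["a", "b", "c", "d"]           # data still on the old physical disk
new = [None] * 4                     # the destination being filled in
vdisk = MigratingVirtualDisk(old, new, 4)
vdisk.migrate_block(0)               # the copy proceeds in the background...
print(vdisk.read(0), vdisk.read(3))  # ...and reads succeed either way: a d
```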
BN: In past releases of IBM's virtualization software, there were components that sounded grid- or cluster-like, and others, such as micro-partitioning, that divided processors into fault-isolated "pieces." Does this reflect IBM's overall strategy for distributed systems, or is this something that is exclusively a part of IBM's vision for virtualization?
JT: It's all of the above and more. Going back to the business issue that we started talking about: the cost of managing infrastructure is becoming more and more expensive. You can take an analogy with a car, where you have a manual transmission and you actually simplify the car by putting in an automatic transmission.
That analogy works well for virtualization because you can switch all the streams of data around automatically. So even though you are putting in an infrastructure that from a technology point of view may seem more complex -- an automatic transmission is much more complex than a manual one -- you are really simplifying the use of the car, and that is the same thing we are doing with virtualization. And we are doing it across servers and storage.
So on the server side it is very much what you are describing: instead of dedicating one specific CPU to a workload, you virtualize the processor space, if you want, and direct it to where it's most needed.
Virtualization is just screening off the physical world from the virtual world with a table. You put this table in between -- the applications go to that table asking for resources -- and by changing that table around on the fly you can do a lot of stuff under the table. If you are familiar with virtual memory, there are really only so many new concepts in this world.
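That virtual memory comparison can be made almost literal in a few lines. The sketch below uses hypothetical names, not any product's API; the point is only that the application holds a stable virtual handle while the table behind it is rewritten at will.

```python
# The whole trick in miniature: a table between virtual handles and
# physical resources. All names are made up for illustration.

mapping = {"vdisk0": "array_A/lun3"}    # virtual handle -> physical location

def resolve(handle):
    # Applications only ever present the virtual handle.
    return mapping[handle]

print(resolve("vdisk0"))                # array_A/lun3
mapping["vdisk0"] = "array_B/lun7"      # remap "under the table"
print(resolve("vdisk0"))                # array_B/lun7; the app never noticed
```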
BN: So IBM is borrowing from its mainframe heritage?
JT: That's a very good way to describe it. What I am talking about on the disk side is really a lot of the stuff that has been going on in the mainframe world for years, which we are bringing into what we call the open space: Windows, Linux, AIX and so on. In the mainframe space these things were much easier to do because you had a very controlled environment, such as an IBM mainframe, and we knew exactly what was going on.
BN: What is different today?
JT: These days there are a lot of heterogeneous OS's that all have their own characteristics. So if you want to solve all of these things you not only need to solve them for one operating system, but for many, and for many storage devices, and so on.

And that's probably one of the main reasons it has taken virtualization so long to catch on in the open space: you have to get to this critical mass of supported platforms.
BN: Is IBM playing catch-up?
JT: Way back, IBM invented storage. But in the 1990s IBM lost it a little bit because we were a very server-centric company and we had our own set of challenges in those years. Competitors like EMC frankly got in and took it away from us. We are not the incumbent in the storage market; EMC is the incumbent.
What has our interest is that virtualization changes the game again, very much the same way it happened in the server market. If you look back 7-8 years, Sun Microsystems was the incumbent with the Solaris operating system and was going gangbusters all around.
It was only by embracing Linux and open standards that IBM made its big comeback in servers. Frankly, that is what we are counting on virtualization to do in the marketplace: level the playing field.
BN: How will virtualization level the playing field?
JT: Virtualization is a big can opener that gives customers a lot of options and choice. Currently, every vendor, including IBM, has advanced services running on its storage devices (mirroring, replication services, etc.), but typically those services only work from one box to another of the same type.

But with virtualization you are opening up that whole layer. And since the disks and subsystems are all virtualized beneath the virtualization layer, there are a lot of interesting things you can do that level the playing field.
BN: Let's discuss IBM's virtualization software roadmap.
JT: Right now we are on the sixth generation of our SAN Volume Controller, which is block-level virtualization. The roadmap there has really been all about supporting more and more platforms.

The platform is very established now. What you will see on it is typically more and more functionality. One of the things coming up is asynchronous replication, along with more platform support; we need to support everything that comes out.
One of the other things that is happening is a cluster solution so you can scale it more and more. With our latest 8-way cluster -- and we have a 16-way cluster coming out -- there are customers running this behind hundreds and hundreds of terabytes of storage. So it is a very scalable solution, and it will be even more scalable in the future because storage environments tend to grow, as you know.
BN: Are there any particular platforms or devices that IBM is presently working on supporting?
JT: The next one will be TagmaStore from HDS.
BN: When can we expect the next release of IBM virtualization software?
JT: We just released 2.3, and we really want to have something out every six months or so.

We are already on our sixth generation while EMC doesn't even have its first out the door yet. So this is a big play for us and why our customers are looking at us.
BN: Thank you for your time.