About Me

A Senior Research Analyst for a leading firm, with a focus on infrastructure management and virtualisation.

Friday 5 February 2010

VMware targets the SME sector with VMware Go

Some believe that one of the reasons behind Diane Greene’s departure as CEO of VMware in July 2008 was the company’s lack of penetration in the SME and cloud computing markets. With VMware Go, the company has announced a cost-effective solution that provides the SME sector with an on-ramp to cloud computing and virtualisation. This demonstrates the importance of the SME sector to the growth potential of the cloud computing market, and to VMware’s future.

The SME sector is being used as the vehicle to drive public cloud computing

The cloud computing market has many challenges, not least of which is the need for some consensus on exactly what cloud computing is. I believe that cloud computing can be simply described as a new delivery mechanism for IT as a service, operating at three different layers of the technology delivery stack. The foundation layer is infrastructure, which focuses on how an organisation’s or service provider’s physical IT assets can be transformed to offer a shared platform for flexible IT delivery; this is better known as infrastructure-as-a-service (IaaS). The second layer is the application platform, more commonly known as platform-as-a-service (PaaS), which covers application development and how those applications will operate on the underlying shared infrastructure. Finally, software-as-a-service (SaaS) is the third layer, focused on the delivery of specific business services using the previous layers as enabling technologies.

This picture is further complicated by the terms public, private and hybrid cloud, where private refers to an on-premises-only approach (behind the firewall), public to a generally available Internet-based service (outside the firewall), and hybrid to some form of mixed approach that is still too loosely defined to be of any use.

Given this ambiguity and complexity, the lack of standards, and the ‘conservative’ nature of many large enterprise customers (particularly those with security concerns at the top of their agenda), the move towards this new service-driven approach to IT delivery requires either a compelling business reason for an organisation to move to the cloud, or a sense of momentum that makes widespread adoption seem an inevitable consequence of market forces.

I believe that currently neither of these conditions prevails, but the SME sector appears to be the most interested in a public cloud approach and can clearly see some definite business benefits. Therefore, vendors are keen to use the SME sector to drive the further development of public cloud computing and, by implication, the use of private and hybrid cloud solutions by enterprise customers. It is too early to say for certain whether this approach will work, but enterprise-class customers are beginning to show more interest, and I expect 2010 to be a year that will define the future market-penetration potential of cloud computing.

VMware sees the cloud as a long-term solution

The market for virtualisation technology solutions is rapidly becoming commoditised, as the entrance of Microsoft and its Hyper-V product has reduced the entry price for many organisations. The only impediment to the complete commoditisation of the server virtualisation stack is a lack of interoperability between the main solutions. However, the Distributed Management Task Force (DMTF) released version 1.0 of the Open Virtualization Format (OVF) in September 2008, which provides a Virtual Machine (VM) transport format; although basic (it does not currently support all hypervisor technologies), it at least begins to address the movement of VMs between different vendors’ solutions.
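
For illustration, an OVF package is described by an XML descriptor. The short sketch below, written in Python against an OVF 1.0 descriptor saved locally as appliance.ovf (an invented file name), lists the virtual systems and the hardware items each one declares:

```python
# Minimal sketch: listing the contents of an OVF 1.0 descriptor.
# The file name "appliance.ovf" is an illustrative assumption.
import xml.etree.ElementTree as ET

OVF = "http://schemas.dmtf.org/ovf/envelope/1"
NS = {
    "ovf": OVF,
    "rasd": "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
            "CIM_ResourceAllocationSettingData",
}

envelope = ET.parse("appliance.ovf").getroot()

# Each VirtualSystem element describes one packaged VM.
for vs in envelope.findall("ovf:VirtualSystem", NS):
    print("Virtual system:", vs.get(f"{{{OVF}}}id"))
    hw = vs.find("ovf:VirtualHardwareSection", NS)
    if hw is None:
        continue
    # Each Item carries CIM resource-allocation settings
    # (CPU, memory, disks, network adapters, and so on).
    for item in hw.findall("ovf:Item", NS):
        name = item.findtext("rasd:ElementName", default="?", namespaces=NS)
        rtype = item.findtext("rasd:ResourceType", default="?", namespaces=NS)
        print(f"  resource type {rtype}: {name}")
```
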
Therefore, it is not surprising that VMware has recognised that any revenues from the core base technology are only a short- to medium-term prospect. Its move into the cloud computing market represents at least a ten-year plan to continue to deliver significant revenues from its core capability of virtualisation solutions. Interestingly, VMware is also looking at wider markets, with two recent acquisitions (SpringSource and Zimbra) that demonstrate its belief that the future will not be dominated by Microsoft, but will be more diverse. This is a bet worth making, and only time will tell whether VMware’s assessment is correct.

Friday 14 August 2009

Who is a Cloud Vendor?

What is in a Name?

The term Cloud Computing (CC) is one of those universal terms that can be described as “meaning all things to all men”, which for end-user organisations looking to understand how, or whether, CC fits into their strategies makes it about as much use as the proverbial chocolate teapot. I define CC using a number of different classifications, an approach that can be used to sort the vendors so that an organisation can be more targeted in its definition of CC and, more importantly, ensure that CC delivers what the organisation expects and wants.

I classify CC in four different ways. Firstly, Infrastructure as a Service (IaaS) refers to vendors that offer the servers and storage needed to run an organisation’s IT; IBM and Amazon are the big names in this class. Secondly, Platform as a Service (PaaS) vendors such as Google and Microsoft provide the development environment for organisations to design and build solutions and get them to market quickly. Thirdly, Software as a Service (SaaS) is probably the best known of the CC offerings, where the software is hosted and made available to the customer over the Web; Salesforce.com is the best-known vendor. Finally, Build Your Own Cloud (BYOC) provides the capabilities to do all of the above internally based on a vendor’s underlying technology stack; VMware and Citrix are the biggest players in this market. Another option that is distorting the market even further is the move by vendors such as HP and Dell to modify their traditional hosting services to provide CC.

Wednesday 12 August 2009

VMware moves into the Application Layer

SpringSource acquired by VMware

I see this as a significant, if risky, move by VMware. I believe it is aimed at VMware’s ambitions in the Cloud: if you want to play in that field beyond the nuts and bolts of the Hypervisor, then applications and their portability are significant elements. To that end, this is as much a response to Oracle’s acquisition of Sun as it is to Microsoft, but it does move VMware beyond being a pure virtualisation player. I do not believe that SpringSource will be re-branded, or that we will see significant changes immediately, but I anticipate that next year the developer platform will be pushed as the platform for the cloud. Success will depend on how Microsoft, IBM, and Oracle respond, and on how many developers switch to it rather than staying with .NET, BEA, or WebSphere.

Wednesday 7 January 2009

Windows Server 2008

Windows Server 2008 is a surprisingly diverse product, with some small new enhancements that could easily be overlooked, and some large, high-profile additions that Microsoft is certainly not allowing anybody, including the media, to overlook. However, what is the balanced view of Windows Server 2008?
Firstly, it has an inbuilt Hypervisor; a Hypervisor enables the virtualisation of commodity server hardware so that it can support the execution of multiple Virtual Machines. Hyper-V, as it is known, is not the most technically advanced Hypervisor on the market (that award goes to VMware), but it is a very good basic Hypervisor with a couple of interesting features: the ability to execute a Xen-based Virtual Machine, and the concept of synthetic device drivers. Synthetic device drivers are the new high-performance device drivers available with Hyper-V; rather than emulating an existing hardware device, Microsoft exposes a new virtual device that has been designed for optimal performance in a virtualised environment.
One of the smaller and easier-to-overlook features of Windows Server 2008 is the ability to set a finer-grained password policy. This may sound dull, but consider the IT department supporting ‘C’-level executives who do not necessarily have the time, or inclination, to maintain a complex 10-character alphanumeric password that must be changed every 30 days. This finer control allows such users to have different rules from, say, a database administrator, enabling IT to ensure that password policies are designed appropriately for the role and purpose of each account.
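
In Active Directory these finer-grained rules are implemented as Password Settings Objects (PSOs). Purely for illustration, the sketch below creates a relaxed PSO for a hypothetical Executives group using the third-party Python ldap3 library; every DN, credential, and policy value is an invented example (at the time, most administrators would have used ADSI Edit instead):

```python
# Illustrative sketch: creating a Password Settings Object (PSO).
# All DNs, credentials, and policy values are invented for the example.
from ldap3 import Server, Connection

DAY = 24 * 60 * 60 * 10_000_000   # one day in 100-nanosecond intervals
MINUTE = 60 * 10_000_000

server = Server("dc1.example.com")  # hypothetical domain controller
conn = Connection(server, user="EXAMPLE\\admin", password="...",
                  auto_bind=True)

# PSOs live in the Password Settings Container under the System container.
dn = ("CN=ExecPasswordPolicy,CN=Password Settings Container,"
      "CN=System,DC=example,DC=com")

conn.add(dn, "msDS-PasswordSettings", {
    "msDS-PasswordSettingsPrecedence": 10,    # lower value wins on conflict
    "msDS-MinimumPasswordLength": 6,          # relaxed for executives
    "msDS-PasswordComplexityEnabled": "FALSE",
    "msDS-PasswordHistoryLength": 4,
    "msDS-PasswordReversibleEncryptionEnabled": "FALSE",
    "msDS-MinimumPasswordAge": -1 * DAY,      # negative = interval format
    "msDS-MaximumPasswordAge": -90 * DAY,     # 90 days instead of 30
    "msDS-LockoutThreshold": 10,
    "msDS-LockoutObservationWindow": -30 * MINUTE,
    "msDS-LockoutDuration": -30 * MINUTE,
    # Apply the policy to the (hypothetical) executives group.
    "msDS-PSOAppliesTo": "CN=Executives,OU=Groups,DC=example,DC=com",
})
print(conn.result)
```
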
The other big feature of Windows Server 2008 is the introduction of Server Core, a stripped-down operating system. According to Microsoft, it requires up to 40% fewer patches and occupies significantly less disk space than the full Windows Server 2008. I consider this a major advancement, which will enable organisations to install Server Core on systems such as file and print servers, reducing the maintenance required and hence the operational cost.
Other features worthy of a mention at this stage include: role-based installation of features; simplified clustering using the wizard concept; read-only domain controllers; a modified boot process that brings the firewall up earlier and so reduces the window of vulnerability; and Network Access Protection (NAP), which allows a health policy to be set for anything connected to the network.

Tuesday 4 November 2008

Moving from Old to New

Many analysts predict the future of IT, and we are all talking about what technologies will shape that future; but back in the real world, CIOs are trying to deliver a service to their users while relying, for the most part, on a mixed bag of technologies.

I believe that in the next few years many organisations will reach a tipping point where they must make the leap and implement new technology, which could create problems for their older technologies. I therefore think a time will come when organisations consider complete replacement using an Infrastructure as a Service (IaaS) approach, and that will be the beginning of the cloud revolution.

However, do not worry: that time is a few years away yet, and may not arrive for 10 years. But, like the Internet, it is coming, so do not bury your head in the sand; be prepared and start planning now for a brave new IT world.

Monday 3 November 2008

Clouds

Dell filed an application to trademark the term ‘cloud computing’, but in August the US Patent and Trademark Office (USPTO) issued a ruling that denied the company's claim to the term, while leaving Dell the option of appealing.

Over recent weeks a number of high-profile announcements about cloud initiatives have circulated, and at first sight this may suggest that cloud computing has broken through from concept to reality in the enterprise; but is this just more hype, or are we at the dawn of a new era in computing?
Firstly, the concept of cloud is not universally categorised, so all the rhetoric needs to be carefully evaluated. I describe ‘cloud computing’ as the ability to deliver IT as a collection of services to a wide range of customers over the Web. I further refine this definition as either internal – IT within an organisation’s firewall making its resources available as a cloud to its customers – or external – where a service provider supplies IT capability to customers via the Internet, either as a top-up to existing IT resources or as a complete solution, thereby making the server-less organisation a reality.
One of these announcements was that IBM is investing US$300M in 13 new data centres worldwide aimed at providing Disaster Recovery (DR) capabilities. This new initiative was described as a cloud computing solution for DR; it provides backups of data on servers that can then be quickly accessed to rapidly restore lost files. This solution can be seen as IBM leveraging its 2007 acquisition of Arsenal Digital Solutions – a provider of managed data-protection services.
However, while the IBM announcement uses the term cloud, it does not really deliver a cloud solution; it merely offers a single cloud-based service, which I believe represents the current state of the market: single cloud solutions aimed at particular niche deployments. In July, HP, Intel, and Yahoo announced that they are working together to deploy a global test-bed of six data centres for the open development and testing of solutions to the challenges cloud computing will present. This is on a larger scale than the Google and IBM announcement of October 2007, in which two data centres dedicated to cloud research were set up.
I consider the concept of cloud computing to represent the future of how IT will be delivered to its customers, but I believe that many issues remain, and I applaud the efforts of HP, Intel, Yahoo, IBM, Google, and Microsoft in providing platforms for developers and researchers to work on the challenges. I consider one of the most fundamental challenges to be how the services and their delivery will be managed and, more importantly, charged for. This is just one example of the sort of practical question that comes to mind when you start to consider how the cloud concept can be used.

I expect many more announcements of cloud solutions like the IBM DR one over the coming years. This, I believe, will create confusion in the market similar to that when virtualisation first appeared; but as research turns to solutions, the scope of these announcements will increase from single solutions to more enterprise-ready offerings. However, the marketing hype may already have created a high level of scepticism among end-user organisations, which will need to be convinced that the cloud has arrived and is fit for purpose in commercial deployments.

Monday 6 October 2008

V IS THE WORD

Last week at VMworld, Paul Maritz, CEO of VMware, set out his company’s vision for the future of virtualisation. In his keynote speech he positioned VMware’s approach around three key elements: firstly, the concept of a Virtual Data Centre OS (VDC-OS); secondly, a move towards the concept of vCloud; and finally, a repositioning of VMware – changing the message from having a server virtualisation heritage to emphasising that it began as a pioneer of client virtualisation – and the launch of its vClient initiative. Supporting this vision was a rebranding, with all products and services now being prefixed with a ‘v’.

For many years analysts have been calling VMware an OS vendor, but it refused to accept the label, arguing that it considered itself a virtualisation vendor and not a direct competitor of Microsoft’s, but rather a complementary technology. However, the announcement of the VDC-OS demonstrates a move towards a ‘virtual mainframe’ principle. Effectively, VDC-OS will enable large resource pools to be created from a collection of commodity hardware, which can consist of storage, servers, or network devices. It is argued that these large computing resource pools will provide the scalability organisations need to execute a collection of applications that delivers a business service as a single Virtual Machine (VM).
The VDC-OS is a framework with three main components. The interface to the hardware, called vInfrastructure Services, includes vCompute, vStorage, and vNetwork, all capabilities designed to abstract the resources so they can be pooled. The second component is Application vServices, which addresses the changing nature of an application’s relationship to an OS. The final component is the management layer, for which VMware has renamed Virtual Centre (VC) to vCentre. The VDC-OS represents the evolution of VMware’s Virtual Infrastructure (VI) solution, and it is anticipated that many of the capabilities required to make VDC-OS a reality will be available by early 2009.
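
Purely as a conceptual illustration of this pooling idea (not VMware’s implementation; all names and numbers below are invented for the example), commodity hosts contribute capacity to a single aggregate pool from which VM-sized allocations are then carved:

```python
# Conceptual sketch of the resource-pooling idea behind VDC-OS.
# Host names and capacities are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpus: int        # physical cores contributed to the pool
    memory_gb: int   # physical memory contributed to the pool

class ResourcePool:
    """Aggregates commodity hosts into one logical pool of capacity."""

    def __init__(self, hosts):
        self.cpus = sum(h.cpus for h in hosts)
        self.memory_gb = sum(h.memory_gb for h in hosts)

    def allocate_vm(self, cpus, memory_gb):
        """Carve a VM-sized slice out of the pooled capacity."""
        if cpus > self.cpus or memory_gb > self.memory_gb:
            return False  # pool exhausted
        self.cpus -= cpus
        self.memory_gb -= memory_gb
        return True

pool = ResourcePool([Host("rack1-a", 8, 32), Host("rack1-b", 8, 32),
                     Host("rack2-a", 16, 64)])
print(pool.allocate_vm(cpus=4, memory_gb=16))  # True: capacity available
```
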
The vCloud initiative consists of three approaches: the first is a program for service providers that will enable them to construct cloud solutions to offer to their customers; the second will eventually be a product that enterprises can purchase to construct their own internal clouds; and the third is a set of APIs that will allow the on-demand allocation of resources between internal and external clouds. However, I consider that before vCloud becomes widely adopted a number of key issues need to be addressed, not least of which is detail on how the services will be licensed and managed.
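
Again purely as an illustration of the third element, a simple placement policy might try the internal cloud first and burst to the external provider only when internal capacity runs out. The sketch below reuses the hypothetical ResourcePool and Host classes from the previous example; a real vCloud API would also have to address authentication, quotas, and the licensing and charging questions raised above:

```python
# Illustrative "cloud bursting" policy: prefer the internal cloud and
# overflow to an external provider. Builds on the ResourcePool sketch above.
def place_vm(internal: ResourcePool, external: ResourcePool,
             cpus: int, memory_gb: int) -> str:
    if internal.allocate_vm(cpus, memory_gb):
        return "internal"   # behind the firewall, preferred
    if external.allocate_vm(cpus, memory_gb):
        return "external"   # burst out to the service provider
    return "rejected"       # neither cloud has the capacity

internal = ResourcePool([Host("dc-a", 8, 32)])
external = ResourcePool([Host("provider", 64, 256)])
print(place_vm(internal, external, cpus=16, memory_gb=64))  # "external"
```
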
The last major thread of VMware’s future plans is a shift in emphasis away from server virtualisation towards desktop, or client, virtualisation with its vClient initiative. The objective is to enable end-users to connect from any device and receive their personal desktop environment. The main component of this plan is the development of a client-side, bare-metal Hypervisor that controls the protocols used, so that the end-user experience is tailored to the device and the connectivity the user has.

I believe that VMware has been very bold in making clear how it sees its future, but one has to wonder how much of Paul Maritz’s keynote was aimed at Wall Street and at allaying investors’ fears about future revenue streams. This roadmap of how VMware plans to grow the business and deal with the threat of Microsoft’s entry into the market gives that audience what it needs, but is the technology mature enough to support this radical shift in the data centre?
