Cloud Computing

Heroku Versus AppEngine and Amazon EC2 – Where Does It Fit In?

I’ve just had a really pleasant experience looking at Heroku – the ‘cloud application platform’ from Salesforce.com – but it’s left me wondering where it fits in.

A mate of mine who works for Salesforce.com suggested I look at Heroku after I told him that I’d had some good and bad experiences with Google’s AppEngine and Amazon’s EC2. I’d been looking for somewhere to host some Python code I’d written in my spare time, and I’d found pros and cons with both AppEngine and EC2.

As it turns out it was a good suggestion, because Heroku’s approach is very good for a spare-time developer like me. That’s not to say that it’s only an entry-level environment – I’m sure it will scale with my needs – but getting up and running with it is very easy.

Having had some experience of the various platforms, I’m wondering where Heroku fits in. My high-level thoughts…

Amazon’s EC2 – A Linux prompt in the sky

Starting with EC2: I found it the simplest concept to get to grips with, but by far the most complex to configure. For the uninitiated, EC2 provides you with a machine instance in the cloud, which is a very simple concept to understand. Every time you start a machine instance you effectively get a Linux prompt, of varying degrees of power and capacity, in the sky. What this means is that you have to manually configure the OS, database, web infrastructure, caching, etc. This is excellent in that it gives unrivalled flexibility – and after all, we’ve all had to configure our development and test environments anyway, so we should understand the technology.
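To make that concrete, here’s a minimal sketch of what ‘starting a machine instance’ looks like programmatically, using the boto Python library (which isn’t mentioned above – the region, AMI ID and key pair below are just placeholders, not real values):

```python
# Minimal sketch: launching a bare EC2 instance with boto (Python).
# The region, AMI ID and key pair are placeholders, not real values.
import boto.ec2

conn = boto.ec2.connect_to_region("eu-west-1")   # AWS credentials come from the environment
reservation = conn.run_instances(
    "ami-xxxxxxxx",              # placeholder Linux AMI
    instance_type="t1.micro",    # smallest, cheapest instance class
    key_name="my-keypair",       # placeholder SSH key pair
)
instance = reservation.instances[0]
print(instance.id, instance.state)
# From here on, the OS, database, web server, caching, etc. are all yours to configure.
```

Everything above the bare operating system is then yours to install and configure once the instance is running.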

But imagine that you’ve architected your system to have multiple machines hosting the database, multiple machines processing logic and multiple web servers managing user load; you have to configure each of these instances yourself. This is non-trivial, and if you want to be able to flexibly scale each of the machine layers then you own that problem yourself (although there are after-market solutions to this too).

But what it does mean is that if you’re taking a system that is currently deployed on internal infrastructure and deploying it to the cloud, you can mimic the internal configuration in the cloud. This in turn means that the application itself does not necessarily need to be re-architected.

The sheer amount of additional infrastructure that Amazon makes available to cloud developers (queuing, cloud storage, MapReduce farms, caching, etc.), coupled with their experience of managing both the infrastructure and the associated business models, makes Amazon an easy choice for serious cloud deployments.

Google AppEngine – Sandbox deployment dumbed down to the point of being dumb?

So I’m a fan of Google, in the same way that I might say I’m a fan of oxygen. It’s omnipresent, and it turns out that it’s easier to use a Google service than not – for pretty much all of Google’s services. They really understand the “giving crack cocaine free to school kids” model of adoption. They also like Python (my drug of choice), and so using AppEngine was a natural choice for me. AppEngine presents you with an abstracted view of a machine instance that runs your code and supports Java, Python or Google’s new Go language. With such language restrictions it’s clear to see that, unlike EC2, Google is presenting developers with a cosseted, language-aware, sand-boxed environment in which to run code. The fact that Google tunes the virtual machines to host and scale code optimally is, depending on your mindset, either a very good thing or close to being the end of the world. For me, not wanting, knowing how to, or needing to push the bounds of the language implementation, I found the AppEngine environment intuitive and easy. It’s Google, right?

But some of the Python restrictions, such as not being able to use modules that contain C code, are just too restrictive. Google also doesn’t present the developer with a standard SQL database interface, which adds another layer of complexity as you have to use Google’s High Replication Datastore. Google would argue, with some justification I’m sure, that you can’t use a standard SQL database in an environment where the infrastructure that happens to be running your code at any given moment could be anywhere in Google’s data centres worldwide. But it meant that my code wouldn’t port without a little bit of attention.
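To give a feel for the ‘little bit of attention’ involved, here’s a minimal sketch of persistence via the App Engine Python SDK’s datastore models instead of SQL tables – the Note model below is purely illustrative, not taken from my actual code:

```python
# Minimal sketch: persisting data with the App Engine Python datastore API
# instead of a SQL table. The Note model is purely illustrative.
from google.appengine.ext import db

class Note(db.Model):
    title = db.StringProperty(required=True)
    body = db.TextProperty()
    created = db.DateTimeProperty(auto_now_add=True)

def save_note(title, body):
    note = Note(title=title, body=body)
    note.put()                      # roughly the equivalent of an INSERT
    return note.key()

def recent_notes(limit=10):
    # Roughly the equivalent of SELECT ... ORDER BY created DESC LIMIT 10
    return Note.all().order("-created").fetch(limit)
```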

The other issue I had with Google is that the pricing model works from quotas for various internal resources. Understanding how your application is likely to use these resources and therefore arriving at a projected cost is pretty difficult. So whilst Google has made getting code into the cloud relatively easy, it’s also put in place too many restrictions to make it of serious value.

Heroku – Goldilocks’ porridge: too hot, too cold or just right?

It would be tempting, and not a little symmetrical, to place Heroku squarely between the two other cloud platforms above. And whilst that is sort of where it fits in my mind, it would also be too simplistic. Heroku does avoid the outright complexity of EC2 and seems to avoid some of the terminal restrictions of AppEngine (although it’s early days). But the key difference with EC2 lies in how Heroku manages Dynos (Heroku’s name for an executing instance). To handle scale and to maximise use of its own resources, Heroku runs your code only while it is actually being executed; after that, the code, the machine instance and any data it contained are forgotten. This means that things like a persistent file system, or having a piece of your code always running, cannot be relied upon.

These problems are pretty easily surmountable. Amazon’s S3 can be used as a persistent file store, and Heroku apps can also launch a worker process that can be relied upon not to be restarted in the same way as the web Dyno processes.
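For example, a file that would otherwise be written to the Dyno’s vanishing local disk can be pushed to S3 instead. A minimal sketch using the boto library (the bucket name and file paths are placeholders):

```python
# Minimal sketch: using Amazon S3 as the persistent file store for a Heroku app.
# The bucket name and file paths are placeholders.
import boto

conn = boto.connect_s3()                        # AWS credentials come from the environment
bucket = conn.get_bucket("my-heroku-app-files")

# Write: push a locally generated file up to S3 before the Dyno forgets it
key = bucket.new_key("reports/output.csv")
key.set_contents_from_filename("/tmp/output.csv")

# Read: pull it back down later, possibly on a completely different Dyno
bucket.get_key("reports/output.csv").get_contents_to_filename("/tmp/output.csv")
```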

Scale is managed intelligently by Heroku in that you simply increase the number of web and worker processes that your application has access to – obviously this also has an impact on the cost. Finally, there is an apparently thriving add-on community that provides (at additional monthly cost) access to caching, queuing and in fact any type of additional service that you might otherwise have installed for free on your Amazon EC2 instance.
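To show roughly how the worker side hangs together, here’s a minimal sketch of a long-running worker process. The file name and the simple polling loop are just illustrative – a real app would more likely pull jobs from one of the queuing add-ons:

```python
# worker.py – minimal sketch of a long-running Heroku worker process.
# It would be declared in the app's Procfile as:   worker: python worker.py
# and scaled independently of the web processes, e.g. `heroku ps:scale worker=2`.
import time

def process_pending_work():
    # Illustrative placeholder: in practice this would pull jobs from a queue
    # add-on and do the slow work that the web Dynos shouldn't block on.
    print("checking for work...")

if __name__ == "__main__":
    while True:
        process_pending_work()
        time.sleep(5)   # simple polling interval, purely for the sketch
```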

Conclusion

I guess the main conclusion of this simple comparison is that whilst Heroku does make deploying web apps simple, you can’t simply take code already deployed on internal servers and git push it to Heroku. Heroku forces you to think about the interactions your application will have with its new deployment environment, because if it didn’t, your app wouldn’t scale. This is also true of Google’s AppEngine, but the restrictions that AppEngine places on the type of code you can run make it of limited value to my mind. These restrictions do not appear to be there with Amazon EC2. You can simply take an internally hosted system and build a deployment environment in the cloud that mimics the current environment. But at some point down the line, you’re going to have to think about making the code a better cloud citizen. With EC2, you’re simply able to defer the point of re-architecture. And the task of administering EC2 is a full-time job in itself and should not be underestimated. Heroku is amazingly simple by comparison.

Anyway, those are my top-of-mind thoughts on the relative strengths and weaknesses of the different cloud hosting solutions I’ve personally looked at. Right now I have to say that Heroku really does strike an excellent balance between ease and capability. Worth a look.

Danny Goodall


Cloud gives ESBs a new lease of life

ESBs (Enterprise Service Buses) have become the cornerstone of many SOA deployments, providing a reliable and flexible integration backbone across enterprises. Now the Cloud Computing model has given ESBs a new lease of life as the link between the safe, secure world behind the firewall and the great unknown of the Cloud.

As ESB vendors look for more reasons for users to buy their products, the Cloud model has emerged at just the right time. Companies looking to take advantage of Cloud Computing quickly discover that because of key inhibitors like data location, they are forced to run applications that are spread between the Cloud and the Enterprise. But the idea of hooking up the safe, secure world of the enterprise, hiding behind its firewall, and the Cloud which lies out in the big, wide and potentially hostile world is frightening to many. Step forward the ESB – multi-platform integration with security and flexibility, able to hook up different types of applications and platforms efficiently and securely.

More and more ESB vendors are now jumping on the ‘Cloud ESB’ bandwagon. Cast Iron, now part of IBM, made a great name for itself as the ESB for hooking up Salesforce.com with in-house applications; Open Source vendor MuleSource has been quick to point to the advantages of its Mule ESB as a cost-effective route to cloud integration; and Fiorano has nailed its flag to the Cloud mast too, with some notable successes. Recently, for instance, Fiorano announced that Switzerland’s Ecole hôtelière de Lausanne (EHL) had adopted the Fiorano Cloud ESB to integrate 70 on-premise applications with its Salesforce.com CRM system.

Over the next few months, we expect to see a growing number of these ‘cloud ESB’ implementations as more companies realize the potential benefits of combining ESBs and Cloud.


If you want to fly in the Cloud, check the exits first

While Cloud adoption may be very cautious for core business systems, desktop clouds have seen a high take-up. But if you want to fly in the Cloud, you really should check your nearest emergency exit before you take off.

The cost advantages of putting all your desktop files and storage into the cloud are very persuasive, not to mention the attractions of access anywhere. But as Lustratus has pointed out before, there is a concern here. There are LOTS and LOTS of cloud suppliers – not unexpected when a new and radical idea comes along. But remember the real problem with cloud from the supplier’s side: the supplier has to put in all the investment in infrastructure up front, and then receives income in small per-user usage charges. This might look like a great plan on a five-year basis with a rapidly expanding user base, but when year one or year two coincides with a period of tighter credit conditions it is easy to get over-extended. Look at G.host, for instance, which went bust two or three months ago because it found its cloud was no longer economical. Not a nice situation for all the people who had files and data living in it, although to be fair they got reasonable warning from the company.

The sensible thing to do is check the escape routes before you go in. Perhaps your cloud vendor will be fine, growing into the market leader with oodles of cash to invest in new infrastructure to sustain the huge number of users, but just maybe it might be one of the ones that doesn’t make it. Look at your back-up procedures, and put in place an emergency plan to avoid any disruption if the worst happens. And make sure above all that you do your due diligence before selecting your cloud.


IBM acquires Cast Iron

I am currently at IBM’s IMPACT show in Las Vegas, where the WebSphere brand gets to flaunt its wares, and of course one of the big stories was IBM’s announcement that it has acquired Cast Iron.

While Cast Iron may only be a small company, the acquisition has major implications. Over the past few years, Cast Iron has established itself as the prime provider of Cloud-to-Cloud and Cloud-to-on-premise integration, with a strong position in the growing Cloud ecosystem of suppliers. Cast Iron has partnerships with a huge number of players in the Cloud and application package spaces, including companies such as Salesforce.com, SAP and Microsoft, and so IBM is not just getting powerful technology but is also, in one move, taking control of the linkage between Cloud and anything else.

On the product front, the killer feature of Cast Iron’s offering is its extensive range of pre-built integration templates covering many of the major Cloud and on-premise environments. So, for example, if an organization wants to link invoice information in its SAP system with the Salesforce.com sales force environment, then the Cast Iron offering includes prepared templates for the required definitions and configurations. The result is that the integration can be set up in a matter of hours rather than weeks.

So why is this so important? Well, for one, most people have already realized that Cloud usage must work hand-in-hand with on-premise applications, based on such things as security needs and prior investments. On top of this, different clouds will serve different needs. So integration between clouds and applications is going to be a fact of life. IBM’s acquisition propels it to the forefront of this area, in both technology and partner terms.

But there is a more strategic impact of this acquisition too. No one knows what the future holds, or how the Cloud market will develop. Think of the situation of mainframes and distributed solutions. As the attractions of distributed systems grew, doomsayers were quick to predict the end of the mainframe. However, IBM developed a powerful range of integration solutions in order to allow organizations to leverage the advantages of both worlds WITHOUT having to choose one over the other. This situation almost feels like a repeat – Cloud has a lot of advantages, and some misguided ‘experts’ think that Cloud is the beginning of the end for on-premise systems. However, whether you believe this or not, IBM has once again ensured that it has a running start in providing integration options, so that users can continue to gain value from both cloud and on-premise investments.

Steve


Platform Computing takes on the Cloud

I was on a call this week with Platform Computing, a well-known software vendor in the high-performance computing (HPC) world of grids and clusters that is now trying to make the leap to the Cloud Computing market.

Platform Computing has a strong reputation in the HPC world, selling software that helps manage these multi-processing environments, but it is keen to expand its market coverage and open up new opportunities in more general areas of IT, and it has selected the Cloud Computing marketplace to help it achieve these diversification aims. At first, this may seem odd, but a little thought quickly shows that this is not nearly as big a leap for Platform as it might at first seem. After all, internal clouds almost always involve virtualization, and handling the management needs of a virtualized environment is very much up Platform Computing’s street.

But for me, the real nugget that came out of this briefing was an interesting distinction that helps improve understanding of Cloud Computing and its relationship to virtualization. I meet a growing number of people who have heard about Cloud but do not see the distinction between Cloud and virtualization. While there are a number of ways to look at this distinction, as I discussed in my Executive Overview to Cloud (which Lustratus offers at no charge from its web store), the discussions with Platform brought up another one that I think is an interesting take. The Platform position is that virtualization solutions by definition only make virtualized resources available for use. Its Cloud management software differentiates itself from virtualization by offering heterogeneous access to resources – that is, Cloud-based access to resources that have already been virtualized AND ones that haven’t. I think this is a useful distinction to keep in mind when looking at data centre strategies.

Steve


Cloud computing – balancing flexibility with complexity

In the “Cloud Computing without the hype – an executive guide” Lustratus report, available at no charge from the Lustratus store, one of the trade-offs I touch on is flexibility against complexity.

To be more accurate, flexibility in this case refers to the ability to serve many different use cases as opposed to a specific one.

This is an important consideration for any company looking to start using Cloud Computing. Basically, there are three primary Cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). In really simple terms, an IaaS cloud provides the user with virtual infrastructure (eg storage space, servers, etc), PaaS offers a virtual platform where the user can run home-developed applications (eg a virtual server with an application server, database and development tools), and SaaS provides access to third-party supplied applications running in the cloud.

Deciding which is the most appropriate is often a trade-off. The attraction of SaaS is that it is a turn-key option – the applications are all ready to roll, and the user just uses them. This is pretty simple, but the user can only use the applications supplied. There is no ability to build new applications to do other things. Hence this approach is specific to the particular business problem addressed by the packaged application.

PaaS offers more flexibility of usage. A user builds the applications that will run in the cloud, and can therefore serve many different business needs. However, this requires a lot of development and testing work, and flexibility is restricted by the pre-packaged platform and tools offered by the PaaS provider. So, if the platform is WebSphere with DB2, and the user wants to build a .NET application for Windows, then tough.

IaaS offers the most flexibility, in that it effectively provides the infrastructure pieces and the user can then use them in any way necessary. However, of course, in this option the user is left with all the work. It is like being supplied with the raw hardware and having to develop all the necessary pieces to deliver the project.

So, when companies are looking at their Cloud strategies, it is important to consider how to balance this trade-off between complexity/effort and flexibility/applicability.

Steve


Introducing Cloud for Executives

At Lustratus we have been doing a lot of research into Cloud Computing, as have many firms.

I must confess the more I have dug into it, the more horrified I have become at the hype, confusion, miscommunication and manipulation of the whole Cloud Computing concept.

In the end, I decided the time was right for an Executive Guide to Cloud – defining it in as simple terms as possible and laying out the Cloud market landscape. Lustratus has just published the report, entitled “Cloud Computing without the hype; an executive guide” and available at no charge from the Lustratus store. Not only does the paper try to lock down the cloud definitions, but it also includes a summary of some 150 or so suppliers operating in the Cloud Computing space.

The paper deals with a number of the most common misunderstandings and confusions over Cloud. I plan to do a series of posts looking at some of these, of which this post is the first. I thought I would start with the Private Cloud vs Internal Cloud discussion.

When the Cloud Computing model first emerged, some were quick to try to define Cloud as a public, off-premise service (eg Amazon EC2), but this position was quickly destroyed as companies worldwide realized that Cloud Computing techniques were applicable in many different on- and off-premise scenarios. However, there has been a lot of confusion over the terms Private Cloud and Internal Cloud. The problem here is that analysts, media and vendors have mixed up discussions about who has access to the Cloud resources, and where the resources are located. So, when discussing the idea of running a Cloud onsite as opposed to using an external provider such as Amazon, people call the external one a Public Cloud and the onsite one an Internal Cloud or Private Cloud.

This is the root of the problem: it makes people think that a Private Cloud is the same as an Internal Cloud, and the two terms are often used interchangeably. However, these two terms cover two different Cloud characteristics, and it is time the language was tightened up. Clouds may be on-premise or off-premise (Internal or External), which refers to where the resources are. (Actually, this excludes the case where companies are running a mix of clouds, but let’s keep things simple.) The other aspect of Cloud usage is who is allowed to use the Cloud resources. This is a Very Important Question for many companies, because if they want to use Cloud for sensitive applications then they will be very worried about who else might be running alongside them in the same cloud, or who might get to use the resources (eg disk space, memory, etc) after they have been returned to the cloud.

A Public Cloud is one where access is open to all, and therefore the user has to rely on the security procedures adopted by the cloud provider. A Private Cloud is one that is either owned or leased by a single enterprise, giving the user the confidence that information and applications are locked away from others. Of course, Public Cloud providers will point to sophisticated security measures to mitigate any risk, but this can never feel as safe to a worried executive as ‘owning’ the resources.

Now, it is true that a Public Cloud will always be off-premise, by definition, and this may be why these two Cloud characteristics have become intertwined. However, a Private Cloud does not have to be on-premise – for example, if a client contracts with a third party to provide and run an exclusive cloud which can only be used by the client, then this is a Private Cloud but it is off-premise. It is true that USUALLY a Private Cloud will be on-premise, and hence equate to an Internal Cloud, but the two terms are not equal.

The best thing any manager or exec trying to understand the company’s approach to cloud can do is to look at these two decisions separately: do I want the resources on or off premise, and do I want the resources exclusively for my own use, or am I prepared to share? It is a question of balancing risk against the greater potential for cost savings.

Steve


