There are not yet any cookie-cutter best practices or enabling technologies for moving applications to the cloud.
Enterprises are migrating to the cloud in big ways these days. However, the number of moving parts leaves many people in IT a bit perplexed — and fearful that they could be making major mistakes.
The reality is that cloud migration is new, so best practices and enabling technologies have yet to emerge. Moreover, there is the added complexity of DevOps, big data, and the Internet of Things. How the heck do you fit those in too?
Having done a ton of these migrations in the last few years, I can give you basic advice about how to manage the complexities of migrating to the cloud. Use these tips to get a jump start. And make sure your very first step is to learn all you can about cloud migration in general.
1. Think stepwise. Many enterprises go from 0 to 100 mph when looking to migrate applications and data. This leads to too many moving parts and not enough time to recover from mistakes. As a result, at least some of your massive migration project will fall on its face.
Instead, do things in order. Assemble a comprehensive plan that takes the necessary time to put the applications in the correct priority order, then migrate them as chunks of applications and data.
Follow that approach whether you are doing a simple “lift and shift” type of migration or a complete refactoring of the applications and the data.
2. Think security and governance. These two considerations should be systemic to the applications and data, so you must address them in each workload moved to the cloud. What are the security requirements? What are the compliance requirements? How do you manage cloud services, so they can be reused in other applications?
3. Watch for performance issues. Performance problems tend to become known at the time of deployment — not when you typically want to make fixes.
To find performance problems before you go live, make sure to test as you go. You’ll find that chatty applications (those that require a great deal of data exchange with public clouds) can introduce issues. You’ll have to refactor the application to fix the issues. Refactoring per se is not a problem, but you have to set aside the time to do it before the migration, not after.
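The batching fix for a chatty application can be sketched in a few lines. Everything here is illustrative: `RemoteStore` is a hypothetical stand-in for a cloud-hosted data service, and the round-trip counter simulates network calls, so the only real point is the difference in trip counts.

```python
# Sketch: refactoring a chatty access pattern into a batched one.
# `RemoteStore` is a hypothetical stand-in for a cloud-hosted data service;
# each method call simulates one network round trip.

class RemoteStore:
    def __init__(self, data):
        self.data = data
        self.round_trips = 0  # count simulated network calls

    def get(self, key):
        # Chatty: one round trip per key requested.
        self.round_trips += 1
        return self.data[key]

    def get_many(self, keys):
        # Batched: one round trip for all keys.
        self.round_trips += 1
        return {k: self.data[k] for k in keys}

store = RemoteStore({"sku1": 10, "sku2": 4, "sku3": 7})

# Chatty version: three round trips across the network to the public cloud.
chatty = {k: store.get(k) for k in ("sku1", "sku2", "sku3")}

# Refactored version: the same data in a single round trip.
batched = store.get_many(("sku1", "sku2", "sku3"))

print(store.round_trips)  # → 4 (three chatty calls plus one batched call)
```

Over a WAN link to a public cloud, each avoided round trip saves a full network latency, which is why this kind of refactoring pays off before go-live rather than after.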
Many enterprises took baby steps toward the cloud just a few years ago. According to 451 Research, 34 percent of enterprises will have 60 percent or more of their applications on a cloud platform within two years.
These days, enterprises are ramping up for the migration of massive amounts of applications to hybrid clouds. Some are new applications, but the majority are more than 10 years old.
So, how do you make the move to the hybrid cloud?
The trick is to select the right applications in the first place. To do that, you must understand what each application does, assess the business case for migration, and then select the right approach. Determining which applications will bring the most value in a move to the cloud should be the starting point. From there, you move on to the technology trade-offs, such as selecting the best way to migrate the application and dealing with important issues such as security, governance, and disaster recovery. In this way, you can modernize your application development processes and technology to make the most of your newly deployed hybrid cloud.
Making the hybrid cloud jump
Before migrating legacy applications to a hybrid cloud, make sure you understand the difference between traditional legacy application architectures and hybrid cloud platforms. Hybrid clouds have at least a private and public cloud side to the architecture, but multicloud architectures include more than just one private and one public cloud.
Hybrid clouds are desirable because they can deliver the best of both the private and public cloud worlds by letting you move workloads back and forth between the two platforms. You can also partition applications so that components can reside on both the public and private cloud.
With a hybrid cloud, enterprises can host the applications and the data on the platform that delivers the optimal mix of cost efficiency, security, and performance. For example, a big data application that requires expensive storage systems may run more cost effectively on a public cloud, while sensitive data or data that needs to be close to the end user for performance reasons remains hosted on a private cloud for risk management reasons.
The trick here is to analyze the source applications with a complete understanding of the types of workloads involved. Armed with this information, you’ll be ready to make an informed decision as to what changes each application requires and what approach will work best when rehosting on a hybrid cloud. You have three options: to lift and shift, partition, or refactor the application.
Lift and shift
In this approach, you move the application and its data without making any significant changes to the application itself. This means picking either the public or private cloud as the destination for the application. You select the best host based on workload requirements.
For instance, let’s say you have an inventory control application, written in Java, that runs alongside a relational database on a Linux system. The application is “chatty” with the user interface, so any network latency will be noticed by the end user. In order to provide the best performance, the application should be lifted and shifted to the private cloud components of the hybrid cloud, where it can run relatively unmodified.
The advantage of the lift and shift method is cost. Since you don’t need to redesign or modify the applications, your costs are limited to moving and testing the application on the private or public cloud.
The disadvantage is that you are still using a single platform—either public or private. You are not taking advantage of the distributed capability of a hybrid cloud by distributing the workload.
Partitioning

To partition an application for a hybrid cloud, you separate the workloads within the application to run on the public and private cloud sides of the hybrid cloud simultaneously. You can usually do this with minimal modifications to the application, minimizing the cost and risk.
Let’s say your inventory control application has a transaction layer that costs less to run on the public cloud, but the data and the user interface run best in the private cloud. You partition the application between your private and public cloud, but the partitions are in constant communication, as though each were running on a single physical system.
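What “constant communication, as though each were running on a single physical system” looks like can be sketched with Python’s standard library alone. The transaction service and its `/commit` endpoint below are hypothetical stand-ins, not a real product API: one process plays the public-cloud transaction layer, and the calling code plays the private-side user interface.

```python
# Sketch: a partitioned application. A small HTTP service stands in for the
# transaction layer on the public cloud; the client code stands in for the
# UI layer on the private cloud, calling it over the network.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class TransactionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Public-cloud partition: process a transaction, return JSON.
        body = json.dumps({"status": "committed", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), TransactionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Private-cloud partition: the UI layer calls the remote transaction layer
# as if it were local, only over a network boundary.
url = f"http://127.0.0.1:{server.server_port}/commit/order-42"
with urlopen(url) as resp:
    result = json.loads(resp.read())

server.shutdown()
print(result["status"])  # → committed
```

The design point is that the seam between partitions is an ordinary network call, which is exactly why network latency and chattiness across that seam decide whether a partition is viable.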
Applications can be partitioned to best take advantage of your hybrid cloud, but to do this effectively you must understand the application in great detail and be prepared for some modifications, as well as testing and deployment.
For these reasons, the cost of partitioning applications is higher than with the lift and shift approach. However, this method makes sense when the cost savings from running the appropriate workload on the appropriate cloud outweigh the costs of modifying the application.
Refactoring

Refactoring an application means doing a complete (or almost complete) rewrite of it to take full advantage of the features of your hybrid cloud. In leveraging so-called “cloud-native” features, you optimize the application to use your underlying private and public cloud resources most effectively. For example, you may want to modify legacy applications in this way to increase performance.
Typically, you access these features in layers. These include the topmost virtual platform or operating system; underlying resources, such as storage and data; and cloud-native services, such as provisioning and tenant management.
Refactoring also lets you access the native features of public or private cloud services to provide better performance than non-native features. For example, if you are working with an input/output (I/O) system that works with an auto scaling and load-balancing feature, you can drive this dynamically from within the application.
What’s more, cloud-native applications can use cloud-native features and APIs to more efficiently use underlying resources. You can better manage the application’s impact on the underlying hardware and software and get better performance as a result. Additionally, you can usually reduce the cost for public and private cloud resources.
As with partitioning, you distribute the components of your application between the public and private cloud, but refactoring lets you break the application apart in a much more precise manner. This usually involves rewriting it as sets of services or microservices that can reside on either the public or private cloud, so that you can relocate them as needed to optimize operating cost and performance.
The big trade-off with this approach is cost. Refactoring typically means redesigning and rewriting the application from the ground up. You will need to pay a development team for several months or more while they rework the application, so make sure the benefits outweigh the costs before proceeding.
Business continuity and disaster recovery
You have a couple of options when business continuity is a concern for applications running in a hybrid cloud. You can rely on the hybrid cloud service itself to provide resiliency services, but it’s up to you to ensure that the public cloud provider maintains redundant services as part of its infrastructure, so that outages in primary cloud centers fail over to secondary centers without interrupting application services. You should replicate the same services on the private cloud side as well.
Alternatively, you can build business continuity services directly into the application. This means investing the time and money required to modify the application to provide these services. This might require storing data in two places at the same time, keeping backup instances of the application running at all times, and even distributing the application geographically to avoid geographically oriented disruptions, such as weather events.
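The “storing data in two places at the same time” part can be sketched as a dual-write wrapper. The dict-backed stores here are illustrative stand-ins for storage in two locations; a real implementation would also have to handle partial write failures and reconciliation.

```python
# Sketch: application-level business continuity via dual writes.
# The two dicts stand in for storage in two separate locations.

class DualWriteStore:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def put(self, key, value):
        # Write the data to both locations at the same time.
        self.primary[key] = value
        self.secondary[key] = value

    def get(self, key):
        # Fall back to the secondary copy if the primary is unavailable.
        try:
            return self.primary[key]
        except KeyError:
            return self.secondary[key]

primary, secondary = {}, {}
store = DualWriteStore(primary, secondary)
store.put("order-42", {"qty": 3})

primary.clear()  # simulate losing the primary location
print(store.get("order-42"))  # → {'qty': 3}, served from the secondary copy
```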
Security and governance
You have two primary choices for security when moving legacy applications to a hybrid cloud:
- Leave the application unmodified and retrofit cloud security services around your application and data. This often means restricting access at the machine instance or storage levels. This approach is a “you’re in” or “you’re out” type of security solution.
- Modify the application to use application-level security services, such as those that are entity-based. This allows fine-grained access to application services and data and restricts how the application is used. However, it typically requires modifying the application to leverage that security, such as by using Security Assertion Markup Language (SAML), the open standard for federated identity.
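The second option — entity-based, fine-grained access — can be sketched as a permission check at the application layer. The in-memory `PERMISSIONS` policy and the function names below are illustrative; a real deployment would back this with a federated identity system such as SAML rather than a hard-coded table.

```python
# Sketch: entity-level security inside the application, as opposed to
# coarse "you're in or you're out" access at the machine/storage level.
# PERMISSIONS is an illustrative in-memory policy, not a real identity store.

PERMISSIONS = {
    "alice": {"inventory:read", "inventory:write"},
    "bob": {"inventory:read"},
}

def require(permission):
    """Allow a call only if the user holds the named permission."""
    def decorator(fn):
        def wrapper(user, *args, **kwargs):
            if permission not in PERMISSIONS.get(user, set()):
                raise PermissionError(f"{user} lacks {permission}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require("inventory:write")
def update_stock(user, sku, qty):
    # Only users with write access reach this point.
    return f"{user} set {sku} to {qty}"

print(update_stock("alice", "sku1", 10))  # → alice set sku1 to 10
```

The contrast with the first option is visible in the granularity: here `bob` can still read inventory but any write he attempts is rejected, rather than being locked out of the whole machine instance.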
Much like security, governance can occur at the legacy-application resource or machine level, at the application or service level, and at the data level. However, when you’re implementing governance at the service level, the application may need to be modified. Cost and risk issues come into play again and may change the business case.
Deciding which applications to move

The idea here is to rank legacy applications in terms of importance to the business, ease of moving to the hybrid cloud, and the cost of any changes required in moving the application. Using a ranking system, you can then determine which applications should take priority over others, and even which applications should not be moved or should be replaced instead.
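Such a ranking system can be sketched as a weighted score. The application names, criteria scores (1–5), and weights below are illustrative placeholders that an enterprise would replace with its own assessments.

```python
# Sketch: ranking candidate applications for migration by weighted score.
# Scores (1-5) and weights are illustrative, not real assessments.

apps = {
    "inventory": {"business_value": 5, "ease_of_move": 3, "change_cost": 2},
    "payroll":   {"business_value": 4, "ease_of_move": 1, "change_cost": 5},
    "reporting": {"business_value": 3, "ease_of_move": 5, "change_cost": 1},
}

# Higher business value and ease of migration help; higher change cost hurts.
weights = {"business_value": 0.5, "ease_of_move": 0.3, "change_cost": -0.2}

def score(criteria):
    return sum(weights[name] * value for name, value in criteria.items())

ranked = sorted(apps, key=lambda name: score(apps[name]), reverse=True)
print(ranked)  # → ['inventory', 'reporting', 'payroll']
```

An application that lands at the bottom of the list with a high change cost and low business value is a candidate for staying put or being replaced outright.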
It is important that the people who choose which applications to move to the hybrid cloud properly analyze those applications in terms of benefits, function, architecture, technology, configuration, and so on. They must understand enough about the application to make a proper assessment as to whether and how each application should move to the cloud. Typically, mistakes occur when the enterprise does not take the time to understand the breadth and depth of each legacy application. That leads to improper conclusions.
Now is the time to start looking at your application portfolio as you consider the best ways to leverage your hybrid cloud. Many things will need to change before you can begin, including the application development processes and tools and your organization’s cultural acceptance of public and private clouds.
You have many options when porting to a hybrid cloud, and you can hedge your bets in terms of determining the best location to run the application workloads, whether public or private. The cloud can provide a path to improve performance, security, and resiliency. It’s not easy, but the payoff is worth it. Done right, legacy applications migrated to hybrid clouds run faster and at a lower cost.
Pricing and perceived security will be key differentiators in Oracle’s more direct competition with Amazon Web Services, one analyst says
Oracle set its sights on a bigger piece of the cloud pie with new IaaS services that put it in more direct competition with Amazon Web Services.
The new IaaS services were introduced Tuesday by Thomas Kurian, Oracle’s president of product development, at its OpenWorld show in San Francisco.
First, Oracle Elastic Compute Cloud allows customers to choose between elastic and dedicated compute options. Elastic Compute offers the ability to run any workload in the cloud in a shared compute zone, while the dedicated option adds capabilities such as CPU pinning and complete network isolation.
Compute Cloud supports a variety of operating systems, including Linux and Windows, and features robust monitoring capabilities, Oracle said.
Two new Storage Cloud services, meanwhile, focus on different types of storage. An archive option provides storage for applications and workloads that are accessed infrequently and require long-term retention, with predictable SLAs for data retrieval. A File Storage service, on the other hand, offers file-based NFS v4 network protocol access to both Object Storage and Archive Storage tiers in Oracle Storage Cloud Service.
Oracle Network Cloud is designed to provide secure, high-performance connectivity from customer data centers to the cloud using a software-defined networking approach. Connectivity options include VPN, Oracle Cloud Connect, and network bonding.
A Container Cloud offering allows customers to run applications within Docker containers, which can be easily deployed within the Oracle Compute Cloud.
Finally, Oracle and its partners have certified a number of technology stacks on the Oracle Cloud; they’re available in a standard service catalog to simplify deployment.
The new offerings “really help round out Oracle’s IaaS solutions portfolio,” said Charles King, principal analyst with Pund-IT. They include the kinds of “more or less generic” compute, storage and networking capabilities available on major public clouds, King noted.
At the same time, though, “Oracle is also making good on delivering key parts of its application portfolio via the cloud, including the eBusiness Suite, PeopleSoft and JD Edwards solutions,” he pointed out. “That should stir the particular interest of existing Oracle customers, and could also tempt businesses already considering deployment of those solutions,” King said.
The extent to which Oracle’s new IaaS offerings will compete with AWS, meanwhile, “largely depends on two issues: pricing and perceived security,” King noted.
Oracle customers can already run many or most of their applications on AWS without any additional license fees, so “unless Oracle decides to add a sweetener of some kind to its new services, it may be difficult to tempt customers already using AWS into its own IaaS fold,” he said.
Similarly, “though it’s common for IT vendors to pitch their own cloud platforms as being somehow inherently more secure than public cloud platforms like AWS, that could be a double-edged sword for Oracle,” King suggested. “If the company were to pursue that line, customers might ask why the company was collaborating with AWS in the first place.”