At Re:Invent, Amazon Web Services offers new options for all phases of data in the cloud
Uploading data, ingesting data, getting insights from data — we typically associate all three capabilities with cloud workloads. Amazon’s Wednesday keynote at Re:Invent unveiled new services for doing all of the above — with creative wrinkles all around.
Why sync data over the wire to Amazon, for instance, when you can instead mail it? And given how much data gets socked away in Amazon for analysis, how about a tool aimed at business folks, not IT personnel, for getting value from that data?
AWS Import/Export Snowball
“Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway,” computer scientist Andrew Tanenbaum is reputed to have said. Amazon’s new appliance for migrating data to the cloud takes that notion to heart.
AWS Import/Export Snowball is a refined version of a service Amazon started offering back in 2009, where the user loaded data onto a device of their choosing and shipped it to Amazon with a manifest file. With Snowball, Amazon automates the process by providing the hardware to be loaded and streamlining the round-trip process.
The Snowball appliance is a ruggedized, tamper-resistant, network-connected disk array outfitted with a 10Gb network port. The user fills it with up to 50TB of data at a time, then ships it back to Amazon to have the data dumped into an S3 bucket of their choosing. Each job costs $200 per device, with a $15-per-day penalty for taking longer than the allotted 10 days to fill the device and ship it back.
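To put the sneakernet math in perspective, here is a quick back-of-the-envelope sketch in Python. The 50TB capacity, 10Gb port, $200 job fee, and 10-day allowance come from Amazon’s figures; the uplink speeds and the 80 percent sustained-throughput factor are illustrative assumptions, not anything Amazon has published.

```python
# Rough comparison: syncing 50TB over the wire vs. loading a Snowball locally.
# Link speeds and the 80% efficiency factor are assumptions for illustration.
CAPACITY_BITS = 50e12 * 8  # 50TB expressed in bits

def transfer_days(link_bps, efficiency=0.8):
    """Days needed to move 50TB at a given link speed and sustained efficiency."""
    return CAPACITY_BITS / (link_bps * efficiency) / 86400

print(f"100Mbps office uplink: {transfer_days(100e6):.0f} days")       # ~58 days
print(f"1Gbps uplink:          {transfer_days(1e9):.1f} days")         # ~5.8 days
print(f"10Gb port, local load: {transfer_days(10e9) * 24:.0f} hours")  # ~14 hours

def snowball_job_cost(days_held):
    """$200 flat per job, plus $15 for each day beyond the allotted 10."""
    return 200 + max(0, days_held - 10) * 15

print(snowball_job_cost(14))  # $260 if the device goes back four days late
```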
Snowball’s appeal is meant to go beyond convenience, since many of its current and possible future features are aimed at assuring customers their data won’t end up in the wild blue yonder. Not only is the data encrypted at rest on the device, but Amazon can also alert the customer whenever a Snowball job hits specific milestones: “in transit to customer,” “in transit to AWS,” “importing,” and so on. The e-ink status monitor on the front of the box even doubles as a shipping label, and Amazon mentioned the possibility of “other enhancements including continuous, GPS-powered chain-of-custody tracking.”
Amazon Database Migration Service
For those who are comfortable shuttling structured data over the wire incrementally into Amazon’s data centers, the company whipped the drapes off a similar item: Amazon Database Migration Service.
Users of Oracle, MySQL, or Microsoft SQL Server can replicate data from their data centers into the same database engine running in AWS or have it converted on the fly to a different one — Oracle to MySQL, for example. An included schema conversion tool helps ensure the translated data won’t get mangled during the move, and Amazon claims it can suggest parallel ways to implement features that aren’t available on the target platform.
Pricing is calculated by instance-hour for the virtual machine that runs the migration service (starting at 1.8 cents per hour), but data transfers to a database in the same availability zone as the migration instance cost nothing.
Who’s the target audience? Most likely those looking to migrate to Amazon, but on their own terms and their own time. By setting up a path where data is passively replicated in the background, alongside existing business operations, they aren’t stuck in an all-at-once-or-nothing migration to AWS.
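For the programmatically inclined, here is a minimal sketch of what driving the service from boto3 might look like: define a source and a target endpoint, then start a task that does a full load and keeps replicating changes in the background. The endpoint details, replication-instance ARN, and table mappings are placeholders, and the exact parameters could differ from what the service ultimately ships with.

```python
# Hypothetical setup for an ongoing Oracle-to-MySQL migration with boto3.
# Hostnames, credentials, and ARNs are placeholders.
import json
import boto3

dms = boto3.client("dms")

source = dms.create_endpoint(
    EndpointIdentifier="onprem-oracle", EndpointType="source", EngineName="oracle",
    ServerName="db.example.internal", Port=1521,
    Username="migrator", Password="********", DatabaseName="ORCL")

target = dms.create_endpoint(
    EndpointIdentifier="aws-mysql", EndpointType="target", EngineName="mysql",
    ServerName="appdb.abc123.us-east-1.rds.amazonaws.com", Port=3306,
    Username="admin", Password="********", DatabaseName="appdb")

# full-load-and-cdc copies what's there, then keeps replicating changes in the
# background, which is the gradual, on-your-own-time migration described above.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-mysql",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include"}]}))

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication")
```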
Amazon Kinesis Firehose
Amazon’s Kinesis was created to allow AWS customers to capture and work with live data, no matter the source. Its newest wrinkle, Amazon Kinesis Firehose, doesn’t expand on that idea. In fact, it cuts it down.
As the name implies, Firehose is little more than a connector that writes streaming data into S3 or Redshift as it arrives. The only optional processing done on the stream in Firehose is compression or encryption, and the only user-configurable settings are basics such as buffer size and the interval before data is delivered to its target bucket.
What’s interesting about Firehose is that it decouples data gathering from data processing. A user could, for instance, hitch an AWS Lambda function to fire whenever Firehose data arrives in its target S3 bucket. That way, work is done entirely on demand as data arrives, using only as much code as is needed (per the Lambda processing model).
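Here’s a minimal sketch of the Lambda side of that pattern, assuming Firehose has been configured to deliver gzip-compressed, newline-delimited JSON to an S3 bucket whose object-created events trigger the function. The record format and field names are assumptions for illustration, not anything Firehose mandates.

```python
# Lambda handler fired by S3 object-created events as Firehose delivers batches.
# The gzip/JSON-lines format reflects one possible stream configuration.
import gzip
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        text = gzip.decompress(body).decode("utf-8")

        # Work happens only when new data lands; no standing cluster required.
        for line in text.splitlines():
            item = json.loads(line)
            print("processing", item.get("device_id"), item.get("reading"))
```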
Amazon QuickSight
Data in the cloud isn’t much good on its own. Amazon has never lacked for options to collect and store data at scale, but it now offers users a way to derive visualizations and insights from the bits they’ve amassed — through a service hosted right on Amazon.
Amazon’s new business intelligence product, QuickSight, connects to Amazon’s forest of existing database services (RDS, DynamoDB, ElastiCache, Redshift) and analytics systems (EMR, Data Pipeline, Elasticsearch Service, Kinesis, Machine Learning). Once a data source is connected, the user is presented with a UI akin to a simplified version of products like Tableau, with recommendations on which kinds of visualizations might be most appropriate for the selected data set.
Amazon’s cloud products are notoriously unintuitive for nontechnical users, but the interface for QuickSight seems straightforward and uncluttered — after all, it’s meant for the business side of an enterprise rather than IT. Another nod to business users is a promised feature by which data harvested through QuickSight can be accessed via a SQL-like command language, so partner products (right now, Domo, Qlik, Tableau, and Tibco) can eventually make use of QuickSight’s in-memory processing. That said, Amazon will need a straightforward way to hitch Excel to QuickSight, or it will miss out on the single biggest self-service data tool in use in enterprises.
In another appeal to business users, QuickSight will cost $12 per user per month, or $9 per user per month when paid for a year at a time. Up to 10GB of data taken into QuickSight from other systems can be stored for free. However, it’ll be a while before Amazon customers can judge whether this is an improvement over legacy BI — QuickSight isn’t scheduled to launch officially until “early 2016.”
- Published in Cloud Storage
If Amazon Web Services is becoming a nearly ubiquitous technology, what does that mean for the future of data and how companies work with Amazon moving forward?
Amazon has been grabbing headlines this month, from its recent earnings report, which demonstrated a major revenue lift despite an increased discounting strategy, to an exposé on the company’s work culture that caused ripples across the Internet and around the water cooler.
While many people may have bristled reading about working at a company that has been described as “bruising,” if the company’s recent revenue numbers mean anything, working with Amazon Web Services has become almost unavoidable.
According to Arik Hesseldahl at Re/Code, one of the ways that Amazon has been able to grow revenue while also significantly cutting costs on products and services is the rapid rate at which Amazon Web Services is growing. One Amazon executive estimates that “AWS is adding enough capacity to serve a mini-Amazon (when Amazon was the size it was 10 years ago) every single day.” Though Amazon has only released estimates on how many AWS customers it has, it’s safe to assume based on these figures that there are a lot of them.
So, if AWS is becoming a nearly ubiquitous technology, what does that mean for the future of data and how companies work with Amazon moving forward?
1. More companies will be dependent on AWS, both directly and indirectly. While this might sound scary, it doesn’t have to be. AWS gives companies access to some of the most exhaustive user data available, which is why so many organizations have come to rely on it. The key is to make sure that you aren’t too reliant on it, which may require building additional safeguards and backups into your own systems to ensure that you aren’t completely derailed by an AWS disruption. Even if you don’t think you’re using AWS, you may be using it indirectly through another third party, so make sure the vendors you work with are transparent about the resources they’re using.
2. There will be a greater need to adapt existing technologies to work hand in hand with AWS. In order to maximize efficiency and reduce unnecessary redundancies, it may be necessary to adjust your current internal technologies to work more smoothly with AWS. If you’re making a significant investment in AWS, it doesn’t make sense to also be financing an internal tool that does the same thing. Your internal continuous improvement processes should spot these redundancies, which can be corrected relatively easily and cost-effectively through application modernization.
In short, AWS is not going away anytime soon, but that isn’t necessarily a bad thing. The reason AWS has grown into the giant it is today is that it provides tremendous value to organizations, and for many the benefits of working with it outweigh the potential risks of relying too heavily on the technology. The key is finding the right way to work together (including storing local backups) and ensuring that you’re getting the most out of the services.
- Published in Cloud Storage
Sales are growing fastest for companies that ship generic storage hardware to big cloud providers, IDC says
The cloud is where the action is in enterprise storage.
Sales are way up for little-known manufacturers that sell directly to big cloud companies like Google and Facebook, while the market for traditional external storage systems is shrinking, according to research company IDC.
Internet giants and service providers typically don’t use specialized storage platforms in their sprawling data centers. Instead, they buy vast amounts of capacity in the form of generic hardware that’s controlled by software. As users flock to cloud-based services, that’s a growing business.
Revenue for original design manufacturers that sell directly to hyperscale data-center operators grew by 25.8 percent to more than $1 billion in the second quarter, according to the latest global IDC report on enterprise storage systems. Overall industry revenue rose just 2.1 percent from last year’s second quarter, reaching $8.8 billion.
These so-called ODMs are low-profile vendors, many of them based in Taiwan, that do a lot of their business manufacturing hardware that’s sold under better known brand names. Examples include Quanta Computer and Wistron.
General enterprises aren’t buying many systems from these vendors, but the trends at work in hyperscale deployments are growing across the industry. Increasingly, the platform of choice for storage is a standard x86 server dedicated to storing data, according to IDC analyst Eric Sheppard. Sales of server-based storage rose 10 percent in the quarter to reach $2.1 billion.
Traditional external systems like SANs (storage area networks) are still the biggest part of the enterprise storage business, logging $5.7 billion in revenue for the quarter. But sales in this segment were down 3.9 percent.
The smarts that used to be built into dedicated external storage systems are now moving into overarching virtualization systems that aren’t tied to hardware, Sheppard said. The software, not the hardware, defines the storage architecture. Like computing power, storage can now be managed per virtual machine instead of per unit of storage, which can simplify management and reduce enterprise operating costs over the long term.
All these changes are just beginning to play out and should keep accelerating for the next five years, Sheppard said. “It’s very early days.”
The cloud and virtualization trends didn’t reshuffle the main players in the second quarter but may have influenced some of their results. EMC remained the biggest vendor by revenue with just over 19 percent of the market, followed by Hewlett-Packard with just over 16 percent. EMC, which sells newer technologies like solid-state and software-defined storage but is also deeply invested in traditional platforms, dropped 4 percent in revenue, IDC said.
Other hot trends in storage systems include the growth of startups selling all-flash arrays and the increasing popularity in China of homegrown vendors like Huawei Technologies, Sheppard said.
Overall demand for storage capacity continued to grow strongly, with 37 percent more capacity shipped in the quarter compared with a year earlier.
(IDC is a sister company of IDG, which owns IDG News Service.)
- Published in Cloud Storage
Is your company looking to get started with IoT and big data or improve on how you’re handling it now? Here are six tips from the pros
Is your company looking to get started with the Internet of things and big data, or are you looking to improve on how you’re handling it now? Here are six tips from the pros that should help anyone:
Use the right people. Data scientists are in short supply and command very sizable salaries. But you don’t need to hire data scientists, says Andrew Brust, senior director of technical product marketing and evangelism at Datameer, a big data analytics and visualization company. Instead, look to your existing staff for people with data warehouse and IT experience who are willing to learn, and train them.
Be smart about data capture. Carefully design exactly how you’ll capture IoT data. GE, for example, uses small data-collection appliances that determine what kinds of data to collect, what protocols to use for collection, and how the data should be stored. And keep all of your data, even if you don’t know how you’ll use it, recommends Mike Maciag, Chief Operating Officer at Altiscale, which offers a cloud-based Hadoop platform. As your company strategies change, you may well find a need for it.
Provide an abstract data layer. IoT data arrives in many different protocols and data standards that aren’t always compatible with one another. Sometimes the data is highly structured, and other times it isn’t. Your best bet is to provide an abstraction layer that can handle multiple data types, including new ones you haven’t yet encountered (see the sketch after these tips).
Choose the right platform. Your company may not want to spend its time and money building a large data analytics platform on its own. Consider using one of the many cloud-based ones currently available.
Start with a small pilot, then build out. Intel’s Sharma says many companies bite off more than they can chew when taking on IoT big data projects. Instead, he says, start small with a pilot. Once you’ve got all the problems ironed out, roll it out to the rest of your enterprise.
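As a rough illustration of the abstraction-layer tip above, here is a minimal sketch that normalizes a few hypothetical payload formats into one internal record shape; the formats, field names, and binary frame layout are all made up for the example.

```python
# Thin abstraction layer: normalize incoming IoT payloads (JSON, CSV, or a
# fixed binary frame) into one internal record shape. Formats are hypothetical.
import csv
import io
import json
import struct

def normalize(payload: bytes, content_type: str) -> list:
    """Return a list of {'device_id', 'metric', 'value'} records."""
    if content_type == "application/json":
        doc = json.loads(payload)
        return [{"device_id": doc["id"], "metric": k, "value": v}
                for k, v in doc.get("readings", {}).items()]
    if content_type == "text/csv":
        rows = csv.DictReader(io.StringIO(payload.decode("utf-8")))
        return [{"device_id": r["device_id"], "metric": r["metric"],
                 "value": float(r["value"])} for r in rows]
    if content_type == "application/octet-stream":
        # Assumed 12-byte frame: 4-byte device id, 4-byte metric code, 4-byte float.
        device_id, metric_code, value = struct.unpack("!IIf", payload[:12])
        return [{"device_id": device_id, "metric": metric_code, "value": value}]
    raise ValueError("unknown payload type: " + content_type)

# New formats get handled here, so the analytics code downstream never changes.
```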
This story, “6 tips for working with IoT and big data,” was originally published by ITworld.
- Published in Internet of Things
Amazon dives into the Internet of things with a two-pronged strategy covering both data and devices
Quibble if you will about the definition or long-term viability of the Internet of things, but Amazon is charging full ahead to fashion itself into a catch-all IoT platform.
At the Re:Invent keynote today, Amazon unveiled the AWS IoT framework to not only gather data from devices, but provide device-specific management and introspection functions as well.
AWS IoT presents devices in two ways: the devices themselves, aka “things,” and virtualized representations, or “thing shadows.” The latter lets the user set a device’s desired state even when the device is offline; once a disconnected thing reconnects, it syncs with its shadow and applies any changes that were pushed in the meantime (a mechanism built on the MQTT protocol AWS IoT uses under the hood). Devices can also be tracked through a registry.
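To see what a shadow update looks like on the wire, here is a minimal sketch using the open source paho-mqtt client to publish a desired state for a device. The endpoint, certificate paths, thing name, and state fields are placeholders, and the reserved shadow topic reflects how AWS IoT exposes shadows over MQTT.

```python
# Publish a desired state to a thing shadow over MQTT (TLS on port 8883).
# Endpoint, certificates, thing name, and state fields are placeholders.
import json
import ssl
import paho.mqtt.client as mqtt

THING = "garage-light"
ENDPOINT = "abc123example.iot.us-east-1.amazonaws.com"

client = mqtt.Client()
client.tls_set(ca_certs="root-ca.pem", certfile="device-cert.pem",
               keyfile="device-key.pem", tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, 8883)

# Whatever lands in "desired" waits in the shadow until the real device
# reconnects, syncs, and reports its new state back.
update = {"state": {"desired": {"power": "on", "brightness": 80}}}
client.publish("$aws/things/%s/shadow/update" % THING, json.dumps(update), qos=1)
client.disconnect()
```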
Amazon surrounds these features with a few additions that, while not explicitly IoT-related, can fall under the heading. A new function for Amazon’s Kinesis Analytics allows SQL queries to run against streaming data — for instance, as part of a time-series processing job. The service is set to include many prebuilt functions, such as moving averages or totals.
In terms of construction, the heart of AWS IoT isn’t drastically different from that of other Web service back ends. The fact that it’s Amazon makes the difference, what with so many customers already building on top of Amazon’s application, data-storage, and data-ingestion frameworks. Anyone already on Amazon’s cloud has one fewer reason to bother with other IoT integrators. Contrast that with Salesforce IoT Cloud, which limits its appeal to existing Salesforce customers, whereas nearly everyone is a potential AWS customer.
InfoWorld’s David Linthicum made a case for why IoT and public clouds like Amazon’s complement each other: a measure of built-in security, elasticity, and a geographically distributed architecture that works with the devices themselves. It was inevitable that Amazon would become a center of gravity, but now we’ll see if its device-management-plus-data-collection approach pulls people in.
- Published in Internet of Things
The popular Python-powered devops tool joins Red Hat, which plans to make it part of the larger workflow for hybrid clouds
Ansible has been acquired by Red Hat in a deal with terms that remain private, though VentureBeat claims $100 million changed hands. The purchase gives Red Hat a relatively well-known and widely used devops tool for system configuration that it can incorporate into its devops workflow.
In a news release and an FAQ, Red Hat said it saw Ansible’s tool set as a strong and complementary match for its own product line. The acquisition includes the Ansible project, as well as the Ansible Tower commercial product outfitted with many enterprise-grade additions (such as role-based access control). Existing contracts for Tower customers, and the existing Tower development model, will remain in place.
Ansible’s hooks are its simplicity and power, as InfoWorld’s Paul Venezia said in his review. The project enjoys a strong community of developers, as cited by Ansible’s blog post about the acquisition.
Among the major devops automation frameworks, Ansible most closely resembles Salt/SaltStack, which can operate in both agented and agentless modes, while Ansible uses an agentless architecture. Also, both Salt and Ansible are built on Python, leveraging the convenience, development speed, and breadth of libraries available to the language, along with all the modules already created for it.
Red Hat already has a sizable devops tool set, so will it trade those tools in for Ansible and Ansible Tower? That seems unlikely, based on Red Hat’s positioning of its existing solutions, CloudForms and Satellite. CloudForms is mainly aimed at policy, orchestration, and governance of hybrid clouds, not automation; Satellite is for maintaining Red Hat servers and has been integrated with a competing automation system, Puppet.
Rather than replace existing solutions outright, Red Hat plans to adopt Ansible as automation middleware. Configuration requests supplied by CloudForms can be passed on to Ansible, which in turn can automate changes — for example, by deploying Satellite agents on the machines that need them.
Ansible might also edge out the use of Puppet in Satellite, depending on which of the two solutions has the bigger draw with Red Hat’s user base. Satellite itself is also Python-based, meaning Ansible could be a more complementary fit for it in the long run.

[An earlier version of this article incorrectly stated that Salt requires an agent for its operations and uses Ruby as its language.]
- Published in DevOps
Automation is good in many cases — but not all. Too many enterprises don’t make that assessment
We all know the trend: Use the cloud to automate security, governance, and management, and use devops tools and technology to automate the stream of software that flows from the coders to the cloud.
Automation is good. It frees us from mundane tasks, and it drives a repeatable process that eliminates the element of human error. But enterprises are going a bit nuts with the concept. It’s clear to me that you can overautomate, well past the point of diminishing returns.
When moving to the cloud and devops, here are the types of duties you want to automate:
- Any task that has a repeatable pattern, such as unit testing, proactive performance monitoring, and removing unused machine instances (see the sketch after these lists)
- Any task that runs better without a person involved or that is easily automated
- Any task that is noncritical to the business; if it fails, you won’t be hurt too badly
However, here’s what you should perhaps not automate:
- A task that requires constant human intervention; if a person must constantly make a decision, automation brings little value
- A task that is not repeatable, which means it’s difficult or impossible to pre-program an automated response (sorry, machine learning is nowhere near that advanced)
- A task that is critical to the business — if it fails, you will be badly hurt
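As an example of the first category, a repeatable, low-risk chore, here is a minimal sketch that finds stopped EC2 instances tagged as disposable and terminates them. The tag convention is an arbitrary assumption, and a production job would also check how long each instance has been stopped and alert owners before acting.

```python
# Clean up stopped EC2 instances tagged as disposable. The tag is an assumed
# convention; a real job would verify age and notify owners before terminating.
import boto3

ec2 = boto3.client("ec2")

stopped = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["stopped"]},
             {"Name": "tag:lifecycle", "Values": ["disposable"]}])

doomed = [inst["InstanceId"]
          for reservation in stopped["Reservations"]
          for inst in reservation["Instances"]]

if doomed:
    ec2.terminate_instances(InstanceIds=doomed)
    print("terminated:", doomed)
else:
    print("nothing to clean up")
```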
Cloud and tool providers tell IT that automation leads to productivity, which leads to efficiency and, in turn, a return on investment. Although the notion is generally true, it’s not consistently true. You must make the decision to automate, or not, on a case-by-case basis.
As you move to the cloud and use devops as a path to leveraging the cloud more effectively, you’ll find plenty of tasks you could automate. Keep in mind that simply because a task can be automated doesn’t mean it should be. Pick your automation opportunities, battles, risks, and — ultimately — ROI thoughtfully.
In the next few years, I figure many enterprises will find they overdid the automation because they could. Try not to be one of those enterprises.
- Published in DevOps
Bruno Connelly, vice president of engineering at LinkedIn, describes how transforming operations gave rise to a new, hyperscale Internet platform
Bruno Connelly is not a fan of the term devops, mainly because it means different things to different people.
In certain startups, for example, devops simply means that developers shoulder tasks once performed by operations. But at LinkedIn, where as VP of engineering Connelly has led the company’s site reliability efforts for five and a half years, operations has expanded its role to become more vital than ever while providing developers with the self-service tools they need to be more productive.
You might call that devops done right. In fact, Connelly’s buildout of operations holds valuable lessons for any organization that needs to scale its Internet business. For LinkedIn, that growth has been dramatic: Over the past five years, the service has ballooned from around 80 million to nearly 400 million users — and from basic business social networking to a wide array of messaging, job seeking, and training services.
Throughout that expansion, Connelly has played a key role in creating new sets of best practices and infrastructure-related technologies. More importantly, he has helped lead a transformation of operations culture that has affected the entire company.
A shaky situation
When Connelly joined LinkedIn in 2010, both traffic and the brand were taking off — and LinkedIn.com was creaking under the load. “We struggled just keeping the site up. I spent my first six months, maybe a year, at LinkedIn being awake and on a keyboard with a bunch of folks during those periods trying to get portions, if not all, of the site back up.”
The team he inherited was great, he says, but there were only six or seven of them, as opposed to a couple of hundred software engineers writing code constantly. “I was hired at LinkedIn specifically to scale the product, to take us from one data center to multiple data centers, but also to lead the cultural transition of the operations team,” he says.
As with many enterprise dev shops today, developers had no access to production — nor even to nonproduction environments without chasing down ops first. “The cynical interpretation is that operations’ job was to keep developers from breaking production,” Connelly says. Essentially, new versions of the entire LinkedIn.com site were deployed every two weeks using a branch-based model. “People would try to get all their branches merged. We’d get as much together as we could. If you missed the train, you missed the train. You had to wait two weeks.”
Adding to the frustration were the site rollouts themselves, which Connelly remembers as “an eight-hour process. Everyone was on deck to get it out there.” At a certain point in that process, rollback was impossible, so problems needed to be fixed in production. At the same time, the site ops team had to maintain the nonproduction environment “just to keep that release train going, which is not a healthy thing.”
Change came from the top, driven by David Henke, LinkedIn’s then-head of operations, and Kevin Scott, who was brought in from Google in 2011 to run software engineering. Connelly reported to Henke and was charged with changing the role of operations.
The first priority across the company was to stop the bleeding and get everyone to agree that site reliability trumped everything else, including new product features.
Along with that imperative came a plan to make operations “engineering focused.” Instead of being stuck in a reactive, break-fix role, operations would take charge of building the automation, instrumentation, and monitoring necessary to create a hyperscale Internet platform.
Operations people would also need to be coders, which dramatically changed hiring practices. The language of choice was Python — for building everything from systems-level automation to a wide and varied array of homegrown monitoring and alerting tools. The title SRE (site reliability engineer) was created to reflect the new skillset.
Many of these new tools were created to enable self-service for developers. Today, not only can developers provision their own dev and test environments, but there’s also an automated process by which new applications or services can be nominated to the live site. Using the monitoring tools, developers can see how their code is performing in production — but they need to do their part, too. As Connelly puts it:
Monitoring is not something where you talk to operations and say: “Hey, please set up monitoring on X for me.” You should instrument the hell out of your code because you know your code better than anyone else. You should take that instrumentation, have a self-service platform with APIs around it where you can get data in and out, and set up your own visualization.
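As a minimal illustration of that advice, here is what instrumenting a code path might look like against a StatsD-style collector listening on UDP. LinkedIn’s in-house tooling isn’t public, so the collector address, wire-format choice, and metric names are stand-ins.

```python
# Emit counters and timings from application code to a StatsD-style collector.
# Host, port, and metric names are stand-ins, not LinkedIn's internal tools.
import socket
import time
from contextlib import contextmanager

COLLECTOR = ("metrics.internal.example", 8125)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def count(metric, n=1):
    sock.sendto(("%s:%d|c" % (metric, n)).encode(), COLLECTOR)

@contextmanager
def timed(metric):
    start = time.time()
    yield
    ms = int((time.time() - start) * 1000)
    sock.sendto(("%s:%d|ms" % (metric, ms)).encode(), COLLECTOR)

def handle_request(request):
    count("profile.view.requests")
    with timed("profile.view.latency"):
        pass  # actual request handling goes here
```

Developers own both ends: they decide what to instrument, and they build their dashboards and alerts on whatever the self-service metrics platform exposes.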
On the development side, Connelly says that Scott established an “ownership model and ownership culture.” All too often, developers build what they’re told to build and hand it off to production, at which point operations takes on all responsibility. In the ownership model, developers retain responsibility for what they’ve created — improving code already in production as needed. Pride in software craftsmanship became an important part of the ethos at LinkedIn.
Altogether, a great deal of self-service automation has been put into place. I asked whether, on the operations side, some engineers feared they were automating themselves out of a job. Connelly’s answer was instructive:
My personal opinion is that is absolutely the right goal. We should be automating ourselves out of a job. In my experience, though, that never happens — it’s an unreachable goal. That’s point one … point two is there’s a lot of other stuff that SREs do, especially what we call embedded SREs. They are part of product teams; they are involved with the design of new applications and infrastructure from the ground up so they are contributing to the actual design. “Hey, there should be a cache here, this should fail this way …”
Meanwhile, the monitoring, alerting, and instrumentation have grown more sophisticated. To ensure high availability, operations has written software to simulate data center failures multiple times per week and measure the effects. “We built a platform last year called Nurse, which is basically a workflow engine, where you can define a set of automated steps to do what we associate with a failure scenario,” Connelly says. Currently, he says he’s building a self-service escalation system with functionality similar to that of PagerDuty.
The most important lesson from LinkedIn’s journey is that the old divisions between development and operations become showstoppers at Internet scale. Developers need to be empowered through self-service tools, and operations needs a seat at the table as applications or services are being developed — to ensure reliability and to inform the creation of appropriate tooling. Call it devops if you like, but anything less and you could find yourself on shaky ground.
- Published in DevOps
Delphix’s first ‘State of DevOps’ report finds even the definition of the word is in flux among its practitioners
Data-as-a-service company Delphix has launched its first annual “State of DevOps” report, which attempts to gather data from “leaders and practitioners” across North American and European enterprises on how they see devops.
One of the biggest questions, Delphix believes, is the very definition of the term: What does devops stand for among its implementers, what is the term meant to encompass, and how should it be handled?
Leaders, practitioners, and everyone in between
Delphix’s survey breaks devops people into two camps: “leaders” and “practitioners.” The former includes those who self-identify as being part of “a strongly defined and successful series of DevOps initiatives.” Only 10 percent of those surveyed identified themselves as leaders; another 59 percent identified as practitioners who were involved in ongoing devops work or planned to start such work. (The remaining 31 percent evidently didn’t meet the criteria for either category.)
Much of the rationale for seeing devops teams in this binary fashion is the belief that while there are plenty of definitions for devops, few are in agreement. The ones who have the sharpest definition of the term, claims Delphix, benefit the most simply because they’re able to better describe the missions at hand.
However, not everyone feels an exact definition is needed. Adam Jacob, CTO of Chef, has likened devops to kung fu: The implementations vary, but those who practice the art recognize its other practitioners as well.
Data, devops, and cloud deployments
Delphix’s background in data virtualization influenced the report’s approach. One section, entitled “The State of Data in DevOps,” covered how devops teams deal with live data. Ninety percent of the respondents cited limitations with their testing environments due to data management issues, saying they needed full production data to do devops work and more often than not simply gave developers unaudited access to production data. (The report doesn’t attempt to connect such behavior to data leaks, but asserts that “companies are opting for agility over security.”)
Even aside from Delphix’s theses about devops, the data gathered about specific devops activities is intriguing. The most often cited reason organizations embrace devops (true for 70 percent of leaders and 59 percent of practitioners) was pressure from other parts of the organization to deliver — to get products out faster, to reduce defect counts — far more than the need to accomplish more with less.
Another intriguing finding concerns which types of devops projects get the lion’s share of attention. The lowest-ranked item in the report was “deployments to private cloud,” cited by 29 percent of leaders and 47 percent of practitioners. “Testing” and “continuous integration” both ranked only incrementally higher. The reason for the private cloud’s low ranking wasn’t teased out in the report, but it may reflect how the ops side of devops is potentially endangered by the cloud; those with major cloud initiatives already under way simply have less for their ops teams to do.
- Published in DevOps
In the cloud, infrastructure is accessible via APIs, and developers now have complete control.
There’s something new on the horizon, taking the principles of devops to a new area. It’s called infrastructure as code (IaC), and it means configuring cloud-based infrastructure programmatically, through applications calling APIs.
This is a hard turn from the old days of configuration management, with sys admins controlling the platforms on behalf of developers.
There are compelling reasons for this new approach:
- Applications can now be configured programmatically for only the platform resources they need. You can allocate the right amount of storage, memory, and processor instances. There’s no more adapting the application to the limitations of the platform; you can make the platform anything you want.
- The costs should drop for each application, considering you use only the resources you need. (If you check server utilization at any data center, you’ll see it’s typically less than 3 percent. The excess capacity represents wasted dollars.)
- You don’t need as many management and operations resources, in both people and technology. Control passes to developers, so those resources become redundant.
The larger question: Is passing control of the infrastructure to developers a good thing? You bet. With the rise of devops and its tight coupling of developers and operations, the ability for developers to dynamically configure their deployment platforms via APIs is a better way to manage many applications and many platform instances.
You can now match platforms to applications, one to one. This is different from the past, where developers had to accept whatever platform was set up, then fight for any necessary configuration changes.
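Concretely, treating the platform as an API call can look something like the following minimal sketch, which sizes compute and storage to the application at deploy time with boto3; the AMI ID, instance type, volume size, and tags are placeholders.

```python
# The application declares the platform it needs; a deploy script makes it so
# via the cloud API. IDs, sizes, and tags are placeholders.
import boto3

APP_PLATFORM = {
    "image": "ami-0123456789abcdef0",  # placeholder AMI
    "instance_type": "m4.large",       # right-sized for this application
    "data_volume_gb": 200,             # only the storage it actually needs
}

ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId=APP_PLATFORM["image"],
    InstanceType=APP_PLATFORM["instance_type"],
    MinCount=1, MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sdf",
        "Ebs": {"VolumeSize": APP_PLATFORM["data_volume_gb"], "VolumeType": "gp2"},
    }],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "app", "Value": "orders-service"}],
    }])

print("launched", instances[0].id)
```

Change the dictionary and rerun the script, and the platform changes with it, which is exactly the one-to-one match between application and platform described above.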
That said, this new approach requires culture change. People who are part of legacy processes and roles won’t easily accept this new process, and I suspect they will push back hard on IaC. However, organizations will discover that IaC works better, and the old guard will need to adjust. This notion simply makes sense.
- Published in DevOps