Technology

“It’s all about the money” – Our takeaways from AWS re:Invent 2022

Amazon’s cloud business, Amazon Web Services, lured some 60,000 visitors to Las Vegas in December 2022 for its annual developer conference, re:Invent. You can view most of the sessions for free, and highlights of key moments are all over social media and the tech press.

Now, we want to offer our take on the undercurrents we observed at re:Invent 2022 and what they imply for our customers.

Behind the catchy session titles and grandiose promises of better performance, improved productivity, and more, what we heard was this: it’s all about the money. In short, beyond the factual product announcements and newly added features to existing AWS services, these are our key takeaways from re:Invent 2022, combined with our observations about the industry in general:

  1. CFOs are increasingly running the show, and costs are top of mind for many an IT department wanting to understand and optimize its cloud economics.
  2. Optimizing the type of cloud instances your workloads run on can offer significant savings, up to a 50% reduction in your cloud bill.
  3. Swapping your reserved and on-demand instances for a serverless model benefits AWS, too, but that doesn’t change the fact that the pay-as-you-go model can bring huge relief to your cloud bill.
  4. Industries such as manufacturing, logistics, and transportation can save big by applying the latest advances in IoT technology and AI/ML to optimize energy efficiency and predictive maintenance.

CFOs are increasingly running the show, and costs are top of mind for many an IT department wanting to understand and optimize its cloud economics.

We’re firmly in the cost-conscious phase of the IT industry’s cyclical ebb and flow. Leaders are increasingly double-clicking on that ever-growing number labeled “cloud spend” in their spreadsheets. Questions about managing – and especially reducing – cloud costs have been posed to AWS customer success managers and solution architects around the globe. Accordingly, this year’s re:Invent had plenty of sessions addressing cloud economics, governance, and cost optimization.

Unfortunately, the back office has long been the ugly duckling of cloud service providers, who have focused on shipping faster hardware, launching new services, and building higher-level abstractions over existing foundational services. The daily Cost and Usage Report (CUR) file export is a good starting point, being the most granular data currently available.

Still, it precludes real-time visibility and is challenging to aggregate and attribute. Perhaps not surprisingly, the re:Invent expo area was bursting with vendors offering cloud cost optimization solutions to address this pain for AWS customers. AWS Well-Architected Labs also offers a lab and ready-made templates for creating your own Cloud Intelligence Dashboards, which, with a few minor changes, can serve as a starting point for creating KPIs for sustainability improvements.
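If you want a quick, programmatic look at your spend before investing in full dashboards, the Cost Explorer API offers an aggregated (if less granular) view than the CUR. Below is a minimal sketch, assuming boto3 is installed and your credentials carry the ce:GetCostAndUsage permission; the dates are illustrative:

```python
# Minimal sketch: last month's spend per service via the Cost Explorer
# API. This is an aggregated, non-real-time view; the CUR remains the
# most granular source. Assumes credentials with ce:GetCostAndUsage.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-11-01", "End": "2022-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```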

Optimizing the cost of your cloud infrastructure is, of course, desirable. It’s a worthy investment in itself. But especially for the big-ticket players with significant monthly bills, investing in developing better visibility into cloud spend might be a more thoughtful place to start. Not only does this option establish a clear baseline to optimize against, but it also helps you better understand the cross-organization contributions to those costs.

There are a few routes you could take to give IT the visibility and the feedback loop it needs to target cost optimization where it matters (the last item is illustrated with a sketch after this list):

  • Establish unified dashboards across teams.
  • Collect data on active commitments to reserved instances or savings plans, and attribute resource usage across applications, teams, and environments.
  • Track KPIs such as what percentage of your compute needs is covered by reserved instances versus more flexible but costlier on-demand resources. 
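As an illustration of that last KPI, here’s a minimal sketch that reads reserved instance and Savings Plans coverage from the Cost Explorer API. It assumes boto3 with the relevant Cost Explorer permissions; the dates are illustrative:

```python
# Minimal sketch: RI and Savings Plans coverage for one month via the
# Cost Explorer API. Assumes credentials with ce:GetReservationCoverage
# and ce:GetSavingsPlansCoverage permissions.
import boto3

ce = boto3.client("ce")
period = {"Start": "2022-11-01", "End": "2022-12-01"}

ri = ce.get_reservation_coverage(TimePeriod=period, Granularity="MONTHLY")
ri_pct = ri["CoveragesByTime"][0]["Total"]["CoverageHours"]["CoverageHoursPercentage"]
print(f"Reserved instance coverage: {ri_pct}% of eligible instance hours")

sp = ce.get_savings_plans_coverage(TimePeriod=period, Granularity="MONTHLY")
sp_pct = sp["SavingsPlansCoverages"][0]["Coverage"]["CoveragePercentage"]
print(f"Savings Plans coverage: {sp_pct}% of eligible spend")
```

The remainder, running on on-demand pricing, is where flexible capacity is costing you a premium.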

Optimizing the type of cloud instances your workloads run on can offer significant savings, up to a 50% reduction in your cloud bill.

It’s hard to argue against the potential of operating your cloud infrastructure more sustainably – and at a lower cost – by using fewer compute resources to accomplish the same outcomes. Based on various customer case studies, compute-intensive solutions can achieve double-digit cost reductions simply by switching to newer Graviton-based instance types, where available. Switching to newer or different instance types better suited to your particular workload might halve your cloud bill while delivering higher performance than before.

The Amazon Ads team has achieved similar wins. They’re responsible for picking the “sponsored” products you see while browsing Amazon. The team has managed to keep the cost growth of its ML models at or below linear by regularly moving to newer, better-suited instance types for its training and inference workloads. For example, in 2018, the team switched from EC2 P3 instances to SageMaker P3 instances and managed to train and run 50% more ML models at no additional cost. Later, in 2021, the team upgraded from SageMaker P3 to a mix of P3 and P4 instances for training, switched from SageMaker G4 to AWS Inferentia instances for inference, and grew the number of models by 25% while reducing costs by 50%.

Of course, the latest or most powerful instance type isn’t always optimal. Testing different types – and combinations of types – for your particular workloads is key to navigating towards an efficient mix that delivers the performance you need. If you haven’t done that in a while, testing what kind of price performance different instance types would offer might be among the easiest ways to not only cut your cloud costs and improve performance but also reach your sustainability goals.
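If you need a list of candidates to test, here’s a minimal sketch that queries the current-generation Graviton-based (arm64) instance types available in your region. It assumes boto3 with the ec2:DescribeInstanceTypes permission:

```python
# Minimal sketch: list current-generation arm64 (Graviton) instance
# types in the configured region as candidates for price-performance
# testing. Assumes credentials with ec2:DescribeInstanceTypes.
import boto3

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instance_types")

pages = paginator.paginate(
    Filters=[
        {"Name": "processor-info.supported-architecture", "Values": ["arm64"]},
        {"Name": "current-generation", "Values": ["true"]},
    ]
)

for page in pages:
    for itype in page["InstanceTypes"]:
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        print(f"{itype['InstanceType']}: {vcpus} vCPUs, {mem_gib:.0f} GiB")
```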

AWS is pushing serverless mainly for its own good, of course, but the pay-as-you-go model also offers significant benefits for AWS customers.

In his keynote, Dr. Werner Vogels announced several additions and updates to Amazon’s services. He also spent a non-trivial share of his time on stage promoting event-driven architectures and serverless computing.

It’s no surprise AWS is pushing serverless even though many customers end up with smaller cloud bills than before. Serverless lets customers avoid paying for compute they don’t need, but it also enables AWS to utilize its infrastructure more efficiently. As customers deploy more granular serverless workloads, AWS gains more control over the utilization of the underlying infrastructure. It’s essentially able to serve more customers with a sublinear increase in hardware costs.

The service-oriented, serverless model is great for developers because it simplifies the path to production and the challenge of building scalable, resilient systems. Similarly, event-driven systems have clear technical advantages thanks to their inherently asynchronous nature. However, both serverless systems and event-driven architectures come with a complexity cost, too, which may or may not be justified by the benefits.

Frankly, and as a slight detour from the topic of cost, we do feel the need to contradict Dr. Vogels’ claim in his keynote that event-driven architectures “drive” loose coupling. They certainly facilitate building a loosely coupled system. Still, coupling is about much more than whether the transport and messaging are asynchronous or synchronous. Two asynchronously connected components waiting for and responding to each other’s messages can be just as tightly coupled, and make just as many assumptions about each other, as two components integrated via a synchronous REST API. Event-driven or not, achieving the outcomes you’re looking for requires intelligent engineering decisions.
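To make the point concrete, consider this hypothetical sketch of an event consumer. It’s fully asynchronous, yet it encodes an implicit contract with the producer; rename a field or change the currency unit upstream, and it breaks just as a synchronous REST client would:

```python
# Hypothetical sketch: an asynchronous consumer that is still tightly
# coupled to its producer through the event payload's implicit schema.
import json

def handle_order_event(raw_message: str) -> None:
    event = json.loads(raw_message)
    # Implicit contract: these keys exist, the total is in cents,
    # and "PAID" is a valid status value. None of this is enforced
    # by the asynchronous transport.
    order_id = event["order_id"]
    total = event["total_cents"] / 100
    if event["status"] == "PAID":
        print(f"Shipping order {order_id} worth {total:.2f}")
```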

Nevertheless, serverless computing has real value to offer: AWS customers have testified to it with their continued use, and AWS has responded by making ever more of its services available on a pay-as-you-go basis.

Industries such as manufacturing, logistics, and transportation can save big by applying the latest advances in IoT technology and AI/ML to optimize energy efficiency and predictive maintenance.

According to IoT Analytics, the number of connected IoT devices keeps climbing and is forecast to reach 27 billion by 2025 (a 22% CAGR).

Fundamentally, IoT represents more data and more opportunities for utilizing automation and business intelligence to drive better performance, better service, and better business decisions. Advancements in technology over the last few years have facilitated new kinds of customer experiences previously deemed impractical or prohibitively expensive.

For AWS, the accelerating growth of IoT implies a direct increase in the data storage, bandwidth, and processing its customers need. AWS was undoubtedly happy to announce the general availability of services explicitly targeted at IoT-related challenges, from creating digital twins with AWS IoT TwinMaker to managing equipment from various robotics vendors with AWS IoT RoboRunner to locating battery-powered devices not fitted with a power-hungry GPS chip.

When a massive factory can save 20% in energy and 9% in water consumption thanks to better visibility into what’s happening and the ability to react accordingly, investing in such capabilities is quite a no-brainer. The big question we’d recommend asking yourself is: what does your upside from IoT technology – and the AI/ML that often goes hand in hand with it – look like, and can you justify the investment necessary to reach those potential gains?

While we’re reluctant to counsel caution in the face of significant savings and advances in sustainability, reaping the benefits of IoT technology is not exactly low-hanging fruit. Potential is a theoretical concept, not a guarantee of payback. Thus, we’d recommend starting the journey with a proof of concept to determine what kind of data IoT technology would put within your reach, followed by an analysis of what, if anything, you could accomplish with that data.

Amazon’s future is very much in the cloud – just like ours.

AWS currently represents only 15% of Amazon’s revenue but more than 70% of its operating profit. 

Despite rising energy costs, we believe AWS will push that share even higher in 2023. That’s because AWS and its customers have ways to reduce cloud spending by double-digit percentages: by adopting newer Graviton instance types and by moving from reserved instances and fixed-capacity compute resources towards serverless services, paying only for what they use.

The more granular serverless workloads allow AWS to utilize its infrastructure more efficiently and serve more customers with a sublinear increase in its hardware costs. With the cloud industry estimated to continue growing at a healthy 15% CAGR for the next few years, it’s a win-win.

Given the increasing importance of cost efficiency across so many industries and the trend towards sustainable business beyond merely compensating for carbon emissions, we firmly believe that companies will continue to flock to cloud services despite regulatory concerns around data residency and data privacy. AWS and its competitors are actively working to help us solve those concerns, too: by giving us tools to control where our data moves, how it’s encrypted at rest, and how it stays inaccessible while being processed.

We’d be happy to help.

At Reaktor, we love to work on things that create impact beyond the bottom line. Reducing cloud costs also helps lower systems’ environmental impact.

We’d love to help you:

  • Create visibility into your cloud usage and costs
  • Evaluate where you could shave down instance hours with serverless or trim unit cost with reserved instances
  • Implement cost optimizations in your cloud workloads
  • Design and build serverless and event-driven architectures that are truly loosely coupled
