re:Invent 2019: Huge Leaps Forward in Machine Learning for AWS Cloud


With 65,000 attendees in Las Vegas from 2nd to 6th December, the huge once-a-year event lived up to its reputation. Announcements included better price-performance for compute workloads, multiple hybrid offerings, SageMaker Studio, leaps forward in machine-learning cost-efficiency and an AI-as-a-service model.

The new Graviton2 chips, based on the Arm Neoverse N1 core, promise to deliver up to 40% improved price-performance over comparable instances. This is a huge boon for CirrusHQ's customers, who come to us looking precisely for that balance of high performance and cost efficiency.
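To put the "up to 40% improved price-performance" claim in concrete terms: if price-performance is read as work done per dollar, a 40% improvement means the same workload costs roughly 1/1.4 of the baseline. This is a back-of-envelope sketch under that assumed interpretation, not an AWS pricing formula; the figures are illustrative only.

```python
def cost_for_same_work(baseline_cost: float, price_perf_gain: float) -> float:
    """Estimated cost to run the same workload on an instance whose
    price-performance is better by `price_perf_gain` (e.g. 0.40 for 40%).

    Assumes "price-performance" means work per dollar, so the same work
    costs baseline_cost / (1 + gain). Illustrative only.
    """
    return baseline_cost / (1.0 + price_perf_gain)


# A hypothetical $100 monthly compute bill under a 40% price-performance gain:
print(round(cost_for_same_work(100.0, 0.40), 2))  # 71.43
```

In other words, a 40% price-performance gain translates to roughly a 28–29% reduction in cost for the same amount of work, which is why this kind of headline figure matters to cost-sensitive customers.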

AWS hybrid offerings just got even more flexible, giving the experts at CirrusHQ an even broader set of options to offer customers who are looking to extend their network infrastructure into a hybrid cloud environment.

SageMaker Studio is the first comprehensive integrated development environment (IDE) designed for machine learning. It enables developers to work through the entire ML development cycle, and to run their machine-learning models, from a single interface.

The release of Inferentia is the second prong in this year's AWS ML delivery. Pre-announced last year, the chip powers the new Amazon EC2 Inf1 instances, of which AWS says: "With Amazon EC2 Inf1 instances, customers receive the highest performance and lowest cost for machine learning inference in the cloud."

However, one of the most exciting announcements for AWS managed cloud services providers like CirrusHQ is the plethora of new services that require no ML experience, frameworks or in-house expertise.

The AWS ML stack is a huge step forward, enabling companies like us to help our customers bring high-level natural-language processing, automation and augmented threat-detection technology into their AWS solutions.