
Amazon SageMaker Ground Truth Plus: Enhanced Data Labeling

Last Updated on July 26, 2023 by Editorial Team

Author(s): Juv Chan

Originally published on Towards AI.


Highlights of AI & ML Launches at AWS re:Invent 2021 Keynotes

AWS re:Invent 2021 – Opening Keynote with Adam Selipsky, CEO, Amazon Web Services

Preface

The year 2021 marks a memorable milestone for Amazon Web Services (AWS) as it celebrates both re:Invent’s 10th anniversary and the company’s 15th anniversary. This post consolidates and summarizes the AI- and ML-related launch announcements from the various keynotes at AWS re:Invent 2021. Some of these launches appear in more than one keynote, which underscores their significance.

The full AWS re:Invent keynote sessions are now available for on-demand viewing on the re:Invent website as well as the official AWS YouTube channel.

10 years of re:Invent

AWS Graviton3: 3x faster for ML Workloads

AWS Graviton3 launch at Adam Selipsky’s Keynote

AWS Graviton3 was the first launch at Adam Selipsky’s keynote. Graviton3 is the latest AWS-designed, Arm-based processor: compared to Graviton2, it is 25% faster on average for general compute workloads, performs even better on certain specialized workloads (e.g., 3x faster for ML workloads), and consumes up to 60% less energy.

Peter DeSantis’s keynote also highlighted Graviton3’s performance and bandwidth improvements over Graviton2 on general compute workloads, memory bandwidth, and ML inference workloads, as shown below.

Graviton 3 Specialized Workloads Performance at Adam Selipsky’s Keynote
Graviton3 Improvements over Graviton2 on Real Workloads at Peter DeSantis’s Keynote
Graviton3 Improvements over Graviton2 on Memory Bandwidth at Peter DeSantis’s Keynote
Graviton3 ML Inference Performance Improvement over Graviton2 at Peter DeSantis’s Keynote

C7g Instance for EC2: First EC2 Instance Type powered by AWS Graviton3 (Preview)

C7g Instance for EC2 launch at Adam Selipsky’s Keynote

Amazon EC2 C7g is the first Graviton3-based EC2 instance type, and it takes advantage of the latest improvements and benefits offered by the Graviton3 processor. It is available in preview now; sign up for the preview.

Amazon EC2 C7g – Graviton3 Specifications at Peter DeSantis’s Keynote
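Once an account is enrolled in the preview, launching a C7g instance works like any other EC2 launch; only the instance type changes. Below is a minimal boto3 sketch with placeholder AMI, subnet, and region values (the same pattern applies to the Trn1 instances covered next).

```python
import boto3

# Hedged sketch: launch a C7g instance once the account is enrolled in the preview.
# The AMI ID, subnet ID, and region are placeholders; use an arm64 AMI in a region
# where C7g is offered.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder arm64 AMI ID
    InstanceType="c7g.xlarge",            # example Graviton3-based size
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
)
print("Launched:", response["Instances"][0]["InstanceId"])
```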

Trn1 Instance for EC2: First EC2 Instance Type powered by AWS Trainium (Preview)

Trn1 Instance for EC2 Features at Adam Selipsky’s Keynote

Amazon EC2 Trn1 is the first Trainium-based EC2 instance type. AWS Trainium is a custom, high-performance machine learning (ML) chip designed by AWS to deliver the best price-performance for training deep learning models in the cloud.

Trn1 is also the first EC2 instance type with up to 800 Gbps of network bandwidth, which makes it ideal for large-scale, multi-node distributed training use cases. It is available in preview now; sign up for the preview.

Amazon EC2 Trn1 Specifications at Peter DeSantis’s Keynote
Amazon EC2 Trn1 Network Bandwidth Comparison at Peter DeSantis’s Keynote

Amazon SageMaker Canvas: A visual, no-code interface to build ML models without ML Expertise

Amazon SageMaker Canvas launch at Adam Selipsky’s Keynote
Amazon SageMaker Canvas launch at Swami Sivasubramanian’s Keynote

Amazon SageMaker Canvas is a new capability of Amazon SageMaker that enables users without any machine learning, data science, or coding experience (e.g., business analysts) to generate highly accurate ML models through a simple, visual, point-and-click user interface. It is generally available at launch.

Amazon SageMaker Canvas console user interface
Amazon SageMaker Ground Truth Plus: Enhanced Data Labeling

Amazon SageMaker Ground Truth Plus launch at Swami Sivasubramanian’s Keynote

Amazon SageMaker Ground Truth Plus is a turnkey data labeling service that enables users to build high-quality training datasets without having to build labeling applications and manage their own labeling workforce. Ground Truth Plus provides ML-based labeling techniques, including active learning, pre-labeling, and machine validation. It is generally available at launch.

Amazon SageMaker Ground Truth Plus console user interface
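Ground Truth Plus projects are requested and managed from the console rather than through an SDK call, and the finished labels are delivered to your S3 bucket. For standard Ground Truth, the output is an augmented manifest (JSON Lines, one labeled object per line); the sketch below shows how such a file can be read back, with placeholder bucket and key names.

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholder locations -- replace with your labeling job's output bucket and key.
BUCKET = "my-labeling-output-bucket"
MANIFEST_KEY = "labeling-job/manifests/output/output.manifest"

body = s3.get_object(Bucket=BUCKET, Key=MANIFEST_KEY)["Body"].read().decode("utf-8")

for line in body.splitlines():
    record = json.loads(line)           # one labeled data object per line
    source = record.get("source-ref")   # reference to the input object
    labels = {k: v for k, v in record.items() if k != "source-ref"}
    print(source, labels)
```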

Amazon SageMaker Studio Notebook: Big Data Sources Native Integration

Amazon SageMaker Studio Notebook new feature launch at Swami Sivasubramanian’s Keynote

Amazon SageMaker Studio Notebook now has built-in integration with Apache Spark, Apache Hive, and Presto running on Amazon EMR clusters and data lakes on Amazon S3, with support for additional data sources coming in early 2022. Users can now connect to these data sources from a SageMaker Studio Notebook to perform data engineering, analytics, and ML workflows within the same notebook. It is generally available at launch.

Amazon SageMaker Studio Notebook Data Integration: How It Works at Swami Sivasubramanian’s Keynote
Sample Studio Notebook Clusters Connection Drop-down
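The Studio integration surfaces EMR clusters through the drop-down shown above and handles the connection for you. For illustration, the sketch below issues the same kind of query directly against Presto on an EMR cluster using PyHive (pip install 'pyhive[presto]'); the host, schema, and table names are placeholders.

```python
from pyhive import presto  # direct Presto connection, for illustration only

# Placeholder connection details: use the EMR primary node's DNS name.
conn = presto.connect(
    host="ip-10-0-0-10.ec2.internal",  # placeholder EMR primary node
    port=8889,                         # default Presto port on EMR
    catalog="hive",
    schema="default",
)

cursor = conn.cursor()
cursor.execute("SELECT page, COUNT(*) AS views FROM clickstream GROUP BY page LIMIT 10")
for row in cursor.fetchall():
    print(row)
```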

Amazon SageMaker Infrastructure Innovation: Training Compiler, Inference Recommender & Serverless Inference

Amazon SageMaker Infrastructure Innovation launch at Swami Sivasubramanian’s Keynote

Amazon SageMaker Training Compiler is a new capability of SageMaker that applies graph- and kernel-level optimizations to use GPUs more efficiently, reducing training time by up to 50% on GPU instances. SageMaker Training Compiler is integrated into the AWS Deep Learning Containers (DLCs).

Using the SageMaker Training Compiler–enabled AWS DLCs, you can compile and optimize training jobs on GPU instances with minimal changes to your code.

SageMaker Training Compiler is available at no additional charge within SageMaker and can help reduce total billable time as it accelerates training. It is generally available at launch.
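In practice, the compiler is enabled from the SageMaker Python SDK by passing a compiler configuration to a supported estimator. The sketch below uses the Hugging Face estimator; the role ARN, training script, S3 path, and version pins are placeholders and must match one of the framework combinations the compiler supports.

```python
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

# Placeholder role, script, and data location.
estimator = HuggingFace(
    entry_point="train.py",                   # your existing training script
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.p3.2xlarge",            # GPU instance supported by the compiler
    instance_count=1,
    transformers_version="4.11",              # must be a compiler-supported combination
    pytorch_version="1.9",
    py_version="py38",
    hyperparameters={"epochs": 3, "train_batch_size": 24},
    compiler_config=TrainingCompilerConfig(),  # turn on SageMaker Training Compiler
)

estimator.fit({"train": "s3://my-bucket/train/"})  # placeholder training channel
```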

Amazon SageMaker Inference Recommender is a new capability of Amazon SageMaker that reduces the time required to deploy ML models into production by automating load testing and model tuning across SageMaker ML instances.

Inference Recommender helps users select the best available instance type and configuration (e.g., instance count, container parameters, and model optimizations) to deploy ML models for optimal inference performance and cost. It is generally available at launch.
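A default recommendation job can be started with a single API call against a model package that has been registered in the SageMaker Model Registry. Below is a minimal boto3 sketch; the job name, role ARN, and model package ARN are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# Placeholder identifiers -- the model package must already exist in the Model Registry.
sm.create_inference_recommendations_job(
    JobName="demo-inference-recommender",
    JobType="Default",  # quick instance recommendations; "Advanced" runs custom load tests
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    InputConfig={
        "ModelPackageVersionArn": "arn:aws:sagemaker:us-east-1:123456789012:model-package/demo-model/1"
    },
)

# Once the job completes, the recommended instance types and configurations
# are returned by the describe call.
result = sm.describe_inference_recommendations_job(JobName="demo-inference-recommender")
print(result["Status"])
```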

Amazon SageMaker Serverless Inference is a new inference option in Amazon SageMaker that enables users to easily deploy ML models for inference without having to configure or manage the underlying infrastructure.

Serverless Inference is ideal for workloads that have idle periods between traffic spurts and can tolerate cold starts. It is in preview at launch.
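Deploying a serverless endpoint follows the usual SageMaker endpoint workflow; the only difference is a serverless configuration in place of an instance type. A minimal boto3 sketch, assuming a SageMaker model named my-model already exists and using placeholder resource names:

```python
import boto3

sm = boto3.client("sagemaker")

# Endpoint configuration with a ServerlessConfig instead of instance settings.
sm.create_endpoint_config(
    EndpointConfigName="my-serverless-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",       # assumed to already exist in SageMaker
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,    # memory allocated per invocation
                "MaxConcurrency": 5,       # max concurrent invocations
            },
        }
    ],
)

sm.create_endpoint(
    EndpointName="my-serverless-endpoint",
    EndpointConfigName="my-serverless-config",
)
```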

Amazon Kendra Experience Builder: Build & Deploy Search Applications without writing code

Amazon Kendra Experience Builder launch at Swami Sivasubramanian’s Keynote

Amazon Kendra is an intelligent search service powered by machine learning. Amazon Kendra Experience Builder is a new capability of Amazon Kendra that enables users to deploy a fully featured, customizable intelligent search application in just a few clicks, without writing any code. It is generally available at launch.

Amazon Kendra Experience Builder console user interface
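Experience Builder itself is configured entirely in the console, but the search experience it deploys sits on top of a Kendra index that can also be queried programmatically. A minimal boto3 sketch with a placeholder index ID and query:

```python
import boto3

kendra = boto3.client("kendra")

# Placeholder index ID -- use the ID of an existing Kendra index.
response = kendra.query(
    IndexId="01234567-89ab-cdef-0123-456789abcdef",
    QueryText="How do I rotate my access keys?",
)

for item in response["ResultItems"]:
    print(item["Type"], "-", item["DocumentTitle"]["Text"])
```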

Amazon Lex Automated Chatbot Designer: Automate Conversational Design (Preview)

Amazon Lex Automated Chatbot Designer launch at Swami Sivasubramanian’s Keynote

Effective conversational design separates good chatbots from bad ones

Amazon Lex Automated Chatbot Designer is a new capability in Amazon Lex that enables chatbot developers to easily design chatbots from conversation transcripts in hours rather than weeks.

The new automated chatbot designer can automate conversational design by using ML to analyze conversation transcripts and semantically cluster them around the most common intents and related information, thus minimizing developer effort and reducing the time it takes to design a chatbot. It is in preview at launch.

Amazon Lex Automated Chatbot Designer console user interface
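The automated designer runs from the Lex console against uploaded conversation transcripts; once the recommended intents have been reviewed and the bot built, it can be exercised programmatically. A minimal sketch with the Lex V2 runtime API, using placeholder bot and alias IDs:

```python
import boto3

lex = boto3.client("lexv2-runtime")

# Placeholder bot, alias, and session identifiers.
response = lex.recognize_text(
    botId="ABCDEFGHIJ",
    botAliasId="TSTALIASID",
    localeId="en_US",
    sessionId="demo-session-1",
    text="I'd like to check my account balance",
)

print("Matched intent:", response["sessionState"]["intent"]["name"])
for message in response.get("messages", []):
    print("Bot:", message["content"])
```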

Amazon SageMaker Studio Lab: Free Web-based ML Development Environment (Preview)

Amazon SageMaker Studio Lab launch at Swami Sivasubramanian’s Keynote

Amazon SageMaker Studio Lab is a free, web-based ML development environment that provides the compute, storage (up to 15 GB), and security for anyone to learn and experiment with ML. Anyone with a valid email address, even without an AWS account, can sign up to use this service at no cost.

Sign up for a new account. It is in preview at launch.

Amazon SageMaker Studio Lab console user interface

AWS AI & ML Scholarship Program & AWS DeepRacer Student: ML Education & Competition

AWS AI & ML Scholarship Program launch at Swami Sivasubramanian’s Keynote
AWS DeepRacer Student

The AWS AI & ML Scholarship program, in collaboration with Intel and Udacity, aims to help underrepresented and underserved high school and college students around the world learn foundational ML concepts to prepare them for careers in AI and ML. It is launching as part of the all-new AWS DeepRacer Student service and AWS DeepRacer Student League.

The AWS AI & ML Scholarship program awards 2,000 students per year with scholarships for the Udacity AI Programming with Python Nanodegree program (a $4,000 USD value). To enroll in the AWS AI & ML Scholarship program, first sign up at the AWS DeepRacer Student service with a valid email address. Note that this student player account is separate from an AWS account and doesn’t require any billing or credit card information.

Learn more about the program details, including how it works and its timeline, here.

Thanks very much for reading this article. Your feedback is welcome and appreciated; it helps improve the quality of the content.


Published via Towards AI
