Build and Run Data Pipelines with SageMaker Pipelines
Author(s): Jake Teo
Originally published on Towards AI.
Leverage AWS's MLOps platform to run your large data processing workloads seamlessly
Image from Amazon's official SageMaker website [1]
In this article, I will show how you can run long-running, repetitive, centrally managed, and traceable data pipelines by leveraging AWS's MLOps platform, SageMaker, and its underlying services, SageMaker Pipelines and SageMaker Studio.
SageMaker is a fully managed AWS service that consists of a suite of tools and services to facilitate an end-to-end machine learning (ML) lifecycle.
One of these services is SageMaker Pipelines, a CI/CD service for building and publishing data and ML workflows. SageMaker Studio, in turn, provides a convenient user interface to view and execute pipelines and other ML workloads of a single project or group.
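To make the moving parts concrete, here is a minimal sketch of what a pipeline definition can look like with the sagemaker Python SDK. It is not the code from this article: the image URI, bucket paths, parameter name, and entry script below are illustrative placeholders.

import sagemaker
from sagemaker.processing import ProcessingInput, ProcessingOutput, ScriptProcessor
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# A pipeline parameter that can be overridden at execution time, e.g. from Studio
input_data = ParameterString(name="InputData", default_value="s3://my-bucket/raw/")  # placeholder path

# Processor wrapping a custom image that holds the processing modules
processor = ScriptProcessor(
    image_uri="123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-image:latest",  # placeholder
    command=["python3"],
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

# One processing step; a real pipeline would chain several of these
step_process = ProcessingStep(
    name="ProcessData",
    processor=processor,
    inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                              destination="s3://my-bucket/processed/")],  # placeholder
    code="process.py",  # placeholder entry script
)

pipeline = Pipeline(name="data-pipeline", parameters=[input_data], steps=[step_process])
pipeline.upsert(role_arn=role)  # create or update the published pipeline definition
pipeline.start(parameters={"InputData": "s3://my-bucket/raw/2024-06/"})  # parameterised run

Once upserted, the pipeline appears in SageMaker Studio, where the same parameterised execution can be triggered from the UI.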
Four stages of setting up your data pipeline with SageMaker. (Image by author)
This article is divided into four stages: first, how the processing modules should be designed; second, how to build an image that exposes each module through arguments; third, how to design the data pipeline; and finally, how to execute the pipeline with input parameters within SageMaker Studio.
Prerequisites
1. Build Process Scripts
2. Build Image
3. Build Pipeline
4. Execute Pipeline
Conclusion
Since SageMaker is an AWS service, a fair understanding of the AWS ecosystem is necessary. You will also need to provision an S3 bucket that hosts your dataset, and an AWS…
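For the S3 prerequisite, a minimal provisioning sketch with boto3 follows; the bucket name and region are placeholders, and any existing bucket holding your dataset works just as well.

import boto3

s3 = boto3.client("s3", region_name="ap-southeast-1")  # placeholder region
s3.create_bucket(
    Bucket="my-dataset-bucket",  # placeholder; bucket names are globally unique
    CreateBucketConfiguration={"LocationConstraint": "ap-southeast-1"},  # required outside us-east-1
)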