Introduction to ETL Pipelines for Data Scientists
Last Updated on July 3, 2024 by Editorial Team
Author(s): Marcello Politi
Originally published on Towards AI.
Learn the basics of data engineering to improve your ML models
Photo by Mike Benna on Unsplash
It is not news that developing machine learning algorithms requires data, often a lot of it. Collecting this data is not trivial; in fact, it is one of the most important and difficult parts of the entire workflow. When the data is not good, the algorithms trained on it will not be good either.
For example, I recently started working, in an open-science manner, on a project for the European Space Agency: fine-tuning an LLM on data concerning Earth observation and Earth science. The whole thing is very exciting, but where do I get the data from?
In this article, we will look at some data engineering basics for developing a so-called ETL (Extract, Transform, Load) pipeline.
I run the scripts of this article using Deepnote: a cloud-based notebook that's great for collaborative data science projects and prototyping.
In data engineering, when we talk about pipelines, we basically talk about moving data from one place to another. In the case of training an LLM, we probably want to scrape text from various sources, such as Wikipedia, open books, datasets on Hugging Face, etc. All this data, though, lives in different places and comes in different formats, so the task quickly starts to get complicated.
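To make the idea concrete, here is a minimal sketch of what the three ETL stages can look like in Python. The source names, helper functions, and the output file `corpus.jsonl` are illustrative assumptions, not the project's actual pipeline: extract gathers raw records from heterogeneous sources, transform maps them onto one schema, and load writes the unified result to a destination store.

```python
# A minimal, illustrative ETL sketch: pull raw text from hypothetical sources,
# normalize it into one schema, and load it into a JSONL file.
import json
from pathlib import Path


def extract() -> list[dict]:
    # Extract: gather raw records from different places and formats.
    # The "sources" here are stubbed; in practice these could be API calls,
    # web scraping, or dataset downloads (e.g. from Hugging Face).
    wiki_pages = [{"title": "Earth observation", "body": "Raw wiki text..."}]
    dataset_rows = [{"text": "A row from an open dataset..."}]
    return (
        [{"origin": "wikipedia", "payload": p} for p in wiki_pages]
        + [{"origin": "open_dataset", "payload": r} for r in dataset_rows]
    )


def transform(records: list[dict]) -> list[dict]:
    # Transform: map heterogeneous payloads onto a single, consistent schema.
    cleaned = []
    for rec in records:
        payload = rec["payload"]
        text = payload.get("body") or payload.get("text") or ""
        cleaned.append({"source": rec["origin"], "text": text.strip()})
    return cleaned


def load(records: list[dict], destination: Path) -> None:
    # Load: write the unified records to the target store (a JSONL file here).
    with destination.open("w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    load(transform(extract()), Path("corpus.jsonl"))
```

In a real pipeline the destination would more likely be a database or object storage, and each stage would be scheduled and monitored, but the extract/transform/load separation stays the same.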