Navigating the World of Data Engineering: A Beginner’s Guide.

Last Updated on July 25, 2023 by Editorial Team

Author(s): Data Science meets Cyber Security

Originally published on Towards AI.

A GLIMPSE OF DATA ENGINEERING ❤️


Data or data?

No matter how you read or pronounce it, data always tells you a story, directly or indirectly. Data engineering can be thought of as learning the moral of that story.

Welcome to this mini tour of data engineering, where we will discover how a data engineer differs from a data scientist or analyst, and walk through processes like exploring, cleaning, and transforming data to make it as useful as possible.

If you have ever wondered how predictions and forecasts are made from raw data that is collected, stored, and processed in different formats from sources such as website feedback, customer surveys, and media analytics, this blog is for you. Data engineering covers how data flows once it has been collected and ingested, and how it is then managed in a storage system, for example as tables of rows and columns.


The first step is data collection and preparation. It includes cleaning the data so that we get better and more accurate results. Cleaning can mean finding and handling missing or duplicate values, converting the values of a column into a consistent data type and format, and so on.
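
As a minimal sketch of what that cleaning can look like in pandas (the file and column names here are invented for illustration):

```python
import pandas as pd

# Hypothetical raw export from a customer survey.
df = pd.read_csv("survey_data.csv")

# Drop exact duplicate rows.
df = df.drop_duplicates()

# Handle missing values: fill numeric ratings with the median, drop rows without an ID.
df["rating"] = df["rating"].fillna(df["rating"].median())
df = df.dropna(subset=["customer_id"])

# Convert a date column stored as text into a proper datetime type.
df["submitted_at"] = pd.to_datetime(df["submitted_at"], errors="coerce")
```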

Now we have organized our data for further exploration and visualization. Visualizing the data is important because it surfaces hidden insights and patterns in the dataset that we might otherwise miss. These visualizations can be built with software tools such as Power BI and Tableau, or with programming languages like R and Python, in the form of bar charts, scatter plots, line plots, histograms, and much more. To learn more, you can refer to one of our many blogs on data visualization.
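
A quick plotting sketch with pandas and Matplotlib (the data here is made up purely for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "revenue": [120, 135, 128, 160],
})

# A simple bar chart often reveals patterns that a raw table hides.
sales.plot(kind="bar", x="month", y="revenue", legend=False)
plt.ylabel("Revenue (thousands)")
plt.title("Monthly revenue")
plt.show()
```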

With the help of these insights, we decide how to experiment with and optimize the data before applying algorithms to develop prediction or forecasting models.

What are ETL and data pipelines?


A data pipeline automates the flow of data so that it moves efficiently and with fewer errors. The popular ETL framework Extracts the data from its source, Transforms the extracted data into the required data types and formats, and Loads the transformed data into another database or location. These data pipelines are built by data engineers.

Extract:

Extraction is the first step, performed to get the required data ready for processing. Data can be extracted from files such as text files, Excel sheets, and Word documents, from relational as well as non-relational databases, and from APIs.

Text files include both unstructured text and flat files (rows and columns as records and attributes, such as .tsv and .csv). The JSON (JavaScript Object Notation) format is another common source; it is semi-structured, with data types such as integers, strings, booleans, null, arrays, and objects. APIs provide data by responding to requests over the web; for example, the Twitter API exposes user information, tweets, activity, and so on.
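
A small sketch of these three kinds of extraction in Python (the file names and API endpoint are hypothetical):

```python
import json
import pandas as pd
import requests

# Flat files: rows and columns separated by a delimiter.
flights = pd.read_csv("flights.csv")
surveys = pd.read_csv("surveys.tsv", sep="\t")

# Semi-structured JSON file.
with open("users.json") as f:
    users = json.load(f)

# API: request a response over the web and parse the JSON payload.
response = requests.get("https://api.example.com/v1/posts", params={"user": "towards_ai"})
posts = response.json()
```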

There are application databases and analytical databases. Application databases handle transactions (inserts, updates, and changes), follow online transaction processing (OLTP), and are row-oriented. OLTP allows a database to be used by large numbers of people on the internet in real time. Analytical databases are column-oriented and follow OLAP (online analytical processing), an approach that performs analysis over huge volumes of data from stores, warehouses, databases, and so on. We’ll learn more about this in detail ahead.

Transform:

Transformation follows the extraction of data from the source. It can involve selecting features, validating them, and splitting as well as joining data features. These transformations make the data more efficient to use and work with, help identify hidden patterns and trends, and make our work as easy and simple as possible.

E.g., join() and split() methods.
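
A minimal pandas sketch of splitting and joining features (the tables and columns are invented for illustration):

```python
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2],
    "customer": ["Ada Lovelace", "Alan Turing"],
    "amount": [120.0, 80.0],
})

# Split one feature into two: first and last name.
orders[["first_name", "last_name"]] = orders["customer"].str.split(" ", n=1, expand=True)

# Join (merge) with another table on a shared key.
countries = pd.DataFrame({"customer": ["Ada Lovelace", "Alan Turing"], "country": ["UK", "UK"]})
transformed = orders.merge(countries, on="customer", how="left")
print(transformed)
```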

Load:

In short, loading moves the data from sources such as analytical, application, or massively parallel processing (MPP) databases into the required target system or another database. These databases can be either column- or row-oriented, serving aggregating analytical queries or online transactions respectively.
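
Continuing the sketch above, a transformed table could be loaded into a local database with Python's built-in sqlite3 module (the data and database file name are invented):

```python
import sqlite3
import pandas as pd

transformed = pd.DataFrame({
    "order_id": [1, 2],
    "customer": ["Ada Lovelace", "Alan Turing"],
    "amount": [120.0, 80.0],
    "country": ["UK", "UK"],
})

# Load the transformed table into a local SQLite database.
conn = sqlite3.connect("warehouse.db")
transformed.to_sql("orders", conn, if_exists="replace", index=False)
conn.close()
```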

The automated steps can include extracting data from various file formats and sources, transforming it, combining data from different sources, validating it, and loading it into databases. Because these steps are executed by the machine itself, the chances of human error are reduced and less human attention is required. One of the major advantages of data pipelines is that they minimize how long the data flow takes.

Data pipelines follow the Extract, Transform, and Load (ETL) framework and play a crucial role in managing and moving data from one system to another, so that the loaded data is ready to use in applications.
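
Putting the three steps together, a pipeline is often just code that runs them in order. A toy, self-contained sketch (the file names are invented; a real pipeline would read from and write to production systems):

```python
import pandas as pd

def extract() -> pd.DataFrame:
    # Stand-in for reading files, querying a database, or calling an API.
    return pd.read_csv("raw_events.csv")

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for cleaning, splitting, joining, and validating.
    return df.drop_duplicates().dropna()

def load(df: pd.DataFrame) -> None:
    # Stand-in for writing to a warehouse table.
    df.to_csv("clean_events.csv", index=False)

def run_pipeline() -> None:
    load(transform(extract()))

if __name__ == "__main__":
    run_pipeline()   # in production, a scheduler would trigger this call
```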

Difference between Data Engineer and Data Scientist


You may have guessed by now that it is the job of a Data Engineer to collect, store, prepare, explore, visualize, and experiment with the data, pushing its limits to utilize it as much as possible. Getting the correct data to the right people in every possible way is one of the skills required of a data engineer. The data engineer is responsible for ingesting data from external as well as internal sources, optimizing data storage systems, and removing corrupt and unwanted data so that others can carry out tests and advance development. This requires knowing how to process and manage large amounts of data using the appropriate machines, platforms, and tools.

The skills of Data Engineers:

  • Ingestion and storage of data
  • Setting up databases
  • Building data pipelines
  • Strong software engineering skills

Data engineers make a Data Scientist’s life easier. As we now know, the data workflow consists of collection and storage, preparation, exploration, and gaining insights through visualization and experimentation. Data engineers deal with the collection and storage part of the workflow; the later steps are carried out by Data Scientists.

The skills of a Data Scientist:

  • Exploiting data
  • Accessing databases
  • Using pipeline outputs
  • Strong analytical skills

The famous Big Data and its Vs

Today’s trending buzzword is ‘Big Data’, a term you have heard many times in this modern digital era.

While you are reading this, huge amounts of data are being generated and processed around the world within seconds, in many ways: sending and receiving emails, playing videos and streaming music, making payment transactions, and more. The word ‘big’ refers not only to size and volume but also to the variety, accuracy, and usefulness of the data. Sensors, devices, social media, organizational data, communication channels, and multimedia all play a crucial role in the growth, management, and development of big data.


The 5 V’s of Big Data

  1. Volume: The size and amount of the data.
  2. Value: The quality and usefulness of the data; only some of the raw data can be converted into valuable information.
  3. Variety: The range and diversity of data types, such as structured (row-column format), unstructured (images, videos), and semi-structured data (e.g., JSON, XML, YAML).
  4. Velocity: The speed at which data arrives and must be stored, handled, and managed to keep it flowing smoothly and continuously within a given period of time, such as an hour, a day, or a week.
  5. Veracity: The accuracy and trustworthiness of the data; truthful data gives us confidence in the results.

More on the Vs of Big Data here: https://www.irjet.net/archives/V4/i9/IRJET-V4I957.pdf

Data Storage

Let us first understand structured, unstructured, and semi-structured data.

Structured Data:


About 20 percent of data in the real world is structured. Structured data, as the name suggests, follows a structural format; for example, relational databases store data in a row-and-column model. This type of data is easy to organize and can be searched quickly when necessary. The rows and columns contain defined data types such as numbers and characters, and they are created and managed with SQL queries.

Unstructured Data:


This kind of data has no fixed structure and does not follow any model, so it cannot be stored in a row-and-column format. It includes images, videos, text, and sound, and is usually stored in data lakes, and sometimes in data warehouses or databases. AI can be used to help organize and find the required unstructured data.

Semi-structured Data:


What makes semi-structured data different from structured data is that, even though both kinds of data have a consistent model, semi-structured data has a less rigid format: different observations can have different numbers of values, and those values can be of different data types. Examples of semi-structured data are JSON, XML, and YAML, which are used in NoSQL databases.
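
For instance, two JSON documents in the same collection can carry different fields, as in this small illustration parsed with Python's standard json module:

```python
import json

# Same overall model, but different fields and numbers of values per record.
records = """
[
  {"name": "Ada", "languages": ["Python", "SQL"], "age": 28},
  {"name": "Alan", "languages": ["Scala"], "city": "London"}
]
"""

for person in json.loads(records):
    # .get() handles fields that only some documents contain.
    print(person["name"], person.get("age"), person.get("city"))
```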

What is SQL?

SQL (Structured Query Language) is used throughout the industry with relational database management systems (RDBMS). With SQL, data stored in row-and-column format can be accessed all at once or in groups, filtered, and aggregated. It is comparatively easy to learn because its syntax reads like English. Data engineers use SQL to create and manage databases.

E.g., SQLite, MySQL, PostgreSQL, SQL Server, MariaDB
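
A small sketch of creating, filtering, and aggregating a table with SQL, here run through Python's built-in sqlite3 module (the table and columns are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, country TEXT)")
cur.executemany(
    "INSERT INTO customers (name, country) VALUES (?, ?)",
    [("Ada", "UK"), ("Alan", "UK"), ("Grace", "US")],
)

# Filter and aggregate with plain SQL.
cur.execute("SELECT country, COUNT(*) FROM customers GROUP BY country")
print(cur.fetchall())   # [('UK', 2), ('US', 1)]
conn.close()
```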

What is NoSQL?

NoSQL stands for “Not Only SQL”, meaning more than SQL alone. NoSQL databases are non-relational and do not follow a strict row-column format, so they can store both structured and unstructured data. Data is often kept as key-value pairs or as documents rather than in traditional rows and columns; in document-oriented NoSQL databases, records are stored as document objects grouped into collections rather than tables.

E.g., Redis, MongoDB, Firebase, Cassandra, CouchDB
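
A minimal document-store sketch using the pymongo driver (this assumes a MongoDB server running locally; the database, collection, and field names are invented):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# Documents in the same collection can have different fields.
db.customers.insert_one({"name": "Ada", "country": "UK", "orders": 3})
db.customers.insert_one({"name": "Alan", "country": "UK"})

print(db.customers.find_one({"name": "Ada"}))
client.close()
```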


With the large amounts of data generated in real time every millisecond, we need physical storage as well as memory to support the application and manage its data for further analysis.

HDFS:

  • Distributed file system for large data sets.
  • A major component of Apache Hadoop
  • Scales Hadoop clusters into hundreds of nodes.
  • Follows the principle of “write once and read many times”

More on HDFS:

https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html

Apache Hive:

  • A warehouse system that is fault-tolerant and distributed.
  • Central store of information for making data-driven decisions.
  • Uses a language similar to SQL to allow users to read, write, and manage data in the form of queries.

More on Hive:

https://cwiki.apache.org/confluence/display/Hive/

Data Processing — Cleaning, manipulation, optimization

Data processing includes cleaning the data, that is, removing unnecessary data and keeping the useful data, as well as manipulating the data for optimization. Manipulation includes aggregation and join operations, and the data engineer needs to understand the abstractions behind them.

In short, processing is a technique that converts raw data into meaningful information. The following tools are commonly used by data engineers to process data.

Apache Spark

  • Data processing framework that unifies data and AI workloads.
  • Fast because it performs computation in memory, using a least recently used (LRU) policy to manage cached data.
  • Used for real-time and stream processing, feature selection, and building ML pipelines across programming languages like Python, Java, and Scala.
  • Performs transformations and actions at large scale for ML and graph algorithms.
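
A minimal PySpark sketch showing one transformation and one action (assumes pyspark is installed; the input file name is invented):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-demo").getOrCreate()

# Transformation: lazily define how to read and aggregate the data.
orders = spark.read.csv("orders.csv", header=True, inferSchema=True)
by_country = orders.groupBy("country").count()

# Action: trigger the computation and print the result.
by_country.show()
spark.stop()
```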

More on Spark:

https://spark.apache.org/docs/3.3.1/

Scheduling — Batch and streams

Scheduling is the process of planning data processing jobs to run at specific intervals of time. It accounts for the time taken to process and complete each task so that the next steps can proceed, and it resolves the dependency requirements between jobs.

Scheduling can be called the “glue” that holds the whole system together: it organizes and manages every job so they work together in harmony, gives the tasks a specific order, and resolves their running dependencies. Scheduling can be done manually, at a specific time, or with a sensor, where a task runs automatically once a specific condition is fulfilled. For example:

  1. Manual: Updating the database table manually.
  2. Specific time: Updating the database table at 7 pm.
  3. Sensor: Updating the database table if a new entry was added or removed.

Scheduling can run in batches as well as in streams. In batch processing, groups of records are handled together at intervals, which is cheaper; in stream processing, individual records are sent and handled immediately, in real time.

Popular Tools: Apache Airflow, Oozie, Luigi


Apache Airflow
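
These manual, time-based, and sensor-based schedules map naturally onto an Airflow DAG. A minimal sketch of the time-based case (assumes Airflow 2.4 or later; the task body is a placeholder):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def update_table():
    # Placeholder for the real update/load logic.
    print("table updated")

# Time-based schedule: run the update once a day.
with DAG(
    dag_id="daily_table_update",
    start_date=datetime(2023, 1, 1),
    schedule="@daily",   # older Airflow versions use schedule_interval instead
    catchup=False,
) as dag:
    PythonOperator(task_id="update_table", python_callable=update_table)
```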

Parallel Computing

Dealing with lots and lots of data every second consumes a great deal of memory and power. Parallel computing is the answer to this problem: as the name suggests, a large task is split into smaller subtasks that run individually and in parallel, distributed over different processors or computer systems, and their results are combined to finish the overall job. This pays off when the task is large and several processing units are available.
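
The same idea in miniature on a single machine, using Python's multiprocessing module (the work function is only a stand-in for real processing):

```python
from multiprocessing import Pool

def process_chunk(chunk: list[int]) -> int:
    # Stand-in for real work, e.g. cleaning or aggregating one partition of records.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # split the task into 4 subtasks

    with Pool(processes=4) as pool:
        partial_results = pool.map(process_chunk, chunks)   # run subtasks in parallel

    print(sum(partial_results))   # combine the partial results
```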

The most famous parallel computation framework is Apache Hadoop.

Apache Hadoop:

  • Performs computation on large data sets across clusters.
  • Used for batch processing.
  • Written in Java.
  • Handles failures in the application layer.

More on Hadoop:

https://hadoop.apache.org/

Cloud Computing and providers


The term “cloud” evokes data being stored somewhere “in the air”, like a cloud. Technically, cloud providers offer data storage and computing services over the internet with high reliability.

Physical servers, by contrast, must be bought and need a huge amount of space, along with electricity and maintenance costs. They must be powered constantly, whether or not there are tasks to run, so capacity can go to waste.

Servers in the cloud can be rented and require no space of our own. Many cloud providers follow a “pay-as-you-go” model, where you only pay for the resources you actually use. The cloud is also reliable for replicating data across databases.

Providers: AWS, Microsoft Azure, Google Cloud
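
As one concrete, hedged example, uploading a file to cloud object storage with AWS's boto3 SDK (assumes credentials are already configured; the bucket and key names are invented):

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file to an object storage bucket (pay-as-you-go storage).
s3.upload_file("clean_events.csv", "my-data-lake-bucket", "raw/clean_events.csv")

# List what is stored under the prefix to confirm the upload.
for obj in s3.list_objects_v2(Bucket="my-data-lake-bucket", Prefix="raw/")["Contents"]:
    print(obj["Key"], obj["Size"])
```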

With data at the center of our everyday business lives, its collection, organization, and management are just as important for making our lives more efficient. Data engineering is vital in today’s world for making predictions and forecasts, preventing and managing risk, developing new marketing and business strategies, and many more applications. Now you know what data engineering is and what skills and tools you need to become a data engineer.

FOLLOW US FOR MORE FUN-TO-LEARN DATA SCIENCE BLOGS AND ARTICLES: 💙

LINKEDIN: https://www.linkedin.com/company/dsmcs/

INSTAGRAM: https://www.instagram.com/datasciencemeetscybersecurity/?hl=en

GITHUB: https://github.com/Vidhi1290

TWITTER: https://twitter.com/VidhiWaghela

MEDIUM: https://medium.com/@datasciencemeetscybersecurity-

WEBSITE: https://www.datasciencemeetscybersecurity.com/

— TEAM DATA SCIENCE MEETS CYBER SECURITY 💙❤️


Published via Towards AI
