

Pipeline
ETL flow
Pre-processing

In this article, I walk through the steps involved in building a pipeline and link to a site that explains the ETL process.


Overview

Pipeline for data wrangling

A machine learning pipeline is a structuring tool that automates and simplifies the data-processing and modelling workflow. It consists of an ordered sequence of steps, where each step applies a specific transformation (pre-processing, feature selection, etc.) or a learning model. The main objective of a pipeline is to guarantee consistent treatment of the data, from preparation through training to prediction, while minimising the risk of manual errors. By encapsulating all the stages in a single entity, a pipeline promotes reproducibility, improves code readability and makes it possible to cross-validate the complete workflow. It is therefore crucial for developing robust and maintainable machine learning models, especially in environments where data preparation and model training demand a high degree of consistency and rigour.
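As a minimal sketch of this idea, the example below chains scaling, feature selection and a classifier with scikit-learn's Pipeline, then cross-validates the whole thing as a single estimator. The dataset and the individual steps are illustrative assumptions on my part, not taken from the linked article.

```python
# A minimal sketch of an ML pipeline with scikit-learn; the dataset and
# step choices are illustrative assumptions, not the article's own setup.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Each named step runs in order: scaling, feature selection, then the model.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(score_func=f_classif, k=10)),
    ("model", LogisticRegression(max_iter=1000)),
])

# Cross-validating the pipeline as one estimator re-fits every step inside
# each fold, so no preprocessing statistics leak across train/test splits.
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```

Because the pipeline is fitted as a single object inside each fold, the scaler and the feature selector are learned only on the training split, which is exactly the consistency and leakage-avoidance benefit described above.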


Link: Pipeline for data preprocessing

Link to the article on the ETL process: Neptune.