Experiment Tracking with MLflow: From Hyperparameters to Production

This video series takes you from ML fundamentals to production-ready workflows using MLflow. You’ll learn how supervised/unsupervised learning maps to real tasks, why hyperparameters (learning rate, depth, regularization) make or break performance, and how to avoid “parameter chaos.” Then we introduce MLOps—versioning, automation, and reproducibility—and show how MLflow becomes the backbone for tracking experiments, comparing runs, registering models, and promoting versions to production. In guided demos, you’ll set up MLflow, run Scikit-learn experiments (Wine Quality dataset), log metrics/params/artifacts, and use the Model Registry with aliases for clean inference. We close with an applied Q&A on predicting football scores, covering data needs, model selection, and finding datasets (e.g., Kaggle). By the end, you’ll have a practical, repeatable workflow for training, comparing, and shipping models with confidence.

Welcome to the “Experiment Tracking with MLflow” Series, led by Victor Franco Matzkin, Machine Learning Engineer at Azumo

This series introduces one of the most overlooked but essential aspects of the machine learning lifecycle: experiment tracking. From understanding hyperparameters and model configuration to organizing, comparing, and deploying models, Victor demonstrates how to bring structure and scalability to the model development process using MLflow.

Whether you’re an aspiring data scientist, an ML engineer looking to automate experimentation, or a manager seeking visibility into model performance, this course offers a practical, hands-on approach to modern MLOps workflows.

Machine Learning: How to Get Started in Less than 2 Minutes!

Victor starts by breaking down what machine learning actually is — a branch of AI that teaches computers to learn from data through supervised, unsupervised, and reinforcement paradigms.

He explains how training a model means tuning thousands of learned parameters, and why getting the configuration choices you make up front (the hyperparameters) right is crucial for accurate predictions. This quick overview lays the foundation for understanding why experiment tracking matters: every “good” model depends on reproducible configurations and controlled iterations.

Experiment Chaos: Finding the Right Hyperparameters

Hyperparameters control how a model learns — from the number of layers in a neural network to the learning rate and regularization strength.

In this video, Victor shows how experimenting with these settings can drastically change outcomes. Set the learning rate too high and training can diverge; set it too low and training crawls. He illustrates this challenge through visual examples of overfitting, underfitting, and convergence curves: the “parameter mess” that every ML practitioner faces.

The session closes with an introduction to how MLflow helps turn this chaos into organized experimentation.
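The overfitting/underfitting trade-off Victor visualizes is easy to reproduce yourself. Here is a minimal sketch (not from the series itself) that sweeps one hyperparameter, `max_depth` of a decision tree, on scikit-learn's bundled wine dataset and prints train vs. test accuracy; the dataset and parameter values are illustrative choices only:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Sweep a single hyperparameter and watch train vs. test accuracy diverge.
results = {}
for depth in (1, 3, 5, None):  # None = grow the tree without a depth limit
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    results[depth] = (tree.score(X_train, y_train), tree.score(X_test, y_test))
    print(f"max_depth={depth}: train={results[depth][0]:.2f}, test={results[depth][1]:.2f}")
```

A shallow tree underfits (low accuracy everywhere), while an unlimited tree memorizes the training set; the gap between the two scores is exactly the kind of signal you want recorded for every run.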

MLOps Demystified: Unlock Model Magic with MLflow!

Machine learning doesn’t exist in isolation — it’s part of a larger system involving data pipelines, infrastructure, and deployment.

Victor introduces MLOps as the convergence of DevOps, data engineering, and machine learning, highlighting how it shortens the development lifecycle and ensures reproducible, high-quality models.

He then connects theory to practice, showing how MLflow fits into the MLOps stack as a foundational tool for tracking experiments, managing versions, and integrating seamlessly with orchestrators like Airflow.

You’ll come away understanding how MLOps turns experimentation into production-ready reliability.

How to Set Up and Track Experiments in MLflow

Time to get hands-on.

In this section, Victor demonstrates how to set up MLflow and start tracking experiments using the Wine Quality dataset. He walks through creating a virtual environment, installing MLflow, and configuring experiments in Scikit-learn.

You’ll learn how MLflow logs every run — parameters, metrics, and configurations — and how to compare them in its clean, intuitive interface.

This step transforms disorganized notes and Jupyter chaos into a transparent, shareable record of every experiment.

Tuning Hyperparameters and Managing Model Experiments

Now that the experiments are running, Victor explains how to take them to the next level.

You’ll explore how MLflow allows you to compare runs, register models, and manage versions — turning experimental results into deployable assets.

Through live examples, he shows how to use the Model Registry to track different versions, assign aliases like “production” or “staging,” and retrieve specific models for inference.

This part bridges the gap between model experimentation and real-world MLOps, showing how version control empowers teams to collaborate and scale efficiently.

Your First ML Project: Predicting Football Scores

In this interactive Q&A session, Victor answers a common beginner’s question: How much data do I need to start an ML project?

Using the example of predicting football scores, he walks through how to collect, structure, and prepare data, and how to choose simple models for experimentation.

He introduces Kaggle as a resource for open datasets and explains why even small, personal projects can teach you valuable lessons about feature engineering, evaluation, and iteration.
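To make the "structure and prepare data" step concrete, here is a tiny illustrative sketch (not from the session) of turning raw match results into one candidate feature; the team names, columns, and feature choice are all hypothetical:

```python
import pandas as pd

# Hypothetical match results: one row per game.
matches = pd.DataFrame({
    "home_team":  ["A", "B", "A", "C"],
    "away_team":  ["B", "C", "C", "A"],
    "home_goals": [2, 1, 0, 3],
    "away_goals": [1, 1, 2, 0],
})

# One simple feature: each team's average goals scored at home. A real
# pipeline would compute this only from games played *before* each match,
# to avoid leaking future information into the features.
home_avg = matches.groupby("home_team")["home_goals"].mean().rename("home_avg_goals")
features = matches.merge(home_avg, left_on="home_team", right_index=True)
print(features)
```

Even a toy table like this forces the questions that matter in a real project: what is a row, what is the label, and which features are legitimately known at prediction time.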

It’s a relatable, motivating conclusion that connects technical learning with real-world creativity.

To Sum Up

By the end of this series, you’ll have learned how to go from an idea to a fully traceable, reproducible machine learning workflow.

You’ll understand the difference between hyperparameters and learned parameters, how MLOps principles bring order to experimentation, and how MLflow serves as your single source of truth for every model you build.

From tuning and training to tracking and deploying, you’ll gain the confidence to manage your ML projects like a pro — structured, scalable, and production-ready.

About the Author:

Franco Matzkin

Machine Learning Engineer | PhD Researcher | Expert in Deep Learning & Computer Vision | Scalable AI Solutions

Franco Matzkin, Machine Learning Engineer at Azumo, specializes in deep learning, computer vision, and AI solutions for the energy sector, driving smarter decisions.