Anatomy of an MLOps Pipeline - Part 2: Deployment and Infrastructure

8. CI/CD with GitHub Actions: The Philosophy of Automated MLOps. The Philosophical Foundation: Why Automation Isn’t Optional. Before diving into YAML, let’s address the fundamental question: why do we automate ML pipelines? The naive answer is “to save time.” The real answer is more profound: because human memory is unreliable, manual processes don’t scale, and production systems demand reproducibility. ...

January 13, 2026 · Carlos Daniel Jiménez

MLOps on the Raspberry Pi 5

One of the tools I use most for practicing MLOps, both for designing pipelines and for building inference APIs, is the Raspberry Pi. Today I spent several hours trying to install Visual Studio Code to complement my iPad Pro as a development tool. Why this setup? 🤔

- Improving my programming skills: I am a big fan of using Weights & Biases (W&B) to monitor the resource usage of each service I create (a minimal sketch follows this list).
- Using the Raspberry Pi as a server lets me test edge-computing deployments.
- For scalable prototyping, it is a great way to exercise artifacts and the model lifecycle.
- When tuning a model's hyperparameters, it helps me run grid search or Bayesian optimization efficiently.
- Running MLflow at the edge streamlines the model registry and model updates.
- Using Docker and Kubernetes helps me keep the code clean before committing changes.

There are many more reasons, but these are the main ones. Now, how do you set up a Raspberry Pi to unlock its full power for MLOps? ...
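As a rough illustration of the W&B resource monitoring mentioned above, here is a minimal Python sketch that periodically logs the Pi's CPU, memory, disk, and temperature readings to a Weights & Biases run. It is a sketch under my own assumptions, not code from the post: the project name, the sampling interval, and the use of psutil (including the `cpu_thermal` sensor key, which depends on the OS image) are placeholders.

```python
# Minimal sketch: log Raspberry Pi resource usage to Weights & Biases.
# Assumptions: `wandb` and `psutil` are installed and `wandb login` has been run;
# the project name and sampling interval are placeholders.
import time

import psutil
import wandb


def read_cpu_temp_c():
    """Return CPU temperature in Celsius, or None if the sensor key differs."""
    temps = psutil.sensors_temperatures()
    # On Raspberry Pi OS the sensor is usually exposed as "cpu_thermal".
    readings = temps.get("cpu_thermal") or temps.get("cpu-thermal")
    return readings[0].current if readings else None


def monitor(interval_s: int = 30):
    run = wandb.init(project="raspberry-pi-monitoring", job_type="edge-monitor")
    try:
        while True:
            metrics = {
                "cpu_percent": psutil.cpu_percent(interval=1),
                "memory_percent": psutil.virtual_memory().percent,
                "disk_percent": psutil.disk_usage("/").percent,
            }
            temp = read_cpu_temp_c()
            if temp is not None:
                metrics["cpu_temp_c"] = temp
            run.log(metrics)
            time.sleep(interval_s)
    finally:
        run.finish()


if __name__ == "__main__":
    monitor()
```

Run something like this as a background process (for example under systemd) next to whatever API or pipeline the Pi is serving, and the resource curves appear in the W&B dashboard alongside your experiment runs.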

February 23, 2025 · Carlos Daniel Jiménez
