AI Architecture - Notions on Training and Inference

CPU · GPU · TPU · Edge Computing

The problem is not that AI is expensive. It’s that for years, people paid to train models as if that were the main cost — when the real cost, the one that never stops, is serving every response.

TL;DR

- Inference costs exceed training by 15x–20x over a model’s operational lifetime. Optimizing for training while ignoring inference is optimizing the wrong problem.
- CPU (Intel Xeon AMX): the correct choice when the model lives alongside the data. Network latency kills any compute gain from moving to a GPU cluster.
- NVIDIA GPU (Blackwell/Hopper + TensorRT-LLM): still the default for research and heterogeneous production. CUDA is a 20-year moat. Don’t lock in at peak prices.
- Google TPU v6/v7: the right answer for high-volume, predictable inference. Midjourney cut monthly costs from $2.1M to $700K. The CUDA migration barrier no longer exists.
- Edge AI: thermodynamics, not algorithms, sets the limits. Pi 5 + Hailo-10H delivers 320 ms TTFT (6.4× faster than CPU-only) with a PCIe x1 bottleneck you need to design around.

The right hardware is not the most powerful. It is the one that matches the problem topology to the silicon architecture without wasting energy or budget.

Introduction

In 2023, Nvidia published a post titled What Is AI Computing? focused on handling intensive computations — particularly useful for embedding design and optimization processes in Machine Learning — and advancing toward hardware acceleration to find patterns in immense amounts of data, thereby updating the assumptions of ML or AI models. All of this typically runs on GPUs. ...
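The 15x–20x inference-to-training ratio in the TL;DR can be checked with a back-of-the-envelope cost model. This is a minimal sketch; every number below (training cost, per-request serving cost, traffic, lifetime) is an illustrative assumption, not a figure from the post:

```python
def lifetime_inference_cost(cost_per_1k_requests: float,
                            requests_per_day: float,
                            lifetime_days: int) -> float:
    """Total serving cost accumulated over the model's operational lifetime."""
    return cost_per_1k_requests * (requests_per_day / 1000) * lifetime_days

# Hypothetical numbers, chosen only to make the ratio concrete:
training_cost = 500_000.0          # one-time training run, USD (assumed)
serving = lifetime_inference_cost(
    cost_per_1k_requests=0.50,     # assumed blended serving cost per 1k requests
    requests_per_day=20_000_000,   # assumed traffic
    lifetime_days=730,             # two years in production
)
ratio = serving / training_cost    # 7,300,000 / 500,000 = 14.6x
```

With these assumptions the lifetime serving bill lands near the 15x lower bound of the post's claim; at higher traffic or a longer lifetime, the ratio only grows, which is the point: training is a one-time cost, serving never stops.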

April 5, 2026 · Carlos Daniel Jiménez

Statistical Learning: Foundations, Bias-Variance and the Art of Estimation

Abstract

Imagine you are a hospital administrator deciding whether to deploy a machine learning model that predicts which ICU patients will deteriorate in the next six hours. The model was built by a talented team, trained on two years of electronic health records, and achieves 89% accuracy on a held-out test set. The question you actually need to answer is not in the model card: what does the model not know — and what can it not know, regardless of how much more data you feed it? ...

March 12, 2026 · Carlos Daniel Jiménez

Attention Windows: A Novel Framework for Measuring Narrative Cognitive Load in Beatles vs Pink Floyd

Abstract

This research introduces Attention Windows, a novel framework for measuring the cognitive span required by listeners to follow lyrical narratives. How long can a theme persist before the lyrics shift to something new? Building on previous semantic embedding analyses of the Beatles and Pink Floyd, we develop a multi-method approach to quantify this narrative architecture across two iconic albums: The Dark Side of the Moon and Abbey Road.

Core Finding (UNEXPECTED): The analysis reveals a systematic failure of distributional semantics to capture abstract thematic coherence in progressive rock. The Beatles exhibit significantly longer attention windows (μ = 0.57 lines, SD = 1.48) than Pink Floyd (μ = 0.25 lines, SD = 0.97) when measured with OpenAI’s text-embedding-ada-002 at its calibrated threshold (θ = 0.85). This counterintuitive result (p < 0.01, Cohen’s d = -0.24) exposes a fundamental limitation: transformer-based embeddings, trained on distributional statistics from web corpora, systematically privilege type-level lexical overlap (repeated tokens, n-grams) over token-level conceptual continuity (abstract themes expressed through synonymy, metaphor, and semantic field variation). The Beatles’ verse-chorus architecture creates high embedding similarity through verbatim repetition, while Pink Floyd’s through-composed approach—deploying varied metaphorical expressions of unified philosophical themes—produces orthogonal embedding vectors despite conceptual unity. This is not a quirk of ada-002 but a structural property of distributional semantics: co-occurrence statistics cannot distinguish “same theme, different words” from “different themes, same words.” ...
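The thresholded-similarity measurement the abstract describes can be sketched in a few lines. This is a minimal illustration using plain NumPy vectors in place of ada-002 embeddings; the exact window definition (runs of consecutive lines whose pairwise cosine similarity stays at or above θ) is an assumption here, not necessarily the paper's precise operationalization:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def attention_windows(line_embeddings: list, theta: float = 0.85) -> list:
    """Split a lyric into runs of consecutive lines whose pairwise
    similarity is >= theta; return the length of each run."""
    windows, run = [], 1
    for prev, curr in zip(line_embeddings, line_embeddings[1:]):
        if cosine(prev, curr) >= theta:
            run += 1          # the theme persists into this line
        else:
            windows.append(run)
            run = 1           # similarity dropped: a new window starts
    windows.append(run)
    return windows

# Toy 2-D vectors standing in for embeddings: two similar lines, then a shift.
lines = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
attention_windows(lines)  # → [2, 1] with these toy vectors
```

Under this definition, verbatim repetition (near-identical vectors) trivially extends a window, while synonymy and metaphor produce vectors below θ and cut the window short, which is exactly the bias toward lexical overlap the abstract diagnoses.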

February 10, 2026 · Carlos Daniel Jiménez

Anatomy of an MLOps Pipeline - Part 1: Pipeline and Orchestration

Complete MLOps Series: Part 1 (current) | Part 2: Deployment → | Part 3: Production →

Why This Post Is Not Another Scikit-Learn Tutorial

Most MLOps posts teach you how to train a Random Forest in a notebook and tell you “now put it in production.” This post assumes you already know how to train models. What you probably don’t know is how to build a system where: ...

January 13, 2026 · Carlos Daniel Jiménez

Anatomy of an MLOps Pipeline - Part 2: Deployment and Infrastructure

Complete MLOps Series: ← Part 1: Pipeline | Part 2 (current) | Part 3: Production →

8. CI/CD with GitHub Actions: The Philosophy of Automated MLOps

The Philosophical Foundation: Why Automation Isn’t Optional

Before diving into YAML, let’s address the fundamental question: why do we automate ML pipelines? The naive answer is “to save time.” The real answer is more profound: because human memory is unreliable, manual processes don’t scale, and production systems demand reproducibility. ...

January 13, 2026 · Carlos Daniel Jiménez

📬 Did this help?

I write about MLOps, Edge AI, and making models work outside the lab. One email per month, max. No spam, no course pitches, just technical content.