AI Architecture - Notions on Training and Inference

CPU · GPU · TPU · Edge Computing

The problem is not that AI is expensive. It's that for years, people paid to train models as if that were the main cost, when the real cost, the one that never stops, is serving every response.

TL;DR

- Inference costs exceed training costs by 15x–20x over a model's operational lifetime. Optimizing for training while ignoring inference is optimizing the wrong problem.
- CPU (Intel Xeon with AMX): the correct choice when the model lives alongside the data. Network latency kills any compute gain from moving to a GPU cluster.
- NVIDIA GPU (Blackwell/Hopper + TensorRT-LLM): still the default for research and heterogeneous production. CUDA is a 20-year moat. Don't lock in at peak prices.
- Google TPU v6/v7: the right answer for high-volume, predictable inference. Midjourney cut monthly costs from $2.1M to $700K. The CUDA migration barrier no longer exists.
- Edge AI: thermodynamics, not algorithms, sets the limits. A Pi 5 + Hailo-10H delivers a 320 ms time-to-first-token (6.4× faster than CPU-only), with a PCIe x1 bottleneck you need to design around.

The right hardware is not the most powerful. It is the one that matches the problem topology to the silicon architecture without wasting energy or budget.

Introduction

In 2023, NVIDIA published a post titled "What Is AI Computing?" It focused on handling computationally intensive workloads, which is particularly useful for embedding design and optimization processes in Machine Learning, and on moving toward hardware acceleration to find patterns in immense amounts of data, thereby updating the assumptions of ML or AI models. All of this typically runs on GPUs. ...
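The TL;DR's first claim, that inference outweighs training 15x–20x, is just arithmetic over the model's serving lifetime. A minimal sketch of that cost model, using entirely hypothetical dollar figures (none of them are from the article):

```python
# Sketch: why inference dominates total cost over a model's lifetime.
# All dollar figures below are hypothetical placeholders.

def lifetime_cost(train_cost: float,
                  monthly_inference_cost: float,
                  lifetime_months: int) -> tuple[float, float]:
    """Return (total inference spend, inference-to-training ratio)."""
    total_inference = monthly_inference_cost * lifetime_months
    return total_inference, total_inference / train_cost

# Hypothetical example: a one-time $1M training run, served for
# 36 months at $500K/month of inference spend.
total_inf, ratio = lifetime_cost(1_000_000, 500_000, 36)
print(f"Inference total: ${total_inf:,.0f}, ratio: {ratio:.0f}x")
# With these placeholder numbers the ratio comes out to 18x,
# inside the 15x-20x range the article cites.
```

The point of the exercise: training is a one-time expense, while inference is a recurring one, so any serving lifetime measured in years makes the per-request cost the term worth optimizing.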

April 5, 2026 · Carlos Daniel Jiménez

📬 Did this help?

I write about MLOps, Edge AI, and making models work outside the lab. One email per month, max. No spam, no course pitches, just technical content.