<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>inferops</title>
    <description>MLOps, LLMOps, and AI infrastructure — practical guides for ML engineers and platform teams.</description>
    <link>https://inferops.dev/</link>
    <item>
      <title>Running LLM Inference: How To Evaluate Self-Hosted vs. Cloud in a Regulated Environment</title>
      <link>https://inferops.dev/posts/self-hosted-vs-cloud-llm-inference/</link>
      <guid isPermaLink="true">https://inferops.dev/posts/self-hosted-vs-cloud-llm-inference/</guid>
      <description>A real-world evaluation of self-hosted vs. cloud LLM inference in a regulated enterprise environment — covering total cost of ownership, model selection, regulatory compliance, operational readiness, and the MLOps discipline that makes it work.</description>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>