Did Docker’s Model Runner Just DESTROY Ollama?



Date: 04/28/2025

Watch the Video

Okay, this video is seriously worth a look if you’re like me and trying to weave AI deeper into your development workflow. It basically pits Docker against Ollama for running local LLMs, and the results are pretty interesting. They demo a Node app hitting a local LLM (SmolLM2, specifically) running inside a Docker container and show off Docker’s new AI features like the Gordon AI agent.
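The Node-app demo boils down to calling an OpenAI-compatible chat endpoint that Docker Model Runner exposes locally. Here’s a minimal sketch of what that looks like — the base URL (`http://localhost:12434/engines/v1`) and the model tag (`ai/smollm2`) are assumptions on my part; check `docker model ls` for the exact names on your setup:

```javascript
// Sketch: call a local model through Docker Model Runner's
// OpenAI-compatible API. Assumed endpoint and model tag — verify
// against your own `docker model ls` output.
const BASE_URL = "http://localhost:12434/engines/v1";

// Build an OpenAI-style chat-completion request body.
function buildChatRequest(model, userPrompt) {
  return {
    model,
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: userPrompt },
    ],
  };
}

// Send the request using Node's built-in fetch (Node 18+).
async function chat(prompt) {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest("ai/smollm2", prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// chat("Summarize Docker Model Runner in one sentence.").then(console.log);
```

Because the API shape is OpenAI-compatible, the official `openai` npm client should also work by just pointing its `baseURL` at the local endpoint.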

What’s super relevant is the Gordon AI agent’s MCP (Model Context Protocol) support. MCP gives an agent a standard way to call external tools and data sources, each of which typically runs as its own server. Think about it: deploying and managing a handful of these servers by hand can be a real headache. This video shows how Docker Compose makes it relatively painless to spin up MCP servers, something that could simplify a lot of the AI-powered features we’re trying to bake into our applications.
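For a concrete picture, the Compose-based setup is roughly a file listing each MCP server as a service. This is a sketch, assuming Docker’s published MCP server images (the `mcp/` namespace on Docker Hub, e.g. `mcp/time` and `mcp/fetch`) and the `gordon-mcp.yml` filename Gordon looks for — adjust both to what’s actually in the catalog:

```yaml
# gordon-mcp.yml — sketch of a Compose file declaring MCP servers
# for Gordon to use. Image names are assumptions; browse the MCP
# catalog on Docker Hub for real ones.
services:
  time:
    image: mcp/time   # gives the agent a "what time is it" tool
  fetch:
    image: mcp/fetch  # lets the agent fetch web pages
```

With that file in the working directory, each tool runs isolated in its own container, which is exactly the consistency/portability argument the video makes.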

Honestly, I’m digging the idea of using Docker to manage my local AI models. Containerizing everything just makes sense for consistency and portability. It’s a compelling alternative to Ollama, especially if you’re already heavily invested in the Docker ecosystem. I’m definitely going to play around with the Docker Model Runner and Gordon to see if it streamlines my local LLM experiments and how well it plays with my existing Laravel projects. The ability to version control and easily share these AI-powered environments with the team is a HUGE win.
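If you want to kick the tires the way I’m planning to, the Model Runner CLI is only a few commands. A sketch, assuming the `ai/smollm2` tag from the video (run `docker model ls` to see what you actually have):

```shell
# Pull a model image, chat with it once, then list what's installed.
docker model pull ai/smollm2
docker model run ai/smollm2 "Say hello in one sentence."
docker model ls
```

Model images version and distribute through a registry just like container images, which is where the “version control and share with the team” win comes from.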