Ep 10 April 21, 2026

Run Gemma 4 with Ollama locally, and keep the Aspire LLM Insights (sparkles and all)

Can't use Microsoft Foundry because of compliance, or because your Azure bill doubles during an AI development spike, but still want the best AI debugging experience in Aspire? Here's how to keep the full GenAI chat-log sparkles while running Ollama and Gemma 4 locally.
