LLMs (5)
- Building a Local RAG Pipeline for the Hollow Knight Wiki with Crawl4AI, Supabase and Ollama
- Dockerizing a RAG Application with FastAPI, LlamaIndex, Qdrant and Ollama
- Model Context Protocol - Let's build an MCP server in Python
- Building a Local RAG API with LlamaIndex, Qdrant, Ollama and FastAPI
- Use custom LLMs from Hugging Face locally with Ollama