hwdsl2/docker-ai-stack: Deploy a complete, self-hosted AI stack on your own server with one command. Includes Ollama (LLM), LiteLLM (AI gateway), Whisper (STT), Kokoro (TTS), Embeddings (RAG), and MCP Gateway. Most services run locally; LiteLLM optionally routes to external providers. Supports NVIDIA GPU (CUDA) acceleration.
DuckTapeKiller/horme: Minimalist, privacy-first AI assistant for Obsidian. Powered by local LLMs by default, requiring no API keys and ensuring zero data leaks. Includes an optimised Vault Brain (RAG) with character-offset indexing and automated privacy firewalls to secure data if optional cloud models are enabled. 🗂️Obsidian
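The first entry's gateway pattern means clients talk to one OpenAI-compatible endpoint regardless of which backend serves the model. As a minimal sketch, a request body for LiteLLM's `/chat/completions` route might be built like this; the gateway URL, port (4000 is LiteLLM's default), and the `ollama/llama3` model name are assumptions for illustration, not taken from the repo:

```python
import json

# Hypothetical local gateway address; adjust to your deployment.
GATEWAY_URL = "http://localhost:4000/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> str:
    """Serialize an OpenAI-format chat completion request body.

    LiteLLM routes on the model string, so a prefix like "ollama/"
    sends the request to the local Ollama backend.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

payload = build_chat_request("ollama/llama3", "Hello from the stack")
print(payload)
```

Because every service sits behind the same API shape, swapping a local model for an external provider is a change to the `model` string, not to client code.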