# Roadmap
Our product roadmap is where you can learn about what features we're working on and what's coming next. Have any questions or comments? Share your feedback via our Discord.
## Current Features
- Local model inference (llama.cpp embedded in Rust)
- No external model provider required
- Hardware acceleration (Metal GPU backend on macOS, OpenMP-based CPU parallelism on Linux)
- GGUF model support from Hugging Face
- Tools
  - Shell command execution
  - File read/write
  - File globbing
  - Web search (DuckDuckGo)
- Multi-turn chat with context management
- Model management (pull, list, remove)
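To make the tool list above concrete, here is a minimal sketch of what a shell-execution tool could look like in Rust. The function name, signature, and error handling are illustrative assumptions, not the project's actual API:

```rust
use std::process::Command;

// Hypothetical shell tool: runs the command the agent requests via `sh -c`
// and returns stdout on success or stderr on failure.
fn run_shell(command: &str) -> Result<String, String> {
    let output = Command::new("sh")
        .arg("-c")
        .arg(command)
        .output()
        .map_err(|e| e.to_string())?;
    if output.status.success() {
        Ok(String::from_utf8_lossy(&output.stdout).into_owned())
    } else {
        Err(String::from_utf8_lossy(&output.stderr).into_owned())
    }
}

fn main() {
    match run_shell("echo hello") {
        Ok(out) => print!("{out}"),
        Err(err) => eprint!("{err}"),
    }
}
```

In practice an agent would also want a timeout and possibly a sandbox around such a tool, since the model chooses the command to run.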
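"Context management" in multi-turn chat typically means keeping the conversation within the model's context window. A simple approach, sketched below with illustrative names and caller-supplied token counts (not the project's actual implementation), is to drop the oldest messages until the history fits a token budget:

```rust
// Illustrative message type; real token counts would come from the tokenizer.
#[derive(Clone, Debug, PartialEq)]
struct Message {
    role: &'static str,
    text: String,
    tokens: usize,
}

// Keep the most recent messages whose combined token count fits `budget`.
fn trim_to_budget(history: &[Message], budget: usize) -> Vec<Message> {
    let mut total = 0;
    let mut kept: Vec<Message> = history
        .iter()
        .rev() // walk newest-first so recent turns survive
        .take_while(|m| {
            total += m.tokens;
            total <= budget
        })
        .cloned()
        .collect();
    kept.reverse(); // restore chronological order
    kept
}

fn main() {
    let history = vec![
        Message { role: "user", text: "hi".into(), tokens: 10 },
        Message { role: "assistant", text: "hello".into(), tokens: 10 },
        Message { role: "user", text: "list *.rs".into(), tokens: 10 },
    ];
    let kept = trim_to_budget(&history, 15);
    println!("kept {} of {} messages", kept.len(), history.len()); // prints "kept 1 of 3 messages"
}
```

Real agents often refine this by pinning the system prompt and summarizing dropped turns instead of discarding them outright.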
## Coming Soon
- Voice interaction: Enable voice-based interaction with the agent
- Image/video generation: Enable the agent to generate images and videos
- Broader model support: Expand the list of supported GGUF models
- MCP support: Use Model Context Protocol for extensible tooling
- More tools: Additional built-in tools for the agent
- And more...