Familiar lives on your machine, trains custom models on real code from merged PRs and open-source datasets, and works autonomously while you sleep. Local-first, always learning.
Your data never leaves your machine. Your models improve for you alone.
macOS · Linux · Requires git
PaymentService.ts. Two concurrent processCheckout() calls could debit the same balance. Added a mutex lock around the critical section. Tests pass. (session logged to Forge)
Familiar gives you a local AI that actually improves. Install it, point it at your repos, and it starts building models trained on real code — merged PRs, open-source datasets, and ground-truth diffs.
One command. The setup wizard walks you through everything.
Forge mines real merged code from GitHub and open-source datasets. No cloud model outputs used.
LoRA fine-tuning on your GPU. Models improve based on real code patterns.
Cloud AI tools ship a frozen model. You use it, they collect your data, the model never changes for you. Familiar is different.
The Forge pipeline mines ground-truth code from merged PRs and open-source datasets, fine-tunes local models, and deploys them — automatically. The more you train, the more it understands your domain and coding patterns. Your data stays on your machine. Your models improve for you alone.
Feature-by-feature breakdown against alternatives. No spin — just what each tool can and can't do.
| Feature | Familiar | OpenClaw | Cloud AI |
|---|---|---|---|
| Self-training models | Forge pipeline | — | — |
| Autonomy modes | off/ask/on/full | off/ask/on/full | — |
| Multi-model routing | Claude/Gemini/Ollama | Single model | Single model |
| Desktop control | 36 MCP tools | Limited | — |
| Channels | CLI, Telegram, iMessage, WhatsApp, Voice, Mobile | CLI + Web | Web only |
| Plugin marketplace | With security scanning | No scanning | — |
| Sandboxed execution | Docker + allowlist + audit | Docker | N/A |
| Team/multi-user | Role-based (4 tiers) | Shared workspaces | Yes |
| Autonomous tasks | 17 scheduled hands | Basic agents | — |
| Knowledge graph | Graph-boosted RAG | — | — |
| Daily learning cycle | 5-step nightly | — | — |
| Data privacy | 100% local | Local | Cloud |
Shipped this week. Available now via `brew upgrade familiar`.
React Native app for iOS and Android. Chat, status dashboard, team management, and autonomy control. Connects to your gateway over WebSocket.
Four graduated levels: off, ask, on, full. 25 tools classified into safe/moderate/dangerous tiers. Set via Telegram, mobile, or gateway API.
Install hands and skills from the community registry. Every package is security-scanned for malware, injection, and token exfiltration before install.
Add team members with role-based access: owner, admin, member, viewer. Per-user tokens, tool restrictions, and team management via CLI or mobile.
Run untrusted commands in isolated Docker containers with memory limits, no network, and read-only filesystems. Full audit logging to JSONL.
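The isolation described here maps onto standard Docker flags. A minimal sketch of the same idea (not Familiar's exact invocation; the image, limits, and command are placeholders):

```shell
# Lock down an untrusted command:
#   --network=none   no network access
#   --read-only      read-only root filesystem
#   --memory=512m    hard memory cap
#   --cap-drop=ALL   drop all Linux capabilities
docker run --rm \
  --network=none \
  --read-only \
  --memory=512m \
  --cap-drop=ALL \
  alpine:3 sh -c 'echo "running sandboxed as uid $(id -u)"'
```

Familiar layers an allowlist and JSONL audit logging on top of this kind of container isolation.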
11-section diagnostic: ports, tokens, bridges, brain health, security audit, config migration. Auto-fix with `--fix`.
A self-improving system that runs entirely on your machine. Cloud APIs for heavy lifting, local models for everything else.
Always-on AI companion. Remembers everything, learns from every session, develops knowledge you didn't give it.
Configurable cloud model for heavy tasks. Reads your files, writes code, runs tests, iterates. Falls back to local models if the cloud is unavailable.
Local language models that run on your hardware. Handles fast chat, routing, classification, and embeddings with zero cloud dependency. Works offline.
Rust daemon exposes 36+ MCP tools: memory, filesystem, code analysis, HTTP, and more. Connect via CLI, Telegram, Slack, or any MCP-compatible client.
All local models run via Ollama. Cloud APIs used only for agentic coding and research phases — never as training targets.
One install command. A personal AI that grows with you.
Cloud model reads your files, writes code, runs tests, and iterates until it's right. Configurable provider. Falls back to local models automatically.
Forge mines real merged code from GitHub and open-source datasets, then fine-tunes local models on your GPU (or CPU). Each version gets smarter based on real code patterns.
SQLite knowledge base with semantic embeddings, knowledge graph, and CRAAP source evaluation. Familiar builds context from your code, docs, and conversations.
Scheduled tasks that run while you sleep. The researcher mines knowledge, the trainer builds models, the coder writes and tests code. Cron-driven, event-triggered, multi-phase.
Rust daemon with memory search, filesystem operations, code analysis, HTTP requests, system management, and more. Any MCP client connects to the full brain.
Local models run on your machine via Ollama. Training happens locally. Your data stays on your network. Cloud APIs are used only when you opt into agentic mode.
This is what makes Familiar alive. Ground-truth code feeds the pipeline. Your models develop understanding from real, reviewed code.
Runs nightly at 2 AM, or on demand: `familiar forge status`, `familiar forge train`, `familiar forge eval`
One command installs Familiar, pulls Ollama models, and wires everything together. The setup wizard handles configuration.
```shell
brew tap engindearing-projects/tap && brew install familiar
```
Run `familiar init` to configure your AI, API keys, channels, and services.

```shell
familiar init
```
Just type `familiar`. It's already connected to your brain, knowledge base, and models. It starts learning from day one.

```shell
familiar
```
The Forge pipeline produces familiar-brain — a fine-tuned coding and reasoning model trained on real merged code and open-source datasets. Use it standalone, no Familiar install required.
Pull and run locally. Needs ~12GB RAM.
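Assuming the model is published under the `familiar-brain` tag (an assumption based on this page; check the project docs for the exact registry name), pulling and running it via Ollama looks like:

```shell
# Model name taken from this page; substitute the published tag if it differs.
ollama pull familiar-brain
ollama run familiar-brain "Review this diff for race conditions"
```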
OpenAI-compatible endpoint. Works with any SDK.
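Ollama serves an OpenAI-compatible API at `http://localhost:11434/v1`, so any OpenAI SDK or plain curl works against it. A sketch, with the model name assumed from this page:

```shell
# Chat completion against the local OpenAI-compatible endpoint.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "familiar-brain",
    "messages": [
      {"role": "user", "content": "Write a commit message for a mutex fix"}
    ]
  }'
```

Point any OpenAI SDK at the same base URL (the API key can be any non-empty string for local Ollama).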
Drop this into `.opencode.json` in any project:
GGUF weights also available on HuggingFace for llama.cpp and other runtimes.
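With the GGUF weights downloaded, a llama.cpp invocation looks roughly like this (the binary is `llama-cli` in recent llama.cpp builds; the filename and quantization level are assumptions, not the published artifact names):

```shell
# Filename and quant level are examples -- use the file you downloaded from HuggingFace.
llama-cli -m familiar-brain-Q4_K_M.gguf \
  -p "Explain what a mutex protects." \
  -n 128
```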
Built from scratch. No LangChain, no AutoGen, no third-party agent frameworks. The training pipeline, RAG system, autonomous hands, skill modules, MCP bridge, and model serving layer are all original code by engindearing. We use Ollama for local inference and configurable cloud providers for heavy tasks. We focus on making your AI smarter.