AI & Intelligence Tools
Documentation Navigation
Navigate the documentation:
- Quick Start - Get started in 60 seconds
- CLI Reference - Complete command reference for all binaries
- AI Command Guide - Natural language command workflows
Overview
KAICS (Kodachi AI Command Intelligence System) is a suite of AI binaries plus the ai-gateway policy/execution layer. Together they provide natural language command execution, machine learning model management, automated system monitoring, trusted agent integration, and safe command orchestration. All core inference and policy evaluation run locally.
The AI suite transforms Kodachi from a manual security toolkit into an intelligent, self-improving system that learns from user behavior and proactively maintains security posture.
Core Architecture Principles
- Privacy-First AI: All inference runs locally, no data leaves the system
- Tiered Intelligence: TF-IDF → ONNX semantic → Mistral.rs (local GGUF) → GenAI/Ollama → Legacy LLM → Claude CLI
- Zero-Config Start: ai-cmd works immediately with built-in TF-IDF — no training needed
- Automated Improvement: ai-learner and ai-trainer continuously improve accuracy
Binary Categories
| Binary | Primary Function | Type | Raw Command Reference |
|---|---|---|---|
| ai-cmd | Natural language CLI for Kodachi commands | On-demand | Commands |
| ai-gateway | Unified AI/agent command gateway, policy firewall, and safe executor | On-demand | Commands |
| ai-trainer | ML model training and validation | On-demand | Commands |
| ai-learner | Learning orchestration and analysis | On-demand | Commands |
| ai-admin | Database management and diagnostics | On-demand | Commands |
| ai-discovery | Binary watcher and auto-indexer daemon | Daemon | Commands |
| ai-scheduler | Cron-based task scheduler | Daemon | Commands |
| ai-monitor | Proactive system monitoring daemon | Daemon | Commands |
Independent Runtime (not a KAICS sub-binary):
| Binary | Primary Function | Type | Raw Command Reference |
|---|---|---|---|
| kodachi-claw | Anonymous autonomous AI agent runtime with embedded Tor | On-demand / Daemon | Commands |
Command Reference
For complete CLI flags, options, and raw command syntax, see each binary's command reference page. This section focuses on workflows and scenarios showing how binaries work together.
Gateway Layer (ai-gateway)
ai-gateway is the machine-facing command API used by agents and automation. It provides:
- unified service discovery (list, search, help)
- machine invocation hints in search results (invocation.service, invocation.command)
- safe execution with JSON arguments (run --args-json)
- strict risk controls for dangerous commands
- per-agent capabilities and trusted batch mode
Verified Agent IDs
anonymous, kodachi-claw, nullclaw, openclaw, picoclaw, nanoclaw, claude-code, gpt, gemini, open-interpreter
Agent ID Aliases
zeroclaw is accepted as an alias and normalizes to kodachi-claw internally. Both names work in --agent-id flags.
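The alias handling can be sketched as a simple lookup before validation. The `zeroclaw → kodachi-claw` mapping and the verified-agent list come from this page; the table shape and function name are illustrative, not the gateway's actual implementation.

```python
# Minimal sketch of agent-ID normalization; the alias table shape and
# function name are illustrative, only the mappings are documented.
AGENT_ALIASES = {"zeroclaw": "kodachi-claw"}

VERIFIED_AGENTS = {
    "anonymous", "kodachi-claw", "nullclaw", "openclaw", "picoclaw",
    "nanoclaw", "claude-code", "gpt", "gemini", "open-interpreter",
}

def normalize_agent_id(agent_id: str) -> str:
    """Resolve aliases, then validate against the verified-agent set."""
    canonical = AGENT_ALIASES.get(agent_id.lower(), agent_id.lower())
    if canonical not in VERIFIED_AGENTS:
        raise ValueError(f"unknown agent id: {agent_id}")
    return canonical
```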
Verified command patterns
# A) Machine invocation field from search
ai-gateway search "tor status" --limit 1 --json | jq '.data.results[0].invocation'
# B) JSON args for deterministic agent invocation
ai-gateway run ip-fetch --command fetch --args-json '{}' --dry-run --json
# C) Dangerous commands: dry-run is allowed, live execution needs explicit approval
ai-gateway run health-control --command wipe-logs --dry-run --json
ai-gateway run health-control --command wipe-logs --json
# D) Capability hints for planning engines
ai-gateway help tor-switch tor-status --json | jq '.data.needs_root,.data.offline_safe,.data.network_touching,.data.creates_files'
Trusted Batch Mode
KODACHI_TRUSTED_BATCH_MODE=true \
KODACHI_AGENT_TOKEN_ZEROCLAW=your_token \
ai-gateway run --agent-id zeroclaw --agent-token your_token \
--batch-json '[{"service":"tor-switch","command":"tor-status","dry_run":true}]' --json
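A planning engine can turn a search hit directly into a deterministic `run` call. The sketch below assumes the response shape implied by the jq example above (`.data.results[0].invocation` with `service` and `command` fields); the sample payload is illustrative, not actual gateway output.

```python
import json
import shlex

# Illustrative sample mimicking the documented search-response path
# (.data.results[0].invocation); not real ai-gateway output.
sample = json.dumps({
    "data": {"results": [
        {"invocation": {"service": "tor-switch", "command": "tor-status"}}
    ]}
})

def build_run_command(search_json: str) -> str:
    """Turn the first search hit into a deterministic `ai-gateway run` call."""
    hit = json.loads(search_json)["data"]["results"][0]["invocation"]
    return (f"ai-gateway run {shlex.quote(hit['service'])} "
            f"--command {shlex.quote(hit['command'])} --args-json '{{}}' --json")

print(build_run_command(sample))
# → ai-gateway run tor-switch --command tor-status --args-json '{}' --json
```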
Kodachi Claw — Anonymous AI Agent Runtime
Get the power of ZeroClaw with full anonymity. kodachi-claw hides your AI agent inside the Tor network — every API call, every model request, every channel message is routed through embedded Tor circuits. Your agent cannot be tracked, fingerprinted, or traced back to you.
ZeroClaw is an ultra-lightweight Rust AI agent runtime — the fastest and smallest autonomous assistant. kodachi-claw extends that engine with Kodachi's anonymity stack.
The Claw Family:
| Project | Language | Description | GitHub |
|---|---|---|---|
| OpenClaw | Node.js | The original — personal AI assistant on any platform | openclaw/openclaw |
| ZeroClaw | Rust | Ultra-lightweight fork — 99% less memory, runs on $10 hardware | zeroclaw-labs/zeroclaw |
| NullClaw | Zig | Fastest & smallest — 678KB binary, <2ms startup | nullclaw/nullclaw |
| PicoClaw | Go | Tiny personal agent — <10MB RAM | sipeed/picoclaw |
| IronClaw | Rust | Privacy-focused — WASM-sandboxed tools | nearai/ironclaw |
| NanoClaw | TypeScript | Lightweight container-secured agent — Claude Agent SDK, WhatsApp, scheduled jobs | qwibitai/nanoclaw |
| Kodachi Claw | Rust | Anonymity-hardened — embedded Tor, identity randomization, OPSEC filter | Standalone Binary · Terminal Server · Desktop Debian XFCE |
All the claws give you an AI agent. Only kodachi-claw makes that agent invisible.
Anonymity features:
- Embedded Arti Tor runtime with multi-circuit load balancing (default: 10 instances)
- Network namespace isolation via oniux (--mode isolated)
- MAC address, hostname, and timezone randomization
- IP and DNS leak verification
- OPSEC outbound filter (redacts identity leaks)
- Restore-on-exit cleanup (--restore-on-exit)
- 4 circuit strategies: round-robin, random, least-used, sticky
kodachi-claw integrates Kodachi services (online-auth, ip-fetch, tor-switch, oniux) directly as in-process Rust libraries — not external binary calls.
How the Binaries Work Together
┌─────────────┐ indexes ┌──────────────┐ trains ┌──────────────┐
│ ai-discovery │──────────────→│ ai-trainer │─────────────→│ ai-cmd │
│ (daemon) │ │ (training) │ │ (queries) │
└─────────────┘ └──────────────┘ └──────┬───────┘
↑ │
│ retrains │ feedback
│ ↓
┌─────────────┐ schedules ┌──────────────┐ learns ┌──────────────┐
│ ai-scheduler │──────────────→│ ai-learner │←────────────│ ai-cmd │
│ (daemon) │ │ (learning) │ from │ (feedback) │
└──────┬──────┘ └──────────────┘ └──────────────┘
│ ↑
│ triggers │ checks DB
↓ │
┌─────────────┐ ┌──────────────┐
│ ai-monitor │ │ ai-admin │
│ (daemon) │ │ (maintenance)│
└─────────────┘ └──────────────┘
┌──────────────────┐
│ kodachi-claw │ (independent peer)
│ (agent runtime) │
│ ┌──── Tor ────┐ │
│ │ multi-circ │─┼──→ AI Providers / Channels
│ └─────────────┘ │
└────────┬─────────┘
│ in-process libs
┌────────┼─────────────────┐
▼ ▼ ▼ ▼
online-auth ip-fetch tor-switch oniux
| Service | Calls These | Called By |
|---|---|---|
| ai-cmd | ai-discovery (for binary index), logs-hook | None (user-facing) |
| ai-gateway | All Kodachi binaries (policy + execution) | ai-cmd, external agents |
| ai-trainer | ai-discovery, logs-hook | ai-learner |
| ai-learner | ai-trainer, ai-admin, logs-hook | ai-scheduler |
| ai-admin | logs-hook | ai-trainer, ai-learner |
| ai-discovery | logs-hook | ai-cmd, ai-trainer |
| ai-scheduler | Any binary via whitelist, logs-hook | None (daemon) |
| ai-monitor | health-control, tor-switch, dns-leak, logs-hook | ai-scheduler |
| kodachi-claw | online-auth, ip-fetch, tor-switch, oniux (in-process) | None (user-facing) |
AI Tier Architecture
| Tier | Engine | Speed | Accuracy | Setup Required |
|---|---|---|---|---|
| Tier 1 | TF-IDF | < 1ms | Good | None (built-in) |
| Tier 2 | ONNX Semantic | ~10ms | Very Good | sudo ai-trainer download-model + sudo ai-trainer train --data ./data/training-data.json |
| Tier 3 | Mistral.rs (GGUF) | ~200ms | Excellent | Download GGUF model + ai-cmd query --engine mistral |
| Tier 4 | GenAI / Ollama | ~300ms | Excellent | Install Ollama + pull model + ai-cmd query --engine genai |
| Tier 5 | Local LLM (legacy) | ~500ms | Good | Download GGUF model + --engine llm (deprecated, use mistral) |
| Tier 6 | Claude CLI | ~1s | Best | Claude CLI installed + ai-cmd query --engine claude |
The tiered system automatically falls back across the standard local/provider tiers (auto routing through TF-IDF, ONNX, Mistral.rs, GenAI, and legacy LLM). Claude CLI (Tier 6) is explicit opt-in and is not part of automatic fallback.
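The fallback behavior can be sketched as an ordered walk over the tiers, skipping any tier whose model is unavailable. The tier names match the table above; the callables and error convention are stand-ins, and Claude CLI is deliberately absent because it is opt-in only.

```python
# Sketch of automatic tier fallback; tier callables are stand-ins.
# Claude CLI (Tier 6) is intentionally not in the list: opt-in only.
def classify_with_fallback(query: str, tiers):
    """tiers: ordered list of (name, fn); fn raises RuntimeError when
    its model is unavailable (e.g. GGUF not downloaded)."""
    for name, fn in tiers:
        try:
            return name, fn(query)
        except RuntimeError:
            continue
    raise RuntimeError("no AI tier available")

def unavailable(_query):          # stands in for a tier with a missing model
    raise RuntimeError("model not downloaded")

tiers = [
    ("onnx", unavailable),                          # Tier 2 missing here
    ("tfidf", lambda q: "tor-switch tor-status"),   # Tier 1 always present
]
print(classify_with_fallback("check tor", tiers))
# → ('tfidf', 'tor-switch tor-status')
```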
System Requirements
| Resource | Minimum (TF-IDF only) | Recommended (ONNX + Mistral.rs) | Full Stack (All Tiers) |
|---|---|---|---|
| CPU | Any x86_64 | Any x86_64 | Any x86_64 |
| RAM | 128MB | 512MB | 4GB+ |
| Disk Space | 20MB (binaries only) | 250MB (binaries + ONNX models) | 3GB+ (with GGUF models) |
| GPU/NPU | Not required | Not required | Not required |
| OS | Linux (Kodachi OS) | Linux (Kodachi OS) | Linux (Kodachi OS) |
No Special Hardware Needed
All AI tiers run on CPU. No GPU, NPU, CUDA, or specialized hardware is required. The system runs identically in virtual machines, containers, and bare metal.
Exit Codes
All AI binaries use consistent exit codes:
| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Command completed successfully |
| 1 | General error | Check error message for details |
| 2 | Invalid arguments | Verify command syntax with -e flag |
| 3 | Authentication failure | Check online-auth status |
| 4 | Database error | Run ai-admin diagnostics |
| 5 | Model not found | Download models with ai-trainer download-model |
| 10 | Command blocked by policy | Check AI policy settings or use --dry-run |
| 130 | Interrupted (Ctrl+C) | Normal user cancellation |
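Automation wrapping these binaries can map exit codes to follow-up actions. This sketch just encodes the table above as a lookup; the remediation strings paraphrase the "Action" column.

```python
# Exit-code lookup mirroring the table above; strings paraphrase
# the documented "Action" column.
EXIT_ACTIONS = {
    0: "success",
    1: "general error: check the error message",
    2: "invalid arguments: verify command syntax",
    3: "authentication failure: check online-auth status",
    4: "database error: run ai-admin diagnostics",
    5: "model not found: run ai-trainer download-model",
    10: "blocked by policy: check AI policy settings or use --dry-run",
    130: "interrupted by user (Ctrl+C)",
}

def describe_exit(code: int) -> str:
    """Return a human-readable next step for a KAICS exit code."""
    return EXIT_ACTIONS.get(code, f"unknown exit code {code}")

print(describe_exit(5))
# → model not found: run ai-trainer download-model
```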
Quick Start
Get from zero to working AI in 60 seconds:
# 1. Query immediately using built-in TF-IDF (no setup needed)
ai-cmd query "check my network status"
# 2. Train for better accuracy (one-time setup)
sudo ai-trainer download-model
sudo ai-trainer train --data ./data/training-data.json
# 3. Now queries use semantic matching
ai-cmd query "rotate my tor circuit"
# 4. Optional: Enable local LLM reasoning (Mistral.rs)
# Download a small GGUF model (e.g., TinyLlama Q4_K_M) into models/
ai-cmd query "analyze security" --engine mistral
# 5. Optional: Enable Ollama provider
# Install Ollama, pull a model, then:
ai-cmd query "check all services" --engine genai
# 6. Check which tiers are available
ai-cmd tiers --json
Real-World Command Examples
350+ Built-in Intents — No Training Required
ai-cmd ships with 350+ pre-configured intent patterns covering Kodachi services, Linux system commands, SSL/TLS operations, network diagnostics, privacy checks, and hardware monitoring. The pattern classifier works immediately with zero setup — no model download, no training, no configuration. Just type your question in plain English.
The examples below demonstrate the breadth of natural language queries that ai-cmd understands out of the box. Every query shown uses the built-in TF-IDF classifier (Tier 1) and requires no prior training or model download.
SSL/TLS Certificate Inspection
Inspect, validate, and troubleshoot SSL/TLS certificates for any domain directly from the command line.
# Check if a website's SSL certificate is valid
ai-cmd query "is digi77.com ssl valid?"
# → openssl s_client -connect digi77.com:443 ... | openssl x509 -noout -dates -subject -issuer
# Get the SSL fingerprint of any website
ai-cmd query "show me finger print of ssl of digi77.com"
# → openssl ... -fingerprint -sha256 -noout
# View the full certificate chain
ai-cmd query "ssl chain of example.com"
# → openssl s_client -connect example.com:443 -showcerts
# Check who issued the certificate
ai-cmd query "who issued the ssl of github.com"
# → openssl ... -issuer
# Check certificate expiration date
ai-cmd query "when does the ssl of kodachi.com expire"
# → openssl ... -enddate -noout
Network Intelligence
Diagnose connections, discover devices, inspect ports, and measure performance with natural language.
# See all active connections to your machine
ai-cmd query "show me all ips connected to my pc"
# → ss -tnp state established
# Find your active network interface
ai-cmd query "what is my active interface"
# → health-control mac-active-interface
# Discover devices on your local network
ai-cmd query "who is on my network"
# → ip neigh show
# Check what process is using a port
ai-cmd query "what is running on port 8080"
# → ss -tlnp sport = :8080
# Run a download speed test
ai-cmd query "speed test"
# → curl-based download speed measurement
# Trace the route to a remote host
ai-cmd query "traceroute to 1.1.1.1"
# → traceroute 1.1.1.1
Privacy & Anonymity
Verify your anonymity posture, detect leaks, and confirm that privacy layers are active.
# Verify Tor is routing your traffic
ai-cmd query "am i on tor"
# → tor-switch check-tor
# Check VPN connection status
ai-cmd query "am i on vpn"
# → routing-switch status
# Check for IP leaks
ai-cmd query "am i leaking my ip"
# → External + internal IP comparison
# Verify DNS encryption
ai-cmd query "is my dns encrypted"
# → dns-switch dnscrypt-status
# See who can intercept your traffic
ai-cmd query "who can see my traffic"
# → traceroute analysis
# Check overall privacy score
ai-cmd query "how private am i"
# → health-control sec-score
WiFi & IPv6 Management
Control wireless interfaces and IPv6 configuration using plain English commands.
# Toggle WiFi off
ai-cmd query "turn off wifi"
# → health-control offline-wifi --action disable
# Toggle WiFi on
ai-cmd query "turn on wifi"
# → health-control offline-wifi --action enable
# Scan for available networks
ai-cmd query "scan wifi networks"
# → nmcli device wifi list
# Disable IPv6 (privacy measure)
ai-cmd query "disable ipv6"
# → health-control ipv6-disable
# Check IPv6 status
ai-cmd query "is ipv6 enabled"
# → health-control ipv6-status
# Show current WiFi signal strength
ai-cmd query "wifi signal strength"
# → nmcli -f SIGNAL,SSID device wifi list
System & Hardware
Monitor CPU, GPU, temperature, battery, and boot performance with natural language queries.
# Find what's consuming CPU
ai-cmd query "what is eating cpu"
# → health-control offline-info-process
# Check GPU information
ai-cmd query "what gpu do i have"
# → health-control offline-info-hardware
# Monitor CPU temperature
ai-cmd query "how hot is cpu"
# → health-control offline-info-hardware
# Check battery status
ai-cmd query "battery status"
# → upower battery info
# Analyze slow boot
ai-cmd query "what slowed boot"
# → systemd-analyze blame
# Check available disk space
ai-cmd query "how much disk space left"
# → df -h
Smart File Operations
ai-cmd resolves natural path references like "my desktop" or "my downloads" to real filesystem paths automatically.
# Checksum with natural path resolution
ai-cmd query "show me the md5sum of a file in my desktop k900.zip"
# → md5sum ~/Desktop/k900.zip
# (AI resolves "in my desktop" to ~/Desktop/ automatically)
# Find large files
ai-cmd query "find files larger than 1gb in my home"
# → find ~ -type f -size +1G
# Search file contents
ai-cmd query "find files containing password in /etc"
# → grep -rl "password" /etc
# Check file permissions
ai-cmd query "who can read /etc/shadow"
# → ls -la /etc/shadow
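The natural path resolution shown above ("in my desktop" → ~/Desktop) can be sketched as a phrase-to-directory lookup. This is purely illustrative of the idea; ai-cmd's actual resolver is not shown here, and the place map is an assumption.

```python
import os
import re

# Illustrative sketch of natural path resolution; the place map and
# regex are assumptions, not ai-cmd's real resolver.
PLACE_MAP = {
    "desktop": "~/Desktop",
    "downloads": "~/Downloads",
    "home": "~",
}

def resolve_place(phrase: str) -> str:
    """Map phrases like 'in my desktop' to an expanded filesystem path."""
    m = re.search(r"my (\w+)", phrase.lower())
    if m and m.group(1) in PLACE_MAP:
        return os.path.expanduser(PLACE_MAP[m.group(1)])
    return os.path.expanduser("~")   # default to home when no place matches
```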
Natural Language Flexibility
You do not need to memorize exact phrasing. ai-cmd understands variations like "check ssl", "verify certificate", "is the cert valid", and "ssl status" as equivalent queries. The classifier matches intent, not exact wording.
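Why paraphrases land on the same intent can be illustrated with a toy TF-IDF-style matcher: each intent carries example phrasings, and the query is matched by vector similarity rather than exact wording. The two intents and their phrase lists below are illustrative; the real classifier covers 350+ patterns.

```python
import math
from collections import Counter

# Toy bag-of-words intent matcher showing why "check ssl" and
# "is the cert valid" match the same intent. Intents are illustrative.
INTENTS = {
    "ssl_check": "check ssl verify certificate cert valid ssl status",
    "tor_status": "am i on tor check tor status circuit",
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def classify(query: str) -> str:
    q = Counter(query.lower().split())
    return max(INTENTS, key=lambda i: cosine(q, Counter(INTENTS[i].split())))

print(classify("is the cert valid"))   # → ssl_check
print(classify("check tor"))           # → tor_status
```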
Workflow 1: Complete Training & Learning Cycle
The core AI improvement loop involves 5 binaries working together. This is the most important workflow to understand.
Step 1: Train the Model (ai-trainer)
# Download ONNX model (one-time)
sudo ai-trainer download-model
# Train from command metadata
sudo ai-trainer train --data ./data/training-data.json
# Verify training
sudo ai-trainer status
Step 2: Query with Natural Language (ai-cmd)
# Basic query — the model classifies intent and executes
ai-cmd query "check network connectivity"
# Preview without executing (safety check)
ai-cmd preview "enable panic mode"
# Use specific AI engine
ai-cmd query "check tor status" --engine onnx
ai-cmd query "rotate tor circuit" --engine auto
Step 3: Submit Feedback on Mistakes (ai-cmd)
# When ai-cmd gets it wrong, tell it the correct intent
ai-cmd feedback "check tor" --correct-intent tor_status
# Or correct the command mapping directly
ai-cmd feedback "check dns" --correct-command "dns-leak test"
Step 4: Learn from Feedback (ai-learner)
# Process accumulated feedback to adjust model weights
sudo ai-learner learn
# Incremental learning with custom learning rate
sudo ai-learner learn --incremental --learning-rate 0.01
# Set minimum feedback threshold before learning
sudo ai-learner learn --min-feedback 10
Step 5: Analyze Improvement (ai-learner)
# Review accuracy metrics
sudo ai-learner analyze --period last-7-days
# Analyze specific metrics
sudo ai-learner analyze --metric accuracy
sudo ai-learner analyze --metric confidence --learning-curve
# Generate comprehensive report
sudo ai-learner report --period last-30-days
# Export report to HTML
sudo ai-learner report --format html --output report.html
# Export report to Markdown
sudo ai-learner report --format markdown --output report.md
Continuous Improvement
Repeat Steps 2-5 regularly. Each cycle improves accuracy as the system learns from corrections. Automate this with ai-scheduler (see Workflow 4).
Workflow 2: Automated Security Monitoring
Combine ai-monitor and ai-scheduler for hands-free security maintenance.
Set Up Background Monitoring
# Start the monitoring daemon with custom interval and threshold
sudo ai-monitor start --daemon --interval 60 --threshold 0.75
# Install as a systemd service so it survives reboots
sudo ai-monitor service install
# Check monitoring status
ai-monitor status
Schedule Automated Security Checks
# Start the scheduler daemon (requires sudo)
sudo ai-scheduler start
# DNS leak checks every 6 hours
ai-scheduler add --name "dns-leak-check" \
--command "dns-leak test" \
--cron "0 */6 * * *"
# Tor circuit rotation every 2 hours
ai-scheduler add --name "tor-rotate" \
--command "tor-switch new-circuit" \
--cron "0 */2 * * *"
# Daily security score check
ai-scheduler add --name "daily-security" \
--command "health-control sec-score" \
--cron "0 0 * * *"
# Network health check every 30 minutes
ai-scheduler add --name "network-health" \
--command "health-control net-check" \
--cron "*/30 * * * *"
# Verify all scheduled tasks
ai-scheduler list
Review and Act on Suggestions
# Check what the monitor has detected
ai-monitor suggestions
# Filter by category
ai-monitor suggestions --category security
ai-monitor suggestions --category network
# Follow the suggested fix commands, then mark as resolved
ai-monitor suggestions --resolve 1
# Dismiss false positives
ai-monitor suggestions --dismiss 2
# Clean up old suggestions
ai-monitor suggestions --cleanup
Result: Your system continuously monitors for threats and runs preventive security checks automatically.
Workflow 3: Database Maintenance & Health
Keep the AI database healthy using ai-admin alongside other binaries.
Weekly Maintenance Routine
# Step 1: Create a backup before any maintenance
ai-admin db backup --output ./backup/ai-db-$(date +%Y%m%d).db
# Step 2: Check database integrity
ai-admin db integrity-check
# Step 3: Optimize database performance
ai-admin tune optimize
# Step 4: Rebuild search indexes
ai-admin tune rebuild-index
# Step 5: Clean data older than 30 days
ai-admin tune cleanup --days 30
# Step 6: Run full diagnostics to verify health
ai-admin diagnostics --full
# Step 7: Check database info
ai-admin db info
Recovery After Database Corruption
# Step 1: Run diagnostics to assess damage
ai-admin diagnostics --full
# Step 2: Restore from backup
ai-admin db restore --backup ./backup/ai-db-20260209.db
# Step 3: Run migrations to ensure schema is current
ai-admin db migrate
# Step 4: Verify integrity after restore
ai-admin db integrity-check
# Step 5: Rebuild indexes
ai-admin tune rebuild-index
# Step 6: Retrain the model from fresh data
sudo ai-trainer train --data ./data/training-data.json
# Step 7: Verify the model works
ai-cmd preview "check network"
Workflow 4: Automated Learning Pipeline
Use ai-scheduler to automate the entire learning cycle without manual intervention.
# Start the scheduler daemon (requires sudo)
sudo ai-scheduler start
# Schedule daily incremental learning at 2 AM
ai-scheduler add --name "daily-learning" \
--command "ai-learner learn --incremental" \
--cron "0 2 * * *"
# Schedule weekly full analysis on Sundays at 3 AM
ai-scheduler add --name "weekly-analysis" \
--command "ai-learner analyze --period last-7-days" \
--cron "0 3 * * 0"
# Schedule monthly database optimization on the 1st at 4 AM
ai-scheduler add --name "monthly-db-optimize" \
--command "ai-admin tune optimize" \
--cron "0 4 1 * *"
# Schedule monthly data cleanup (older than 90 days) on the 1st at 5 AM
ai-scheduler add --name "monthly-cleanup" \
--command "ai-admin tune cleanup --days 90" \
--cron "0 5 1 * *"
# Schedule monthly backup on the 1st at 6 AM
ai-scheduler add --name "monthly-backup" \
--command "ai-admin db backup --output ./backup/ai-db-\$(date +%Y%m%d).db" \
--cron "0 6 1 * *"
# Verify all scheduled tasks
ai-scheduler list
Result: The AI system learns from user feedback, maintains its database, and optimizes itself — all automatically.
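The schedules above use standard five-field cron syntax (minute, hour, day-of-month, month, day-of-week). A minimal matcher for the patterns used on this page can be sketched as follows; it supports only `*`, `*/n`, and plain numbers, while a real scheduler handles ranges and lists too.

```python
# Minimal matcher for the five-field cron expressions used above
# (supports "*", "*/n", and plain numbers only; real cron does more).
def field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

def cron_due(expr: str, minute: int, hour: int,
             day: int, month: int, weekday: int) -> bool:
    """True when all five fields match the given time components."""
    fields = expr.split()
    return all(field_matches(f, v) for f, v in
               zip(fields, (minute, hour, day, month, weekday)))

# "0 2 * * *" (daily-learning) fires at 02:00 on any day:
print(cron_due("0 2 * * *", 0, 2, 15, 6, 3))    # → True
print(cron_due("0 */6 * * *", 0, 7, 15, 6, 3))  # → False (7 % 6 != 0)
```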
Workflow 5: New Binary Integration
When a new Kodachi binary is installed, use ai-discovery and ai-trainer to make it available to ai-cmd.
# Step 1: ai-discovery automatically detects new binary (if daemon is running)
# Or force a reindex
sudo ai-discovery reindex
# Reindex specific service only
sudo ai-discovery reindex --service new-binary-name
# Step 2: Verify the new binary was indexed
ai-discovery status --verbose
# Step 3: Retrain the model to include new commands
sudo ai-trainer train --data ./data/training-data.json
# Step 4: Create a snapshot before deploying (use -v flag, NOT --name)
sudo ai-trainer snapshot -v 1.0.0-post-new-binary
# Step 5: Validate model accuracy hasn't degraded
sudo ai-trainer validate --test-data ./data/test-commands.json
# Step 6: List all snapshots to verify
sudo ai-trainer list-snapshots
# Step 7: Test the new binary via natural language
ai-cmd query "use the new binary"
ai-cmd preview "new binary command" --alternatives 5
Workflow 6: Troubleshooting AI Accuracy
When ai-cmd isn't classifying queries correctly, use multiple binaries to diagnose and fix.
Diagnose the Problem
# Preview to see what ai-cmd thinks the query means
ai-cmd preview "check tor status" --alternatives 5
# Use suggest mode to get command recommendations
ai-cmd suggest "tor"
# Check overall system health
ai-admin diagnostics --full
# Analyze recent accuracy metrics
sudo ai-learner analyze --period last-7-days --metric accuracy
# Check confidence distribution
sudo ai-learner analyze --metric confidence
# Generate learning curve analysis
sudo ai-learner analyze --learning-curve
Fix the Problem
# Submit corrections for misclassified queries
ai-cmd feedback "check tor status" --correct-intent tor_status
ai-cmd feedback "test dns leaks" --correct-command "dns-leak test"
# Run a learning cycle to process corrections
sudo ai-learner learn
# Verify improvement
sudo ai-learner analyze --period last-7-days
# If accuracy is still low, retrain from scratch
sudo ai-trainer train --data ./data/training-data.json
# Validate the retrained model
sudo ai-trainer validate --test-data ./data/test-commands.json
# Export model for backup
sudo ai-trainer export --output ./backup/model-$(date +%Y%m%d).onnx
Prevent Recurrence
# Start scheduler daemon (requires sudo)
sudo ai-scheduler start
# Schedule automated learning
ai-scheduler add --name "daily-learn" \
--command "ai-learner learn --incremental" \
--cron "0 2 * * *"
# Generate periodic reports to track accuracy
sudo ai-learner report --period last-30-days --format markdown --output monthly-report.md
Privacy & Security
All AI processing in KAICS happens locally by default:
- TF-IDF (Tier 1): Built-in statistical matching, no external dependencies
- ONNX (Tier 2): Local semantic model inference, no network calls
- Mistral.rs / Local GGUF (Tier 3): GGUF models run on-device
- GenAI / Ollama (Tier 4): Provider-backed path (local Ollama or configured provider)
- Claude CLI (Tier 6): Opt-in only — requires explicit --engine claude and Claude CLI authentication
The system never sends data to external servers unless you explicitly choose a provider-backed engine (for example GenAI with external provider or Claude CLI).
Security-First AI Routing
KAICS implements a two-path routing system that prioritizes speed and security while maintaining flexibility for complex queries.
Routing Architecture
| Path | Classifier | Speed | Coverage | Use Case |
|---|---|---|---|---|
| FAST PATH | ONNX Intent Classifier | <5ms | ~80% of queries | Direct tool execution without LLM overhead |
| SLOW PATH | Mistral.rs → GenAI → Legacy LLM | 200ms-2s | Complex queries | Multi-step reasoning and analysis |
| EXPLICIT OPT-IN | Claude CLI | ~1s | Advanced queries | Manual selection only (NOT in auto mode) |
ONNX Intent Classifier
The ONNX intent classifier uses a fine-tuned transformer-based model to identify user intent in <5ms, enabling instant command execution for common queries.
12 Intent Categories:
| Intent | Example Queries | Target Tool |
|---|---|---|
| NetworkStatus | "check network", "is internet working" | health-control net-check |
| TorControl | "rotate tor circuit", "change tor exit" | tor-switch |
| VpnControl | "enable vpn", "connect to vpn server" | routing-switch |
| DnsManagement | "check dns leaks", "switch dns server" | dns-leak, dns-switch |
| SecurityCheck | "show security score", "verify integrity" | health-control, integrity-check |
| SystemInfo | "show system status", "get ip address" | health-control, ip-fetch |
| FileManagement | "check permissions", "scan files" | permission-guard |
| ServiceControl | "start service", "restart daemon" | System services |
| ConfigurationChange | "update settings", "change config" | Various config binaries |
| HelpRequest | "how do I", "explain", "what is" | AI assistance |
| EmergencyAction | "panic mode", "block internet", "wipe data" | health-control (with high confidence) |
| Unknown | Ambiguous or unclear queries | Route to SLOW PATH |
How It Works:
- User query arrives → ONNX classifier analyzes intent
- High confidence → Execute tool directly (FAST PATH)
- Low confidence → Route to LLM for deeper analysis (SLOW PATH)
- Emergency intents require elevated confidence for safety
Performance:
- Inference time: <5ms on CPU
- Accuracy: 90%+ on test set
- Model size: <100MB (transformer-based)
- No GPU required: optimized ONNX runtime
AI Policy System
The AI policy system provides fine-grained control over AI behavior through a cryptographically signed JSON policy file.
Policy Components:
The policy file defines:
- Per-category confidence thresholds — higher for sensitive operations like emergency actions, lower for informational queries
- Approved tool allowlist — restricts which Kodachi services the AI can invoke
- Risk mode — controls overall safety behavior
- Cryptographic signature — prevents unauthorized policy tampering
Risk Modes:
| Mode | Description | Behavior |
|---|---|---|
safe |
Default mode for general use | Highest safety, strictest thresholds |
advanced |
Expert mode for power users | Relaxed thresholds, extended tool access |
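The policy components above can be sketched as a JSON document plus an integrity check. All field names below are assumptions for illustration, and a bare SHA-256 digest stands in for the real RSA-SHA256 signature purely to show how tampering is detected.

```python
import hashlib
import json

# Hypothetical policy layout; field names are assumptions. The real file
# is RSA-SHA256 signed — a plain SHA-256 digest stands in here only to
# illustrate tamper detection, not the actual signing scheme.
policy = {
    "risk_mode": "safe",
    "thresholds": {"EmergencyAction": 0.95, "HelpRequest": 0.60},
    "tool_allowlist": ["tor-switch", "dns-leak", "health-control"],
}
body = json.dumps(policy, sort_keys=True).encode()
digest = hashlib.sha256(body).hexdigest()

def verify(policy_body: bytes, expected_digest: str) -> bool:
    """Reject a policy whose bytes no longer match the recorded digest."""
    return hashlib.sha256(policy_body).hexdigest() == expected_digest

print(verify(body, digest))               # → True
print(verify(body + b"tampered", digest)) # → False
```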
Generating Policy:
# Generate AI policy from learning data
sudo ai-learner learn --output-policy
# Expected output:
# ✓ Processing feedback data
# ✓ Analyzing intent confidence patterns
# ✓ Generating AI policy file
# ✓ Policy saved to: results/ai-policy.json
# ✓ Policy signature: <RSA-SHA256>
# Verify policy integrity
ai-admin diagnostics --full
Policy Features:
- Per-intent thresholds: fine-tune confidence requirements for each intent category
- Tool allowlist: restrict which binaries AI can execute
- Risk mode switching: toggle between safe/advanced modes
- Cryptographic signing: prevent policy tampering
- Auto-update: regenerate policy after learning cycles
Security Benefits:
- Prevents accidental execution: high thresholds for dangerous operations
- Audit trail: policy changes are logged and signed
- Granular control: different confidence levels per intent type
- Fail-safe defaults: unknown intents route to human review
Fast Path vs Slow Path Decision Tree
User Query
↓
ONNX Intent Classifier
↓
Confidence ≥ Threshold?
↓
YES → FAST PATH (direct tool call)
↓
NO → SLOW PATH
↓
Mistral.rs available?
↓
YES → Mistral.rs reasoning
↓
NO → GenAI/Ollama available?
↓
YES → GenAI reasoning
↓
NO → Legacy LLM available?
↓
YES → Legacy LLM reasoning
↓
NO → TF-IDF fallback
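The decision tree above reduces to a small routing function: take the fast path when confidence clears the threshold, otherwise walk the slow-path engines in order and fall back to TF-IDF. The threshold values and availability flags below are illustrative.

```python
# Sketch of the fast/slow path decision tree above; thresholds and
# availability flags are illustrative.
def route(confidence: float, threshold: float, available: dict) -> str:
    """Pick an execution path from classifier confidence and engine state."""
    if confidence >= threshold:
        return "fast-path"                    # direct tool call, no LLM
    for engine in ("mistral", "genai", "legacy-llm"):
        if available.get(engine):
            return engine                     # slow-path reasoning engine
    return "tfidf"                            # last-resort local fallback

print(route(0.92, 0.75, {}))                  # → fast-path
print(route(0.65, 0.75, {"genai": True}))     # → genai
print(route(0.40, 0.75, {}))                  # → tfidf
```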
Routing Example:
# FAST PATH (ONNX → direct execution)
ai-cmd query "check network connectivity"
# Intent: NetworkStatus (confidence: 0.92 ≥ 0.75)
# Route: FAST PATH → health-control net-check
# Time: 4ms
# SLOW PATH (ONNX → Mistral.rs → reasoning)
ai-cmd query "what's the best way to secure my connection on public wifi?"
# Intent: HelpRequest (confidence: 0.65 ≥ 0.60)
# Route: SLOW PATH → Mistral.rs analysis
# Time: 240ms
# EXPLICIT OPT-IN (Claude CLI)
ai-cmd query "analyze my threat model" --engine claude
# Route: Explicit Claude selection (NOT in auto mode)
# Time: 1200ms
Why This Architecture?
- Speed: 80% of queries use FAST PATH (<5ms)
- Accuracy: SLOW PATH handles complex reasoning
- Privacy: local processing by default
- Safety: high thresholds prevent accidental destructive operations
- Flexibility: manual engine selection available for advanced use
Model Management
KAICS provides comprehensive model management for all AI tiers through ai-trainer.
Pre-Installed vs Optional Models
The installation package includes essential ONNX models only (~152 MB) for the fast-path AI tiers:
- Custom-trained Kodachi intent classifier model
- Companion tokenizer and label mapping files
- Semantic embedding model for command matching
GGUF/LLM models (Tier 3+) are not included in the package to keep it lightweight. Download them on-demand using ai-trainer download-model --llm as shown below. The AI engine gracefully falls back to available tiers if higher-tier models are not present.
Downloading Models
# Download ONNX semantic model (Tier 2)
sudo ai-trainer download-model
# Download specific LLM model (Tier 3)
sudo ai-trainer download-model --llm default # Qwen2.5-3B-Instruct (3B, ~1.8GB)
sudo ai-trainer download-model --llm small # Qwen2.5-1.5B-Instruct (1.5B, ~0.9GB)
sudo ai-trainer download-model --llm large # Phi-3.5-mini-instruct (3.8B, ~2.3GB)
# List available downloadable models
sudo ai-trainer download-model --show-models
# Force re-download if corrupted
sudo ai-trainer download-model --force
GUI Download Behavior
Essentials AI and AI Chat Commander use the trainer-backed forced-latest download flow (equivalent to ai-trainer download-model --force) and automatically resolve the latest compatible LLM profile when needed.
Using Downloaded Models
# Auto-detect and use best available model
ai-cmd query "check security status" --engine auto
# Use specific ONNX model
ai-cmd query "rotate tor circuit" --engine onnx
# Use specific GGUF model
ai-cmd query "explain threat model" --use models/Qwen2.5-3B-Instruct-Q4_K_M.gguf
# Use Mistral.rs with specific model
ai-cmd query "analyze network" --engine mistral --use models/mistral.gguf
Model Status
# Check all available models
ai-trainer status
# Expected output:
# AI Engine Status:
# ┌──────────────────────────────────────┐
# │ TF-IDF: ✓ Active │
# │ ONNX Model: ✓ Active │
# │ - Model: kodachi-intent │
# │ - Size: <100 MB │
# │ - Accuracy: 90%+ │
# │ Mistral.rs: ✓ Active │
# │ - Model: Qwen2.5-3B-Instruct │
# │ - Size: 1.8 GB │
# │ Ollama: ✗ Not configured │
# │ Legacy LLM: ✗ Deprecated │
# │ Claude API: ✗ Not configured │
# └──────────────────────────────────────┘
Model Recommendations:
- CPU-only systems: ONNX + Qwen2.5-3B-Instruct (default)
- 16GB+ RAM: ONNX + Phi-3.5-mini-instruct (large)
- Low storage: ONNX + Qwen2.5-1.5B-Instruct (small)
- Privacy-critical: ONNX + Mistral.rs only (no cloud)
- Expert analysis: add Ollama + Claude CLI (opt-in)
System Information
| Component | Version | Build Date | License |
|---|---|---|---|
| ai-cmd | 9.0.1 | 2026-02-22 | Proprietary |
| ai-trainer | 9.0.1 | 2026-02-22 | Proprietary |
| ai-learner | 9.0.1 | 2026-02-22 | Proprietary |
| ai-admin | 9.0.1 | 2026-02-22 | Proprietary |
| ai-discovery | 9.0.1 | 2026-02-22 | Proprietary |
| ai-scheduler | 9.0.1 | 2026-02-22 | Proprietary |
| ai-monitor | 9.0.1 | 2026-02-22 | Proprietary |
| ai-gateway | 9.0.1 | 2026-02-22 | Proprietary |
| kodachi-claw | 9.0.1 | 2026-02-20 | Apache-2.0 |
| Documentation | 9.0.1 | 2026-02-22 | © 2026 Linux Kodachi |