ai-cmd
AI-powered command-line interface for natural language command execution
Version: 9.0.1 | Size: 56.0MB | Author: Warith Al Maawali
License: Proprietary | Website: https://www.digi77.com
File Information
| Property | Value |
|---|---|
| Binary Name | ai-cmd |
| Version | 9.0.1 |
| Build Date | 2026-04-02T13:42:25.827327095Z |
| Rust Version | 1.82.0 |
| File Size | 56.0MB |
SHA256 Checksum
Features
| Feature | Description |
|---|---|
| Core | Advanced functionality for Kodachi OS |
Security Features
| Feature | Description |
|---|---|
| Input validation | All inputs are validated and sanitized |
| Rate limiting | Built-in rate limiting for network operations |
| Authentication | Secure authentication with certificate pinning |
| Encryption | TLS 1.3 for all network communications |
System Requirements
| Requirement | Value |
|---|---|
| OS | Linux (Debian-based) |
| Privileges | root/sudo for system operations |
| Dependencies | OpenSSL, libcurl |
Global Options
| Flag | Description |
|---|---|
| -h, --help | Print help information |
| -v, --version | Print version information |
| -n, --info | Display detailed information |
| -e, --examples | Show usage examples |
| --json | Output in JSON format |
| --json-pretty | Pretty-print JSON output with indentation |
| --json-human | Enhanced JSON output with improved formatting (like jq) |
| --verbose | Enable verbose output |
| --quiet | Suppress non-essential output |
| --no-color | Disable colored output |
| --config <FILE> | Use custom configuration file |
| --timeout <SECS> | Set timeout in seconds (default: 30) |
| --retry <COUNT> | Retry attempts (default: 3) |
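Global flags combine with any subcommand. A minimal sketch using flag names from the table above; the config path and exact output shapes are illustrative assumptions:

```shell
# Pretty JSON output with a longer timeout and extra retries
ai-cmd --json-pretty --timeout 60 --retry 5 tiers

# Quiet, uncolored output with a custom config file (hypothetical path)
ai-cmd --quiet --no-color --config /etc/ai-cmd/custom.toml query "check network status"
```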
Commands
query
Process a natural language query and execute the matching command
Usage:
Options:
- --threshold: Confidence threshold for execution
- --dry-run: Preview without execution
- --auto-execute: Auto-execute high confidence matches
- --engine: AI engine tier: auto, tfidf, onnx, onnx-classifier, llm, mistral, genai, claude
- --model-path: Path to GGUF model for local LLM tier
- --stream: Stream response tokens in real-time
- --temperature: Sampling temperature 0.0-2.0 (default: 0.7)
- --model: Model name for GenAI tier (supports local, OpenAI/Codex, Claude, Gemini, OpenRouter routing)
- --tor-proxy: Route cloud providers through Tor proxy
- --use: Use a specific GGUF model file from models/ directory
- --no-gateway, --skip-gateway: Bypass gateway validation and execute directly
Examples:
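A hedged sketch of typical invocations built from the options listed above; the query strings are illustrative and the exact output format is an assumption:

```shell
# Preview the matched command without executing it
ai-cmd query "check network status" --dry-run

# Require higher confidence before acting
ai-cmd query "show my ip address" --threshold 0.8 --dry-run

# Pick a specific engine tier and auto-execute high-confidence matches
ai-cmd query "restart tor" --engine onnx --auto-execute

# Local LLM tier with streaming tokens and a lower sampling temperature
ai-cmd query "summarize system health" --engine mistral --stream --temperature 0.4
```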
interactive
Start interactive REPL mode for continuous queries
Usage:
Options:
- --threshold: Confidence threshold for execution
- --engine: AI engine tier: auto, tfidf, onnx, llm, claude
- --model-path: Path to GGUF model for local LLM tier
Examples:
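Illustrative REPL invocations using the options above; the model path is a hypothetical placeholder:

```shell
# Start the REPL with defaults
ai-cmd interactive

# REPL with the ONNX tier and a stricter confidence gate
ai-cmd interactive --engine onnx --threshold 0.7

# REPL backed by a local GGUF model (hypothetical path)
ai-cmd interactive --engine llm --model-path ~/models/mistral-7b-instruct.gguf
```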
feedback
Submit feedback to improve intent classification
Usage:
Options:
- --correct-intent: Specify the correct intent ID
- --correct-command: Specify the correct command
- --comment: Add a comment
Examples:
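A sketch of the feedback options; whether the original query is passed positionally, and the intent ID shown, are assumptions not confirmed by this page:

```shell
# Correct the intent the classifier chose (intent ID is hypothetical)
ai-cmd feedback "check network status" --correct-intent network_status

# Supply the command that should have been matched
ai-cmd feedback "check network status" --correct-command "ip addr show"

# Attach a free-form comment
ai-cmd feedback "check network status" --comment "matched the wrong interface"
```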
preview
Preview intent classification without execution
Usage:
Options:
- --alternatives: Number of alternative matches to show
Examples:
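An illustrative preview call showing alternative matches; the query string is an assumption:

```shell
# Classify the query and show the top 3 alternative intents, without executing
ai-cmd preview "enable dns encryption" --alternatives 3
```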
voice
Voice input mode for natural language queries
Usage:
Options:
- --continuous: Enable continuous listening mode
- --timeout: Timeout for voice input in seconds
- --provider: STT provider: whisper-cpp, vosk, placeholder, auto
- --voice: TTS voice name
- --speed: Speech speed (words per minute)
- --list-devices: List available audio devices
- --check-deps: Check voice engine dependencies
Examples:
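Hedged voice-mode invocations using the options above; the voice name and speed value are illustrative assumptions:

```shell
# Diagnostics: enumerate audio devices and check STT/TTS dependencies
ai-cmd voice --list-devices
ai-cmd voice --check-deps

# One-shot capture with an explicit STT provider and a 10-second timeout
ai-cmd voice --provider whisper-cpp --timeout 10

# Continuous listening with a TTS voice and speech speed (hypothetical values)
ai-cmd voice --continuous --voice en_US --speed 160
```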
suggest
Get proactive command suggestions based on usage patterns
Usage:
Options:
- --limit: Number of suggestions to show
- --proactive: Show proactive suggestions
Examples:
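Illustrative suggestion queries using the options above:

```shell
# Top 5 suggestions from recent usage
ai-cmd suggest --limit 5

# Proactive suggestions based on usage patterns
ai-cmd suggest --proactive
```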
workflow
Execute workflow profiles via natural language queries
Usage:
tiers
List all AI engine tiers and their availability status
Usage:
Examples:
tools
List all callable AI tools with parameter schemas
Usage:
Examples:
providers
List available GenAI providers and model configuration
Usage:
Examples:
model-info
Show loaded AI model details and configuration
Usage:
Examples:
policy
Show current AI policy (intent thresholds, tool allowlist, risk mode)
Usage:
Examples:
export-intents
Export all intents as JSON catalog for shared libraries
Usage:
Examples:
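A sketch of exporting the intent catalog for a shared library; the output filename is an assumption:

```shell
# Dump all intents as a pretty-printed JSON catalog
ai-cmd export-intents --json-pretty > intents.json
```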
Operational Scenarios
Scenario-oriented workflows generated from the binary's built-in examples (the -e flag combined with --json).
Scenario 1: Basic Usage
Basic natural language queries (safe mode examples)
Step 1: Check network status using natural language
Expected Output: Dry-run preview of matched command
Step 2: Get IP address query result with JSON output
Expected Output: JSON with intent and command preview
Scenario 2: Advanced Features
Advanced query options and preview workflows
Step 1: Preview command execution without running
Expected Output: Dry-run preview of Tor command
Step 2: Show interactive mode usage
Expected Output: Interactive mode options and examples
Step 3: Preview intent matching with alternatives
Expected Output: Intent analysis with alternative matches
Step 4: Use custom confidence threshold with dry-run
Expected Output: Dry-run preview gated by threshold
Scenario 3: Feedback System
Feedback command syntax and usage patterns
Step 1: Show feedback syntax for intent correction
Expected Output: Feedback command usage help
Step 2: Show feedback syntax for command correction
Expected Output: Feedback command usage help
Step 3: Show feedback syntax for comments
Expected Output: Feedback command usage help
Scenario 4: Voice Input
Voice command usage, diagnostics, and device checks
Step 1: Show voice mode usage and options
Expected Output: Voice command help
Step 2: Show continuous voice mode usage
Expected Output: Voice command help for continuous mode
Step 3: Show timeout option usage
Expected Output: Voice command help with timeout option
Step 4: Show provider selection usage
Expected Output: Voice command help with provider option
Step 5: List available audio input devices
Expected Output: Audio device list
Step 6: Check voice dependencies
Expected Output: Voice dependency status report
Scenario 5: AI Suggestions
Get command suggestions based on usage patterns
Step 1: Get recent command suggestions
Expected Output: List of suggested commands
Step 2: Get proactive suggestions
Expected Output: Popular or proactive suggestions
Step 3: Limit number of suggestions
Expected Output: Top 5 suggestions
Scenario 6: Interactive Mode
Interactive REPL usage and configuration options
Step 1: Show interactive mode usage
Expected Output: Interactive mode help
Step 2: Show threshold usage in interactive mode
Expected Output: Interactive help with threshold option
Step 3: Show ONNX engine interactive usage
Expected Output: Interactive help with engine option
Step 4: Show Claude interactive syntax
Expected Output: Interactive help with Claude engine options
Step 5: Show local LLM interactive syntax
Expected Output: Interactive help with model-path option
Scenario 7: AI Engine Tiers
Select specific AI engine tiers for query processing
Step 1: Auto-select best available engine tier
Expected Output: Dry-run preview using auto-selected tier
Step 2: Use TF-IDF keyword matching tier
Expected Output: Dry-run preview using tfidf tier
Step 3: Use ONNX semantic matching tier
Expected Output: Dry-run preview using onnx tier
Step 4: Show Mistral tier syntax and options
Expected Output: Query help with mistral engine
Step 5: Show GenAI tier syntax and options
Expected Output: Query help with genai engine
Step 6: Show legacy LLM tier syntax
Expected Output: Query help with llm engine
Step 7: Show custom model-path syntax
Expected Output: Query help with model-path option
Step 8: Show Claude tier syntax and options
Expected Output: Query help with claude engine
Step 9: ONNX engine with dry-run preview
Expected Output: Preview matched command without executing
Step 10: ONNX engine dry-run validation
Expected Output: Dry-run DNS leak query preview
Step 11: Show all available AI tiers and status
Expected Output: Tier list with availability indicators
Scenario 8: Mistral.rs Local LLM
Mistral tier command syntax and diagnostics
Step 1: Show local mistral query syntax
Expected Output: Query help with mistral options
Step 2: Show mistral streaming syntax
Expected Output: Query help including stream option
Step 3: Show mistral temperature option usage
Expected Output: Query help including temperature option
Step 4: Show loaded model details and tier metadata
Expected Output: JSON model status
Scenario 9: Ollama / GenAI Provider
GenAI tier syntax and provider discovery
Step 1: Show genai query syntax
Expected Output: Query help with genai options
Step 2: Show genai model override syntax
Expected Output: Query help with model option
Step 3: List available providers and models
Expected Output: JSON provider list
Step 4: Show OpenAI/Codex model syntax with tor-proxy flag
Expected Output: Query help including tor-proxy option
Step 5: Show Claude model syntax (direct key or OpenRouter fallback)
Expected Output: Dry-run preview using Claude-compatible cloud routing
Step 6: Show Gemini model syntax (direct key or OpenRouter fallback)
Expected Output: Dry-run preview using Gemini-compatible cloud routing
Scenario 10: Tool Calling
Tool-calling queries and tool catalog inspection
Step 1: Tool-calling query in safe dry-run mode
Expected Output: Dry-run preview for multi-tool query
Step 2: List all callable tools with JSON metadata
Expected Output: JSON tool definitions
Step 3: Show tools grouped by domain
Expected Output: Human-readable tool catalog
Scenario 11: Security-First Routing
Fast-path and fallback-path routing examples
Step 1: Fast-path style query with dry-run
Expected Output: Dry-run preview for tor status
Step 2: Show slow-path mistral syntax
Expected Output: Query help for mistral explanation path
Step 3: Configuration change flow with safety preview
Expected Output: Dry-run preview of DNS change
Scenario 12: AI Policy
View and manage AI policy configuration including intent thresholds, tool allowlists, and risk mode.
Step 1: Show current AI policy (thresholds, tools, risk mode)
Expected Output: Policy details with intent thresholds and tool list
Step 2: AI policy as JSON for programmatic use
Expected Output: JSON with version, thresholds, allowlist, signature status
Scenario 13: Model Management
Download, list, and select GGUF models for local LLM inference
Step 1: Show syntax for selecting a specific GGUF model file
Expected Output: Query help with --use model selector
Note
Use --use without --help when the model file exists in models/
Step 2: Show details of loaded model
Expected Output: Model path, architecture, and capabilities
Step 3: Show all AI tiers including model availability
Expected Output: JSON with tier details and download status
Scenario 14: Linux System Commands
General-purpose Linux system commands via natural language
Step 1: Disk usage request preview
Expected Output: Dry-run preview for disk space command
Step 2: Process listing request preview
Expected Output: Dry-run preview for process listing command
Step 3: Network port check request preview
Expected Output: Dry-run preview for port inspection command
Step 4: Safe preview of package update flow
Expected Output: Dry-run preview of package update
Note
Remove --dry-run to execute after authentication
Step 5: Network diagnostic request preview
Expected Output: Dry-run preview for ping diagnostic
Step 6: Log viewing with explicit TF-IDF engine
Expected Output: Dry-run preview using tfidf tier
Step 7: Service management query with strict confidence threshold
Expected Output: Dry-run preview gated by 0.9 threshold
Step 8: Firewall inspection query preview
Expected Output: Dry-run preview for firewall status command
Scenario 15: Confidence Threshold Reference
The --threshold flag controls minimum confidence required for a match. Range: 0.3 to 1.0. Default: 0.6.
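The threshold semantics above can be sketched as follows; the query strings are illustrative assumptions, and --dry-run keeps every call a preview:

```shell
# Broad matching near the 0.3 minimum
ai-cmd query "scan for leaks" --threshold 0.3 --dry-run

# Default behavior (0.6) needs no flag
ai-cmd query "scan for leaks" --dry-run

# Near-exact matching for sensitive operations
ai-cmd query "scan for leaks" --threshold 0.95 --dry-run
```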
Step 1: Minimum threshold for broad matching
Expected Output: Dry-run preview with loose confidence gating
Step 2: Default threshold balance
Expected Output: Dry-run preview with default confidence gating
Step 3: Higher threshold for sensitive operations
Expected Output: Dry-run preview with strict confidence gating
Step 4: Very strict threshold for near-exact matching
Expected Output: Dry-run preview requiring near-exact match
Scenario 16: Workflow Profiles
Execute multi-step workflow profiles for complete operations
Step 1: List all available workflow profiles
Expected Output: Profiles grouped by category (privacy, emergency, network, etc.)
Step 2: Preview privacy workflow profile
Expected Output: Workflow details, steps, and services used
Note
Use without --preview to execute after review
Step 3: Preview emergency recovery workflow
Expected Output: Workflow details, steps, and services used
Step 4: Dry-run workflow execution
Expected Output: Shows what would be executed without running commands
Step 5: Preview with custom confidence threshold
Expected Output: Workflow preview if confidence >= 0.7
Step 6: Show workflow registry statistics
Expected Output: Profile counts, categories, and service usage
Step 7: Preview workflow with custom parameters
Expected Output: Workflow with custom parameter values
Note
Pass key=value pairs for profile parameters
Scenario 17: Setup & Model Management
Download AI models and manage engine tiers (via ai-trainer)
Step 1: List all AI engine tiers and their availability
Expected Output: Tier list showing which engines are available
Note
Shows setup hints for unavailable tiers
Step 2: Show active model paths and availability across AI tiers
Expected Output: Model inventory for embeddings/LLM backends
Note
Useful before running complex queries
Step 3: Get model inventory in JSON format
Expected Output: Structured model metadata
Note
Suitable for automation checks
Step 4: List available provider backends
Expected Output: Provider list and status
Step 5: List providers in JSON format
Expected Output: Structured provider status details
Note
Includes fields useful for health checks
Step 6: Show AI tier readiness in JSON format
Expected Output: Tier availability and setup hints as JSON
Scenario 18: Gateway Core Integration
ai-cmd and ai-gateway share the same registry/policy/executor core for consistent behavior
Step 1: Service-style queries now validate through shared gateway core
Expected Output: Dry-run preview with consistent policy decision
Step 2: Bypass gateway validation and execute directly (primary flag)
Expected Output: Dry-run preview without gateway validation step
Step 3: Same bypass behavior using alias flag
Expected Output: Dry-run preview without gateway validation step
Note
--skip-gateway is an alias of --no-gateway
Step 4: Skip gateway for commands with long execution times
Expected Output: Dry-run preview bypassing gateway timeout
Step 5: Display policy guard configuration used by shared gateway core
Expected Output: JSON policy summary
Note
Confirms deny/allow behavior used during execution
Step 6: List tool metadata exposed through the shared core
Expected Output: JSON tool catalog with safety tiers
Note
Use this to verify command availability before query execution
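The gateway-core steps above can be sketched with these hedged invocations; the query string is an illustrative assumption:

```shell
# Validated through the shared gateway core (default)
ai-cmd query "check tor status" --dry-run

# Bypass gateway validation (primary flag, and its alias)
ai-cmd query "check tor status" --no-gateway --dry-run
ai-cmd query "check tor status" --skip-gateway --dry-run

# Inspect the policy guard and tool catalog the core enforces
ai-cmd policy --json
ai-cmd tools --json
```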
Environment Variables
| Variable | Description | Default | Values |
|---|---|---|---|
| RUST_LOG | Set logging level | info | error, warn, info, debug, trace |
| NO_COLOR | Disable all colored output when set | unset | 1 |
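Both variables are set per invocation in the usual shell way, for example:

```shell
# Verbose internal logging for one query (preview only)
RUST_LOG=debug ai-cmd query "check network status" --dry-run

# Plain, uncolored tier listing for scripts
NO_COLOR=1 ai-cmd tiers
```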
Exit Codes
| Code | Description |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Invalid arguments |
| 3 | Permission denied |
| 4 | Network error |
| 5 | File not found |