ai-learner
AI learning engine for continuous improvement and performance analysis
Version: 9.0.1 | Size: 2.7MB | Author: Warith Al Maawali <warith@digi77.com>
License: LicenseRef-Kodachi-SAN-1.0 | Website: https://kodachi.cloud
File Information
| Property | Value |
|---|---|
| Binary Name | ai-learner |
| Version | 9.0.1 |
| Build Date | 2026-02-14T07:59:29.571032032Z |
| Rust Version | 1.82.0 |
| File Size | 2.7MB |
SHA256 Checksum
Features
- Feedback aggregation and analysis
- Incremental learning with convergence detection
- Performance tracking and trend analysis
- Multi-format report generation (JSON, Markdown, HTML)
Security Features
| Feature | Description |
|---|---|
| Input validation | All inputs are validated and sanitized |
| Rate limiting | Built-in rate limiting for network operations |
| Authentication | Secure authentication with certificate pinning |
| Encryption | TLS 1.3 for all network communications |
System Requirements
| Requirement | Value |
|---|---|
| OS | Linux (Debian-based) |
| Privileges | root/sudo for system operations |
| Dependencies | OpenSSL, libcurl |
Global Options
| Flag | Description |
|---|---|
| -h, --help | Print help information |
| -v, --version | Print version information |
| -n, --info | Display detailed information |
| -e, --examples | Show usage examples |
| --json | Output in JSON format |
| --json-pretty | Pretty-print JSON output with indentation |
| --json-human | Enhanced JSON output with improved formatting (like jq) |
| --verbose | Enable verbose output |
| --quiet | Suppress non-essential output |
| --no-color | Disable colored output |
| --config <FILE> | Use a custom configuration file |
| --timeout <SECS> | Set timeout in seconds (default: 30) |
| --retry <COUNT> | Number of retry attempts (default: 3) |
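The global flags apply to every subcommand. As a brief sketch using only the flags listed above (flag placement and the config path are assumptions, not confirmed syntax):

```bash
# Pretty-printed JSON status output
ai-learner status --json-pretty

# Custom config file, longer timeout, and more retries (path is a placeholder)
ai-learner status --config /etc/kodachi/ai-learner.conf --timeout 60 --retry 5
```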
Commands
Analysis Operations
analyze
Analyze model performance and trends
Usage:
Learning Operations
learn
Run learning cycle to improve model based on feedback
Usage:
Reporting Operations
report
Generate comprehensive performance reports
Usage:
Status Operations
status
Show ai-learner status, database health, and activity metrics
Usage:
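Taken together, the four subcommands form a simple feedback loop. A minimal sketch, assuming each subcommand accepts the documented global flags and no required positional arguments:

```bash
ai-learner learn --verbose       # run a learning cycle on collected feedback
ai-learner analyze --json        # inspect accuracy and trend metrics
ai-learner report --json-pretty  # produce a structured performance report
ai-learner status --quiet        # confirm database health and activity
```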
Examples
Basic Learning Operations
Run learning cycles and update models based on feedback
Run a full learning cycle on all feedback
Expected Output: Learning statistics showing improvements
Note: Processes all feedback since last run
Run incremental learning on new feedback only
Expected Output: Quick learning update with delta statistics
Get learning results in JSON format
Expected Output: JSON response with detailed learning metrics
Note: Useful for automated processing
Learn with custom learning rate
Expected Output: Learning results with adjusted convergence speed
Note: Lower rates for more stable convergence
Learn with minimum feedback threshold
Expected Output: Learning skipped if insufficient feedback available
Note: Ensures statistical significance
Incremental learning with JSON output
Expected Output: JSON with incremental learning delta statistics
Note: Combines fast incremental mode with structured output
Full parameter learning with JSON output
Expected Output: JSON with custom rate and threshold learning metrics
Note: All learning parameters combined for fine-tuned runs
Generate signed AI policy file after learning
Expected Output: Learning cycle + ai-policy.json generated in results/
Note: Policy contains intent thresholds, tool allowlist, and risk mode
Generate policy with JSON output
Expected Output: JSON learning results + signed policy file written
Note: Policy is signed with SHA-256 HMAC to prevent tampering
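The option names below (--incremental, --learning-rate, --min-feedback, --generate-policy) are inferred from the example descriptions above and may differ in the shipped binary; treat this as an illustrative sketch rather than exact syntax:

```bash
# Full learning cycle on all feedback
ai-learner learn

# Incremental learning on new feedback only, as JSON (flag names assumed)
ai-learner learn --incremental --json

# Custom learning rate and minimum feedback threshold (values illustrative)
ai-learner learn --learning-rate 0.01 --min-feedback 50 --json

# Learning cycle that also writes a signed ai-policy.json to results/
ai-learner learn --generate-policy --json
```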
Performance Analysis
Analyze model accuracy and performance trends
Analyze performance over the last week
Expected Output: Accuracy metrics and trend analysis
Get accuracy analysis in JSON format
Expected Output: JSON with per-intent accuracy breakdown
Note: Supports accuracy, confidence, f1-score
Generate learning curve visualization data
Expected Output: Time-series data showing accuracy improvement
Note: Useful for identifying plateaus
Analyze confidence metrics
Expected Output: Confidence score distribution and statistics
Note: Shows prediction certainty levels
Analyze last 30 days as JSON
Expected Output: JSON with monthly performance trends
Note: Useful for monthly reporting
Analyze all-time data
Expected Output: Complete historical performance analysis
Note: Shows long-term improvement trends
Analyze F1-score metric with JSON output
Expected Output: JSON with F1-score breakdown per intent
Note: F1-score balances precision and recall
Learning curve data as JSON
Expected Output: JSON time-series of accuracy improvement
Note: Structured data for visualization tools
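A hedged sketch of the analysis examples above, assuming option names such as --period, --metric, and --learning-curve (not confirmed by this documentation):

```bash
# Accuracy and trends over the last week / last 30 days / all time
ai-learner analyze --period 7d
ai-learner analyze --period 30d --json
ai-learner analyze --period all

# Per-intent accuracy, confidence, and F1-score breakdowns
ai-learner analyze --metric accuracy --json
ai-learner analyze --metric confidence
ai-learner analyze --metric f1-score --json

# Learning-curve time series for visualization tools
ai-learner analyze --learning-curve --json
```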
AI Tier Performance
Analyze learning metrics per AI engine tier (TF-IDF, ONNX, Mistral.rs, GenAI/Ollama, Legacy LLM, Claude)
Show accuracy breakdown across all AI tiers
Expected Output: JSON with per-tier accuracy metrics
Note: Compares tier performance for optimization decisions
Weekly tier performance trends
Expected Output: JSON with weekly metrics including new tier data
Note: Tracks mistral.rs and GenAI tier improvement over time
ONNX vs LLM routing breakdown
Expected Output: JSON with fast-path vs slow-path query statistics
Note: Shows what percentage of queries use the ONNX fast path vs the LLM
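A sketch of the tier-level queries, assuming a hypothetical --by-tier option on the analyze subcommand:

```bash
# Per-tier accuracy breakdown (TF-IDF, ONNX, Mistral.rs, GenAI/Ollama, Claude)
ai-learner analyze --by-tier --json

# Weekly tier trends, including the mistral.rs and GenAI tiers
ai-learner analyze --by-tier --period 7d --json
```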
Report Generation
Generate comprehensive reports on learning performance
Generate a summary report in JSON format
Expected Output: JSON report with all performance metrics
Generate a Markdown report
Expected Output: Formatted Markdown document with tables and graphs
Note: Great for documentation
Generate an HTML report with visualizations
Expected Output: Interactive HTML report saved to file
Note: Includes charts and graphs
Generate weekly report
Expected Output: JSON report covering the last 7 days of activity
Note: Useful for weekly reviews
Full report in Markdown as JSON
Expected Output: Complete historical report in Markdown format as JSON
Note: Combines all-time data with Markdown formatting
Generate HTML report to file
Expected Output: HTML report with charts saved to results/report.html
Note: File output is written within the execution folder
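A sketch of the report examples, assuming hypothetical --format, --period, and --output options alongside the documented global flags:

```bash
# Summary report as JSON, Markdown, or HTML
ai-learner report --format json
ai-learner report --format markdown
ai-learner report --format html --output results/report.html

# Weekly report as JSON
ai-learner report --period 7d --json
```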
Related Tools
Other Kodachi AI binaries for model management and querying
Download ONNX embeddings model for AI engine
Expected Output: Model files downloaded to models/ directory
Note: Required for ONNX-based intent matching
Download GGUF model for local Mistral.rs inference (~1.8GB)
Expected Output: GGUF model downloaded to models/ directory
Note: Enables ai-cmd --engine mistral for local LLM queries
List downloaded and available AI models
Expected Output: Model inventory with sizes and status
Check which AI engine tiers are available
Expected Output: Tier list with availability status
Note: Shows TF-IDF, ONNX, Mistral.rs, GenAI, Claude tiers
Run a natural language query through the AI engine
Expected Output: AI-matched command with execution result
Note: ai-learner tracks these queries for learning
Train the AI model on collected feedback data
Expected Output: Training results with accuracy metrics
Note: Run after collecting sufficient feedback via ai-cmd
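Only ai-cmd and its --engine flag are named explicitly above; the query string below is an assumed example of how feedback enters the loop that ai-learner consumes:

```bash
# Local Mistral.rs query; ai-learner later learns from the recorded feedback
ai-cmd --engine mistral "check vpn connection status"
```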
Environment Variables
| Variable | Description | Default | Values |
|---|---|---|---|
| RUST_LOG | Set logging level | info | error, warn, info, debug, trace |
| NO_COLOR | Disable all colored output when set | unset | 1 |
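Both variables are read from the environment at startup; a brief sketch using the documented subcommands and global flags:

```bash
# One-off debug logging for a single run
RUST_LOG=debug ai-learner status

# Plain, uncolored JSON output for log capture
NO_COLOR=1 ai-learner report --json
```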
Exit Codes
| Code | Description |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Invalid arguments |
| 3 | Permission denied |
| 4 | Network error |
| 5 | File not found |
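The exit codes above can drive a wrapper script; a minimal sketch mapping the documented codes to log messages:

```bash
# Run a learning cycle and react to the documented exit codes
ai-learner learn --quiet
rc=$?
case "$rc" in
  0) echo "learning cycle completed" ;;
  2) echo "invalid arguments" >&2 ;;
  3) echo "permission denied - try running with sudo" >&2 ;;
  4) echo "network error, will retry later" >&2 ;;
  *) echo "ai-learner exited with code $rc" >&2 ;;
esac
```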