ai-learner

AI learning engine for continuous improvement and performance analysis

Version: 9.0.1 | Size: 2.7MB | Author: Warith Al Maawali (warith@digi77.com)

License: LicenseRef-Kodachi-SAN-1.0 | Website: https://kodachi.cloud


File Information

Property Value
Binary Name ai-learner
Version 9.0.1
Build Date 2026-02-14T07:59:29.571032032Z
Rust Version 1.82.0
File Size 2.7MB

SHA256 Checksum

80b67b0695a5c5e14510dffa12c80084dc2a1f1afe2e0bfc3cf71348dad8ae48

Features

Feedback aggregation and analysis
Incremental learning with convergence detection
Performance tracking and trend analysis
Multi-format report generation (JSON, Markdown, HTML)

Security Features

Feature Description
Input validation All inputs are validated and sanitized
Rate limiting Built-in rate limiting for network operations
Authentication Secure authentication with certificate pinning
Encryption TLS 1.3 for all network communications

System Requirements

Requirement Value
OS Linux (Debian-based)
Privileges root/sudo for system operations
Dependencies OpenSSL, libcurl

Global Options

Flag Description
-h, --help Print help information
-v, --version Print version information
-n, --info Display detailed information
-e, --examples Show usage examples
--json Output in JSON format
--json-pretty Pretty-print JSON output with indentation
--json-human Enhanced JSON output with improved formatting (like jq)
--verbose Enable verbose output
--quiet Suppress non-essential output
--no-color Disable colored output
--config <FILE> Use custom configuration file
--timeout <SECS> Set timeout (default: 30)
--retry <COUNT> Retry attempts (default: 3)
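
Any of the global options can be combined with the commands described below. As a small illustration using only the flags documented above, the following runs a status check with pretty-printed JSON and a longer network timeout:

sudo ai-learner status --json-pretty --timeout 60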

Commands

Analysis Operations

analyze

Analyze model performance and trends

Usage:

ai-learner analyze [OPTIONS]

Learning Operations

learn

Run learning cycle to improve model based on feedback

Usage:

ai-learner learn [OPTIONS]

Reporting Operations

report

Generate comprehensive performance reports

Usage:

ai-learner report [OPTIONS]

Status Operations

status

Show ai-learner status, database health, and activity metrics

Usage:

ai-learner status [OPTIONS]

Examples

Basic Learning Operations

Run learning cycles and update models based on feedback

Run a full learning cycle on all feedback

sudo ai-learner learn
Expected Output: Learning statistics showing improvements

Note

Processes all feedback since last run

Run incremental learning on new feedback only

sudo ai-learner learn --incremental
Expected Output: Quick learning update with delta statistics

Get learning results in JSON format

sudo ai-learner learn --json
Expected Output: JSON response with detailed learning metrics

Note

Useful for automated processing
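
As a sketch of automated processing, the JSON result can be captured once and then queried with jq. The .statistics.accuracy path below is a hypothetical field name, not a documented part of the schema:

# Capture the learning result, then extract a field (field path is illustrative)
sudo ai-learner learn --json > /tmp/learn-result.json
jq '.statistics.accuracy' /tmp/learn-result.json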

Learn with custom learning rate

sudo ai-learner learn --learning-rate 0.05
Expected Output: Learning results with adjusted convergence speed

Note

Lower rates for more stable convergence

Learn with minimum feedback threshold

sudo ai-learner learn --min-feedback 100
Expected Output: Learning skipped if insufficient feedback available

Note

Ensures statistical significance

Incremental learning with JSON output

sudo ai-learner learn --incremental --json
Expected Output: JSON with incremental learning delta statistics

Note

Combines fast incremental mode with structured output
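
For unattended operation, a scheduled incremental run is a natural fit. The crontab entry below is only a sketch; the binary path, schedule, and log file location are assumptions, not project defaults:

# Example root crontab entry: nightly incremental learning at 02:00
0 2 * * * /usr/bin/ai-learner learn --incremental --json >> /var/log/ai-learner.log 2>&1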

Full parameter learning with JSON output

sudo ai-learner learn --learning-rate 0.05 --min-feedback 100 --json
Expected Output: JSON with custom rate and threshold learning metrics

Note

All learning parameters combined for fine-tuned runs

Generate signed AI policy file after learning

sudo ai-learner learn --output-policy
Expected Output: Learning cycle + ai-policy.json generated in results/

Note

Policy contains intent thresholds, tool allowlist, risk mode

Generate policy with JSON output

sudo ai-learner learn --output-policy --json
Expected Output: JSON learning results + signed policy file written

Note

Policy is signed with SHA-256 HMAC to prevent tampering
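
To spot-check a generated policy, the file can be inspected with jq. Only the presence of intent thresholds, a tool allowlist, and a risk mode is documented; the key names used here are assumptions about the schema:

# Hypothetical key names; adjust to the actual layout of ai-policy.json
jq '{intent_thresholds, tool_allowlist, risk_mode}' results/ai-policy.json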

Performance Analysis

Analyze model accuracy and performance trends

Analyze performance over the last week

sudo ai-learner analyze --period last-7-days
Expected Output: Accuracy metrics and trend analysis

Get accuracy analysis in JSON format

sudo ai-learner analyze --metric accuracy --json
Expected Output: JSON with per-intent accuracy breakdown

Note

Supports: accuracy, confidence, f1-score
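
Since the supported metrics are accuracy, confidence, and f1-score, a short loop can collect all three in one pass (the output path under results/ is just a convention):

# Gather every supported metric as a separate JSON file
for metric in accuracy confidence f1-score; do
  sudo ai-learner analyze --metric "$metric" --json > "results/analysis-$metric.json"
done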

Generate learning curve visualization data

sudo ai-learner analyze --learning-curve
Expected Output: Time-series data showing accuracy improvement

Note

Useful for identifying plateaus

Analyze confidence metrics

sudo ai-learner analyze --metric confidence
Expected Output: Confidence score distribution and statistics

Note

Shows prediction certainty levels

Analyze last 30 days as JSON

sudo ai-learner analyze --period last-30-days --json
Expected Output: JSON with monthly performance trends

Note

Useful for monthly reporting

Analyze all-time data

sudo ai-learner analyze --period all-time
Expected Output: Complete historical performance analysis

Note

Shows long-term improvement trends

Analyze F1-score metric with JSON output

sudo ai-learner analyze --metric f1-score --json
Expected Output: JSON with F1-score breakdown per intent

Note

F1-score balances precision and recall

Learning curve data as JSON

sudo ai-learner analyze --learning-curve --json
Expected Output: JSON time-series of accuracy improvement

Note

Structured data for visualization tools
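
One possible way to feed this into a plotting tool is to flatten the JSON to CSV with jq. The .points[], .timestamp, and .accuracy names are assumptions about the learning-curve schema and may need adjusting:

# Flatten the learning curve to CSV (field names are illustrative)
sudo ai-learner analyze --learning-curve --json \
  | jq -r '.points[] | [.timestamp, .accuracy] | @csv' > results/learning-curve.csv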

AI Tier Performance

Analyze learning metrics per AI engine tier (TF-IDF, ONNX, Mistral.rs, GenAI/Ollama, Legacy LLM, Claude)

Show accuracy breakdown across all AI tiers

sudo ai-learner analyze --metric accuracy --json
Expected Output: JSON with per-tier accuracy metrics

Note

Compares tier performance for optimization decisions

Weekly tier performance trends

sudo ai-learner analyze --period last-7-days --json
Expected Output: JSON with weekly metrics including new tier data

Note

Tracks mistral.rs and GenAI tier improvement over time

ONNX vs LLM routing breakdown

sudo ai-learner analyze --metric accuracy --json
Expected Output: JSON with fast-path vs slow-path query statistics

Note

Shows what percentage of queries use ONNX fast path vs LLM

Report Generation

Generate comprehensive reports on learning performance

Generate a summary report in JSON format

sudo ai-learner report
Expected Output: JSON report with all performance metrics

Generate a Markdown report

sudo ai-learner report --format markdown
Expected Output: Formatted Markdown document with tables and graphs

Note

Great for documentation

Generate an HTML report with visualizations

sudo ai-learner report --format html --output results/learning-report.html
Expected Output: Interactive HTML report saved to file

Note

Includes charts and graphs

Generate weekly report

sudo ai-learner report --period last-7-days
Expected Output: JSON report covering last 7 days of activity

Note

Useful for weekly reviews

Full all-time report in Markdown format with JSON output

sudo ai-learner report --period all-time --format markdown --json
Expected Output: Complete historical report in Markdown, delivered as JSON

Note

Combines all-time data with markdown formatting

Generate HTML report to file

sudo ai-learner report --format html --output results/report.html
Expected Output: HTML report with charts saved to results/report.html

Note

File output within execution folder
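
For recurring report generation, embedding the date in the output filename keeps earlier reports around. This uses only the documented --format and --output flags; the naming pattern itself is just a convention:

# Write a dated HTML report under results/
sudo ai-learner report --format html --output "results/report-$(date +%F).html"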

Related AI Tools

Other Kodachi AI binaries for model management and querying

Download ONNX embeddings model for AI engine

ai-trainer download-model
Expected Output: Model files downloaded to models/ directory

Note

Required for ONNX-based intent matching

Download GGUF model for local Mistral.rs inference (~1.8GB)

ai-trainer download-model --llm
Expected Output: GGUF model downloaded to models/ directory

Note

Enables ai-cmd --engine mistral for local LLM queries

List downloaded and available AI models

ai-trainer download-model --show-models
Expected Output: Model inventory with sizes and status

Check which AI engine tiers are available

ai-cmd tiers
Expected Output: Tier list with availability status

Note

Shows TF-IDF, ONNX, Mistral.rs, GenAI, Claude tiers

Run a natural language query through the AI engine

ai-cmd query "check tor status"
Expected Output: AI-matched command with execution result

Note

ai-learner tracks these queries for learning

Train the AI model on collected feedback data

ai-trainer train
Expected Output: Training results with accuracy metrics

Note

Run after collecting sufficient feedback via ai-cmd
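
Putting the related binaries together, a minimal end-to-end sketch using only the commands shown above might look like this; the ordering reflects the documented flow of feedback from ai-cmd into training and learning:

# 1. One-time setup: fetch the ONNX embeddings model
ai-trainer download-model
# 2. Run queries so feedback accumulates
ai-cmd query "check tor status"
# 3. Train on the collected feedback, then run an incremental learning cycle
ai-trainer train
sudo ai-learner learn --incremental
# 4. Review the results
sudo ai-learner analyze --period last-7-days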

Environment Variables

Variable Description Default Values
RUST_LOG Set logging level info error
NO_COLOR Disable all colored output when set unset 1
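
As an example of using these variables together, the run below limits logging to errors and disables colored output, which is convenient when capturing output in scripts; the values are the ones listed in the table above:

# RUST_LOG=error and NO_COLOR=1 as listed in the table above
sudo env RUST_LOG=error NO_COLOR=1 ai-learner learn --quiet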

Exit Codes

Code Description
0 Success
1 General error
2 Invalid arguments
3 Permission denied
4 Network error
5 File not found
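
Because the exit codes are fixed, automation can branch on them directly. A minimal sketch:

# Handle the documented exit codes in a wrapper script
sudo ai-learner learn --incremental --json
status=$?
case "$status" in
  0) echo "learning cycle completed" ;;
  3) echo "permission denied: run with sudo" >&2 ;;
  4) echo "network error: check connectivity and retry" >&2 ;;
  *) echo "ai-learner exited with code $status" >&2 ;;
esac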