AI Learner (ai-learner) — Workflow Guide

File Information

Binary Name: ai-learner
Version: 9.0.1
File Size: 2.7 MB
Author: Warith Al Maawali (warith@digi77.com)
License: LicenseRef-Kodachi-SAN-1.0
Category: AI & Intelligence
Description: AI learning engine for continuous improvement and performance analysis

SHA256 Checksum

80b67b0695a5c5e14510dffa12c80084dc2a1f1afe2e0bfc3cf71348dad8ae48

What ai-learner Does

ai-learner is the learning engine that processes feedback from ai-cmd, adjusts model weights, and generates performance reports. It bridges the gap between user feedback and model improvement — turning corrections into better predictions.
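
At a glance, one full improvement pass chains the three subcommands used throughout this guide; the sketch below uses only commands and flags documented in the scenarios that follow.

# Minimal improvement pass: learn from feedback, measure, then report
sudo ai-learner learn                          # process accumulated feedback
sudo ai-learner analyze --metric accuracy      # measure the effect
sudo ai-learner report --period last-7-days    # summarize for review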

Key Capabilities

Learning Cycles — Process accumulated feedback to improve accuracy (basic and incremental modes)
Incremental Learning — Fast weight updates without full retraining using --incremental
Performance Analysis — Accuracy, confidence, and F1-score metrics with --metric
Period Filtering — Analyze data by last-7-days, last-30-days, or all-time
Learning Curves — Visualize improvement trends over time with --learning-curve
Report Generation — Markdown, HTML, and JSON reports with --format
Parameter Tuning — Adjust --learning-rate and --min-feedback for optimization

Scenario 1: The Feedback-to-Learning Loop

Goal: Process user feedback from ai-cmd to improve prediction accuracy.

Cross-binary workflow: ai-cmd → ai-learner → ai-cmd (verification)

# Step 1: User submits corrections via ai-cmd
ai-cmd feedback "check tor" --correct-intent tor_status
ai-cmd feedback "dns leak" --correct-command "dns-leak test"
ai-cmd feedback "network check" --correct-intent network_check

# Step 2: Run basic learning cycle to process feedback
sudo ai-learner learn

# Output shows:
# Feedback processed: 15
# Accuracy improvement: +3.2%
# Intents updated: 5
# Converged: false (more improvement possible)

# Step 3: Verify improvement with analysis
sudo ai-learner analyze --period last-7-days --metric accuracy

# Step 4: Test that corrections took effect
ai-cmd query "check tor" --dry-run
# Should now correctly map to tor_status

# Step 5: If you want JSON output for automation
sudo ai-learner learn --json
# Returns structured JSON with learning metrics

# Step 6: For faster updates on new feedback
sudo ai-learner learn --incremental
# Processes only new feedback since last learning cycle

When to run: after enough feedback submissions have accumulated. Use --min-feedback to set the minimum number of items required before a cycle runs (the default varies per model).

Key flags:
- --json — Machine-readable output
- --incremental — Fast updates without full retraining
- --learning-rate 0.05 — Control convergence speed (default: auto-tuned)
- --min-feedback 100 — Minimum feedback items before learning
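
If you want to keep running cycles until the engine reports convergence, a small wrapper can loop on the --json output. This is a hypothetical sketch: the .converged key is an assumption inferred from the "Converged: false" line in the sample output above, so check the actual schema with sudo ai-learner learn --json | jq before relying on it.

# Hypothetical convergence loop (requires jq); .converged is an assumed JSON key
for i in 1 2 3; do
  result=$(sudo ai-learner learn --json)
  converged=$(echo "$result" | jq -r '.converged')
  echo "Cycle $i: converged=$converged"
  [ "$converged" = "true" ] && break
done
sudo ai-learner analyze --metric accuracy   # verify the net effect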


Scenario 2: Weekly Performance Review

Goal: Analyze AI system performance over the past week and track improvement.

Cross-binary workflow: ai-learner (analyze) → ai-admin (diagnostics if issues found)

# Step 1: Analyze performance for the past week
sudo ai-learner analyze --period last-7-days

# Step 2: Check specific performance metrics
sudo ai-learner analyze --metric accuracy
sudo ai-learner analyze --metric confidence
sudo ai-learner analyze --metric f1-score

# Step 3: View learning curve to track improvement trends
sudo ai-learner analyze --learning-curve

# Step 4: Get JSON output for automation/dashboards
sudo ai-learner analyze --period last-7-days --json

# Step 5: Generate comprehensive weekly report
sudo ai-learner report --period last-7-days

# Step 6: If accuracy is dropping, investigate with ai-admin
ai-admin diagnostics --full

# Step 7: Run learning cycle if needed
sudo ai-learner learn

# Step 8: Verify improvement
sudo ai-learner analyze --period last-7-days --metric accuracy

Available periods:
- last-7-days — Weekly performance review
- last-30-days — Monthly analysis
- all-time — Complete historical data

Available metrics:
- accuracy — Overall prediction accuracy
- confidence — Model confidence scores
- f1-score — Balanced precision/recall metric
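
To act on the weekly review automatically, the documented steps can be gated on the accuracy value. A rough sketch, assuming the --json output exposes accuracy under the .accuracy key as in Scenario 7 and reports it as a 0-1 fraction (the 0.85 threshold is just an example; adjust it if the tool reports percentages):

# Hypothetical weekly gate: learn and run diagnostics only when accuracy dips
acc=$(sudo ai-learner analyze --period last-7-days --metric accuracy --json | jq -r '.accuracy')
echo "7-day accuracy: $acc"
if awk "BEGIN { exit !($acc < 0.85) }"; then
  echo "Accuracy below target: running diagnostics and a learning cycle"
  ai-admin diagnostics --full
  sudo ai-learner learn
  sudo ai-learner analyze --period last-7-days --metric accuracy
fi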


Scenario 3: Monthly Performance Report

Goal: Generate comprehensive monthly report with historical comparison.

Cross-binary workflow: ai-learner (analyze + report) → ai-trainer (retrain if needed)

# Step 1: Analyze full month performance
sudo ai-learner analyze --period last-30-days

# Step 2: Generate monthly report (default format)
sudo ai-learner report --period last-30-days

# Step 3: Export report as Markdown for documentation
sudo ai-learner report --format markdown --period last-30-days

# Step 4: Generate HTML report with custom output path
sudo ai-learner report --format html --output results/learning-report.html

# Step 5: Get JSON report for archiving
sudo ai-learner report --period last-30-days --format markdown --json

# Step 6: Compare with previous months
sudo ai-learner analyze --period all-time

# Step 7: If accuracy is below target, retrain
sudo ai-trainer train --data ./data/training-data.json

# Step 8: Validate after retraining
ai-trainer validate --test-data ./data/test-commands.json --threshold 0.90

# Step 9: Generate post-training report
sudo ai-learner report --period last-7-days --format html --output results/post-training.html

Report formats:
- Default — Terminal output
- markdown — For documentation
- html — For web viewing, with an --output path
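
To archive a dated copy in every documented format, something like the following works, assuming Markdown and JSON reports go to stdout when no --output path is given (the results/ directory and file names are illustrative):

# Sketch: month-stamped report archive in all three formats
stamp=$(date +%Y%m)
mkdir -p results
sudo ai-learner report --period last-30-days --format markdown > "results/learning-${stamp}.md"
sudo ai-learner report --period last-30-days --format html --output "results/learning-${stamp}.html"
sudo ai-learner report --period last-30-days --json > "results/learning-${stamp}.json"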


Scenario 4: Tuning Learning Parameters

Goal: Optimize learning performance by adjusting convergence speed and feedback thresholds.

When to tune:
- Learning too slowly → increase --learning-rate
- Overfitting → decrease --learning-rate
- Not enough data → adjust --min-feedback
- Need fast updates → use --incremental

# Default learning (auto-tuned parameters)
sudo ai-learner learn

# Increase learning rate for faster convergence
sudo ai-learner learn --learning-rate 0.05

# Decrease learning rate for stability (prevent overfitting)
sudo ai-learner learn --learning-rate 0.01

# Set minimum feedback threshold
sudo ai-learner learn --min-feedback 100

# Combine parameters with JSON output
sudo ai-learner learn --learning-rate 0.05 --min-feedback 100 --json

# Incremental learning for quick updates
sudo ai-learner learn --incremental

# Incremental with JSON output
sudo ai-learner learn --incremental --json

# Full parameter tuning workflow
sudo ai-learner learn --learning-rate 0.05 --min-feedback 100
sudo ai-learner analyze --learning-curve
# If curve shows good convergence, keep settings
# If oscillating or diverging, decrease learning rate

Parameter guide:
- --learning-rate 0.01 — Conservative (slow, stable)
- --learning-rate 0.03 — Balanced (default range)
- --learning-rate 0.05 — Aggressive (fast, may overshoot)
- --min-feedback 50 — Low threshold (more frequent learning)
- --min-feedback 100 — Balanced threshold
- --min-feedback 200 — High threshold (batch processing)
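
If you want to compare settings empirically, the documented values can be probed in sequence. Note that each learn run changes the model, so this is a rough probe rather than a controlled experiment; the output file names are illustrative.

# Sketch: probe the documented learning rates and keep each JSON result
for rate in 0.01 0.03 0.05; do
  echo "=== learning-rate $rate ==="
  sudo ai-learner learn --learning-rate "$rate" --json > "/tmp/learn-rate-${rate}.json"
  sudo ai-learner analyze --learning-curve   # inspect convergence after each run
done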


Scenario 5: Fully Automated Learning Pipeline

Goal: Set up ai-scheduler to automate daily learning, weekly analysis, and monthly reports.

Cross-binary workflow: ai-scheduler + ai-learner (fully automated)

# Step 1: Schedule daily incremental learning at 2 AM
ai-scheduler add --name "daily-learning" \
  --command "sudo ai-learner learn --incremental" \
  --cron "0 2 * * *"

# Step 2: Schedule weekly accuracy analysis on Sundays at 3 AM
ai-scheduler add --name "weekly-analysis" \
  --command "sudo ai-learner analyze --period last-7-days --json" \
  --cron "0 3 * * 0"

# Step 3: Schedule monthly comprehensive reports on the 1st at 4 AM
# Note: $(date +%Y%m) is expanded by your shell when the task is *added*, so the
# file name is fixed at that moment. Escape it as \$(date +%Y%m) if you want it
# evaluated at run time (this assumes the scheduler runs commands through a shell).
ai-scheduler add --name "monthly-report" \
  --command "sudo ai-learner report --period last-30-days --format html --output results/monthly-$(date +%Y%m).html" \
  --cron "0 4 1 * *"

# Step 4: Schedule learning curve analysis bi-weekly
ai-scheduler add --name "biweekly-curve" \
  --command "sudo ai-learner analyze --learning-curve --json" \
  --cron "0 5 1,15 * *"

# Step 5: Verify all scheduled tasks
ai-scheduler list

# Step 6: Test scheduled commands manually
ai-scheduler run --name "daily-learning"

Result: The AI system automatically processes feedback daily, analyzes performance weekly, and generates comprehensive reports monthly — all without manual intervention.

Complete automation stack (combine with other AI tasks):

# Full pipeline: Monitor → Learn → Optimize → Report
ai-scheduler add --name "dns-check" \
  --command "dns-leak test" \
  --cron "0 */6 * * *"

ai-scheduler add --name "daily-learning" \
  --command "sudo ai-learner learn --incremental --json" \
  --cron "0 2 * * *"

ai-scheduler add --name "weekly-optimize" \
  --command "ai-admin tune optimize" \
  --cron "0 3 * * 0"

ai-scheduler add --name "monthly-report" \
  --command "sudo ai-learner report --period last-30-days --format markdown" \
  --cron "0 4 1 * *"


Scenario 6: Diagnosing and Fixing Accuracy Problems

Goal: Identify root cause of accuracy drop and restore performance.

Cross-binary workflow: ai-learner → ai-admin → ai-trainer → ai-cmd (multi-service debugging)

# Step 1: Detect accuracy problem
sudo ai-learner analyze --period last-7-days --metric accuracy
# Shows: 62% (was 85% last week — major drop!)

# Step 2: Check learning curve for anomalies
sudo ai-learner analyze --learning-curve
# Look for sudden drops or plateau

# Step 3: Verify database health (corruption can cause drops)
ai-admin diagnostics --full
ai-admin db integrity-check

# Step 4: Check if feedback quality is the issue
sudo ai-learner analyze --period last-7-days --json | jq '.feedback_stats'

# Step 5: Attempt learning cycle to process new feedback
sudo ai-learner learn

# Step 6: Verify if learning improved accuracy
sudo ai-learner analyze --metric accuracy

# Step 7: If still low, try incremental learning
sudo ai-learner learn --incremental

# Step 8: If problem persists, retrain the model
sudo ai-trainer train --data ./data/training-data.json

# Step 9: Validate training results
ai-trainer validate --test-data ./data/test-commands.json --threshold 0.85

# Step 10: Verify recovery
sudo ai-learner analyze --period last-7-days --metric accuracy
sudo ai-learner analyze --learning-curve

# Step 11: Generate diagnostic report
sudo ai-learner report --period last-7-days --format markdown

# Step 12: Test specific commands that were failing
ai-cmd query "check tor" --dry-run
ai-cmd query "dns leak" --dry-run

Common causes and fixes:
- Database corruption → ai-admin db integrity-check and repair
- Insufficient feedback → Accumulate more corrections via ai-cmd feedback
- Stale model → ai-trainer train with updated data
- Parameter tuning → Adjust --learning-rate and --min-feedback
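
The steps above can also be bundled into a single triage script so the whole sweep is one command. A minimal sketch; the jq keys (.accuracy, .feedback_stats) follow the usage shown earlier in this scenario and in Scenario 7:

#!/bin/sh
# triage.sh: one-shot accuracy triage (hypothetical helper script)
echo "== 7-day accuracy =="
sudo ai-learner analyze --period last-7-days --metric accuracy --json | jq '.accuracy'
echo "== feedback stats =="
sudo ai-learner analyze --period last-7-days --json | jq '.feedback_stats'
echo "== database health =="
ai-admin diagnostics --full
ai-admin db integrity-check
echo "== learning cycle and re-check =="
sudo ai-learner learn
sudo ai-learner analyze --metric accuracy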


Scenario 7: Pre- and Post-Training Analysis

Goal: Measure training impact by comparing before/after performance metrics.

Cross-binary workflow: ai-learner → ai-trainer → ai-learner (comparative analysis)

# Step 1: Baseline measurement before training
sudo ai-learner analyze --period all-time --metric accuracy --json > /tmp/before-accuracy.json
sudo ai-learner analyze --period all-time --metric confidence --json > /tmp/before-confidence.json
sudo ai-learner analyze --period all-time --metric f1-score --json > /tmp/before-f1.json

# Step 2: Run the training
sudo ai-trainer train --data ./data/training-data.json

# Step 3: Measure post-training performance
sudo ai-learner analyze --period last-7-days --metric accuracy --json > /tmp/after-accuracy.json
sudo ai-learner analyze --period last-7-days --metric confidence --json > /tmp/after-confidence.json
sudo ai-learner analyze --period last-7-days --metric f1-score --json > /tmp/after-f1.json

# Step 4: Compare results
echo "=== Accuracy Comparison ==="
echo "Before:" && cat /tmp/before-accuracy.json | jq '.accuracy'
echo "After:" && cat /tmp/after-accuracy.json | jq '.accuracy'

echo "=== Confidence Comparison ==="
echo "Before:" && cat /tmp/before-confidence.json | jq '.confidence'
echo "After:" && cat /tmp/after-confidence.json | jq '.confidence'

echo "=== F1-Score Comparison ==="
echo "Before:" && cat /tmp/before-f1.json | jq '.f1_score'
echo "After:" && cat /tmp/after-f1.json | jq '.f1_score'

# Step 5: If accuracy improved, create snapshot
sudo ai-trainer snapshot -v "improved-$(date +%Y%m%d)"

# Step 6: Generate comparison report
sudo ai-learner report --period last-7-days --format html --output results/post-training-$(date +%Y%m%d).html

# Step 7: View learning curve to verify improvement
sudo ai-learner analyze --learning-curve

# Step 8: If accuracy dropped, investigate
sudo ai-learner analyze --period last-7-days --json | jq
ai-admin diagnostics --full

# Step 9: Validate with test commands
ai-trainer validate --test-data ./data/test-commands.json --threshold 0.90

Best practices:
- Always capture baseline metrics before training
- Use --json output for programmatic comparison
- Compare multiple metrics (accuracy, confidence, f1-score)
- Create snapshots of successful training runs
- Document training parameters and results
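
A small loop can print the captured metrics side by side. This sketch reuses the file names from Steps 1 and 3 and the jq keys from Step 4; nothing else is assumed:

# Sketch: compact before/after comparison from the captured JSON files
for pair in accuracy:accuracy confidence:confidence f1:f1_score; do
  file=${pair%%:*}; key=${pair##*:}
  before=$(jq -r ".${key}" "/tmp/before-${file}.json")
  after=$(jq -r ".${key}" "/tmp/after-${file}.json")
  printf '%-10s before=%-10s after=%s\n' "$key" "$before" "$after"
done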



Troubleshooting

- "Insufficient feedback" error — Cause: not enough user corrections collected. Fix: use ai-cmd feedback to provide more corrections before learning.
- Accuracy drops after learning — Cause: conflicting or bad feedback data. Fix: review feedback with ai-learner report and remove outliers.
- Learning cycle takes too long — Cause: large feedback dataset. Fix: use --incremental for faster partial updates.
- Reports show no improvement — Cause: learning rate too low or too high. Fix: adjust with --learning-rate (try the 0.01-0.05 range).
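
For the first item in particular (the "Insufficient feedback" error), a pre-flight check avoids wasted runs: inspect the feedback pool before starting a cycle and collect more corrections if it looks thin. The .feedback_stats key follows the jq usage in Scenario 6.

# Hypothetical pre-flight check before a learning cycle
sudo ai-learner analyze --period last-7-days --json | jq '.feedback_stats'
# If the pool is small, gather more corrections first, e.g.:
ai-cmd feedback "check tor" --correct-intent tor_status
sudo ai-learner learn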