Learning Engine - Experience-Based Learning & Adaptation
The Learning Engine enables agents to learn from experience, adapt their behavior, and improve performance over time through Reinforcement Learning from Human Feedback (RLHF).
Overview
The Learning Engine implements a sophisticated experience-based learning system that:
- Records Experiences: Captures every agent execution with full context
- Detects Patterns: Identifies successful and unsuccessful patterns
- Generates Adaptations: Suggests behavior improvements
- Applies Learning: Modifies agent behavior based on feedback
- Tracks Performance: Monitors learning effectiveness
Location: src/lib/ai/learning-adaptation-engine.ts, backend-saas/core/learning_engine.py
Architecture
Core Algorithms
1. Experience Recording
ALGORITHM: Record Experience
INPUT: tenant_id, agent_id, experience_data
OUTPUT: experience_id
1. VALIDATE EXPERIENCE DATA
============================================================================
Ensure all required fields are present and valid.
REQUIRED_FIELDS = [
'task_type',
'task_description',
'outcome', # success/failure
'approach_taken',
'confidence'
]
FOR each field IN REQUIRED_FIELDS:
IF field NOT IN experience_data:
RAISE ValidationError(f"Missing required field: {field}")
# Validate data types
IF NOT (0.0 <= experience_data.confidence <= 1.0):
RAISE ValidationError("Confidence must be between 0 and 1")
IF experience_data.outcome NOT IN ['success', 'failure']:
RAISE ValidationError("Outcome must be 'success' or 'failure'")
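The validation step above can be sketched directly in Python. The field names and error messages follow the pseudocode; the `ValidationError` class and `validate_experience` helper are illustrative assumptions, not the documented API.

```python
# Sketch of step 1 (experience validation); the exception type and helper
# name are assumptions for illustration.
REQUIRED_FIELDS = [
    "task_type", "task_description", "outcome", "approach_taken", "confidence",
]

class ValidationError(ValueError):
    pass

def validate_experience(experience: dict) -> None:
    # All required fields must be present
    for field in REQUIRED_FIELDS:
        if field not in experience:
            raise ValidationError(f"Missing required field: {field}")
    # Confidence is a probability-like score in [0, 1]
    if not (0.0 <= experience["confidence"] <= 1.0):
        raise ValidationError("Confidence must be between 0 and 1")
    # Outcome is a binary label
    if experience["outcome"] not in ("success", "failure"):
        raise ValidationError("Outcome must be 'success' or 'failure'")
```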
2. EXTRACT EXPERIENCE FEATURES
============================================================================
Extract meaningful features for pattern recognition.
features = {
# Task features
task_type: experience_data.task_type,
task_complexity: calculate_complexity(experience_data.task_description),
domain: extract_domain(experience_data.task_description),
# Execution features
approach_used: experience_data.approach_taken,
skills_used: experience_data.skills_involved or [],
reasoning_pattern: extract_reasoning_pattern(experience_data.reasoning_chain),
# Outcome features
success: experience_data.outcome == 'success',
confidence: experience_data.confidence,
execution_time: experience_data.duration_seconds,
resource_usage: experience_data.resource_consumption,
# Context features
agent_role: experience_data.agent_role,
maturity_level: experience_data.maturity_level,
timestamp: now()
}
RETURN features
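The feature extraction above calls `calculate_complexity()`, which this document does not define. A minimal heuristic, purely as an assumption about what such a scorer might look like, could weigh description length against multi-step cue words:

```python
# Illustrative heuristic for calculate_complexity(); the real implementation
# is not shown in this document, so treat this scoring as an assumption.
def calculate_complexity(description: str) -> float:
    """Score task complexity in [0, 1] from simple text signals."""
    words = description.split()
    # Longer descriptions tend to describe more complex tasks
    length_score = min(len(words) / 50.0, 1.0)
    # Coordination words suggest multi-step work
    cues = sum(w.lower() in {"and", "then", "while", "across"} for w in words)
    cue_score = min(cues / 5.0, 1.0)
    return round(0.7 * length_score + 0.3 * cue_score, 2)
```

A real implementation might instead use an LLM call or a trained classifier; the point is that the output is normalized to [0, 1] so it composes with the other features.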
3. GENERATE EXPERIENCE EMBEDDING
============================================================================
Create vector representation for semantic similarity search.
# Create searchable text
searchable_text = f"""
{experience_data.task_type}
{experience_data.task_description}
{experience_data.approach_taken}
{experience_data.outcome}
{" ".join(experience_data.learnings or [])}
""".strip()
# Generate embedding
embedding = embed(searchable_text)
RETURN embedding
4. STORE EXPERIENCE
============================================================================
Persist experience in multiple storage systems.
a) Store in PostgreSQL (episodes table)
episode = {
id: generate_uuid(),
tenant_id: tenant_id,
agent_id: agent_id,
# Task information
task_type: experience_data.task_type,
task_description: experience_data.task_description,
input_summary: summarize(experience_data.input),
# Execution details
reasoning_chain: serialize(experience_data.reasoning_chain),
approach_taken: experience_data.approach_taken,
actions_taken: experience_data.actions,
# Outcome
outcome: experience_data.outcome,
success: experience_data.outcome == 'success',
confidence: experience_data.confidence,
# Learning
learnings: experience_data.learnings or [],
metacognitive_insights: serialize(experience_data.metacognition),
# Metadata
features: features,
embedding: embedding,
timestamp: now()
}
INSERT INTO episodes VALUES (episode)
b) Index in LanceDB (vector search)
lance_record = {
episode_id: episode.id,
tenant_id: tenant_id,
agent_id: agent_id,
embedding: embedding,
features: features,
timestamp: episode.timestamp
}
INSERT INTO lancedb TABLE experiences VALUES (lance_record)
c) Update learning statistics
UPDATE agent_learning_stats
SET
total_experiences = total_experiences + 1,
successful_experiences = successful_experiences + (1 IF success ELSE 0),
last_learning_timestamp = now()
WHERE agent_id = agent_id
5. TRIGGER PATTERN RECOGNITION
============================================================================
After recording experience, check for learnable patterns.
# Run pattern recognition asynchronously
TRIGGER background_job:
detect_learning_patterns(
tenant_id=tenant_id,
agent_id=agent_id,
episode_id=episode.id
)
6. RETURN episode_id
============================================================================
RETURN episode.id
MAIN RETURN episode_id
2. Pattern Recognition
ALGORITHM: Detect Learning Patterns
INPUT: tenant_id, agent_id, recent_episode_count=30
OUTPUT: learning_patterns
1. RETRIEVE RECENT EXPERIENCES
============================================================================
Query recent episodes for pattern analysis.
experiences = query(
SELECT * FROM episodes
WHERE tenant_id = tenant_id
AND agent_id = agent_id
ORDER BY timestamp DESC
LIMIT recent_episode_count
)
IF len(experiences) < 10:
RETURN {
status: "insufficient_data",
patterns: [],
message: "Need at least 10 experiences to detect patterns"
}
2. ANALYZE SUCCESS PATTERNS
============================================================================
Identify patterns that lead to successful outcomes.
successful_experiences = [e FOR e IN experiences IF e.success == True]
failed_experiences = [e FOR e IN experiences IF e.success == False]
success_rate = len(successful_experiences) / len(experiences)
# Group by task type; store per-group stats for use in steps 3-4
task_type_stats = GROUP experiences BY task_type
FOR each task_type, task_experiences IN task_type_stats:
task_type_stats[task_type] = {
count: len(task_experiences),
success_rate: COUNT(task_experiences WHERE success == True) / len(task_experiences)
}
# Group by approach
approach_stats = GROUP experiences BY approach_used
FOR each approach, approach_experiences IN approach_stats:
approach_stats[approach] = {
count: len(approach_experiences),
success_rate: COUNT(approach_experiences WHERE success == True) / len(approach_experiences)
}
# Group by reasoning pattern
reasoning_stats = GROUP experiences BY reasoning_pattern
FOR each pattern, pattern_experiences IN reasoning_stats:
reasoning_stats[pattern] = {
count: len(pattern_experiences),
success_rate: COUNT(pattern_experiences WHERE success == True) / len(pattern_experiences)
}
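The GROUP BY steps above reduce to a small helper: bucket experiences by a key, then attach the `count` and `success_rate` stats that steps 3 and 4 consume. A minimal sketch:

```python
from collections import defaultdict

# Sketch of the grouping in step 2: bucket experiences by a feature key
# and compute per-group success statistics.
def group_stats(experiences: list, key: str) -> dict:
    buckets = defaultdict(list)
    for exp in experiences:
        buckets[exp[key]].append(exp)
    return {
        value: {
            "count": len(group),
            # True counts as 1, False as 0
            "success_rate": sum(e["success"] for e in group) / len(group),
        }
        for value, group in buckets.items()
    }
```

The same helper serves all three groupings (`task_type`, `approach_used`, `reasoning_pattern`) by varying `key`.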
3. IDENTIFY HIGH-PERFORMING PATTERNS
============================================================================
Find patterns that consistently lead to success.
high_performance_patterns = []
# Task types with >80% success rate
FOR each task_type, stats IN task_type_stats:
IF stats.success_rate > 0.8 AND stats.count >= 5:
high_performance_patterns.append({
pattern_type: 'task_type',
pattern_value: task_type,
success_rate: stats.success_rate,
sample_size: stats.count,
confidence: calculate_confidence(stats.count, stats.success_rate)
})
# Approaches with >80% success rate
FOR each approach, stats IN approach_stats:
IF stats.success_rate > 0.8 AND stats.count >= 5:
high_performance_patterns.append({
pattern_type: 'approach',
pattern_value: approach,
success_rate: stats.success_rate,
sample_size: stats.count,
confidence: calculate_confidence(stats.count, stats.success_rate)
})
# Reasoning patterns with >80% success rate
FOR each pattern, stats IN reasoning_stats:
IF stats.success_rate > 0.8 AND stats.count >= 5:
high_performance_patterns.append({
pattern_type: 'reasoning_pattern',
pattern_value: pattern,
success_rate: stats.success_rate,
sample_size: stats.count,
confidence: calculate_confidence(stats.count, stats.success_rate)
})
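Step 3 relies on `calculate_confidence(count, success_rate)`, which the document leaves undefined. One reasonable choice, offered here as an assumption rather than the documented formula, is the lower bound of the Wilson score interval: it discounts high success rates observed on small samples, which matches the `sample_size >= 5` guard above.

```python
import math

# Assumed implementation of calculate_confidence(): the 95% Wilson score
# lower bound, which penalizes small sample sizes.
def calculate_confidence(n: int, success_rate: float, z: float = 1.96) -> float:
    if n == 0:
        return 0.0
    p = success_rate
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom
```

With this choice, an 80% success rate over 5 episodes yields a much lower confidence than the same rate over 50 episodes, so patterns need volume before they are trusted.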
4. IDENTIFY FAILURE MODES
============================================================================
Find patterns that consistently lead to failure.
failure_patterns = []
# Task types with <50% success rate
FOR each task_type, stats IN task_type_stats:
IF stats.success_rate < 0.5 AND stats.count >= 5:
failure_patterns.append({
pattern_type: 'task_type',
pattern_value: task_type,
failure_rate: 1.0 - stats.success_rate,
sample_size: stats.count,
severity: 'high' IF stats.success_rate < 0.3 ELSE 'medium'
})
# Approaches with <50% success rate
FOR each approach, stats IN approach_stats:
IF stats.success_rate < 0.5 AND stats.count >= 5:
failure_patterns.append({
pattern_type: 'approach',
pattern_value: approach,
failure_rate: 1.0 - stats.success_rate,
sample_size: stats.count,
severity: 'high' IF stats.success_rate < 0.3 ELSE 'medium'
})
5. DETECT CONFIDENCE CALIBRATION
============================================================================
Check if agent's confidence matches actual success rate.
calibration_issues = []
# Bin experiences by confidence
confidence_bins = {
'high': [e FOR e IN experiences IF e.confidence > 0.7],
'medium': [e FOR e IN experiences IF 0.3 <= e.confidence <= 0.7],
'low': [e FOR e IN experiences IF e.confidence < 0.3]
}
# Calculate actual success rate for each bin
FOR each bin_name, bin_experiences IN confidence_bins:
IF len(bin_experiences) > 0:
actual_success_rate = (
COUNT(bin_experiences WHERE success == True) /
len(bin_experiences)
)
expected_confidence = {
'high': 0.7,
'medium': 0.5,
'low': 0.3
}[bin_name]
calibration_error = abs(actual_success_rate - expected_confidence)
IF calibration_error > 0.2:
# Agent is poorly calibrated
calibration_issues.append({
confidence_level: bin_name,
expected_confidence: expected_confidence,
actual_success_rate: actual_success_rate,
calibration_error: calibration_error,
recommendation: (
"Reduce confidence by 20%" IF actual_success_rate < expected_confidence
ELSE "Increase confidence by 20%"
)
})
6. GENERATE LEARNING INSIGHTS
============================================================================
Synthesize patterns into actionable insights.
insights = []
# High-performing pattern insights
FOR each pattern IN high_performance_patterns:
insights.append({
type: 'success_pattern',
# Carry the pattern fields forward so step 7 can reference them
pattern_type: pattern.pattern_type,
pattern_value: pattern.pattern_value,
success_rate: pattern.success_rate,
message: f"Pattern '{pattern.pattern_value}' shows {pattern.success_rate * 100}% success rate",
recommendation: f"Prefer {pattern.pattern_type} '{pattern.pattern_value}' for similar tasks",
confidence: pattern.confidence,
evidence: {
sample_size: pattern.sample_size,
success_rate: pattern.success_rate
}
})
# Failure pattern insights
FOR each pattern IN failure_patterns:
insights.append({
type: 'failure_pattern',
# Carry the pattern fields forward so step 7 can reference them
pattern_type: pattern.pattern_type,
pattern_value: pattern.pattern_value,
failure_rate: pattern.failure_rate,
message: f"Pattern '{pattern.pattern_value}' shows {pattern.failure_rate * 100}% failure rate",
recommendation: f"Avoid {pattern.pattern_type} '{pattern.pattern_value}' or investigate root cause",
severity: pattern.severity,
evidence: {
sample_size: pattern.sample_size,
failure_rate: pattern.failure_rate
}
})
# Calibration insights
FOR each issue IN calibration_issues:
insights.append({
type: 'calibration_issue',
# Carry the issue fields forward so step 7 can reference them
confidence_level: issue.confidence_level,
calibration_error: issue.calibration_error,
message: f"Agent's {issue.confidence_level} confidence predictions are off by {issue.calibration_error * 100}%",
recommendation: issue.recommendation,
severity: 'medium' IF issue.calibration_error < 0.3 ELSE 'high'
})
7. GENERATE ADAPTATION SUGGESTIONS
============================================================================
Convert insights into concrete behavior modifications.
adaptations = []
FOR each insight IN insights:
IF insight.type == 'success_pattern':
# Suggest reinforcing successful patterns
adaptations.append({
adaptation_type: 'reinforce_pattern',
target_pattern: insight.pattern_value,
action: 'increase_usage',
expected_improvement: (insight.success_rate - 0.5) * 0.2, # Max 10% improvement
confidence: insight.confidence,
rationale: insight.message
})
ELSE IF insight.type == 'failure_pattern':
# Suggest avoiding or fixing failed patterns
adaptations.append({
adaptation_type: 'avoid_pattern',
target_pattern: insight.pattern_value,
action: 'decrease_usage',
expected_improvement: insight.failure_rate * 0.15, # Up to 15% improvement
severity: insight.severity,
rationale: insight.message
})
ELSE IF insight.type == 'calibration_issue':
# Suggest confidence calibration
adaptations.append({
adaptation_type: 'calibrate_confidence',
target_level: insight.confidence_level,
action: insight.recommendation,
expected_improvement: insight.calibration_error * 0.5,
rationale: "Better confidence calibration improves decision quality"
})
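The insight-to-adaptation mapping in step 7 is a straight dispatch on insight type. A minimal sketch, using the improvement formulas from the pseudocode ((rate − 0.5) × 0.2, rate × 0.15, error × 0.5) and plain dicts in place of the stored records:

```python
# Sketch of step 7: convert insights into concrete adaptation suggestions.
# Improvement formulas follow the pseudocode above.
def suggest_adaptations(insights: list) -> list:
    adaptations = []
    for insight in insights:
        if insight["type"] == "success_pattern":
            adaptations.append({
                "adaptation_type": "reinforce_pattern",
                "target_pattern": insight["pattern_value"],
                "action": "increase_usage",
                # Max 10% improvement when success_rate == 1.0
                "expected_improvement": (insight["success_rate"] - 0.5) * 0.2,
            })
        elif insight["type"] == "failure_pattern":
            adaptations.append({
                "adaptation_type": "avoid_pattern",
                "target_pattern": insight["pattern_value"],
                "action": "decrease_usage",
                # Up to 15% improvement when failure_rate == 1.0
                "expected_improvement": insight["failure_rate"] * 0.15,
            })
        elif insight["type"] == "calibration_issue":
            adaptations.append({
                "adaptation_type": "calibrate_confidence",
                "target_level": insight["confidence_level"],
                "action": insight["recommendation"],
                "expected_improvement": insight["calibration_error"] * 0.5,
            })
    return adaptations
```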
8. RETURN PATTERNS AND ADAPTATIONS
============================================================================
RETURN {
status: "success",
patterns: {
successful: high_performance_patterns,
failures: failure_patterns,
calibration_issues: calibration_issues
},
insights: insights,
adaptations: adaptations,
summary: {
total_experiences: len(experiences),
success_rate: success_rate,
high_confidence_patterns: len(high_performance_patterns),
critical_failures: len([p FOR p IN failure_patterns IF p.severity == 'high'])
}
}
MAIN RETURN learning_patterns
3. Adaptation Application
ALGORITHM: Apply Adaptation
INPUT: tenant_id, agent_id, adaptation_id, human_approver
OUTPUT: application_result
1. VALIDATE ADAPTATION
============================================================================
Ensure adaptation is safe and appropriate to apply.
adaptation = query(
SELECT * FROM learning_adaptations
WHERE id = adaptation_id
AND tenant_id = tenant_id
AND agent_id = agent_id
AND status = 'pending'
)
IF NOT adaptation:
RETURN {
success: false,
error: "Adaptation not found or already processed"
}
# Check governance compliance
governance = AgentGovernanceService()
decision = await governance.can_perform_action(
tenant_id,
agent_id,
adaptation.adaptation_type
)
IF NOT decision.allowed:
RETURN {
success: false,
error: f"Adaptation not allowed by governance: {decision.reason}"
}
2. ASSESS ADAPTATION IMPACT
============================================================================
Evaluate potential impact before applying.
impact_assessment = {
# Behavioral impact
behavior_change_magnitude: calculate_behavior_change(adaptation),
# Performance impact
expected_improvement: adaptation.expected_improvement,
confidence_level: adaptation.confidence,
# Risk assessment
risk_level: assess_risk(adaptation),
reversibility: is_reversible(adaptation),
# Dependencies
affected_capabilities: identify_affected_capabilities(adaptation)
}
# High-risk adaptations require additional approval
IF impact_assessment.risk_level == 'high' AND NOT human_approver.is_admin:
RETURN {
success: false,
error: "High-risk adaptations require admin approval",
impact_assessment: impact_assessment
}
3. APPLY BEHAVIOR MODIFICATION
============================================================================
Implement the adaptation based on type.
SWITCH adaptation.adaptation_type:
CASE 'reinforce_pattern':
# Increase preference for successful pattern
old_weight = agent_preferences.pattern_weights[adaptation.target_pattern]
UPDATE agent_preferences
SET pattern_weights[adaptation.target_pattern] = old_weight * 1.2
WHERE agent_id = agent_id
applied_change = {
type: 'weight_increase',
target: adaptation.target_pattern,
previous_value: old_weight,
new_value: old_weight * 1.2
}
CASE 'avoid_pattern':
# Decrease preference for failed pattern
old_weight = agent_preferences.pattern_weights[adaptation.target_pattern]
UPDATE agent_preferences
SET pattern_weights[adaptation.target_pattern] = old_weight * 0.8
WHERE agent_id = agent_id
applied_change = {
type: 'weight_decrease',
target: adaptation.target_pattern,
previous_value: old_weight,
new_value: old_weight * 0.8
}
CASE 'calibrate_confidence':
# Adjust confidence calculation
UPDATE agent_cognitive_profile
SET confidence_multiplier = calculate_multiplier(adaptation.action)
WHERE agent_id = agent_id
applied_change = {
type: 'calibration_adjustment',
target: adaptation.target_level,
adjustment: adaptation.action
}
DEFAULT:
RETURN {
success: false,
error: f"Unknown adaptation type: {adaptation.adaptation_type}"
}
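The weight updates in the SWITCH above are simple multiplicative adjustments. The 1.2 / 0.8 factors come from the pseudocode; the clamp bounds below are an assumption added to keep repeated adaptations from driving a weight to zero or to an extreme.

```python
# Sketch of the reinforce/avoid weight updates. Factors follow the
# pseudocode; the [0.1, 5.0] clamp is an illustrative assumption.
REINFORCE_FACTOR = 1.2
AVOID_FACTOR = 0.8

def apply_weight_change(weights: dict, pattern: str, adaptation_type: str,
                        lo: float = 0.1, hi: float = 5.0) -> dict:
    old = weights.get(pattern, 1.0)  # unseen patterns start at neutral weight
    factor = REINFORCE_FACTOR if adaptation_type == "reinforce_pattern" else AVOID_FACTOR
    new = min(max(old * factor, lo), hi)
    weights[pattern] = new
    return {
        "type": "weight_increase" if factor > 1 else "weight_decrease",
        "target": pattern,
        "previous_value": old,
        "new_value": new,
    }
```

The returned dict mirrors the `applied_change` record logged in step 4, which is what makes rollback possible.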
4. RECORD ADAPTATION APPLICATION
============================================================================
Log the adaptation for tracking and rollback.
application_record = {
id: generate_uuid(),
tenant_id: tenant_id,
agent_id: agent_id,
adaptation_id: adaptation_id,
# Application details
applied_at: now(),
applied_by: human_approver.id,
applied_change: applied_change,
impact_assessment: impact_assessment,
# Baseline for comparison
baseline_performance: get_current_performance(agent_id),
# Status
status: 'applied'
}
INSERT INTO adaptation_applications VALUES (application_record)
5. UPDATE ADAPTATION STATUS
============================================================================
Mark adaptation as applied.
UPDATE learning_adaptations
SET status = 'applied',
applied_at = now(),
applied_by = human_approver.id
WHERE id = adaptation_id
6. SCHEDULE PERFORMANCE MONITORING
============================================================================
Monitor adaptation effectiveness over time.
# Schedule background job to check performance in 7 days
SCHEDULE background_job:
monitor_adaptation_effectiveness(
application_id: application_record.id,
check_after: 7 days
)
7. RETURN SUCCESS
============================================================================
RETURN {
success: true,
application_id: application_record.id,
adaptation: adaptation,
applied_change: applied_change,
expected_improvement: adaptation.expected_improvement,
monitoring_scheduled: true
}
MAIN RETURN application_result
4. Performance Monitoring
ALGORITHM: Monitor Adaptation Effectiveness
INPUT: application_id, evaluation_period_days=7
OUTPUT: effectiveness_report
1. RETRIEVE APPLICATION RECORD
============================================================================
application = query(
SELECT * FROM adaptation_applications
WHERE id = application_id
)
IF NOT application:
RETURN { error: "Application not found" }
# Ensure enough time has passed (timestamps are epoch seconds)
days_since_application = (now() - application.applied_at) / 86400
IF days_since_application < evaluation_period_days:
RETURN {
status: "waiting",
message: f"Only {days_since_application} days passed, need {evaluation_period_days}"
}
2. COMPARE PERFORMANCE
============================================================================
Compare performance before and after adaptation.
# Get baseline performance (before adaptation)
baseline_experiences = query(
SELECT * FROM episodes
WHERE agent_id = application.agent_id
AND timestamp < application.applied_at
ORDER BY timestamp DESC
LIMIT 30
)
baseline_metrics = {
success_rate: COUNT(baseline_experiences WHERE success) / len(baseline_experiences),
avg_confidence: AVG(e.confidence FOR e IN baseline_experiences),
avg_execution_time: AVG(e.execution_time FOR e IN baseline_experiences)
}
# Get current performance (after adaptation)
current_experiences = query(
SELECT * FROM episodes
WHERE agent_id = application.agent_id
AND timestamp >= application.applied_at
ORDER BY timestamp DESC
LIMIT 30
)
current_metrics = {
success_rate: COUNT(current_experiences WHERE success) / len(current_experiences),
avg_confidence: AVG(e.confidence FOR e IN current_experiences),
avg_execution_time: AVG(e.execution_time FOR e IN current_experiences)
}
3. CALCULATE IMPROVEMENT
============================================================================
Determine if adaptation had positive effect.
improvement = {
success_rate_delta: current_metrics.success_rate - baseline_metrics.success_rate,
confidence_delta: current_metrics.avg_confidence - baseline_metrics.avg_confidence,
execution_time_delta: current_metrics.avg_execution_time - baseline_metrics.avg_execution_time
}
# Determine overall effectiveness
overall_improvement = (
improvement.success_rate_delta * 0.6 +
(improvement.confidence_delta IF abs(improvement.confidence_delta) < 0.1 ELSE -0.1) * 0.3 +
(-improvement.execution_time_delta / baseline_metrics.avg_execution_time) * 0.1
)
effectiveness = "effective" IF overall_improvement > 0.05 ELSE (
"ineffective" IF overall_improvement < -0.05 ELSE "neutral"
)
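The weighted scoring above can be sketched as a small function. The weights (0.6 / 0.3 / 0.1), the confidence-swing penalty, and the ±0.05 classification thresholds all follow the pseudocode.

```python
# Sketch of step 3: weighted effectiveness score and classification.
# Weights and thresholds follow the pseudocode above.
def score_adaptation(baseline: dict, current: dict):
    success_delta = current["success_rate"] - baseline["success_rate"]
    confidence_delta = current["avg_confidence"] - baseline["avg_confidence"]
    time_delta = current["avg_execution_time"] - baseline["avg_execution_time"]

    # Large confidence swings are treated as a penalty, not a gain
    confidence_term = confidence_delta if abs(confidence_delta) < 0.1 else -0.1
    overall = (success_delta * 0.6
               + confidence_term * 0.3
               # Faster execution (negative time_delta) counts as improvement
               + (-time_delta / baseline["avg_execution_time"]) * 0.1)

    if overall > 0.05:
        return overall, "effective"
    if overall < -0.05:
        return overall, "ineffective"
    return overall, "neutral"
```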
4. GENERATE RECOMMENDATIONS
============================================================================
Provide guidance based on effectiveness.
recommendations = []
IF effectiveness == "effective":
recommendations.append({
type: "maintain",
message: "Adaptation is working well, continue using it",
action: "keep_adaptation_active"
})
ELSE IF effectiveness == "ineffective":
recommendations.append({
type: "rollback",
message: "Adaptation is not improving performance, consider rollback",
action: "rollback_adaptation",
reason: f"Overall improvement: {overall_improvement * 100}%"
})
ELSE:
recommendations.append({
type: "monitor",
message: "Adaptation impact is neutral, continue monitoring",
action: "extend_monitoring_period",
additional_days: 7
})
5. UPDATE APPLICATION RECORD
============================================================================
Store effectiveness results.
UPDATE adaptation_applications
SET
effectiveness = effectiveness,
overall_improvement = overall_improvement,
improvement_metrics = improvement,
recommendations = recommendations,
evaluation_completed_at = now()
WHERE id = application_id
6. TRIGGER ACTIONS IF NEEDED
============================================================================
Execute recommended actions automatically if safe.
IF effectiveness == "ineffective" AND application.impact_assessment.reversibility:
# Auto-rollback reversible adaptations
execute_rollback(application_id)
7. RETURN REPORT
============================================================================
RETURN {
application_id: application_id,
effectiveness: effectiveness,
overall_improvement: overall_improvement,
metrics: {
baseline: baseline_metrics,
current: current_metrics,
delta: improvement
},
recommendations: recommendations,
evaluation_period_days: days_since_application
}
MAIN RETURN effectiveness_report
Data Structures
ExperienceData
interface ExperienceData {
  // Task information
  task_type: string;
  task_description: string;
  input: any;
  input_summary?: string;

  // Execution details
  reasoning_chain: ReasoningChain;
  approach_taken: string;
  actions_taken: string[];
  skills_involved?: string[];

  // Outcome
  outcome: 'success' | 'failure';
  success: boolean;
  confidence: number;
  duration_seconds?: number;
  resource_consumption?: ResourceUsage;

  // Learning
  learnings: string[];
  metacognition?: MetacognitiveInsights;

  // Context
  agent_role: string;
  maturity_level: MaturityLevel;
  timestamp?: Date;
}
LearningPattern
interface LearningPattern {
  pattern_type: 'task_type' | 'approach' | 'reasoning_pattern';
  pattern_value: string;
  success_rate: number;
  sample_size: number;
  confidence: number;
}

interface FailurePattern extends LearningPattern {
  failure_rate: number;
  severity: 'high' | 'medium' | 'low';
}
Adaptation
interface Adaptation {
  id: string;
  tenant_id: string;
  agent_id: string;

  // Adaptation details
  adaptation_type: 'reinforce_pattern' | 'avoid_pattern' | 'calibrate_confidence';
  target_pattern?: string;
  action: string;
  expected_improvement: number;
  confidence: number;

  // Metadata
  rationale: string;
  created_at: Date;
  status: 'pending' | 'approved' | 'applied' | 'rejected';
  applied_at?: Date;
  applied_by?: string;
}
Example Usage
Record Experience
import { LearningAdaptationEngine } from '@/lib/ai/learning-adaptation-engine';

const learning = new LearningAdaptationEngine(db, llmRouter);

// Record experience after agent execution
const experience = await learning.recordExperience(tenantId, {
  agent_id: agentId,
  agent_role: 'Finance',
  maturity_level: 'supervised',
  task_type: 'reconciliation',
  task_description: 'Reconcile SKU-123 inventory discrepancy',
  input: { sku: 'SKU-123', expected: 100, actual: 95 },
  reasoning_chain: reasoning,
  approach_taken: 'Weighted average costing method',
  actions_taken: ['Query ERP', 'Compare counts', 'Calculate variance'],
  outcome: 'success',
  success: true,
  confidence: 0.92,
  duration_seconds: 45,
  learnings: [
    'Weighted average minimizes variance',
    'Physical count accuracy is critical'
  ],
  metacognition: metacognitiveInsights
});

console.log('Experience recorded:', experience.episode_id);
Detect Patterns
// Detect learning patterns from recent experiences
const patterns = await learning.detectLearningPatterns(
  tenantId,
  agentId,
  30  // Analyze last 30 episodes
);

console.log('High-performing patterns:', patterns.patterns.successful);
console.log('Failure patterns:', patterns.patterns.failures);
console.log('Adaptations suggested:', patterns.adaptations);

// Example output:
// {
//   patterns: {
//     successful: [
//       {
//         pattern_type: 'approach',
//         pattern_value: 'Weighted average costing',
//         success_rate: 0.92,
//         sample_size: 12,
//         confidence: 0.85
//       }
//     ],
//     failures: [...]
//   },
//   adaptations: [
//     {
//       adaptation_type: 'reinforce_pattern',
//       target_pattern: 'Weighted average costing',
//       action: 'increase_usage',
//       expected_improvement: 0.084,
//       confidence: 0.85
//     }
//   ]
// }
Apply Adaptation
// Apply suggested adaptation
const adaptation = patterns.adaptations[0];

const result = await learning.applyAdaptation(
  tenantId,
  agentId,
  adaptation.id,
  humanApprover  // User who approved
);

console.log('Adaptation applied:', result.applied_change);
// Pattern weight increased from 1.0 to 1.2
Performance Characteristics
Storage
- PostgreSQL: Immediate write for episode records
- LanceDB: Background indexing for vector search
- Latency: < 100ms per experience
Pattern Recognition
- Time Complexity: O(n × m) where n = experiences, m = pattern types
- Space Complexity: O(n) for storing experiences in memory
- Latency: < 2 seconds for 100 experiences
Adaptation Application
- Validation: < 100ms
- Application: < 50ms (in-memory update)
- Total Latency: < 200ms
Configuration
interface LearningConfig {
  // Pattern detection
  min_experiences_for_patterns: number;  // Default: 10
  pattern_detection_threshold: number;   // Default: 0.8 (80% success)
  pattern_sample_size_min: number;       // Default: 5

  // Adaptation
  auto_apply_safe_adaptations: boolean;  // Default: false
  require_human_approval: boolean;       // Default: true
  max_concurrent_adaptations: number;    // Default: 3

  // Monitoring
  evaluation_period_days: number;        // Default: 7
  effectiveness_threshold: number;       // Default: 0.05 (5% improvement)
  auto_rollback_reversible: boolean;     // Default: true

  // Confidence calibration
  confidence_calibration_bins: number;   // Default: 3 (low, medium, high)
  calibration_tolerance: number;         // Default: 0.2
}
References
- Implementation: src/lib/ai/learning-adaptation-engine.ts, backend-saas/core/learning_engine.py
- Tests: src/lib/ai/__tests__/learning-engine.test.ts
- Related: Cognitive Architecture, World Model
Last Updated: 2025-02-06
Version: 8.0
Status: Production Ready