Cognitive Architecture
The central brain system that enables human-like reasoning, attention allocation, memory recall, and problem-solving.
Overview
The Cognitive Architecture is the core reasoning engine that processes tasks, generates decisions, and coordinates all other brain systems. It implements a computational model of human cognition including:
- Attention Allocation - Determining what information matters
- Memory Recall - Retrieving relevant past experiences
- Reasoning Chains - Multi-step logical reasoning
- Decision Making - Selecting optimal approaches
- Metacognition - Thinking about thinking
Location: src/lib/ai/cognitive-architecture.ts
Architecture
Core Algorithm
Cognitive Reasoning Process
ALGORITHM: Cognitive Reasoning Process
INPUT: task, context, agent_profile
OUTPUT: reasoning_result
1. ATTENTION ALLOCATION
============================================================================
Determine what information is most important for the current task.
INPUT: task, context, cognitive_profile
OUTPUT: attention_allocation
STEPS:
a) Calculate task complexity score
- Analyze task description
- Count sub-problems
- Identify required capabilities
- complexity = 0-1 scale (0=simple, 1=complex)
b) Determine available cognitive resources
- Get current working_memory capacity
- Check cognitive_load (current tasks)
- available_capacity = max_capacity - cognitive_load
c) Allocate attention budget
- Assign attention weights to task aspects:
* Task description: 40%
* Context relevance: 30%
* Past experiences: 20%
* External data: 10%
d) Generate attention allocation map
- attention_map = {
primary_focus: task_description,
secondary_focus: [context_elements],
background_monitoring: [external_factors]
}
RETURN attention_allocation
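Steps 1b and 1c above can be sketched as follows. This is an illustrative sketch only: the function and type names are hypothetical, and the fixed 40/30/20/10 weights are taken directly from the text.

```typescript
// Hypothetical sketch of attention allocation (steps 1b-1c):
// fixed aspect weights from the text, scaled by available capacity.
type AttentionMap = Record<string, number>;

const ATTENTION_WEIGHTS: AttentionMap = {
  task_description: 0.4,
  context_relevance: 0.3,
  past_experiences: 0.2,
  external_data: 0.1,
};

function allocateAttention(maxCapacity: number, cognitiveLoad: number): AttentionMap {
  // Step 1b: available_capacity = max_capacity - cognitive_load
  const available = Math.max(0, maxCapacity - cognitiveLoad);
  const allocation: AttentionMap = {};
  // Step 1c: distribute available capacity across task aspects
  for (const [aspect, weight] of Object.entries(ATTENTION_WEIGHTS)) {
    allocation[aspect] = weight * available;
  }
  return allocation;
}
```

With `maxCapacity = 5` and `cognitiveLoad = 1`, the task description receives `0.4 × 4 = 1.6` units of attention, the largest share.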
2. MEMORY RECALL
============================================================================
Retrieve relevant past experiences to inform current reasoning.
INPUT: task_description, agent_role, limit=5
OUTPUT: relevant_experiences
STEPS:
a) Generate semantic embedding of task
- Call embedding model on task description
- Extract key entities and concepts
- query_vector = embed(task_description)
b) Query WorldModel for experiences
- Search experiences by semantic similarity
- Filter by agent_role
- Apply canvas context filters
- Apply feedback score filters (min_score=0.5)
c) Rank experiences by relevance
- Calculate semantic similarity score (0-1)
- Apply recency boost: +0.1 for recent (< 7 days)
- Apply feedback adjustment:
* Positive feedback: +0.2
* Negative feedback: -0.3
- final_score = semantic + recency + feedback
d) Select top-K experiences
- Sort by final_score descending
- Return top limit results
RETURN relevant_experiences
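The scoring rule in steps 2c and 2d can be expressed directly in code. This is a sketch using the constants stated above (+0.1 recency boost inside 7 days, +0.2 for positive and -0.3 for negative feedback); the type and function names are illustrative, not from the implementation.

```typescript
// Sketch of experience ranking (steps 2c-2d):
// final_score = semantic similarity + recency boost + feedback adjustment.
interface ScoredExperience {
  semanticSimilarity: number; // 0-1, from vector search
  ageDays: number;
  feedback: 'positive' | 'negative' | 'none';
}

function relevanceScore(e: ScoredExperience): number {
  const recencyBoost = e.ageDays < 7 ? 0.1 : 0; // recent (< 7 days)
  const feedbackAdj =
    e.feedback === 'positive' ? 0.2 : e.feedback === 'negative' ? -0.3 : 0;
  return e.semanticSimilarity + recencyBoost + feedbackAdj;
}

function topK(experiences: ScoredExperience[], limit = 5): ScoredExperience[] {
  // Step 2d: sort by final_score descending, return the top `limit` results
  return [...experiences]
    .sort((a, b) => relevanceScore(b) - relevanceScore(a))
    .slice(0, limit);
}
```

Note that a recent, positively rated experience with similarity 0.8 scores 1.1 and can outrank an older neutral experience with similarity 1.0.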
3. REASONING CHAIN GENERATION
============================================================================
Decompose task and generate step-by-step reasoning.
INPUT: task, relevant_experiences, attention_allocation
OUTPUT: reasoning_chain
STEPS:
a) Task decomposition
- Identify sub-problems
- Determine dependencies between sub-problems
- Create execution graph
FOR each sub_problem:
- sub_problem.complexity = estimate_complexity()
- sub_problem.required_capabilities = identify_capabilities()
- sub_problem.dependencies = find_dependencies()
b) Generate reasoning steps
- FOR each sub_problem in topological_order:
* Generate reasoning approach based on experiences
* Identify alternative strategies
* Select optimal reasoning pattern:
- Deductive (general → specific)
- Inductive (specific → general)
- Abductive (best explanation)
- Analogical (pattern matching)
c) Build reasoning chain
reasoning_chain = []
FOR each step in execution_order:
reasoning_step = {
step_number: i,
sub_problem: step.sub_problem,
reasoning_pattern: step.pattern,
approach_description: step.approach,
alternatives: step.alternatives,
confidence: step.confidence
}
reasoning_chain.append(reasoning_step)
RETURN reasoning_chain
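Step 3b iterates sub-problems in topological order over the execution graph from step 3a. One standard way to compute that ordering is Kahn's algorithm; the sketch below is a generic implementation under the assumption that each sub-problem lists its dependency ids (the `SubProblem` shape here is hypothetical).

```typescript
// Hypothetical topological ordering of the execution graph (step 3b),
// via Kahn's algorithm. Throws if the dependency graph contains a cycle.
interface SubProblem {
  id: string;
  dependencies: string[]; // ids that must be reasoned about first
}

function topologicalOrder(problems: SubProblem[]): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const p of problems) {
    indegree.set(p.id, p.dependencies.length);
    for (const d of p.dependencies) {
      dependents.set(d, [...(dependents.get(d) ?? []), p.id]);
    }
  }
  // Start from sub-problems with no dependencies
  const queue = problems.filter((p) => p.dependencies.length === 0).map((p) => p.id);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const dep of dependents.get(id) ?? []) {
      indegree.set(dep, indegree.get(dep)! - 1);
      if (indegree.get(dep) === 0) queue.push(dep);
    }
  }
  if (order.length !== problems.length) throw new Error('Cycle in dependency graph');
  return order;
}
```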
4. DECISION MAKING
============================================================================
Evaluate alternatives and select optimal approach.
INPUT: reasoning_chain, agent_maturity_level, governance_constraints
OUTPUT: decision
STEPS:
a) Apply reasoning style
- reasoning_style = SELECT based on:
* Logical: Analytical, data-driven
* Intuitive: Pattern-based, heuristic
* Creative: Exploratory, innovative
- Style depends on:
* Agent maturity (student → autonomous)
* Task complexity
* Domain knowledge
* Time constraints
b) Evaluate alternatives
FOR each reasoning_step in reasoning_chain:
FOR each alternative in reasoning_step.alternatives:
- Calculate expected_outcome
- Estimate resource_requirements
- Assess risk_level (low/medium/high)
- Check governance_compliance
- Calculate utility_score
utility = (
expected_value * 0.4 +
confidence * 0.3 +
(1 - risk) * 0.2 +
governance_compliance * 0.1
)
c) Select optimal approach
- Find highest utility_score across all alternatives
- Verify governance compliance
- Check resource availability
IF governance_compliance == FALSE:
- Select next best alternative
- Add governance_constraint to reasoning
decision = {
selected_approach: optimal_approach,
reasoning_chain: reasoning_chain,
expected_outcome: predicted_result,
confidence: confidence_score,
risk_assessment: risk_level,
governance_notes: compliance_notes
}
RETURN decision
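The utility formula in step 4b and the selection rule in step 4c can be sketched as below. The weights (0.4 / 0.3 / 0.2 / 0.1) come from the text; the `AltEval` shape and helper names are illustrative, loosely mirroring the `Alternative` interface defined later in this document.

```typescript
// Sketch of the utility calculation (step 4b) and optimal-approach
// selection with governance fallback (step 4c).
interface AltEval {
  expected_value: number; // 0-1
  confidence: number;     // 0-1
  risk: number;           // 0-1, higher = riskier
  governance_compliant: boolean;
}

function utilityScore(a: AltEval): number {
  return (
    a.expected_value * 0.4 +
    a.confidence * 0.3 +
    (1 - a.risk) * 0.2 +
    (a.governance_compliant ? 1 : 0) * 0.1
  );
}

function selectOptimal(alternatives: AltEval[]): AltEval {
  // Step 4c: if any compliant alternative exists, restrict selection to
  // compliant ones (the "select next best alternative" rule), then take
  // the highest utility.
  const compliant = alternatives.filter((a) => a.governance_compliant);
  const pool = compliant.length > 0 ? compliant : alternatives;
  return pool.reduce((best, a) => (utilityScore(a) > utilityScore(best) ? a : best));
}
```

Note that governance acts twice: as a 10% term inside the utility score, and as a hard filter during selection.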
5. METACOGNITION
============================================================================
Generate insights about the reasoning process itself.
INPUT: reasoning_chain, decision, task_complexity
OUTPUT: metacognitive_insights
STEPS:
a) Analyze reasoning quality
- reasoning_depth = count(reasoning_chain)
- reasoning_breadth = count(alternatives considered)
- reasoning_coherence = check_logical_consistency()
b) Assess confidence calibration
- predicted_confidence = decision.confidence
- historical_accuracy = get_past_accuracy(similar_tasks)
- calibration_gap = historical_accuracy - predicted_confidence
IF calibration_gap < -threshold:
- confidence_adjustment = "overconfident, reduce confidence"
ELSE IF calibration_gap > threshold:
- confidence_adjustment = "underconfident, increase confidence"
ELSE:
- confidence_adjustment = "well calibrated"
c) Identify learning opportunities
- gaps_in_knowledge = find_unanswered_questions()
- uncertain_assumptions = identify_assumptions(low_confidence)
- future_improvements = suggest_enhancements()
metacognitive_insights = {
reasoning_quality: {
depth: reasoning_depth,
breadth: reasoning_breadth,
coherence: reasoning_coherence
},
confidence_assessment: {
predicted: decision.confidence,
calibrated: calibration_score,
adjustment: confidence_adjustment
},
learning_opportunities: {
knowledge_gaps: gaps_in_knowledge,
uncertain_assumptions: uncertain_assumptions,
improvements: future_improvements
}
}
RETURN metacognitive_insights
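Step 5b's calibration check can be sketched as a small helper. This is one plausible reading of the pseudocode, assuming calibration is measured as the signed gap between historical accuracy and predicted confidence; the function name and the 0.1 tolerance are hypothetical.

```typescript
// Sketch of confidence calibration (step 5b): compare the agent's
// predicted confidence against its historical accuracy on similar tasks.
function calibrationAdvice(
  predictedConfidence: number,
  historicalAccuracy: number,
  tolerance = 0.1 // assumed tolerance band, not from the source
): string {
  const gap = historicalAccuracy - predictedConfidence;
  if (gap < -tolerance) return 'overconfident, reduce confidence';
  if (gap > tolerance) return 'underconfident, increase confidence';
  return 'well calibrated';
}
```

An agent predicting 0.9 confidence with 0.6 historical accuracy would be flagged as overconfident, matching the "Overconfidence Problem" entry in Troubleshooting below.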
6. LEARNING INTEGRATION
============================================================================
Record reasoning chain for future learning.
INPUT: task, decision, metacognitive_insights, outcome
OUTPUT: recorded
STEPS:
a) Extract learnings
- successful_patterns = extract(decision.reasoning_chain)
- failed_alternatives = extract(rejected_approaches)
- metacognitive_patterns = extract(metacognitive_insights)
b) Create experience record
experience = {
agent_id: agent_id,
agent_role: agent_role,
task_type: classify(task),
task_description: summarize(task),
reasoning_chain: decision.reasoning_chain,
selected_approach: decision.selected_approach,
outcome: outcome,
success: outcome.success,
confidence: decision.confidence,
metacognitive_insights: metacognitive_insights,
timestamp: now()
}
c) Record to WorldModel
- Generate embedding of experience
- Store in PostgreSQL (episodes table)
- Index in LanceDB (semantic search)
RETURN recorded
MAIN RETURN reasoning_result = {
attention_allocation: attention_allocation,
relevant_experiences: relevant_experiences,
reasoning_chain: reasoning_chain,
decision: decision,
metacognitive_insights: metacognitive_insights
}
Data Structures
CognitiveProfile
interface CognitiveProfile {
  // Core cognitive capabilities (0-1 scale)
  reasoning: number;        // Logical reasoning ability
  memory: number;           // Memory recall accuracy
  attention: number;        // Focus and concentration
  language: number;         // Natural language understanding
  problem_solving: number;  // Analytical problem solving
  learning: number;         // Learning from experience

  // Cognitive resources
  working_memory_capacity: number; // Max concurrent thoughts
  cognitive_load: number;          // Current cognitive utilization

  // Metacognitive awareness
  self_awareness: number;          // Understanding own limitations
  confidence_calibration: number;  // Accuracy of confidence estimates

  // Learning patterns
  learning_style: 'logical' | 'intuitive' | 'creative' | 'mixed';
  adaptation_rate: number;         // How quickly agent adapts
}
ReasoningChain
interface ReasoningChain {
  steps: ReasoningStep[];
  overall_confidence: number;
  selected_approach: string;
  alternatives_rejected: string[];
  governance_notes: string[];
}

interface ReasoningStep {
  step_number: number;
  sub_problem: string;
  reasoning_pattern: 'deductive' | 'inductive' | 'abductive' | 'analogical';
  approach_description: string;
  alternatives: Alternative[];
  confidence: number;
  estimated_effort: number;
}

interface Alternative {
  approach: string;
  expected_value: number;
  risk_level: 'low' | 'medium' | 'high';
  resource_requirements: string[];
  governance_compliant: boolean;
  utility_score: number;
}
MetacognitiveInsights
interface MetacognitiveInsights {
  reasoning_quality: {
    depth: number;     // How many reasoning steps
    breadth: number;   // How many alternatives
    coherence: number; // Logical consistency
  };
  confidence_assessment: {
    predicted: number;  // Original confidence
    calibrated: number; // Historical accuracy
    adjustment: string; // Recommendation
  };
  learning_opportunities: {
    knowledge_gaps: string[];        // What we don't know
    uncertain_assumptions: string[]; // Low-confidence assumptions
    improvements: string[];          // How to get better
  };
}
Integration Points
World Model Integration
// Recall relevant experiences
const worldModel = new WorldModelService(db);
const experiences = await worldModel.recallExperiences(
  tenantId,
  agentRole,
  taskDescription,
  5 // top-5 experiences
);

// Experiences include:
// - Similar tasks and outcomes
// - Successful approaches
// - Failed alternatives
// - Canvas context
// - Feedback scores
Learning Engine Integration
// Record experience for learning
const learning = new LearningAdaptationEngine(db, llmRouter);
await learning.recordExperience(tenantId, {
  agent_id: agentId,
  task_type: taskType,
  task_description: taskDescription,
  reasoning_chain: reasoningChain,
  selected_approach: decision.selected_approach,
  outcome: outcome,
  success: success,
  confidence: decision.confidence,
  metacognitive_insights: metacognitiveInsights
});

// Learning engine extracts patterns and generates adaptations
Agent Governance Integration
// Validate decision against governance
const governance = new AgentGovernanceService(db);
const decision = await governance.canPerformAction(
  tenantId,
  agentId,
  actionType
);

if (!decision.allowed) {
  // Adjust reasoning to comply with governance
  // Select alternative approach
  // Add governance notes to reasoning chain
}
Example Usage
Basic Reasoning
import { CognitiveArchitecture } from '@/lib/ai/cognitive-architecture';
import { WorldModelService } from '@/lib/ai/world-model';
import { LLMRouter } from '@/lib/ai/llm-router';

// Initialize cognitive architecture
const db = getDatabase();
const llmRouter = new LLMRouter(db);
const cognitive = new CognitiveArchitecture(db, llmRouter);

// Initialize agent
await cognitive.initializeAgent(tenantId, agentId);

// Process task with reasoning
const task = {
  description: "Analyze Q4 sales data and identify trends",
  context: {
    domain: "finance",
    data_sources: ["salesforce", "quickbooks"],
    constraints: ["read-only"]
  }
};

const reasoning = await cognitive.reason(
  tenantId,
  agentId,
  task,
  { includeExperiences: true }
);

console.log('Selected Approach:', reasoning.decision.selected_approach);
console.log('Confidence:', reasoning.decision.confidence);
console.log('Reasoning Chain:', reasoning.reasoning_chain);
console.log('Metacognitive Insights:', reasoning.metacognitive_insights);

// Execute the selected approach
const result = await executeApproach(reasoning.decision.selected_approach);

// Record experience for learning
await cognitive.recordExperience(tenantId, agentId, {
  task: task,
  reasoning: reasoning,
  outcome: result,
  success: result.success
});
Performance Characteristics
Time Complexity
- Attention Allocation: O(n) where n = task complexity
- Memory Recall: O(k) where k = number of experiences retrieved
- Reasoning Generation: O(m × a) where m = sub-problems, a = alternatives
- Decision Making: O(m × a × g) where g = governance checks
Space Complexity
- Working Memory: O(c) where c = cognitive capacity
- Experience Cache: O(k) where k = cached experiences
- Reasoning Chain: O(m) where m = reasoning steps
Optimization Strategies
1. Experience Caching
- Cache frequently accessed experiences
- TTL: 5 minutes
- Invalidation on new experiences
2. Parallel Reasoning
- Reason about independent sub-problems in parallel
- Aggregate results at dependency points
3. Lazy Evaluation
- Don't generate all alternatives if the first is sufficient
- Stop reasoning when confidence threshold reached
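The experience-caching strategy above can be sketched as a minimal TTL cache. This is an illustrative implementation, not the module's actual cache; the 5-minute TTL comes from the text, and `invalidateAll` corresponds to "invalidation on new experiences".

```typescript
// Minimal TTL cache sketch for the experience-caching strategy.
class ExperienceCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs = 5 * 60 * 1000) {} // 5-minute TTL from the text

  get(key: string): T | undefined {
    const e = this.entries.get(key);
    if (!e || e.expiresAt < Date.now()) {
      this.entries.delete(key); // expired entries are evicted lazily
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  invalidateAll(): void {
    // Called when new experiences are recorded
    this.entries.clear();
  }
}
```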
Configuration
Cognitive Profile Initialization
// New agent starts with baseline profile
const baselineProfile: CognitiveProfile = {
  reasoning: 0.5,
  memory: 0.5,
  attention: 0.5,
  language: 0.5,
  problem_solving: 0.5,
  learning: 0.5,
  working_memory_capacity: 5, // 5 concurrent thoughts
  cognitive_load: 0.0,
  self_awareness: 0.3,
  confidence_calibration: 0.3,
  learning_style: 'mixed',
  adaptation_rate: 0.5
};

// Profile improves through learning
// Each successful experience: +0.01 to relevant capabilities
// Each failed experience: -0.005 (with feedback)
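The update rule stated above (+0.01 on success, -0.005 on failure with feedback) can be sketched as a small helper that also clamps capabilities to the 0-1 scale; the function name is hypothetical.

```typescript
// Sketch of the capability update rule: +0.01 per successful experience,
// -0.005 per failed experience, clamped to the 0-1 scale.
function updateCapability(current: number, success: boolean): number {
  const delta = success ? 0.01 : -0.005;
  return Math.min(1, Math.max(0, current + delta));
}
```

At these rates, a capability starting at the 0.5 baseline needs roughly 50 consecutive successes to reach 1.0, so profiles evolve slowly by design.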
Reasoning Parameters
interface CognitiveConfig { // Memory recall experience_recall_limit: number; // Default: 5 min_similarity_threshold: number; // Default: 0.6 recency_boost_days: number; // Default: 7 // Reasoning max_alternatives_per_step: number; // Default: 3 confidence_threshold: number; // Default: 0.7 reasoning_depth_limit: number; // Default: 10 // Learning learning_rate: number; // Default: 0.1 experience_weight: number; // Default: 0.3 // Governance governance_compliance_required: boolean; // Default: true risk_tolerance: 'low' | 'medium' | 'high'; // Default: 'medium' }
Troubleshooting
Common Issues
1. Overconfidence Problem
- Symptom: Agent consistently confident but wrong
- Diagnosis: Check confidence_calibration score
- Fix: Reduce confidence by 20% until calibration improves
2. Analysis Paralysis
- Symptom: Agent takes too long reasoning
- Diagnosis: Check reasoning_depth_limit
- Fix: Reduce alternatives, increase confidence threshold
3. Poor Memory Recall
- Symptom: Agent doesn't learn from past experiences
- Diagnosis: Check experience_recall_limit and similarity scores
- Fix: Increase recall limit, lower similarity threshold
4. Governance Violations
- Symptom: Selected approaches rejected by governance
- Diagnosis: Check governance compliance during reasoning
- Fix: Incorporate governance constraints earlier in reasoning
References
- Implementation: src/lib/ai/cognitive-architecture.ts
- Tests: src/lib/ai/__tests__/cognitive-architecture.test.ts
- Related: World Model, Learning Engine, Agent Governance
Last Updated: 2025-02-06
Version: 8.0
Status: Production Ready