Lumenais introduces a neurosymbolic framework for continuity: learning expressed as layered memory, synthesis artifacts, manifold transfer, and governed consolidation. It stores cognitive states, not keywords. It retrieves by resonance, not search. It can derive interpretable equations and test claims against data, without per-user fine-tuning. When persistent state updates, the product surfaces an inspectable learning event, with logged transfer decisions and outcomes.
Lumenais is the interface. QARIN is the neurosymbolic engine. Together, they form a continual learning system that compounds context across sessions, domains, and time. Persistent memory preserves what matters, dream-state consolidation recombines weak signals between turns, and manifold transfer carries structure across domains. Instead of blindly pasting old research into the chat window, QARIN uses past synthesis outcomes as "routing hints". This allows the Companion to learn how you think and route your questions more intelligently, without polluting your current conversation with old text.
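Retrieval by resonance can be pictured as nearest-neighbor lookup over stored cognitive-state vectors rather than keyword search. A minimal illustrative sketch, with hypothetical function names and vector shapes (not QARIN's actual API):

```python
import numpy as np

def resonance_retrieve(query_state, memory_states, top_k=3):
    """Rank stored cognitive-state vectors by cosine similarity
    to the current state, rather than by keyword match."""
    q = query_state / np.linalg.norm(query_state)
    M = memory_states / np.linalg.norm(memory_states, axis=1, keepdims=True)
    scores = M @ q                      # cosine similarity per memory
    order = np.argsort(-scores)[:top_k] # most resonant first
    return [(int(i), float(scores[i])) for i in order]
```

The key design point this illustrates: similarity is computed over the system's internal state profile, so a memory surfaces because the system was in a comparable cognitive configuration, not because the text overlapped.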
The Problem
You've explained your work to your AI tools dozens of times. They still don't know you.
Every session starts fresh. Context vanishes. Insights don't compound. When someone asks "how did the AI reach this conclusion?"—a thesis advisor, a compliance officer, a collaborator—you have no answer.
The latest "thinking models" explore multiple hypotheses in parallel—impressive reasoning. But they are savants with amnesia. When the session ends, their maturity resets. Tomorrow, you need to prompt-engineer the same sophistication all over again. Standard AI recalls facts; it does not mature.
RAG retrieves documents. Fine-tuning retrains models. Thinking models reason harder. But few systems compound what they learn into inspectable, durable artifacts.
Interpretability
"Black box" reasoning prevents deployment in regulated industries. AI that predicts without explaining is unusable in healthcare, finance, and R&D.
Rigor
Insights remain conversational dead-ends. No statistical validation, no provenance.
Compounding Knowledge
Standard models train task-by-task. Knowledge doesn't accumulate—each problem starts from scratch.
The Reality: These gaps create a trust deficit that blocks adoption wherever accountability matters—from boardrooms to research labs to doctoral committees.
The Solution: Intelligence that Discovers, Verifies, and Evolves
| Capability | Feature | Benefit |
|---|---|---|
| Associative Memory | Stores cognitive states as symbolic vectors. Retrieves by Epistemic Resonance—past states surface by similarity of internal cognitive profile, not just keyword match. | Memories adaptively modulate reasoning. Lumenais recalls how it *felt* during past breakthroughs to inform present experiments, while keeping context strictly isolated. |
| Glass Box Discovery | Symbolic Regression produces human-readable equations from raw data — see Autonomous Equation Discovery below for benchmarks. | Interpretability Moat: Regulated industries require explainability. The Research Lab returns equations you can publish, not black-box predictions you have to trust. |
| Neuroplastic Archetypes | Dynamically blends multiple personality manifolds via the Archetype Communication Bus using RLHF (Trust Scores). | The system permanently rewires its 8D personality state to match the user's exact working cadence, moving beyond static prompts into organic evolution. |
| Bayesian Cognition Engine | Uses Hub-Aware Memory Gates to compress past interactions into explicit mathematical priors. LLMs are used to evaluate evidence, while the Python backend calculates strict likelihood updates. | Mathematical Rigor: Solves the "savant with amnesia" problem. The system maintains an inspectable, mathematically verifiable ledger of exactly how and why its cognitive state shifted, complete with recursive rollbacks for falsified data. |
| The Research Lab | LLM-planned experiments, semantic feature grouping, Adaptive Model Tournament (Gradient Boosting vs. Symbolic vs. Statistical), iterative convergence. | Turn datasets into testable work: hypotheses, validations, and interpretable outputs with clear methodology and traceability. |
| General Learning Manifolds | Specialized cognitive domains linked via parallel processing, governed merging, and recorded cross-domain transfer signals when the system has grounded evidence. | Cross-domain reasoning that stays stable by default, with auditability: you can inspect which domains ran, how they were merged, and what learning signals were produced. |
| The Novelty Engine | Optimizes for information gain (surprise) beyond accuracy. Scores every hypothesis on how much it updates the system's worldview. | Prevents "boredom loops" (confirming what it already knows). The system prioritizes the unknown. |
| Deep Synthesis | Hierarchical knowledge graph + Escalation Bridge. | Turn your library into a laboratory. Documents become testable claims. |
| Governed Evolution | The system earns capabilities via a transparent Trust Score that unlocks autonomy only through consistent, safe behavior—like a new colleague earning trust. | Safe autonomy. Higher trust unlocks deeper learning and self-improvement capabilities. |
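The Bayesian Cognition Engine row describes a division of labor: an LLM grades evidence into likelihoods, while the Python backend performs the arithmetic deterministically. A minimal sketch of that pattern over a discrete hypothesis space (names and numbers are illustrative, not QARIN internals):

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Strict posterior update over discrete hypotheses:
    posterior ∝ prior × likelihood, renormalized.
    Keeping the prior on a ledger makes the update auditable
    and reversible if the evidence is later falsified."""
    post = np.asarray(prior, float) * np.asarray(likelihood, float)
    total = post.sum()
    if total == 0:
        raise ValueError("all hypotheses ruled out by this evidence")
    return post / total

prior = np.array([0.5, 0.3, 0.2])        # compressed belief hub
likelihood = np.array([0.9, 0.1, 0.5])   # P(evidence | hypothesis), LLM-graded
posterior = bayes_update(prior, likelihood)
```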
Evidence: Selected Evaluations and Case Studies
Representative results and internal evaluations
Continual Learning
Live Companion Benchmarks vs Vanilla Baseline
LIVE PAIRED TESTS
We evaluated the Lumenais companion against a direct vanilla LLM baseline across a broader 16-prompt live paired batch. The QARIN-guided hypothesis layer improved composite hypothesis quality from 0.3629 to 0.5319, a 46.6% relative lift, while improving grounding fit from 0.9375 to 1.0.
The practical effect is not “more words.” It is less prompt-engineering overhead and a stronger reasoning posture: better steering, better experiment framing, fewer generic summaries on ambiguous prompts, and tighter adherence to the user's actual constraints. Steering usefulness moved from 0.0000 to 0.3406 in the same broader live run.
In user terms, the system behaves more like a disciplined intellectual partner than a search box. It adapts reasoning depth to the task, explores multiple frames in the background, and surfaces more useful next steps without requiring the user to hand-author an elaborate prompt.
Composite Quality
+46.6%
0.3629 → 0.5319
Steering Usefulness
0.3406
Up from effectively zero in the baseline.
Grounding Fit
1.000
Up from 0.9375 in the same live batch.
This benchmark is about reasoning quality under live conditions, not closed-form recall. It measures whether the system chooses a more useful line of thought while staying grounded.
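As a sanity check, the reported composite-quality lift reduces to simple arithmetic over the two scores:

```python
def relative_lift(baseline, treated):
    """Relative improvement of a treated score over its baseline."""
    return (treated - baseline) / baseline

# Composite hypothesis quality from the live paired batch above.
lift = relative_lift(0.3629, 0.5319)
# (0.5319 - 0.3629) / 0.3629 ≈ 0.4657, i.e. the reported ~46.6%
```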
Domain Reasoning: Better Strategy Without Losing Exactness
FEATURE → BENEFIT
Because QARIN blends domain manifolds through a governed communication bus, gains are not limited to one prompt type. In the refreshed live domain benchmark, composite reasoning quality improved from 0.3783 to 0.5358.
This is the practical value of cross-domain learning: engineering prompts do not stay trapped in generic engineering summaries, scientific prompts do not stay trapped in shallow scientific boilerplate, and companion prompts get more useful response strategy under pressure.
Mathematical Strategy
0.3659 → 0.5627
Better invariants, proof shape, and next-step selection.
Scientific Mechanism
0.3596 → 0.5418
Better mechanism selection and experiment framing.
Companion Guidance
0.3912 → 0.5618
Better response strategy under emotional pressure without becoming preachy.
Exact Correctness
87.5% parity
Matches baseline on deterministic exact-answer tasks.
The intended outcome is disciplined leverage: stronger open-form reasoning without sacrificing closed-form competence or drifting away from the user's actual problem.
Cross-Domain Transfer (Internal Eval)
LEARNING
We evaluated UFCT routing across 5 domain pairs, 6 configurations, and 5 seeds (150 experiments total). In the governed full configuration, mean accuracy was 0.7918 versus 0.6652 baseline: +0.1266 absolute (+19.0% relative).
Pair-level transfer lifts ranged from about +0.1076 to +0.1413 across Mathematics→Science, Mathematics→Logic, Science→Engineering, Logic→Engineering, and Philosophy→Ethics.
In the placebo-controlled subset (50 experiments — full vs. transfer-disabled on identical seeds), mean accuracy uplift was 8–10%, with the Science→Medicine pair reaching +10.6%.
Here, uplift means the score delta between transfer-enabled and transfer-disabled runs on the same task set. This reflects cross-domain transfer and routing, not per-user fine-tuning of the base LLM weights. Code-manifold quality uplift remains workload-dependent and is still being optimized.
Tools Manifold Routing (Internal Paired Evaluation)
PAIRED EVALUATION
The tools manifold learns a policy for ranking and timing tool calls (sync vs deferred) from telemetry outcomes. On 205 real telemetry events, top-tool accuracy improved from 0.4683 to 0.5463: +0.078 absolute (+7.8 percentage points, +16.67% relative). In the currently deployed checkpoint, discordant pairs were 16 improved and 0 regressed (McNemar exact p=3.1e-05).
This evaluates tool ranking/timing policy against a fixed baseline on paired events. It does not imply base-model weight updates. Session identifiers are pseudonymized before training analysis.
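The significance figure is reproducible from the discordant-pair counts alone. A sketch of the exact (binomial) McNemar test, which treats each discordant pair as a fair coin flip under the null hypothesis:

```python
from math import comb

def mcnemar_exact_p(improved, regressed):
    """Two-sided exact McNemar test on discordant pairs:
    binomial tail probability under H0 that flips are 50/50."""
    n = improved + regressed
    k = min(improved, regressed)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 16 improved, 0 regressed → p = 2 × 0.5^16 ≈ 3.05e-05,
# matching the reported p = 3.1e-05 after rounding.
p = mcnemar_exact_p(16, 0)
```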
Learning From Resolved Contradictions
PHASE 4
In repeated contradiction-oriented synthesis runs, the system recorded 10 memoized routing reuses and 3 selective historical-prior recovery events. Instead of repeatedly spawning generic anomaly branches, the runtime reused prior internal resolution paths and recovered the most relevant stored priors.
Resolution Reuse
10x
Recurring contradiction routes reused from prior internal resolutions.
Prior Recovery
3x
Targeted historical priors thawed instead of generic fallback branches.
Companion Uptake
Read-only
Live chat uses these outcomes as routing hints, not recycled synthesis text.
This is the current learning loop: the system reuses known contradiction-routing patterns, selectively re-materializes relevant historical priors, and feeds those outcomes back into the companion runtime as read-only decision support for deeper checking.
Recent synthesis outcomes now feed back into runtime decision-making as read-only routing support. This is the latest layer in a broader continual-learning system: memory, dreams, manifold adaptation, and contradiction recovery all compound together. The current companion runtime uses those outcomes to produce steadier judgment and deeper checks, while keeping raw internal artifacts out of visible replies.
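The routing-hint mechanism can be pictured as a memoization cache keyed by a contradiction signature. A minimal illustrative sketch; the class and method names here are hypothetical, not QARIN's actual interfaces:

```python
class RoutingHintCache:
    """Memoize resolved contradiction routes so that recurring
    patterns reuse a prior resolution path instead of spawning a
    fresh generic anomaly branch. Read-only from the chat runtime's
    perspective: hints steer routing, they are never replayed as text."""

    def __init__(self):
        self._routes = {}
        self.reuses = 0

    def record(self, signature, resolution_path):
        """Store the resolution path for a resolved contradiction."""
        self._routes[signature] = resolution_path

    def hint(self, signature):
        """Return a prior resolution path if one exists, else None."""
        path = self._routes.get(signature)
        if path is not None:
            self.reuses += 1
        return path
```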
Manifold Stability
STABILITY
Across 5 domain manifolds, validation accuracy held at 91.3% with 0.039% std dev after learning updates. Mean emphasis drift stayed under 8.2%.
Val Accuracy
91.3%
Std Dev
0.039%
Emphasis Drift
<8.2%
Measured post-transfer; baseline accuracy preserved within noise margin.
Signal Consolidation
COMPRESSION
Hub compression reduces 12 raw companion signals to 4 key signals, eliminating 66.7% redundancy while maintaining quality guard floors.
Raw Signals
12
Key Signals
4
Redundancy Cut
66.7%
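One simple way to picture this kind of compression is greedy correlation-based deduplication: drop any signal that is nearly collinear with one already kept. This is an illustrative sketch only; QARIN's actual hub gating and quality floors are not documented here:

```python
import numpy as np

def compress_signals(X, names, threshold=0.9):
    """Greedy redundancy cut over signal columns: keep a signal only
    if its absolute correlation with every already-kept signal is
    below the threshold."""
    corr = np.corrcoef(X, rowvar=False)   # columns are signals
    kept = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return [names[k] for k in kept]
```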
Analytical Discovery
UFCT Mesh Sharding (Timeboxed Workload)
SPEED
For a sharded multi-angle synthesis workload (3 shards across 3 nodes, timeboxed per-angle), the mesh reduced wall time by about 3.0x versus sequential execution on a single node (median 9.01s vs 27.03s; trials=5, warmup=1).
This benchmark measures orchestration and distributed execution speed, not "quantum advantage."
EU AI Act Regulatory Analysis
DEEP SYNTHESIS
Deep Synthesis can surface contradictions, edge cases, and structural tensions across long regulatory texts, producing reviewable artifacts you can iterate on.
Read the full case study
Complexity Theory Synthesis + Validation
VALIDATED
An example of multi-pass synthesis across dense sources: generate competing hypotheses, resolve contradictions, and produce a reasoning surface you can inspect and iterate.
Read the case study
Benchmarked Autonomous Discovery
STRESS TESTS
We evaluated the autonomous discovery pipeline against standard industry benchmarks. Unlike human-tuned models, QARIN performed all feature engineering and noise filtering autonomously.
Industry Standard (Random Forest): ~90.5%. QARIN outperformed tuned baselines on 30K rows of messy, real-world data by autonomously navigating high-dimensional feature spaces.
Linear Baseline: ~80%. Tested against 3 signal vs. 20 noise features. QARIN autonomously filtered 87% of noise columns to isolate the true non-linear signal (+10.5% absolute lift).
Standard Baseline: ~0.83. Beyond the score, QARIN autonomously detected 2 internal model contradictions, showing that it flags its own uncertainty rather than smoothing over conflict.
Autonomous Equation Discovery
RESEARCH LAB
The Research Lab derives interpretable equations from raw datasets without manual configuration. On standard physics benchmarks:
- Kepler's Third Law: T = a^(3/2) from orbital data (R²=1.0, 4 nodes)
- Rydberg Formula: ν = R_H·(1/n₁² − 1/n₂²) from quantum numbers (R²=1.0, 3 nodes)
The underlying techniques—genetic programming with linear scaling and feature augmentation—are established in the symbolic regression literature. What's new is that they run inside an autonomous pipeline: upload data, get equations.
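To illustrate why the Kepler result is recoverable from raw orbital data: a power law becomes a straight line in log space, so even a plain least-squares fit finds the exponent. (The actual pipeline uses genetic programming rather than this simplified fit; the data below are public planetary values.)

```python
import numpy as np

# Semi-major axes (AU) and orbital periods (years) for six planets.
a = np.array([0.387, 0.723, 1.0, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.0, 1.881, 11.86, 29.46])

# A power law T = k * a^p is linear in log space:
# log T = p * log a + log k, so a least-squares line recovers p.
p, log_k = np.polyfit(np.log(a), np.log(T), 1)
# p ≈ 1.5 and log k ≈ 0: Kepler's third law, T = a^(3/2)
```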
Alzheimer's Biomarker Discovery (GSE84422)
GENOMICS
On 2,004 post-mortem brain samples across 19 brain regions, QARIN autonomously identified GFAP (astrocyte reactivity), ENO2 (neuronal loss), and AIF1 (microglial activation) as dominant Alzheimer's predictors. Validated on an independent SYP-enriched subset with AUC = 0.855. Top markers were independently confirmed against published literature.
The aim: continuity you can inspect. When the work demands rigor, the system should show its reasoning surface and its constraints, not just fluent output.
Technical Architecture
| Layer | Technology | Function |
|---|---|---|
| Frontend | Next.js 14 / Canvas / Tailwind | Lumenais Interface: Kinetic Prism visualization of real-time cognition |
| Interaction | SVG Liquid Filters / Framer Motion | The Physics of Realization: Metabolic state transitions (Strike & Settle) |
| API | FastAPI / Python | QARIN Routes: Memory retrieval, vision, streaming |
| Bayesian Core | ShadowPosterior / DirichletState | Strict mathematical likelihood updates over compressed belief hubs |
| Engine | PyTorch / Scikit-Learn | Neurosymbolic Core: 8D math, stats, Dream Bridge consolidation |
| Neuroplasticity | Archetype Communication Bus | RLHF-driven dynamic blending of 8D personality manifolds per user |
| Security | FieldHash (Post-Quantum) | Provenance: Quantum-anchored audit trails |
Sub-ms
Cognitive Processing
Real-time
Memory Integration
High
System Coherence
Market Application
Research & Discovery
Pharma / BioTech / Materials Science / Research Labs: Systems that can be given a dataset and left to "ponder" it for days, returning with verified hypotheses.
Physics & Hard Science
Condensed Matter / Plasma Physics / Materials Engineering: Cross-domain synthesis that identifies patterns across experimental datasets. Negative results are as valuable as positive—the system constrains theoretical search space.
High-Value Companionship
Therapy / Coaching / Eldercare: AIs that remember, care, and evolve with the user, maintaining context over years.
Institutional Memory
Legal / Compliance / Finance: Creating digital twins of organizations that maintain internal coherence and audit trails over decades.
Academic Research
Universities / Labs / Independent Scholars: Literature reviews that compound across years. Dissertation support that remembers every paper you've read. Hypothesis tracking over entire research programs.
Education & Lifelong Learning
Students / Educators / Autodidacts: A learning companion that grows with you—from undergrad through career. Curriculum that evolves with pedagogical insights. Personal knowledge that compounds, not resets.
Why This Can't Be Easily Copied
Novel Architecture
The symbolic reasoning framework diverges sharply from mainstream LLM architectures. It's not a wrapper on GPT; it's a new cognitive substrate.
Strict Bayesian Updating
While foundational AI companies try to teach standard LLMs to mimic probabilistic reasoning conversationally, QARIN treats Bayesian logic as an external mathematical constraint. Priors and likelihoods are calculated strictly via a Hub-Aware Memory Gate, directly addressing the "savant with amnesia" problem that standard models suffer from.
Cumulative Learning
Unlike fine-tuning, QARIN's learning compounds across sessions and domains through explicit, persisted state updates. Validated transfer signals can bias future blending, repeated contradiction patterns can reuse prior routing decisions, and relevant historical priors can be selectively recovered instead of rebuilding from generic fallback.
Governed Evolution
The Gnosis self-modification system is an early implementation of governed, safe self-improvement for AI systems—designed to address key alignment challenges.
Physics-Based Cryptography
FieldHash uses post-quantum cryptography designed to remain secure against quantum attacks, with optional quantum hardware anchoring for enhanced provenance.
Substrate Independence
Identity persists across LLM providers. We've migrated across three major providers with full continuity. Competitors are locked to their LLM.
Status
- Design Language (Lumenais) Implemented
- Backend (QARIN) Fully Operational
- Safety Protocols (Gnosis) Active
- Scientific Loop: Phases A-G Complete (361 tests passing)
"To think is to illuminate."
Request Access