Coherascent Labs
Neuro-Symbolic AI
Coherascent Labs leads research at the intersection of Neuro-Symbolic AI and Mathematical Optimization, grounded in prior work building production-grade autonomous agents and GPT-style Transformers, with a focus on rigor and reproducibility. The long-term goal is to reduce hallucinations in generative models by grounding outputs in formal logic and mathematically coherent constraints.
Seeking truth and integrity in the age of generative AI.
Deterministic Architecture
Building control layers that force probabilistic agents to adhere to formal logic structures, enabling verifiable reasoning.
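One way such a control layer can work is as a deterministic guard between the probabilistic generator and the consumer: candidates are re-sampled until every formal rule holds. The sketch below illustrates that general pattern only; the function names and rule set are hypothetical, not Coherascent Labs' actual architecture.

```python
# Illustrative control layer: a deterministic guard that only passes
# through agent outputs satisfying every formal rule. All names here
# are hypothetical placeholders, not the lab's implementation.

def constrained_generate(generate, rules, max_attempts=5):
    """Re-sample the probabilistic generator until every rule holds."""
    for _ in range(max_attempts):
        candidate = generate()
        if all(rule(candidate) for rule in rules):
            return candidate
    raise ValueError("no candidate satisfied the formal constraints")

# Example rules over a structured output.
rules = [
    lambda out: out["answer"] in {"yes", "no"},  # closed answer set
    lambda out: 0.0 <= out["confidence"] <= 1.0,  # valid probability
]

# A stand-in "probabilistic agent" (deterministic here for clarity).
outputs = iter([
    {"answer": "maybe", "confidence": 0.9},  # violates rule 1, rejected
    {"answer": "yes", "confidence": 0.7},    # satisfies both, accepted
])
result = constrained_generate(lambda: next(outputs), rules)
print(result)
```

Because the guard is a pure function of the candidate and the rule set, its accept/reject decision is fully verifiable, which is the sense in which the layer is deterministic even though the generator is not.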
Continuous DPLL
Investigating how SAT-style reasoning can be expressed inside differentiable vector spaces for scalable inference.
Optimization Cognition
Designing gradient descent pathways that minimize prediction error and reveal latent structure in cognitive state models.
Neuro-Symbolic Bridge
Connecting explicit rule systems with data-driven Transformer architectures to unify formal and statistical reasoning.
Grounded in a deep understanding of large language model mechanics and prior work building production-grade autonomous agents.
Investigating the adaptation of discrete satisfiability algorithms (DPLL) into continuous vector spaces to address reasoning limitations in standard generative architectures.
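For reference, the discrete procedure being adapted can be stated in a few lines. The following is a textbook DPLL (unit propagation plus branching) over clauses encoded as sets of signed integers; it is a standard formulation included for context, not Coherascent Labs' code.

```python
# Textbook DPLL on CNF. Clauses are sets of signed ints: 3 means x3,
# -3 means NOT x3. Returns a satisfying assignment dict or None.

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Unit propagation: repeatedly assign forced literals.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        new = []
        for c in clauses:
            if lit in c:
                continue            # clause satisfied, drop it
            if -lit in c:
                c = c - {-lit}      # literal falsified, shrink clause
                if not c:
                    return None     # empty clause: conflict
            new.append(c)
        clauses = new
    if not clauses:
        return assignment           # every clause satisfied
    # Branch on a variable from the first remaining clause.
    var = abs(next(iter(clauses[0])))
    for branch in (var, -var):
        result = dpll(clauses + [{branch}], assignment)
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x1): unique model x1 = x2 = True.
model = dpll([{1, 2}, {-1, 2}, {-2, 1}])
print(model)
```

The two discrete operations here, unit propagation and case splitting, are precisely the steps that have no obvious differentiable analogue, which is what makes the continuous adaptation nontrivial.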
Engineering architectural determinism by developing control layers that force probabilistic agents to adhere to formal logic, reducing hallucination and guiding outputs toward mathematically coherent truth rather than statistical plausibility.
Using Python, C++, and PyTorch to model cognitive states as high-dimensional optimization problems where prediction error is minimized via custom gradient descent pathways.
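The shape of that optimization loop can be sketched in a few lines: treat the state as a parameter vector and descend the gradient of a prediction-error objective. This is a deliberately dependency-free sketch in pure Python rather than PyTorch, and the linear model, data, and hyperparameters are illustrative only.

```python
# Minimal sketch: a "cognitive state" as a parameter vector (w, b),
# with prediction error minimized by gradient descent. The model and
# data are toy placeholders for illustration.

def predict(state, x):
    w, b = state
    return w * x + b

def loss(state, data):
    return sum((predict(state, x) - y) ** 2 for x, y in data) / len(data)

def grad(state, data):
    gw = gb = 0.0
    for x, y in data:
        err = predict(state, x) - y
        gw += 2 * err * x / len(data)
        gb += 2 * err / len(data)
    return gw, gb

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # underlying rule: y = 2x + 1
state = (0.0, 0.0)
for _ in range(2000):
    gw, gb = grad(state, data)
    state = (state[0] - 0.1 * gw, state[1] - 0.1 * gb)

print(round(state[0], 2), round(state[1], 2), round(loss(state, data), 6))
```

The descent recovers the generating rule (w near 2, b near 1) with near-zero error; in practice the same loop runs over far higher-dimensional states with autograd supplying the gradients.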