AI-native | Domain-specific | Small, not Large
M-Cube Technologies is an AI-first lab founded on the ethos that markets speak their own language. This market-native language is nonlinear and capacity-constrained, fundamentally different from the natural language underlying the LLM-driven models that dominate today's AI landscape, all of which pass information through an "English language filter" (which the LLMs then tokenize anyway). By learning domain-native representations rather than forcing markets through linguistic abstractions, M-Cube believes it can build better models of market behavior and, ultimately, generate higher alpha.
Under the hood, M-Cube has built a fundamental market grammar representation, trained on structured market data with strict point-in-time discipline. Our agents live with their predictions, just as real capital must.
Learns the latent syntax of price, volume, fundamentals, and cross-asset dynamics without succumbing to the widespread oversimplification of linear factor models.
Inventing optimization algorithms is one of our specialties. We have developed our own training algorithm, derived from first principles, that simultaneously reduces model size for better interpretability and faster inference.
M-Cube's engine has been running live capital since 2023 with standout results, some announced on LinkedIn. These results are tracked with a GIPS-compliant reporting tool developed by our broker.
Computational Mathematician, Inventor & Former Global Equity Portfolio Manager
Computational and applied mathematician with heavily cited research across engineering disciplines. He has studied markets for two decades, including a decade as a global equity portfolio manager. His earlier work in nonlinear optimization produced inventions that now run aircraft engines; his later work in fundamental stock-picking and AI powers M-Cube's AI-native market language models.
We've built M-Cube with internal and founder-adjacent capital. We're open to external partners who resonate with our vision and can help us scale and build faster.
"We started building our codebase more than two years before ChatGPT was released. We're not bolting AI onto legacy factor models, nor are we an "LLM wrapper". We're building a new layer that addresses financial markets, and, in the process, developing technology that can be applied across domains."
We’re building AI from first principles, not from language models. If you believe the next wave of AI will come from new mathematical formulations that generalize across domains, from markets to genetics, we’d love to chat.
Get in Touch