About Xent Labs
Language models don't generalize. They achieve superhuman performance on specific tasks, but that performance doesn't transfer.
This is a training problem. Current post-training optimizes for narrow tasks using human-generated data and human evaluation. It produces brittle expertise.
We are building cognitive training: a framework that automatically discovers curricula of abstract training objectives, which we call Xent Games, designed to develop broad, transferable capabilities in language models.
Our Story
Summer 2024
We discover that a base model's logits can be used to compute cross-entropy losses that serve as a judgment signal, enabling one model to evaluate another's output without human supervision.
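The core of this signal can be sketched in a few lines. This is a minimal illustration, not our implementation: it assumes per-token next-token logits from a scorer model are already available as an array, and the function name `cross_entropy_score` is hypothetical. The text whose tokens the scorer assigns higher probability receives a lower cross-entropy, which is what lets one model judge another's output.

```python
import numpy as np

def cross_entropy_score(logits, token_ids):
    """Mean cross-entropy of token_ids under next-token logits.

    logits: (T, V) array where logits[t] predicts token_ids[t].
    Lower score means the scorer model finds the text more plausible.
    """
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    logprobs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Negative mean log-probability of the observed tokens.
    return -logprobs[np.arange(len(token_ids)), token_ids].mean()

# Toy example: a 3-token vocabulary, 2 positions.
logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 3.0, 0.0]])
likely = cross_entropy_score(logits, [0, 1])    # tokens the scorer expects
unlikely = cross_entropy_score(logits, [2, 2])  # tokens it does not
assert likely < unlikely
```

In practice the logits would come from a frozen base model scoring another model's generation, so no human label is needed anywhere in the loop.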
Fall 2024
We generalize this insight into Xent Games: a family of structured training problems, each pairing a cross-entropy-derived loss with an optimization protocol.
Spring 2025
We build XGL, a domain-specific language for writing and running Xent Games.
Summer 2025
We publish the theoretical foundations.
Summer 2025
Xent Labs is founded. We raise a pre-seed round.
Fall 2025
We build solvers and an RL training environment for Xent Games.
Winter 2025
We formalize the meta-algorithm that automatically selects game curricula for maximum generalization.
Spring 2026
We publish a second paper and give our first public demonstration of Xent-Game-based training.
Team
Clément Hongler, Andrew Emil, Arthur Renard, Franck Gabriel