We're excited to publicly launch Algorithmic Research Group, an AI safety research lab focused on building benchmarks, environments, and multi-agent systems for understanding recursive self-improvement.
Our research centers on developing rigorous evaluation frameworks for frontier AI systems. We believe that understanding how AI systems learn and improve is critical to ensuring they remain safe and beneficial.
What We Do
We build open-source tools and datasets that help researchers study AI capabilities and limitations. Our work includes:
- Benchmarks for measuring AI system capabilities across domains
- Multi-agent environments for studying emergent behaviors
- Evaluation frameworks for frontier model assessment
Get Involved
Our code and datasets are available on GitHub and Hugging Face. We welcome contributions from the research community.
