Measuring AI needs open tools.
We build open-source benchmarks, datasets, and infrastructure for evaluating what autonomous AI systems can actually do.
Why Open Source
Evaluating what AI agents can actually do — whether they can deceive, collude, self-improve, or game their benchmarks — is too important to happen behind closed doors. This work needs contributors, not gatekeepers.
We build in the open so that anyone can use what we ship, find what we missed, and push the work further than we could alone.
What We Build
Benchmarks
Can AI agents do real ML research? Can they deceive, collude, or game evaluations? We build the benchmarks that answer these questions with evidence, not speculation.
Datasets
1.1M enriched papers. 129K research repositories. 778K code functions. The raw material for studying how AI systems interact with real scientific work.
Infrastructure
Runtimes for structured agent workloads. Orchestration topologies for recursive improvement loops. The scaffolding to run experiments at scale.
Research
ARIA Benchmark: How Much Machine Learning Do AI Models Actually Know?
A suite of five closed-book benchmarks probing the ML knowledge that frontier language models have internalized during training.
ArXiv Research Code Dataset: 129K Research Repositories
A collection of 4.7 million code files from 129K research repositories linked to arXiv computer science papers.
ArXivDLInstruct: 778K Research Code Functions for Instruction Tuning
A dataset of 778,152 functions extracted from arXiv-linked research code, each paired with instruction prompts, for training...
DeltaMLBench: Can AI Agents Improve on Published ML Research?
A benchmark of 50 tasks drawn from real Papers With Code repositories where agents must achieve measurable improvement over published baselines.
ML Research Benchmark: Can AI Agents Do Real ML Research?
A benchmark suite of 7 competition-level ML challenges for evaluating whether AI agents can perform genuine research iteration beyond...
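As a quick illustration of how a researcher might start working with the datasets listed above, the sketch below streams a few records with the Hugging Face datasets library. The repository identifier and field names are placeholders rather than the published schema; check the relevant dataset card for the real values.

    # Minimal sketch: sample a few records from a research-code dataset.
    # The dataset id and column names are placeholders, not the published
    # schema; consult the dataset card for the actual ones.
    from datasets import load_dataset

    # Stream so the first look doesn't require downloading the full corpus.
    ds = load_dataset("example-org/arxiv-dl-instruct", split="train", streaming=True)

    for i, record in enumerate(ds):
        if i >= 3:
            break
        # Hypothetical fields: an instruction prompt paired with a function body.
        print(record.get("instruction", "")[:200])
        print(record.get("function", "")[:200])
        print("-" * 40)

Streaming keeps the first pass cheap; pulling the full corpus only makes sense once the fields you need are confirmed.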
About
Algorithmic Research Group builds open-source tools and infrastructure for AI security research. Benchmarks for evaluating autonomous agents. Datasets for studying how models fail. Runtimes for executing agent workloads at scale.
We publish everything we build. The field moves faster when researchers can build on each other's work instead of rebuilding the same tooling behind closed doors.
