ACON: Optimizing Context Compression for Long-horizon LLM Agents
A new method for compressing context in long-horizon LLM agents to reduce token overhead while maintaining planning performance.
What changed. ACON introduces learned context compression for long-horizon LLM agents, replacing static pruning strategies with adaptive selection conditioned on task relevance.
Why it matters. As agents take more steps and accumulate observations, context windows become a bottleneck—both for cost and latency. ACON demonstrates that 60% of context can be safely discarded without harming task success, directly improving the economics of agentic AI deployment at scale.
Builder takeaway. If you’re building agents that run for 10+ steps, context compression should be a first-class concern. ACON’s learned approach outperforms rule-based alternatives and integrates cleanly into existing LLM agent frameworks.
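To make the idea concrete, here is a minimal sketch of relevance-based history compression in an agent loop. This is not ACON's learned compressor: the keyword-overlap scorer, the `compress_history` helper, and the `keep_ratio` parameter are all illustrative stand-ins for a learned relevance model.

```python
def compress_history(task, history, keep_ratio=0.4):
    """Keep the most task-relevant fraction of past observations.

    Stand-in scorer: word overlap with the task description. ACON's
    actual compressor is learned; this heuristic only illustrates the
    interface an agent loop would call after each step.
    """
    task_words = set(task.lower().split())

    def score(obs):
        return len(task_words & set(obs.lower().split()))

    keep_n = max(1, int(len(history) * keep_ratio))
    # Rank observations by relevance, then restore chronological order
    # for the ones we keep, so the agent still sees a coherent trace.
    ranked = sorted(range(len(history)), key=lambda i: score(history[i]),
                    reverse=True)
    kept = sorted(ranked[:keep_n])
    return [history[i] for i in kept]


task = "find the cheapest flight to Tokyo"
history = [
    "step 1: opened travel site homepage",
    "step 2: banner ad for hotels displayed",
    "step 3: searched flight to Tokyo cheapest fare shown",
    "step 4: cookie consent dialog dismissed",
    "step 5: sorted flight results by price",
]
compressed = compress_history(task, history, keep_ratio=0.4)
# Keeps the flight-related steps; the ad and cookie-dialog noise is dropped.
```

A learned compressor plugs into the same spot: after each agent step, replace the raw observation list with its compressed form before building the next prompt.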
See the full paper: ACON on arXiv