Tag
continual learning
Continual learning studies how models retain prior knowledge while adapting to new data that arrives in streams, across task splits, or under changing environments. It connects to catastrophic forgetting, test-time updates, safe RL, and long-running deployment for systems that must keep learning.
4 articles

Task boundaries can skew continual learning results
A new paper shows that how you split a stream into tasks can change continual learning results, even when the data, model, and budget stay fixed.

Safe Continual RL for Changing Real-World Systems
This paper studies how to keep RL controllers safe while they adapt to non-stationary systems, and shows why existing methods still fall short.

In-Place TTT Lets LLMs Adapt at Inference
A new test-time training setup lets LLMs update fast weights in place, aiming for better long-context adaptation without full retraining.

Five AI Infra Frontiers Bessemer Expects for 2026
Bessemer’s 2026 AI infra roadmap points to memory, continual learning, RL, inference, and world models as the next big build areas.