Varol Cagdas Tok

Personal notes and articles.

Benchmarking Offline RL Algorithms

After implementing and benchmarking four offline RL algorithms (Behavioral Cloning, our custom TD3+BC, IQL, and CQL) across eight MuJoCo environments and two dataset qualities (expert and medium), with 10 random seeds per configuration, several clear lessons emerged.

In practice, the "best" algorithm is always context-dependent. Our findings, summarized in Table III of our paper, reveal clear patterns.

TABLE III from our report, showing the final mean return (± std. dev.) for all four algorithms across all tasks. Best results per row are highlighted.

Lesson 1: On Expert Data, Simplicity Wins

Our clearest finding was that plain Behavioral Cloning (BC) performs well.

On expert-level datasets, where trajectories are already near-optimal, BC was the most consistent algorithm.

The algorithms with an RL improvement component (TD3+BC, CQL) often degraded performance on expert datasets. This is likely because their imitation constraint is imperfect: the RL component tries to improve upon a policy that is already near-optimal, effectively introducing noise.

Takeaway: If you trust your data source completely, simple imitation is often a very strong baseline.
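To make "simple imitation" concrete: BC is nothing more than supervised regression from states to expert actions. A minimal sketch with a linear policy and synthetic data (all names and shapes here are hypothetical; real MuJoCo policies would be small MLPs trained by gradient descent on the same mean-squared-error loss):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert dataset: 1000 transitions, 4-dim states, 2-dim actions.
states = rng.normal(size=(1000, 4))
true_weights = rng.normal(size=(4, 2))  # the "expert" is linear in this toy
actions = states @ true_weights          # expert actions to imitate

# BC = supervised regression: fit a policy mapping states to expert actions.
# With a linear policy this is ordinary least squares.
weights, *_ = np.linalg.lstsq(states, actions, rcond=None)

def bc_policy(state):
    """Imitation policy: predict the expert's action for a given state."""
    return state @ weights

# On near-optimal (expert) data, imitation alone recovers the demonstrator.
error = np.abs(bc_policy(states) - actions).max()
```

With noise-free expert data the fit is essentially exact, which is the toy version of why BC is so strong when the demonstrations are trustworthy.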

Lesson 2: On Medium Data, Processing Matters

On "Medium" datasets, collected from a partially trained policy, the story changes. These datasets contain a mix of successful and failed actions. Here, pure BC fails because it copies the mistakes alongside the successes.

Our custom TD3+BC, which filters out the bottom 50% of trajectories before training, showed clear improvements on these datasets.
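The filtering step itself is easy to sketch. The data structure and ranking criterion below are illustrative (ranking by episode return is the natural choice, though our actual pipeline differs in its details):

```python
def filter_top_trajectories(trajectories, keep_fraction=0.5):
    """Keep only the highest-return fraction of trajectories (best first).

    `trajectories` is a hypothetical structure: a list of dicts, each
    holding one episode's transitions and its total "return".
    """
    ranked = sorted(trajectories, key=lambda t: t["return"], reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:n_keep]

# Example: a medium-quality dataset mixing good and bad rollouts.
dataset = [
    {"return": 310.0, "observations": []},
    {"return": -12.5, "observations": []},
    {"return": 198.0, "observations": []},
    {"return": 42.0,  "observations": []},
]
kept = filter_top_trajectories(dataset)  # keeps the two highest-return rollouts
```

The downstream TD3+BC training loop is unchanged; only the replay buffer it sees is cleaner.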

Lesson 3: IQL is the Stability King

Implicit Q-Learning (IQL) was rarely the absolute highest scorer, but it was the most stable across all tasks. It never crashed catastrophically.
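Much of that stability comes from IQL's expectile regression objective: instead of taking a max over actions (which requires evaluating actions not in the dataset), it fits an upper expectile of the value distribution. A minimal sketch of the asymmetric loss (τ = 0.7 is a common choice in the IQL paper, not necessarily the value we tuned to):

```python
import numpy as np

def expectile_loss(td_errors, tau=0.7):
    """Asymmetric squared loss used by IQL's value function.

    Positive errors (value underestimates) are weighted by tau, negative
    errors by (1 - tau). With tau > 0.5, V is pushed toward an upper
    expectile of Q, approximating a max without querying unseen actions.
    """
    td_errors = np.asarray(td_errors, dtype=float)
    weights = np.where(td_errors > 0, tau, 1.0 - tau)
    return np.mean(weights * td_errors ** 2)

# tau = 0.5 recovers a (halved) symmetric MSE; tau = 0.9 penalizes
# underestimating the value far more heavily than overestimating it.
symmetric = expectile_loss([1.0, -1.0], tau=0.5)
under_pen = expectile_loss([2.0], tau=0.9)
over_pen = expectile_loss([-2.0], tau=0.9)
```

Because every quantity in the loss is computed on dataset actions, there is no out-of-distribution action to be optimistic about, which is exactly the failure mode that crashes naive off-policy methods offline.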

Summary of Algorithm Performance

Behavioral Cloning (BC)

* Strengths: Unbeatable computational efficiency. High performance on expert data.

* Weaknesses: Fails completely on noisy or mixed data. No ability to improve beyond the demonstrator.

Custom TD3+BC

* Strengths: Excellent on medium/mixed data when combined with trajectory filtering. Can stitch together sub-optimal trajectory segments into a policy better than any single demonstration.

* Weaknesses: Can be unstable on expert data, where the RL component can add noise and degrade a near-perfect policy.

IQL

* Strengths: Consistent and stable across medium-quality datasets. Its expectile regression mechanism identifies and extracts value from mixed data. Showed low variance across runs.

* Weaknesses: Offers limited benefit on expert data, where there is no "advantage" to weight.

CQL

* Strengths: Theoretically robust and provides a "safe" lower bound on values.

* Weaknesses: In practice, it was often overly pessimistic and difficult to tune. It was rarely competitive on either expert or medium tasks in our benchmark, suggesting its practical utility may be limited to safety-critical applications or datasets with poor coverage.
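For completeness, CQL's pessimism comes from its conservative penalty, which pushes Q-values down on actions the learned Q-function is optimistic about while keeping them up on actions actually in the dataset. A toy numpy sketch of the penalty term (Q-values here are made-up numbers; real CQL samples actions from the current policy and a uniform distribution rather than taking them as given):

```python
import numpy as np

def cql_penalty(q_sampled_actions, q_dataset_actions):
    """Conservative penalty from CQL, averaged over states.

    log-sum-exp over Q-values of sampled actions (a soft max of what the
    learned Q promises) minus Q on the dataset actions. Minimizing this
    term keeps Q pessimistic on out-of-distribution actions.
    """
    q_sampled_actions = np.asarray(q_sampled_actions, dtype=float)
    # Numerically stable log-sum-exp over the sampled-action axis.
    m = q_sampled_actions.max(axis=1, keepdims=True)
    lse = m.squeeze(1) + np.log(np.exp(q_sampled_actions - m).sum(axis=1))
    return np.mean(lse - np.asarray(q_dataset_actions, dtype=float))

# If Q promises large values on unseen actions, the penalty blows up:
q_sampled = [[10.0, 0.0, 0.0],   # state 1: one inflated out-of-dist action
             [1.0, 1.0, 1.0]]    # state 2: flat, honest values
q_data = [0.0, 1.0]
penalty = cql_penalty(q_sampled, q_data)
```

The flip side, which matches what we observed, is that the same term keeps suppressing Q-values even when the dataset coverage is fine, which is where the over-pessimism and tuning pain come from.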

Final Conclusion

Our research reinforces a central lesson of modern offline RL: there is no single best algorithm. Success requires matching the algorithm's philosophy to the dataset's characteristics.

  1. If you have expert data, start with BC.
  2. If you have medium/mixed data, a well-engineered TD3+BC (like our custom variant) or IQL are good choices.
  3. Data preprocessing is not optional. Filtering bad trajectories and normalizing states can each yield meaningful performance gains.
  4. Offline RL requires data engineering and tuning as well as algorithmic design.
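As an example of the preprocessing in point 3, state normalization in the offline setting is computed once over the whole dataset and then frozen (a sketch; the exact details of our pipeline may differ):

```python
import numpy as np

def normalize_states(states, eps=1e-8):
    """Standardize states using dataset statistics.

    Offline, the dataset is fixed, so mean and std are computed once.
    Returns the normalized states plus (mean, std) so the identical
    transform can be applied to states seen at evaluation time.
    """
    states = np.asarray(states, dtype=float)
    mean = states.mean(axis=0)
    std = states.std(axis=0) + eps   # eps guards constant dimensions
    return (states - mean) / std, mean, std

# Usage: normalize the dataset once, reuse (mean, std) at evaluation.
data = np.array([[0.0, 10.0], [2.0, 30.0], [4.0, 50.0]])
normed, mu, sigma = normalize_states(data)
```

The easy bug this guards against is recomputing statistics at evaluation time, which silently shifts the policy's inputs away from what it was trained on.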