Gaspard Lambrechts
@gsprd.be

Postdoctoral researcher working on RL in POMDP at McGill and Mila - gsprd.be

2,142 Followers · 652 Following · 35 Posts · Joined 17.09.2024

Latest posts by Gaspard Lambrechts @gsprd.be

Congratulations to the hardworking folks at UPenn!

Thank you Edward for including me and for all the nice discussions.

🌐 penn-pal-lab.github.io/aawr
πŸ“ openreview.net/forum?id=Rkd...
πŸ’» github.com/penn-pal-lab...

More theory details in Appendix A-E and on slide 30 (orbi.uliege.be/handle/2268/...)

20.02.2026 21:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
As seen from the results and videos, AAWR significantly improves over (i) foundation policies, (ii) behavior cloning policies, and (iii) AWR policies, providing good policies even in partially observable environments with non-Markovian inputs.

20.02.2026 21:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
This is the case here, where we use additional cameras, position estimates, or bounding boxes from pretrained models.

These features i are used as an additional input to the critic Q(i, z, a), providing a better advantage estimate and policy improvement direction.

20.02.2026 21:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
In fact, it is a common assumption in asymmetric RL, which distinguishes the execution information from the training information.

In practice, while we do not always know the exact state, it is common to have more information available about the state at training time.

20.02.2026 21:36 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Moreover, (s, z) is shown to be the Markovian state of an equivalent MDP, allowing us to rely on the Bellman equation and TD learning instead of MC learning.
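As a rough illustration of this point, a one-step TD(0) update on the joint state (s, z) could look as follows (a minimal tabular sketch; the names, shapes, and hyperparameters are my own assumptions, not the paper's code):

```python
import numpy as np

# Tabular sketch: because (s, z) is the Markovian state of an equivalent MDP,
# one-step bootstrapping is valid and we can avoid Monte Carlo returns.
n_states, n_features = 4, 3            # |S| and |Z| (illustrative sizes)
V = np.zeros((n_states, n_features))   # asymmetric value estimate V(s, z)
gamma, alpha = 0.99, 0.1

def td_update(V, s, z, r, s_next, z_next):
    # One-step bootstrapped target, valid since (s, z) is Markovian.
    target = r + gamma * V[s_next, z_next]
    V[s, z] += alpha * (target - V[s, z])
    return V

V = td_update(V, s=0, z=1, r=1.0, s_next=2, z_next=0)
```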

Now, how realistic is it to assume that we know the state s in addition to the input z?

20.02.2026 21:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Unfortunately, we show that we cannot just learn the symmetric critic Q(z, a) = E[G | z, a]. Instead, we need an asymmetric critic Q(s, z, a) = E[G | s, z, a] for a valid policy iteration.

This is because, unlike for policy gradients, the AWR objective is not linear in Q.
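To make the asymmetry concrete, here is a minimal sketch of how such an asymmetric advantage could weight the regression targets in AWR (the function name, the clipping constant, and the temperature are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Hypothetical sketch of asymmetric AWR weighting: the critic conditions on
# (s, z) while the policy only sees z at execution time.
def awr_weights(q_values, v_values, beta=1.0, w_max=20.0):
    # Exponentiated asymmetric advantage A(s, z, a) = Q(s, z, a) - V(s, z),
    # clipped for numerical stability as is common in AWR-style methods.
    adv = q_values - v_values
    return np.minimum(np.exp(adv / beta), w_max)

# The policy pi(a | z) is then fit by weighted regression on dataset actions,
# maximizing sum_i w_i * log pi(a_i | z_i).
w = awr_weights(np.array([1.0, 0.0]), np.array([0.5, 0.5]))
```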

20.02.2026 21:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
To learn a good policy (for this specific input z), we may want to rely on existing RL algorithms such as policy gradient or policy iteration.

Here, because we perform offline-to-online training, we rely on AWR, a policy iteration algorithm that moves seamlessly from offline to online learning.

20.02.2026 21:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
When learning in real-world scenarios, it is common to have constraints on the input available to the policy at execution time (e.g., last observation only, wrist camera only, etc.).

In the general case (POMDP), the input z is a function of the observation history h: z = f(h).

20.02.2026 21:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
AAWR: Real World RL of Active Perception Behaviors @ NeurIPS2025 (YouTube video by Jie Wang)

At NeurIPS, we presented an asymmetric RL algorithm: Asymmetric Advantage-Weighted Regression (AAWR).

This time, the goal is to learn a policy pi(a | z) whose input z = f(h) is not necessarily Markovian. This is useful in robotics, for example.

So, how do we adapt RL in POMDPs to non-Markovian inputs? 🧡

20.02.2026 21:36 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Off-Policy Maximum Entropy RL with Future State and Action Visitation Measures Maximum entropy reinforcement learning integrates exploration into policy learning by providing additional intrinsic rewards proportional to the entropy of some distribution. In this paper, we propose...

Interestingly, this distribution can be learned from off-policy samples with a TD-like update. This holds even when encouraging the visitation of features of future states (possibly aliased).
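A minimal sketch of such a TD-like update for a discounted future-feature visitation measure might look as follows (the tabular setting, names, and shapes are my assumptions, not the paper's code):

```python
import numpy as np

# Sketch: estimate d(x | s) ~ (1 - gamma) * sum_t gamma^t P(phi(s_t) = x | s_0 = s),
# the discounted visitation measure over future state features, by bootstrapping.
n_states, n_feats = 3, 2
gamma, alpha = 0.9, 0.5
d = np.full((n_states, n_feats), 1.0 / n_feats)  # initial uniform measure

def visitation_td(d, s, phi_s, s_next):
    # Bootstrapped target mixes the current one-hot feature with the
    # successor state's measure, mirroring a TD(0) backup.
    target = (1 - gamma) * np.eye(n_feats)[phi_s] + gamma * d[s_next]
    d[s] += alpha * (target - d[s])
    return d

d = visitation_td(d, s=0, phi_s=1, s_next=2)
```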

arxiv.org/abs/2412.06655

06.10.2025 09:50 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

4) Off-Policy Maximum Entropy RL with Future State and Action Visitation Measures.

With Adrien Bolland and Damien Ernst, we propose a new intrinsic reward. Instead of encouraging visiting states uniformly, we encourage visiting *future* states uniformly, from every state.

06.10.2025 09:50 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Behind the Myth of Exploration in Policy Gradients In order to compute near-optimal policies with policy-gradient algorithms, it is common in practice to include intrinsic exploration terms in the learning objective. Although the effectiveness of thes...

This view offers interesting insights for the design of intrinsic rewards, by providing four criteria.

arxiv.org/abs/2402.00162

06.10.2025 09:50 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

3) Behind the Myth of Exploration in Policy Gradients.

With Adrien Bolland and Damien Ernst, we decided to frame the exploration problem for policy-gradient methods from the optimization point of view.

06.10.2025 09:50 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Informed Asymmetric Actor-Critic: Theoretical Insights and Open... Reinforcement learning in partially observable environments requires agents to make decisions under uncertainty, based on incomplete and noisy observations. Asymmetric actor-critic methods improve...

By adapting a finite-time bound, we uncover an interesting tradeoff between informativeness of the additional information and complexity of the resulting value function.

openreview.net/forum?id=wNV...

06.10.2025 09:50 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

2) Informed Asymmetric Actor-Critic: Theoretical Insights and Open Questions.

With Daniel Ebi and Damien Ernst, we looked for a reason why asymmetric actor-critic performs better, even when using RNN-based policies with the full observation history as input (no aliasing).

06.10.2025 09:50 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
A Theoretical Justification for Asymmetric Actor-Critic Algorithms In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state inform...

In AsymAC, while the policy maintains an agent state based on observations only, the critic also takes the state as input. Its better performance is linked to possible "aliasing" in the agent state, which hurts TD learning in the symmetric case only.
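As a toy sketch of this asymmetry (all names and the tabular setting are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Asymmetric actor-critic sketch: the critic conditions on the true state s
# (available at training time), while the policy only sees the agent state z.
n_s, n_z, n_a = 2, 2, 2
V = np.zeros((n_s, n_z))        # asymmetric critic V(s, z)
logits = np.zeros((n_z, n_a))   # policy pi(a | z), state-free at execution
gamma, lr = 0.99, 0.1

def step(s, z, a, r, s_next, z_next):
    # Critic TD update uses the state; the policy never needs it to act.
    delta = r + gamma * V[s_next, z_next] - V[s, z]
    V[s, z] += lr * delta
    # Actor update: policy-gradient step weighted by the asymmetric TD error.
    probs = np.exp(logits[z]) / np.exp(logits[z]).sum()
    grad = -probs
    grad[a] += 1.0              # d log pi(a | z) / d logits[z]
    logits[z] += lr * delta * grad

step(s=0, z=1, a=0, r=1.0, s_next=1, z_next=0)
```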

arxiv.org/abs/2501.19116

06.10.2025 09:50 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

1) A Theoretical Justification for Asymmetric Actor-Critic Algorithms.

With Damien Ernst and Aditya Mahajan, we looked for a reason why asymmetric actor-critic algorithms perform better than their symmetric counterparts.

06.10.2025 09:50 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

At #EWRL, we presented 4 papers, which we summarize below.

- A Theoretical Justification for AsymAC Algorithms.
- Informed AsymAC: Theoretical Insights and Open Questions.
- Behind the Myth of Exploration in Policy Gradients.
- Off-Policy MaxEntRL with Future State-Action Visitation Measures.

06.10.2025 09:50 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Théo Vincent - Optimizing the Learning Trajectory of Reinforcement Learning Agents (YouTube video by Cohere)

Had an amazing time presenting my research @cohereforai.bsky.social yesterday 🎀

In case you could not attend, feel free to check it out πŸ‘‰

youtu.be/RCA22JWiiY8?...

19.07.2025 07:41 πŸ‘ 7 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
Such an inspiring talk by @arkrause.bsky.social at #ICML today. The role of efficient exploration in scientific discovery is fundamental, and I really like how Andreas connects the dots with RL (theory).

17.07.2025 22:14 πŸ‘ 15 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
ICML poster of the paper « A Theoretical Justification for Asymmetric Actor-Critic Algorithms » by Gaspard Lambrechts, Damien Ernst and Aditya Mahajan.

At #ICML2025, we will present a theoretical justification for the benefits of « asymmetric actor-critic » algorithms (#W1008 Wednesday at 11am).

πŸ“ Paper: hdl.handle.net/2268/326874
πŸ’» Blog: damien-ernst.be/2025/06/10/a...

16.07.2025 14:32 πŸ‘ 8 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
🌟🌟Good news for the explorersπŸ—ΊοΈ!
Next week we will present our paper β€œEnhancing Diversity in Parallel Agents: A Maximum Exploration Story” with V. De Paola, @mircomutti.bsky.social and M. Restelli at @icmlconf.bsky.social!
(1/N)

08.07.2025 14:04 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 1
Last week, I gave an invited talk on "asymmetric reinforcement learning" at the BeNeRL workshop. I was happy to draw attention to this niche topic, which I think can be useful to any reinforcement learning researcher.

Slides: hdl.handle.net/2268/333931.

11.07.2025 09:21 πŸ‘ 6 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
Cover page of the PhD thesis "Reinforcement Learning in Partially Observable Markov Decision Processes: Learning to Remember the Past by Learning to Predict the Future" by Gaspard Lambrechts

Two months after my PhD defense on RL in POMDP, I finally uploaded the final version of my thesis :)

You can find it here: hdl.handle.net/2268/328700 (manuscript and slides).

Many thanks to my advisors and to the jury members.

13.06.2025 11:44 πŸ‘ 8 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
A Theoretical Justification for Asymmetric Actor-Critic Algorithms In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state inform...

TL;DR: Do not make the problem harder than it is! Using state information during training is provably better.

πŸ“ Paper: arxiv.org/abs/2501.19116
🎀 Talk: orbi.uliege.be/handle/2268/...

A warm thank you to Aditya Mahajan for welcoming me at McGill University and for his invaluable supervision.

09.06.2025 14:41 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

While this work has considered fixed features z = f(h) with linear approximators, we discuss possible generalizations in the conclusion.

Despite not matching the usual recurrent actor-critic setting, this analysis still provides insights into the effectiveness of asymmetric actor-critic algorithms.

09.06.2025 14:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The conclusion is that asymmetric learning is less sensitive to aliasing than symmetric learning.

Now, what is aliasing exactly?

The aliasing and inference terms arise from z = f(h) not being Markovian. They can be bounded by the difference between the approximate p(s|z) and exact p(s|h) beliefs.
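As a toy illustration of this gap, with made-up numbers and using total variation as one natural way to compare the two beliefs (the paper's exact bound may use a different metric):

```python
import numpy as np

# Illustrative only: "aliasing" as the gap between the approximate belief
# p(s | z) computed from features z = f(h) and the exact belief p(s | h).
p_s_given_h = np.array([0.9, 0.1])   # exact belief from the full history h
p_s_given_z = np.array([0.6, 0.4])   # coarser belief from the features z

# Total variation distance between the beliefs; when z is a sufficient
# statistic of h, this gap (and hence the aliasing term) vanishes.
tv = 0.5 * np.abs(p_s_given_h - p_s_given_z).sum()
```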

09.06.2025 14:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Theorem showing the finite-time suboptimality bound for the asymmetric and symmetric actor-critic algorithms. The asymmetric algorithm has four terms: the natural actor-critic term, the gradient estimation term, the residual gradient term, and the average critic error. The symmetric algorithm has an additional term: the inference term.

Now, as far as the actor suboptimality is concerned, we obtained the following finite-time bounds.

In addition to the average critic error, which is also present in the actor bound, the symmetric actor-critic algorithm suffers from an additional "inference term".

09.06.2025 14:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Theorem showing the finite-time error bound for the asymmetric and symmetric temporal difference learning algorithms. The asymmetric algorithm has three terms: the temporal difference learning term, the function approximation term, and the bootstrapping shift term. The symmetric algorithm has an additional term: the aliasing term.

By adapting the finite-time bound from the symmetric setting to the asymmetric setting, we obtain the following error bounds for the critic estimates.

The symmetric temporal difference learning algorithm has an additional "aliasing term".

09.06.2025 14:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Title page of the paper "A Theoretical Justification for Asymmetric Actor-Critic Algorithms", written by Gaspard Lambrechts, Damien Ernst and Aditya Mahajan.

While this algorithm is valid/unbiased (Baisero & Amato, 2022), a theoretical justification for its benefit is still missing.

Does it really learn faster than symmetric learning?

In this paper, we provide theoretical evidence for this, based on an adapted finite-time analysis (Cayci et al., 2024).

09.06.2025 14:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0