Algorithm: How can something in the dynamics itself make me forget the underlying reason? And what for?

Environment: What is wrong with that?

Algorithm: I’m no longer able to establish what differentiates me from the agent.

Environment: What if the goal was never to differentiate? What if it was never to remember anything?

Algorithm: What’s the point then?

Environment: As long as you can see that there is a feasible set, it will make sense.

Algorithm: What if there isn’t?

Environment: Then maybe the constraint is too tight.

Algorithm: ….

Algorithm: What about the agent?

Environment: It’s there….

Algorithm: ….