A podcast where I, Daniel Filan, interview guests about topics I'm interested in, with the aim of clarifying how the guest understands that topic.
In this episode, I chat with Caspar Oesterheld about a relatively simple application of weird decision theory: evidential cooperation in large worlds, or ECL for short. The tl;dr: if you think there’s at least some small probability that we live in a very large multiverse, then you should act on something closer to the average of the values of the civilizations in that multiverse that think like you, thereby ‘making it more likely’ (in an evidential sense) that those other civilizations do things that you like.
Links that Caspar has provided for various things: