An interview podcast where I, Daniel Filan, interview guests about topics I'm interested in, with the aim of clarifying how the guest understands the topic at hand.
In this episode, I speak with Aaron Silverbook about the bacteria that cause cavities, and how different bacteria can prevent them: specifically, a type of bacterium that you can buy at luminaprobiotic.com. This podcast episode has not been approved by the FDA. Specific topics we talk about include:
How do bacteria cause cavities?
How can you create an anti-cavity bacterium?
What’s going on with the competitive landscape of mouth bacteria?
How dangerous is it to colonize your mouth with a novel bacterium?
Why hasn’t this product been available for 20 years already?
In this episode, I talk to Holly Elmore about her advocacy for an AI pause - encouraging governments to pause the development of increasingly powerful AI. Topics we discuss include:
Why advocate specifically for AI pause?
What costs of AI pause would be worth it?
What might AI pause look like?
What are the realistic downsides of AI pause?
How the Effective Altruism community relates to AI labs.
The shift in the alignment community from proving things about alignment to messing around with ML models.
In this episode, Divia Eden and Ronny Fernandez talk about the (strong) orthogonality thesis - that arbitrarily smart intelligences can be paired with arbitrary goals, without additional complication beyond that of specifying the goal - with light prompting from me. Topics they touch on include:
Why aren’t bees brilliant scientists?
Can you efficiently make an AGI out of one part that predicts the future conditioned on some plans, and another that evaluates whether plans are good?
If minds are made of smaller sub-agents with more primitive beliefs and desires, does that shape their terminal goals?
Also, how would that even work?
Which is cooler: rockets, or butterflies?
What processes would make AIs terminally value integrity?
Why do beavers build dams?
Would these questions be easier to answer if we made octopuses really smart?
In this episode I chat with Jeffrey Heninger about his religious beliefs and practices as a member of the Church of Jesus Christ of Latter-day Saints, sometimes colloquially referred to as “the Mormon church” or “the LDS church”. Topics we talk about include:
Who or what is God?
How can we know things about God? In particular, what role does religious experience play?
To what degree is modern morality downstream of Jesus?
What’s in the Book of Mormon?
What does modern-day prophecy look like?
What do Sunday services look like in the LDS church?
Every year, the Centre for Effective Altruism runs a number of “Effective Altruism Global” (EA Global or EAG for short) conferences throughout the world. This year, I attended the one held in the San Francisco Bay Area, and talked to a variety of participants about their relationship with effective altruism, the community around that idea, and the conference.
In this episode I speak with Shea Levy about Ayn Rand’s philosophy of Objectivism, and what it has to say about ethics and epistemology. Topics we talk about include:
What is Objectivism?
Can you be an Objectivist and disagree with Ayn Rand?
What’s the Objectivist theory of aesthetics?
Why isn’t there a biography of Ayn Rand that orthodox Objectivists approve of?
What’s so bad about altruism, or views like utilitarianism?
What even is selfishness?
Can we be mistaken about what we perceive? If so, how?
What is consciousness? Could it just be computation?
In this episode I speak with Oliver Habryka, head of Lightcone Infrastructure, the organization that runs the internet forum LessWrong, about his projects in the rationality and existential risk spaces. Topics we talk about include:
How did LessWrong get revived?
How good is LessWrong?
Is there anything that beats essays for making intellectual contributions on the internet?
Why did the team behind LessWrong pivot to property development?
What does the FTX situation tell us about the wider LessWrong and Effective Altruism communities?
What projects could help improve the world’s rationality?
In this episode, I speak with Divia Eden about operant conditioning, and how relevant it is to human and non-human animal behaviour. Topics we cover include:
How close are we to teaching grammar to dogs?
What are the important differences between human and dog cognition?
How important are unmodelled “trainer effects” in dog training?
Why do people underrate positive reinforcement?
How does operant conditioning relate to attachment theory?
How much does successful dog training rely on the trainer being reinforced by the dog?
In this episode, Peter Jaworski talks about the practice of paid plasma donation, whether it’s ethical to allow it, and his work to advocate for it to be legalized in more jurisdictions. He answers questions such as:
Which country used to run clinics in a former colony to pay its former colonial subjects for their plasma?
Why can’t we just synthesize what we need out of plasma?
What percentage of US exports by dollar value does plasma account for?
If I want to gather plasma, is it cheaper to pay donors, or not pay them?
Is legal paid plasma donation one step towards a dystopia?
In this episode, cryptocurrency developer Ameen Soleimani talks about his vision of the cryptocurrency ecosystem, as well as his current project RAI: an ether-backed floating-price stablecoin. He answers questions such as:
What’s the point of cryptocurrency?
If this is the beginning of the cryptocurrency world, what will the middle be?
What would the sign be that cryptocurrency is working?
How does RAI work?
Does the design of RAI prevent it from ever being widely used?
In this episode, Carrick Flynn talks about his campaign to be the Democratic nominee for Oregon’s 6th congressional district. In particular, we talk about his policies on pandemic preparedness and semiconductor manufacturing. He answers questions such as:
Was he surprised by the election result?
Should we expect another Carrick campaign?
What specific things should or could the government fund to limit the spread of pandemics? Why would those work?