The premise in this podcast that LLM data harvesting will trigger a mass exodus (a la Reddit) from allowing crawling or public APIs seems a bit silly to me: if that were the trigger, traditional search crawling would have done it long ago, and search is still central to discoverability (plus most major platforms are already closed in this sense). But the point that the expense of training models drives market concentration is a vastly important one. LLMs + walled gardens, sitting in a tree.