Substack presents itself as a neutral publishing platform: a champion of independent writers, free expression, and resistance to the perceived censorship of mainstream media and Big Tech. Launched in 2017, it now claims around 50 million users worldwide, with approximately five million paying subscribers. But beneath this rhetoric of liberation and open discourse lies a more troubling reality. Substack is not merely hosting far-right, Nazi and white supremacist content; it is actively profiting from it, algorithmically amplifying it, and embedding it within a subscription-based business model that monetises hate.
A Guardian investigation has exposed how newsletters promoting virulent antisemitism, Holocaust denial, white supremacy and explicit praise for Adolf Hitler operate openly on the platform, charging high subscription fees while benefiting from Substack’s discovery tools and recommendation systems. Substack takes roughly 10% of all subscription revenue. In practice, this means the company earns money directly from extremist propaganda: not incidentally, but structurally.
Nazi Ideology Behind a Paywall
Among the most egregious examples is NatSocToday, a Substack newsletter that openly identifies with National Socialism. Featuring a swastika as its profile image, a symbol inseparable from genocide and racial terror, the publication has approximately 2,800 subscribers and charges $80 (£60) per year for premium access. Its content includes claims that Jewish people were responsible for the second world war and that Adolf Hitler was “one of the greatest men of all time”.
This is not coded language or dog whistles. It is explicit Nazi propaganda.
Within just two hours of subscribing for investigative purposes, the Guardian’s account was algorithmically directed to 21 similar Substack profiles. These accounts regularly interact with one another, creating a networked ecosystem of extremist content that is not hidden in obscure corners of the internet but surfaced through Substack’s own recommendation systems.
This matters. Radicalisation is rarely a solitary act. It thrives on community, feedback loops and reinforcement: precisely the dynamics Substack’s infrastructure enables.
A Network of Extremism
Other prominent accounts illustrate the breadth and brazenness of Nazi content on the platform. Erika Drexler, a self-described “national socialist activist” based in the United States, runs a Substack with 241 subscribers and charges $150 annually. Her posts describe Hitler as her “hero” and “the most overqualified leader ever”. Another account, Third Reich Literature Archive, has over 2,100 subscribers and sells access to Nazi-era propaganda material, including postcards from the 1938 Nuremberg rallies.
Meanwhile, Ava Wolfe, believed to be based in the UK, operates a Substack with around 3,000 subscribers, charging £38 a year. Her profile prominently displays Nazi imagery and her content repeatedly engages in Holocaust denial. Earlier this month, she falsely claimed that Jews were not deliberately murdered by Germans and that deaths occurred only through disease and starvation, a claim that flies in the face of overwhelming historical, forensic and testimonial evidence.
Whether these individuals are using real names or pseudonyms remains unclear. What is clear is that Substack imposes no meaningful barrier to monetising extremist ideology, provided it avoids narrowly defined “incitements to violence”.
Algorithmic Amplification and Radicalisation
Substack’s defenders often argue that it is merely a passive host. The evidence contradicts this. The platform actively recommends content, connects users to similar publications, and amplifies material through algorithms designed to increase engagement and subscriptions. In doing so, it replicates the same dynamics seen on mainstream social platforms, but with a crucial difference: Substack monetises ideology directly.
This recommendation system does not merely expose users to controversial opinions; it pushes them towards interconnected extremist networks, including content promoting the “great replacement” conspiracy theory: the belief that white populations are being deliberately replaced by non-white migrants, a theory that has inspired multiple mass killers.
The consequences of such amplification are not abstract. Danny Stone, chief executive of the Antisemitism Policy Trust, has warned that online hate content consistently precedes real-world violence. From the 2018 Pittsburgh synagogue massacre to the 2022 Buffalo supermarket shooting, attackers were steeped in online ecosystems that normalised and reinforced racist and antisemitic narratives.
“People don’t wake up one morning and decide to commit atrocities,” Stone said. “They are radicalised.”
The Erosion of Holocaust Memory
The rise of Holocaust denial on platforms like Substack is especially alarming at a time when public knowledge of the Holocaust is already declining. Attendance at memorial events is falling, survivors are fewer with each passing year, and misinformation is increasingly filling the gap left by fading living memory.
Holocaust denial is not merely historical revisionism; it is a foundational pillar of modern antisemitism. By denying the scale, intent or existence of Nazi genocide, extremists seek to rehabilitate fascist ideology and delegitimise Jewish suffering. When such content is monetised, algorithmically boosted and framed as “alternative perspectives”, the damage is profound.
As Stone warned, when societies fail to learn the lessons of the past, they create the conditions for its repetition.
Political and Legal Scrutiny Mounts
The revelations have prompted renewed political concern in the UK. Joani Reid, Labour chair of the all-party parliamentary group against antisemitism, has said she plans to write to both Substack and Ofcom, warning that antisemitism is “spreading with impunity” and that online abuse is increasingly translating into offline violence.
The Holocaust Educational Trust has described Substack’s role as “a disgrace”, pointing out that while such material is not new, its reach and profitability are unprecedented.
Substack was contacted about the Guardian’s findings but declined to respond.
“Free Speech” as Corporate Alibi
Substack’s leadership has repeatedly defended its approach. In 2023, co-founder Hamish McKenzie acknowledged the presence of Nazi content on the platform but argued that censorship or demonetisation would only make extremism worse.
“We don’t like Nazis either,” he wrote, while insisting that open discourse is the best way to strip bad ideas of their power.
This position rests on a familiar but deeply flawed premise: that platforms which profit from engagement can somehow be neutral arbiters of debate. Substack is not a public square. It is a commercial enterprise that curates content, promotes writers, and takes a cut of subscription revenue. Its claim to neutrality collapses the moment it accepts revenue from Nazi propaganda and promotes it.
There is a profound difference between tolerating speech and subsidising it.
Profiting From Hate
At its core, this is not simply a debate about free expression. It is about business incentives. Substack’s subscription model rewards highly motivated, ideologically committed audiences, precisely the kind cultivated by extremist movements. Nazi and white supremacist newsletters charge premium prices because they promise community, belonging and ideological affirmation. Substack takes its share regardless of the harm caused.
In doing so, the platform has quietly become a haven for far-right extremism, not despite its business model, but because of it.
The question is no longer whether Substack hosts Nazi ideology. It demonstrably does. The real question is whether democratic societies are willing to accept a tech platform openly profiting from ideas that once plunged the world into catastrophe, and whether “free speech” will continue to be used as a shield for corporate complicity in hate.