The Fragility of Freedom

What Liberty Actually Depends On

Hey hey, welcome back to Taste of Truth Tuesdays. Today’s episode is where we dig into philosophy, culture, history, and the ideas that have shaped the world we’re living in—everything from classical texts to the American founding documents that are still very much relevant to how we should think about freedom today.


There’s a growing sense that something isn’t working.

You see it in the fragmentation of identity, the erosion of shared norms, and the breakdown of trust across institutions.

You don’t have to look very hard to notice it.

People don’t trust elections, medicine, or the media—sometimes all at once, and often for completely different reasons.

Dating is “freer” than it’s ever been, and yet it feels more unstable, more transactional, and more confusing than most people expected.

Corporations speak like moral authorities, issuing statements about justice and truth, while operating through incentives that have nothing to do with either.

Everything is still functioning. But less of it feels legitimate.

In my last piece, I traced one part of this problem back to a common assumption: that Christianity built the foundations of the West. But when you actually follow the development of those ideas, much of what we associate with Western thought—natural law, reason, and the structure of political life—has deeper roots in the Greco-Roman philosophical tradition.

That matters, because the frameworks we inherit shape what we think freedom is, and what we expect it to do.

This piece is a continuation of that question. Not only about where those ideas came from, but about what they require to hold together.

Because a free society doesn’t sustain itself on freedom alone. It depends on discipline, restraint, and a shared understanding of limits—conditions that the system itself cannot produce.

And when those begin to erode, the system doesn’t just break. It follows a pattern that’s been observed for a very long time.

Jefferson intentionally modeled the Virginia Capitol in Richmond on the Roman temple Maison Carrée (c. 16 BCE)

I. The Fear Beneath the Founding

This isn’t a new problem.

The relationship between freedom and instability shows up wherever societies try to govern themselves.

The American founding emerged out of that concern. The people designing the system weren’t just thinking about how to create liberty; they were trying to understand why it collapses.

The colonists weren’t casually referencing Rome. English translations of Vertot’s Revolutions that Happened in the Government of the Roman Republic (1720) were in almost every library, private or institutional, in British North America. They studied how free societies decay, how power shifts from shared trust into something self-serving, and how internal corruption (not just external threat) brings systems down.

They believed they were watching it happen in real time.

What they took from antiquity was not blind optimism about freedom, but caution.

And this wasn’t limited to classical history. As Bernard Bailyn observed, the colonists were immersed in dense and serious political literature, shaped by philosophy and by sustained reflection on the problem of power.

Part of what they were working with was an older line of thought running through Greek and Roman philosophy.

The idea that human life is not directionless. That there are patterns to how people live, and that some ways of living lead to stability and flourishing, while others lead to breakdown.

You can already see the foundation of this in Aristotle. He didn’t use the term “natural law,” but the structure is there. Human beings have a nature, and flourishing comes from living in alignment with it—not whatever we happen to want in the moment, but a way of life shaped by discipline, balance, and the cultivation of virtue over time.

The Stoics make this more explicit. They describe the world as ordered by reason—logos—and argue that human beings can come to understand that order.

From that perspective, moral truth isn’t something we invent. It’s something we discover. And law, at its best, should reflect that underlying structure rather than contradict it.

By the time you get to Rome, this idea is articulated more directly. Cicero describes a true law grounded in right reason and in agreement with nature—something universal, not dependent on custom or preference, but rooted in reality itself.

These ideas don’t disappear. They are carried forward and developed.

Christian thinkers later absorb and expand them, especially through Thomas Aquinas, who integrates Greek philosophy and Roman legal thought into a more explicit framework of natural law. And that influence is real. It’s part of the Western story whether we like it or not.

But that’s not the point of this piece.

What matters here is that by the time you reach the early modern period, this idea of a structured moral order—something that places limits on behavior and grounds freedom in discipline—is already well established.

You can see that continuity clearly in how the Founders and colonists read earlier political thought. Returning to those earlier sources, Plato describes how political systems degrade over time, arguing that excessive and undisciplined freedom can produce disorder, which eventually leads people to accept tyranny in the search for stability. Aristotle traces how democracies collapse when law gives way to persuasion and personality. Polybius maps the recurring cycle through which governments rise and decay.

What he described was called anacyclosis, a recurring cycle of political systems. Governments begin in relatively stable forms, rule by one, by a few, or by many, but over time they degrade. Kingship becomes tyranny. Aristocracy becomes oligarchy. Democracy, when it loses discipline, collapses into what he called ochlocracy, rule by the mob.

This wasn’t abstract to the colonists. As I said, they believed they were watching this pattern unfold in real time. And it shows up just as clearly in the political language of the founding era itself.


As Bailyn explains, monarchy, aristocracy, and democracy were each seen as capable of producing human happiness. But left unchecked, each would inevitably collapse into its corrupt form: tyranny, oligarchy, or mob rule.


Writings like Cato’s Letters were widely read in the colonies and helped shape how ordinary people understood government, power, and liberty.

What’s striking when you read Cato more closely is how little confidence its authors placed in moral restraint alone. The letters don’t describe freedom as unlimited expression or personal autonomy. The idea that belief, fear of God, or good intentions would keep power in check is treated as dangerously naive. Power is not self-regulating, and it is not made safe by the character or beliefs of those who hold it. It has to be exposed, limited, and actively resisted—because even institutions and ideas meant to restrain it, including religion, can be repurposed to justify its expansion.

Cato describes government more as a trust—one that exists to protect the conditions that make ordinary life possible.

As Cato writes:

“Power is like fire; it warms, it burns, it destroys. It is a dangerous servant and a fearful master.”

And more directly:

“What is government, but a trust committed…that everyone may, with the more security, attend upon his own?”

The assumption is clear. Power must be restrained. Freedom depends on it.

But in Cato’s framing, that restraint doesn’t come from structure alone. It depends on constant exposure and resistance. Freedom of speech and a free press aren’t treated as abstract rights, but as active safeguards—tools for uncovering corruption and preventing power from consolidating unchecked. The logic is simple but demanding: power does not correct itself. It expands, protects its own interests, and, if left unchallenged, begins to operate beyond the limits it was given.

The point of understanding the political cycles of revolution wasn’t to say that any one system was uniquely flawed. It was that all systems are vulnerable to the same underlying problem:

Human nature.

Self-interest eventually creeps in. Restraint erodes. Power shifts from a trust into something personal and extractive.

And once that shift happens, the form of government matters less than the character of the people within it. That thread runs directly into the founding.

The American system wasn’t designed as a pure democracy. It was an attempt to stabilize a problem earlier thinkers had already identified.

Rather than choosing a single form of government, the founders built a mixed system, blending elements of rule by one, rule by a few, and rule by many. An executive to act with decisiveness. A Senate to provide deliberation and continuity. A House to represent the people more directly.

This wasn’t accidental.

It reflected an awareness that each form of government carries its own risks, and that concentrating power in any one place tends to accelerate its corruption.

By distributing power across different institutions, the goal was to create tension within the system itself. Ambition would check ambition. Competing interests would slow the consolidation of power.

From my understanding, they weren’t trying to escape the cycle Polybius described. They were trying to manage it.

They weren’t designing a perfect system. They were attempting to design one built to withstand imperfect people.

But even that depended on something it could not guarantee.

In Federalist No. 10, James Madison writes:

“The latent causes of faction are thus sown in the nature of man.”

He’s not describing a temporary problem. He’s describing a permanent one.

Differences in opinion, interests, wealth, and temperament don’t disappear. They organize. They form groups. And those groups will sometimes pursue aims that are at odds with the rights of others or the stability of the system itself.

Madison’s conclusion is straightforward:

“The causes of faction cannot be removed… relief is only to be sought in the means of controlling its effects.”

That distinction is crucial. He doesn’t try to eliminate conflict or force unity. He assumes conflict is inevitable and builds a system around that reality.

Instead of requiring perfect discipline from individuals, the structure disperses power, multiplies interests, and forces negotiation. Representation slows decision-making. Scale makes domination more difficult.

Freedom is preserved not by removing conflict, but by structuring it.


They looked ahead with anxiety, not confidence, because they believed liberty was collapsing everywhere. New tyrannies had spread like plagues; the world had become, in their words, “a slaughterhouse.” Across the globe: the rulers of the East were almost universally absolute tyrants; Africa was described as scenes of tyranny, barbarism, confusion, and violence; France was ruled by arbitrary authority; Prussia was under absolute government; Sweden and Denmark had “sold their liberties”; Rome was burdened by civil and religious control; Germany was a hundred-headed hydra; Poland was consumed by chaos. Only Britain (and the colonies) were believed to still hold onto liberty. And even there, barely.

—From revolutionary-era political writings, as compiled by Bernard Bailyn


The University of Virginia Rotunda, modeled after the Roman Pantheon

II. Ordered Liberty and the Kind of Person It Requires

The founders believed in liberty, but not as an unlimited good. They believed in ordered liberty. Freedom that exists within a framework of responsibility, discipline, and civic virtue. The system they designed assumed a certain kind of person, one capable of self-governance, restraint, and participation in a shared moral world.

That assumption was not optional. It was structural. It’s easy to miss how much is built into that.

And this is where the modern tension emerges: the current understanding of freedom begins to diverge from its origins.

Classical liberalism, in its earlier form, was not about detaching individuals from all institutions, identities, or relationships, as Deneen argues in Why Liberalism Failed. It was about protecting individuals from tyranny while preserving the conditions necessary for a functioning society. It assumed the continued existence of family, community, religious frameworks, and shared norms.

But Deneen is right about one thing: early liberal thought did introduce something new.

John Locke, for example, reframed institutions like marriage as voluntary associations rather than fixed, inherited structures. That didn’t mean early liberal political philosophy was designed to erode the family. But it did change how those institutions were understood. It placed individual choice alongside social stability in a way that could be expanded over time.

To understand where this expansion comes from, you have to look at what came before it.


“Without freedom of thought, there can be no such thing as wisdom; and no such thing as publick liberty, without freedom of speech: Which is the right of every man, as far as by it he does not hurt and control the right of another; and this is the only check which it ought to suffer, the only bounds which it ought to know.” —Cato’s Letters, No. 15


III. The Moral Inheritance of the West

The Lia Fáil inauguration stone on the Hill of Tara, County Meath, Ireland

In many pre-Christian societies, moral life wasn’t organized primarily around abstract rules or universal doctrines, but around continuity. Identity was tied to lineage, family, and inherited roles. Authority came not from individual preference, but from what had been passed down—customs, obligations, and expectations shaped over generations. To live well wasn’t just a personal project. It meant upholding something larger than yourself: maintaining the reputation of your family, fulfilling your role within a community, and carrying forward a way of life that you didn’t create but were responsible for preserving.

You can see how this played out in places like Anglo-Saxon England, where social structure and legal life were more embedded in family and local custom than in centralized doctrine. Women, for example, could own property, inherit land, appear in legal proceedings, and in some cases exercise real economic and political influence. These weren’t modern equality frameworks, but they complicate the assumption that agency and rights only emerge through later “progress.”

That structure did more than organize society. It created cohesion. It gave people a shared reference point for what mattered, what was expected, and what should be restrained—even when no one was watching. Authority wasn’t something constantly renegotiated. It was inherited, lived, and reinforced through participation in a shared way of life.

Greek and Roman life was also structured around civic duty, hierarchy, and inherited roles.

Their moral frameworks reflected that structure. Thinkers like Aristotle emphasized virtue as balance, habits cultivated over time within a community, oriented toward harmony and the common good.

As Christianity spread, moral authority became less tied to lineage and local custom, and more anchored in universal doctrine—rules that applied across communities, not just within them. Obligation didn’t vanish, but it was increasingly reframed. Less about inherited roles within a specific people, more about the individual’s relationship to a broader moral order.

That shift didn’t happen all at once, and it’s not a simple story. The development of early Christianity, its integration into the Roman Empire, and the ways it reshaped intellectual life and authority are far more complex than a few paragraphs can capture here. I’ve gone into that in more detail elsewhere, particularly around the Constantinian period and the rise of revelation and fall of reason.

This development intensifies further with the rise of Protestantism, where that reframing of obligation becomes even more explicit: the movement from the Seven Deadly Sins to the Ten Commandments as the dominant moral framework.

Avarice (Avaritia), from “The Seven Deadly Sins,” engraved by Pieter van der Heyden after Pieter Bruegel the Elder, published by Hieronymus Cock, 1558

The Seven Deadly Sins (pride, greed, lust, envy, gluttony, wrath, and sloth) are not rules in the strict sense. They describe internal dispositions, patterns of character that distort judgment and pull a person out of balance. They are concerned with formation, with who you are becoming.

The Ten Commandments, by contrast, are structured as prohibitions. You shall not. They define boundaries, obedience, and transgression in relation to divine authority.

Both frameworks aim at moral order. But they operate differently. One is oriented toward the cultivation of character within a shared moral world. The other emphasizes compliance, law, and accountability before God.

The Protestant Reformation further reduced the role of mediating institutions, emphasizing personal conscience, direct access to scripture, and an individual relationship to truth. Authority became less external and more internalized, but also more individualized and less uniformly shared.

The emphasis is unmistakable. Moral responsibility is no longer primarily inherited or communal, but individual and direct.

This did not dissolve the community. But it did begin to relocate the moral center of gravity, from the maintenance of balance within a community, to the accountability of the individual before God.

A political system built on individual rights and self-governance emerged from a cultural framework that had already begun to center moral responsibility at the level of the individual.

At the same time, Christianity reshaped how the natural world was understood. Earlier traditions often treated nature as infused with meaning, order, or even divinity. Christianity maintained that the world was ordered, but no longer sacred in itself. It was created, not divine.

That distinction introduced a kind of distance. A world that is no longer sacred in itself becomes, over time, easier to treat as something external, something to study, measure, and ultimately use.

None of these shifts were inherently destabilizing on their own. But they altered the underlying framework.

Over time, they contributed to a gradual reorientation, one that made it easier to conceive of the individual as separate, autonomous, and capable of standing apart from inherited structures.

That development would later be expanded and amplified through liberal thought.

But the point is not that Protestant Christianity caused modern individualism. It is that it helped make it thinkable.

By the time you reach the Enlightenment and the American founding, those earlier shifts had not disappeared. They had been carried forward and reworked into a new framework—one increasingly shaped by reason, not as a rejection of religion entirely, but as a refusal to let authority go unquestioned simply because it claims moral or divine legitimacy.


The state of nature has a law of nature to govern it, which obliges every one: and reason, which is that law, teaches all mankind, who will but consult it, that being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions… (and) when his own preservation comes not in competition, ought he, as much as he can, to preserve the rest of mankind, and may not, unless it be to do justice on an offender, take away, or impair the life, or what tends to the preservation of the life, the liberty, health, limb, or goods of another.

—John Locke, Second Treatise of Government, on the rights to life, liberty, and property of ourselves and others


IV. When Freedom Loses Its Structure

Over the next two centuries, that framework continued to expand. Early expansions focused on political participation—who could vote, who counted as a citizen, and who could take part in public life.

By the mid-20th century, that expansion accelerated through civil rights movements, which pushed the language of equality and access further into law, culture, and institutions.

In the 1960s into the 1970s, the focus widened into personal life. Questions of family, marriage, sexuality, and individual identity were increasingly reframed in terms of autonomy and personal choice.

The sexual revolution, in particular, was widely understood as an expansion of personal freedom: loosening traditional constraints around sex, marriage, and family life. But over time, some of the assumptions underlying that shift have come under renewed scrutiny. The idea that women can navigate complete sexual and relational autonomy without significant cost appears increasingly fragile, especially in the absence of the social structures that once provided stability and direction.

Expanding rights changes the system, not just access to it.

What’s often assumed is that this expansion is self-justifying—that extending rights is always a net good, and that the system can absorb that expansion without consequence. But that assumption is rarely examined.

As the scope of participation widens, so does the demand placed on the system and on the people within it.

A political system built on equal participation assumes a level of judgment, responsibility, and long-term thinking that is not evenly distributed. It assumes that individuals, given more freedom, will be able to navigate it without undermining the conditions that make it possible in the first place.

We also see that, in modern times, the cultural and institutional structures that once shaped behavior (family expectations, community standards, shared moral frameworks) have become much weaker, more contested, or easier to reject.

For most of known human history, moral behavior wasn’t just a matter of personal conviction. It was embedded in small, stable, reputation-based communities where actions were visible, remembered, and judged over time. Behavior carried consequences because it was tied to relationships that endured.

That community system relied on three conditions: shared standards, stable enforcement, and long-term relationships. As those weaken, accountability becomes less consistent or nonexistent. Not because human nature has changed, but because the structures that made behavior visible and tied to consequences have broken down.

Part of that shift is tied to the broader move toward secularism. As religious frameworks lose authority, the shared narratives that once provided cohesion, meaning, and moral orientation begin to fragment. This doesn’t eliminate the human need for structure—it shifts where people look for it. It disperses into competing sources of identity, morality, and meaning.

In The Republic, Plato makes a similar observation about belief itself. What matters is not just what people claim to believe, but whether those beliefs hold under pressure. “We must test them… to see whether they will hold to their convictions when they are subjected to fear, pleasure, or pain.”

Without shared structures reinforcing those convictions, belief becomes more reactive, more situational, and more easily reshaped by external forces.

We are left with a society of multiple, incompatible systems of belief—each with its own values, demands, and claims to legitimacy, but no widely accepted structure holding them together. 

What was once a shared moral world becomes a contested one.

In Propaganda, Edward Bernays makes a blunt observation: the conscious and intelligent manipulation of the masses is not only possible, but essential to managing modern society. That insight becomes more relevant, not less, in the absence of a shared framework.

Because when a society loses the unifying structures that once held it together, the vacuum doesn’t stay empty. New ideologies rush in (secular, political, cultural) offering belonging, morality, and meaning, often with more intensity than the systems they replaced.

More autonomy. Less formation. More fragmentation. Less agreement on what freedom even demands.

This raises a harder question: whether removing earlier constraints produced the kind of freedom it promised, or simply replaced one set of pressures with another.

As that imbalance deepens, people don’t simply become more independent. They look for stability elsewhere.

This is where Deneen’s observation becomes useful, even if I don’t fully agree with his framing. As traditional institutions weaken, dependence doesn’t disappear—it shifts. From local, relational structures to larger, more abstract systems like the state and the market.

Another way to see this is that societies don’t just rely on formal institutions. They rely on something less visible—a kind of cultural immune system. Shared norms, expectations, and informal boundaries that regulate behavior without constant enforcement.

When those weaken, systems don’t become freer. They become easier to exploit.

One of the clearest examples of that vulnerability is the modern corporation.

The American system was designed in deep suspicion of concentrated power, yet over time it has extended expansive protections to corporate entities, allowing large institutions, backed by wealth, media, and legal abstraction, to shape public life in ways the founding framework was poorly equipped to restrain. 

The founders were wary of concentrated power, but they were not designing a system for multinational corporations with vast economic and informational reach. Over time, constitutional doctrine expanded in ways that made these entities increasingly difficult to limit, culminating in decisions like Citizens United, where the Court held that independent political spending by corporations and unions could not be restricted under the First Amendment.

This is part of the same pattern. A system built to preserve liberty becomes easier to exploit when power no longer appears as a king, a church, or a visible ruling class, but as diffuse institutions operating through law, markets, and media.

And as that has happened, trust has eroded, cooperation has broken down, and the very conditions that made freedom possible have begun to unravel.

But I don’t think that was the original aim of classical liberalism.

It’s not that it set out to dismantle the community. It’s that over time, through cultural, economic, and technological changes, the balance between freedom and structure eroded. And now we’re dealing with the consequences of that imbalance.

The more I read, the harder it is to ignore the tension at the heart of the American Revolution itself.

It speaks the language of liberty, but often operated through pressure, surveillance, and social enforcement. Groups like the Sons of Liberty didn’t just resist authority—they replaced it with their own forms of coercion, loyalty tests, and public punishment.

The Sons of Liberty regularly tarred and feathered those who offended them or who served as officers of the British government.

I am not saying the ideals were wrong. But liberty, on its own, doesn’t sustain itself. When formal authority is rejected, power doesn’t disappear. It simply relocates.

And without shared discipline or internal restraint, it often reappears in more fragmented, less accountable forms.

Liberty is not the absence of power.

It’s a question of how power is structured, restrained, and lived.

There’s another reaction to this tension that’s worth acknowledging, even if it goes too far.

Thinkers like Mencken argued that the real problem isn’t the system, but the people—that democracy inevitably lowers the standard because it reflects the average citizen. 

And I understand the sentiment; but that framing misses something important.

The issue isn’t that people are inherently incapable of self-government.

It’s that self-government requires habits, discipline, and formation that a system alone cannot produce.

What makes this moment particularly interesting is that the unease people feel doesn’t map neatly onto political categories.

Across both the left and the right, there’s a growing intuition that something isn’t functioning the way it should.

You see it in the rare points of agreement. Public frustration over the lack of transparency in the Epstein files cuts across political lines, with overwhelming majorities convinced that key information is still being withheld and justice is yet to be served. 

You see it in foreign policy as well. Even in a deeply divided country, there is broad skepticism toward escalating conflicts like the war involving Iran, with many of us questioning the purpose, cost, and direction of involvement. 

That concern isn’t new. It shows up clearly in Cato’s Letters, where distrust of power wasn’t abstract—it was grounded in history. The Roman Empire was a constant reference point, especially in how standing armies, once established, could be turned inward, gradually eroding liberty and consolidating control.

They weren’t against defense. But they were deeply wary of permanent military power and foreign entanglements that primarily served those in control, not the public. War wasn’t just protection. It was one of the fastest ways power could expand.

And it’s hard not to wonder how they would look at what we now call the military-industrial complex—how permanent it’s become, how embedded it is, and how easily it justifies its own expansion. 

Power attracts interests that seek to influence it through money, proximity, and favor and over time those interests become embedded within the system itself, shaping decisions in ways that are no longer aligned with the public.

Today, governmental power no longer feels like a trust. We the people, those of us who want to put America and her people’s needs first, are witnessing an occupied government like never before. Our institutions are no longer held accountable. They have become self-protective and disconnected from the very people they’re meant to serve.


“Power, in proportion to its extent, is ever prone to wantonness.” — Josiah Quincy Jr., Observations on the Boston Port-Bill (1774)

“The supreme power is ever possessed by those who have arms in their hands.” (colonial political writing, mid-18th century)

Standing armies, they warned, could become “the means, in the hands of a wicked and oppressive sovereign, of overturning the constitution… and establishing the most intolerable despotism.” — Simeon Howard, sermon (c. 1773–1775)

Which is why Jefferson insisted on keeping “the military… subject to the civil power,” not the other way around (1774).


There’s also empirical evidence from over a decade ago pointing in that direction. 

A 2014 study by Martin Gilens and Benjamin Page, sometimes known as “the oligarchy study,” analyzed nearly 1,800 policy decisions in the United States and found that economic elites and organized business interests have a substantial independent influence on policy outcomes, while average citizens have little to no independent impact.

Policies favored by the majority tend to pass only when they align with the preferences of the wealthy. When they don’t, public opinion has almost no measurable effect.

This one study doesn’t prove that the system has fully collapsed into oligarchy.

But it does reinforce our intuition that something has shifted, that power is no longer functioning as it should and that representation is much more limited than we assume.

What I’ve learned from putting this together is that this concern is not new. It’s ancient.

It’s the same fear that appears in the Greek philosophers, carries through Rome, reemerges in the founding era, and is now unfolding again in modern society.

This is the same dynamic Madison was pointing to in Federalist No. 10. When legitimacy starts to weaken, people don’t simply disengage.

They form groups around competing explanations for what’s gone wrong—different interests, different priorities, different visions of what should replace it.

Within the modern left, those responses are not all the same.

Establishment Democrats still operate within existing systems. Liberals tend to push for reform through policy. Progressives begin to question the structure itself. And further out, democratic socialists and revolutionary groups are not aiming to fix the system, but to replace it entirely.

That distinction matters. Because once you move from reform to replacement, you’re no longer arguing about how to use a system.

You’re arguing about whether it should exist at all. At the far end of that spectrum, some movements push toward dismantling foundational structures entirely, treating them as irredeemably corrupt.

You can see this in specific, coordinated efforts.

Large-scale protest movements are one example. The recent “No Kings” demonstrations on March 28th, 2026 brought 8 million people into the streets across the United States. With more than 3,300 coordinated events spanning all 50 states, the mobilization set a record for the largest single day of protest in U.S. history.

They have planned actions like May Day strikes, where activists are calling for mass labor disruption and economic shutdown. And organized noncooperation campaigns designed to train people in how to resist, overwhelm, or halt existing systems altogether.

Their logic is that capitalism is no longer something to work within, but something to resist, bypass, or bring to a stop.

Not reform. But disruption and replacement.

I’ve spent enough time around these spaces to understand the appeal. When institutions feel captured or unresponsive, the instinct is not to reform them, but to burn them down.

Freedom is not collapsing because people have rejected it. It’s becoming unstable because we can no longer agree on what it is, what it requires, or what its limits should be.

And as more of the burden falls on individuals while leadership fails to model it, people start to feel both responsible and powerless. And that’s where apathy begins to take hold: when it no longer feels like it matters, especially to the people at the top.

United States Capitol Rotunda — The Dome Painting “The Apotheosis of Washington” Painted by Constantino Brumidi in 1865

V. The Human Problem at the Center of Freedom

A republic doesn’t survive on laws alone.

It survives on citizens who can exercise restraint, who understand limits, who see freedom not just as permission, but as responsibility.

One way to understand this shift more clearly is through moral psychology. Human beings don’t arrive at morality purely through reasoning. We rely on a set of underlying intuitions (care, fairness, loyalty, authority, and a sense of the sacred) that shape how we judge right and wrong before we ever explain why.

In more conservative or traditional societies, these moral intuitions tend to operate together rather than in isolation. Care, fairness, loyalty, authority, and a sense of the sacred reinforce one another, creating a more unified moral framework. People may still disagree, but they are drawing from a shared moral language, with expectations around family, roles, restraint, and what should or should not be done.

But that kind of shared moral framework doesn’t hold evenly across modern society.

The second way to see this is by looking at how these moral intuitions cluster into distinct patterns across different groups. In the chart, you can see three broad orientations: progressives, conservatives, and libertarians. Progressives tend to cluster around care and fairness. Conservatives draw from a wider range, incorporating loyalty, authority, and a sense of the sacred alongside those concerns. Libertarians center heavily on liberty, placing less weight on the others. What looks like a disagreement about politics is often a difference in moral orientation—people emphasizing entirely different parts of the same moral landscape.

And the differences don’t just show up in orientation, but in intensity.

The bar graph illustrates the pattern more clearly by showing how different groups actually prioritize these moral intuitions.

Secular liberals and the religious left tend to emphasize care and fairness most strongly, focusing on reducing harm and promoting equality. By contrast, more traditional or socially conservative groups draw more evenly across a broader set of values, including loyalty, authority, and a sense of the sacred alongside care and fairness. Libertarians tend to narrow even further, prioritizing individual liberty while placing less emphasis on collective or traditional moral structures. 

The result isn’t just disagreement over morality—it’s a difference in what people are even measuring in the first place, which makes shared judgment harder to sustain.

You can see the split in how people respond to the same breakdown in trust.

For those on the left, freedom means removing constraint entirely, which leads to a push to dismantle systems they see as corrupt or oppressive.

For those on the right, it produces deep suspicion: distrust of elections, media, public health authority, and government itself, along with a desire to restore order, stability, and clearer boundaries. In some cases, that turns into nostalgia for earlier structures: family roles, gender norms, and forms of religious authority that are seen as more stable, even if that restoration comes with its own trade-offs.

These aren’t just different political positions. 

They reflect different instincts about what matters most and different assumptions about what freedom is for.

And both risk missing the deeper question.

Not just: what system creates freedom?

But what kind of people can sustain it?

This is where Aristotle’s framework becomes difficult to ignore. It starts from the premise that people are not equal in their capacity for judgment or self-governance, and builds from there rather than pretending those differences don’t matter. In that sense, it may be closer to the truth than many modern assumptions.

Those differences show up in how people live, how they make decisions, and how they exercise restraint. That’s where his framework of virtue comes in—not as an ideal, but as a way of describing what it actually takes to live well and participate in a functioning society.

He didn’t think virtue was about perfection. He thought of it as balance. Courage sits between cowardice and recklessness.

Self-control between indulgence and insensibility.

Generosity between stinginess and excess.

Virtue is not automatic. It is cultivated. And it can be lost.

He applied that same logic to political systems. A government can exist in a healthy form, oriented toward the common good, or in a corrupted form, serving only a faction. At that point, the difference isn’t just structural. It comes down to character.

One tension that keeps resurfacing in political thought is the gap between equality in principle and inequality in capacity.

You can see this play out in small, everyday ways. Give ten people the same freedom, the same opportunity, the same set of rules—and you don’t get the same outcomes. Some plan ahead. Some act impulsively. Some take responsibility. Others look for ways around it. The structure is equal, but the response isn’t. 

Because human beings are not identical in judgment, discipline, or temperament. Some are more capable of long-term thinking, self-restraint, and navigating complexity than others.

A free society doesn’t eliminate those differences. It has to operate in spite of them. And that creates the real challenge.

A system built on self-government depends on habits it cannot enforce, on restraint it cannot require, and on a shared understanding of limits it cannot guarantee.

Which raises a difficult question:

What happens when a system built on equal freedom depends on unequal capacities to sustain it?

Freedom is not self-sustaining. The more we treat it like it is, the more fragile it becomes. 

When those conditions weaken, the structure doesn’t collapse all at once. It loosens, then drifts, and eventually begins to follow the same pattern that earlier thinkers warned about. 

Not because the idea of freedom was flawed, but because it was always contingent on something more demanding than we like to admit.

And that’s what makes the older warnings so difficult to ignore. The concerns that show up in Greek philosophy, carry through Rome, and reappear in the founding era weren’t tied to one moment in history. They’re describing something recurring. Power doesn’t stay put. It accumulates. It protects itself. And without pressure against it, it shifts (often quietly) into something more self-serving than it was at the start.

The documents and letters from the founding era weren’t written for a stable world. They were written by people who assumed this drift was inevitable. That’s why they obsessed over things like faction, corruption, and the abuse of power. Not just as political problems, but as moral ones. Because once corruption sets in, it doesn’t just distort institutions. It reshapes the people within them. A corrupt government cannot be a just government. That’s why they treated free speech, a free press, and an informed public less like ideals and more like essential tools—ways of forcing power into the open before it had the chance to consolidate.

Cato’s Letters, in particular, were relentless on this point. Their authors knew that a society consumed with wealth, status, and self-interest doesn’t just become unequal. It becomes easier to manipulate, easier to divide, and eventually less capable of governing itself at all. Civic virtue wasn’t a side note. It was the condition that made freedom possible in the first place.

And when you look at it from that angle, it doesn’t feel like you’re reading writings from the 18th century. It feels familiar, much closer to home. 

Of course, the scale is different now. The mechanisms are different. But the tension is very much the same. Armed with modern technology, governments and corporations operate with a level of reach the founders never could have imagined. Information is filtered, behavior is shaped, and power often moves through systems that don’t look like power at all. You don’t always see it directly. But you feel the effects of it.

So the responsibility doesn’t go away. It never did.

If anything, it becomes less obvious and more necessary at the same time.

A system like this doesn’t hold because it was designed well. It holds, when it does, because enough people are still paying attention. Still pushing back. Still unwilling to let power define its own limits.

And once that slips…once that expectation fades, the structure doesn’t fail all at once. It just stops holding in the way it used to. And the pattern continues.


Resources: 

This piece pulls from a mix of ancient sources, founding-era writing, and modern critiques. Not because I agree with all of them, but because each one sharpens a different part of the problem. If you want to work through it yourself, these are the ones that shaped how I’m thinking about it:

Corporate Rights and the Most Absurd Legal Fiction: A Reactionary History and Analysis of Corporate Personhood

Bernard Bailyn — The Ideological Origins of the American Revolution
Less about what the founders built, more about what they were reacting to—especially the collapse of earlier republics.

Alexander Hamilton, James Madison, John Jay — The Federalist Papers
A direct look at how they thought about human nature, power, and why freedom needs structure to hold.

Patrick Deneen — Why Liberalism Failed
I don’t agree with all of it, but the critique of modern individualism and the erosion of shared norms is worth taking seriously.

Plato — The Republic
Still one of the clearest descriptions of how excessive freedom destabilizes a society.

Aristotle — Politics
Helpful for understanding how democracies drift when law loses authority and personality takes over.

Polybius — Histories
His framework for how governments rise and decay is hard to unsee once you see it.

Louise Perry — The Case Against the Sexual Revolution
A modern example of how expanded freedom doesn’t always produce the outcomes people expect.

Jonathan Haidt — The Righteous Mind
Useful for understanding why reason alone doesn’t hold societies together—and why people experience morality so differently.

Charles Freeman — The Closing of the Western Mind
Explores how early Christianity reshaped intellectual life in the West.
Also recommended: The Opening of the Western Mind

Roger E. Olson — The Story of Christian Theology
A clear overview of how Christian thought developed over time and how its internal tensions evolved.

Judith Bennett — Women in the Medieval English Countryside
Insight into everyday life, structure, and roles in pre-modern society.

Christine Fell — Women in Anglo-Saxon England
A look at social organization and cultural norms in early English society.

Sacred or Strategic? Rethinking the Christian Origin Story

The Bible Isn’t History and Trump Isn’t Your Savior

It’s Been a Minute… Let’s Get Real

Hey Hey, welcome back to Taste of Truth Tuesdays! It’s been over a month since my last episode, and wow—a lot has happened. Honestly, I’ve been doing some serious soul-searching and education, especially around some political events that shook me up.

I was firmly against Trump’s strikes on Iran. And the more I dug in, the more I realized how blind I’d been completely uneducated and ignorant about the massive political power Zionism holds in this country. And it’s clear now: Trump is practically bent over the Oval Office for Netanyahu. The Epstein files cover-up only confirms that blackmail and shadow control are the real puppet strings pulling at the highest levels of power. Our nation has been quietly occupied since Lyndon B. Johnson’s presidency and that’s a whole other episode I’ll get into later.

But what really cracked something in me was this:

In the 1990s, Trump sponsored Elite’s “Look of the Year” contest—a glitzy, global modeling search that lured teenage girls with promises of fame and fashion contracts. Behind the scenes, it was a trafficking operation. According to The Guardian’s Lucy Osborne and the BBC documentary Scouting For Girls: Fashion’s Darkest Secret, these girls weren’t being scouted—they were being sold to rich businessmen.

This wasn’t just proximity. Trump was part of it.

Once I saw that, the religious right’s worship of him stopped looking like misguided patriotism and started looking like mass delusion. Or complicity. Either way, I couldn’t unsee it.

And that’s when I started asking the bigger questions: What else have we mistaken for holy? What else have we accepted as truth without scrutiny?

For now, I want to cut to the heart of the matter, the problem at the root of so much chaos: the fact that millions of Christians still believe the Bible is a literal historical document.

This belief doesn’t just distort faith; it fuels political agendas, end-times obsession, and yes, even foreign policy disasters. So, let’s dig into where this all began, how it’s evolved, and why it’s time we rethink everything we thought we knew about Scripture.


For most Christians, the Bible is more than a book; it’s the blueprint of reality, the inspired Word of God, infallible and untouchable. But what if that belief wasn’t original to Christianity? What if it was a reaction… a strategic response to modern doubt, historical criticism, and the crumbling authority of the Church?

In this episode, we’re pulling back the veil on the doctrine of biblical inerrancy, the rise of dispensationalism, and the strange marriage of American politics and prophetic obsession. From the Scofield Bible to the belief that modern-day Israel is a fulfillment of God’s plan, we’re asking hard questions about the origins of these ideas.

As Dr. Mark Gregory Karris said when he joined us on a previous episode: “Can you imagine two different families? In one, the Bible is the absolute inerrant word of God. Every word, every jot and tittle, so to speak, is meant to be in there due to the inspiration of God. And so every story you read (you know, God killing Egyptian babies, God flooding the entire planet and thinking, well yeah, there’s gonna be babies gasping for air and drowning grandmothers and all these animals) is seen as absolute objective truth. But then in another family: oh, these are myths. These are sacred myths that people can learn from. No, that wasn’t God speaking and smiting them and burning them alive because they touched this particular ark; this is how they thought, given their minds at the time, given their understandings. And then, like you talked about, it becomes: wow, look at that aspect of humanity, interesting how they portrayed God. That’s cool, instead of: oh my gosh, I need 3–4 years of therapy because I was taught the Bible in a particular way.”

Once you trace these doctrines back to their roots, it’s not divine revelation you find: it’s human agendas.

Let’s get uncomfortable. Was your faith formed by sacred truth… or centuries of strategic storytelling?

How Literalism Took Over

In the 19th century, biblical literalism became a kind of ideological panic room. As science, archaeology, and critical scholarship began to chip away at traditional interpretations, conservative Christians doubled down. Instead of exploring the Bible as a complex, layered anthology full of metaphor, moral instruction, and mythology, they started treating it like a divine press release. Every word had to be accurate. Every timeline had to match. Every contradiction had to be “harmonized” away.

The Myth of Inerrancy

One of the most destructive byproducts of this era was the invention of biblical inerrancy. Yes, invention. The idea that the Bible is “without error in all that it affirms” isn’t ancient; it’s theological propaganda, most notably pushed by B.B. Warfield and his peers at Princeton. Rogers and McKim wrote extensively about how this doctrine was manufactured, not handed down from the apostles as many assume. We dive deeper into all that—here.

Inerrancy teaches that the Bible is flawless, even in its historical, scientific, and moral claims. But this belief falls apart under even basic scrutiny. Manuscripts don’t agree. Archaeological timelines conflict with biblical ones. The Gospels contradict each other. And yet this doctrine persists, warping believers’ understanding and demanding blind loyalty to texts written by fallible people in vastly different cultures.

That’s the danger of biblical inerrancy: it treats every verse as historical journalism rather than layered myth, metaphor, or moral instruction. But what happens when you apply that literalist lens to ancient origin stories?

📖 “Read as mythology, the various stories of the great deluge have considerable cultural value, but taken as history, they are asinine and absurd.” — John G. Jackson, Christianity Before Christ

And yet, this is the foundation of belief for millions who think Noah’s Ark was a literal boat and not a borrowed flood myth passed down and reshaped across Mesopotamian cultures. This flattening of myth into fact doesn’t just ruin the poetry; it fuels bad politics, end-times obsession, and yes… Zionism.

And just to be clear, early Christians didn’t read the Bible this way. That kind of rigid literalism didn’t emerge until centuries later…long after the apostles were gone. We’ll get to that.

When we cling to inerrancy, we’re not preserving truth. We’re missing it entirely.

Enter: Premillennial Dispensationalism

If biblical inerrancy was the fuel, C.I. Scofield’s 1909 annotated Bible was the match. His work made premillennial dispensationalism a household belief in evangelical churches. For those unfamiliar with the term, here’s a quick breakdown:

  • Premillennialism: Jesus will return before a literal thousand-year reign of peace.
  • Dispensationalism: History is divided into distinct eras (or “dispensations”) in which God interacts with humanity differently.

When merged, this theology suggests we’re living in the “Church Age,” which will end with the rapture. Then comes a seven-year tribulation, the rise of the Antichrist, and finally, Jesus returns for the ultimate battle, after which He’ll rule Earth for a millennium. Sounds like the plot of a dystopian film, right? And yet, this became the dominant lens through which American evangelicals interpret reality.

The result? A strange alliance between American evangelicals and Zionist nationalism. You get politicians quoting Revelation like it’s foreign policy, pastors fundraising for military aid, and millions of Christians cheering on war in the Middle East because they think it’ll speed up Jesus’ return.

But here’s what I want you to take away from this episode today: none of this works unless you believe the Bible is literal, infallible, and historically airtight.

How This Shaped Evangelical Culture and Politics

The Scofield Bible didn’t just change theology. It changed culture. Dispensationalist doctrine seeped into seminaries like Dallas Theological Seminary and Moody Bible Institute, influencing generations of pastors. It also exploded into popular culture through Hal Lindsey’s The Late Great Planet Earth and the Left Behind series. Fiction, prophecy, and fear blurred into one big spiritual panic attack.

But perhaps the most alarming shift came in the political realm. Dispensationalist belief heavily influences evangelical support for the modern state of Israel. Why? Because many believe Israel’s 1948 founding was a prophetic event. Figures like Jerry Falwell turned theology into foreign policy. His organization, the Moral Majority, was built on an unwavering belief that supporting Israel was part of God’s plan. Falwell didn’t just preach this; he traveled to Israel, funded by its government, and made pro-Israel advocacy a cornerstone of evangelical identity.

This alignment between theology and geopolitics hasn’t faded. In the 2024 election cycle, evangelical leaders ranked support for Israel on par with anti-abortion stances. Ralph Reed, founder of the Faith and Freedom Coalition, explicitly said as much. Donald Trump even quipped that “Christians love Israel more than Jews.” Whether that’s true or not, it reveals just how deep this belief system runs.

And the propaganda doesn’t stop there. Israel’s Foreign Ministry is currently funding a week-long visit for 16 prominent young influencers aligned with Donald Trump’s MAGA and America First movements, part of an ambitious campaign to reshape Israel’s image among American youth.

But Let’s Talk About the Red Flags

This isn’t just about belief; it’s about control. Dispensationalist theology offers a simple, cosmic narrative: you’re on God’s winning team, the world is evil, and the end is near. There’s no room for nuance, no time for doubt. Just stay loyal, and you’ll be saved.

This thinking pattern isn’t exclusive to Christianity. You’ll find it in MLMs and some conspiracy-theory communities. The recipe is the same: create an in-group with secret knowledge, dangle promises of salvation or success, and paint outsiders as corrupt or deceived. It’s classic manipulation: emotional coercion wrapped in spiritual language.

And let’s not forget the date-setting obsession. Hal Lindsey made a career out of it. People still point to blood moons, earthquakes, and global politics as “proof” that prophecy is unfolding. If you’ve ever been trapped in that mindset, you know how addictive and anxiety-inducing it can be.

BY THE WAY, it’s not just dispensationalism or the Scofield Bible that fuels modern Zionism. The deeper issue is, if you believe the Bible is historically accurate and divinely orchestrated, you’re still feeding the ideological engine of Zionism. Because at its core, Christianity reveres Jewish texts, upholds Jewish chosenness, and worships a Jewish messiah. That’s not neutrality it’s alignment.

If this idea intrigued you, you’re not alone. There’s a growing body of work unpacking how Christianity’s very framework serves Jewish supremacy, whether intentionally or not. For deeper dives, check out Adam Green’s work over at Know More News on Rumble, and consider reading The Jesus Hoax: How St. Paul’s Cabal Fooled the World for Two Thousand Years. You don’t have to agree with everything to realize: the story you were handed might not be sacred it might be strategic.

Why This Matters for Deconstruction

For me, one of the most painful parts of deconstruction was realizing I’d been sold a bill of goods. I was told the Bible was the infallible word of God. That it held all the answers. That doubt was dangerous. But when I began asking real questions, the entire system started to crack.

The doctrine of inerrancy didn’t deepen my faith… it limited it. It kept me from exploring the Bible’s human elements: its contradictions, its cultural baggage, and its genuine beauty. The truth is that these texts were written by people trying to make sense of their world and their experiences with the divine. They are not divine themselves.

Modern Scholarship Breaks the Spell

Modern biblical scholarship has long since moved away from the idea of inerrancy. When you put aside faith-based apologetics and look honestly at the evidence, the traditional claims unravel quickly:

  • Moses didn’t write the Torah. Instead, the Pentateuch was compiled over centuries by multiple authors, each with their own theological agendas (see the JEDP theory).
  • King David may be more legend than history. Outside of the Bible, the evidence for him is thin, and there’s nothing to suggest he ruled a vast kingdom.
  • The Gospels weren’t written by Matthew, Mark, Luke, and John. Those names were added later. The original texts are anonymous and they often contradict each other.
  • John didn’t write Revelation. Not the Apostle John, anyway. The Greek and style are completely different from the Gospel of John. The real author was probably some unknown apocalyptic mystic on Patmos, writing during Roman persecution.

And yet millions still cling to these stories as literal fact, building entire belief systems and foreign policies on myths and fairy tales.


🧠 Intellectual Starvation in Evangelicalism

Here’s the deeper scandal: it’s not just that foundational Christian stories crumble under modern scrutiny. It’s that the church never really wanted you to think critically in the first place.

Mark Noll, a respected evangelical historian, didn’t mince words when he wrote:

“The scandal of the evangelical mind is that there is not much of an evangelical mind.”

In The Scandal of the Evangelical Mind, Noll traces how American evangelicalism lost its intellectual life. It wasn’t shaped by a pursuit of truth, but by populist revivalism, emotionalism, and a hyper-literal obsession with “the end times.” The same movements that embraced dispensationalism and biblical inerrancy also gutted their communities of academic rigor, curiosity, and serious theological reflection.

The result? A spiritually frantic but intellectually hollow faith—one that discourages questions, mistrusts scholarship, and fears nuance like it’s heresy.

Noll shows that instead of grappling with ambiguity or cultural complexity, evangelicals often default to reactionary postures. This isn’t just a relic of the past. It’s why so many modern Christians cling to false authorship claims, deny historical context, and accept prophecy as geopolitical fact. It’s why Revelation gets quoted to justify Zionist foreign policy without ever asking who actually wrote the book or when, or why.

This anti-intellectualism isn’t an accident. It was baked in from the start.

But Noll doesn’t leave us hopeless. He offers a call forward: for a faith that engages the world with both heart and mind. A faith that can live with tension, welcome complexity, and evolve beyond fear-driven literalism.

What Did the Early Church Actually Think About Scripture?

Here’s what gets lost in modern evangelical retellings: the earliest Christians didn’t treat Scripture the way today’s inerrantists do.

For the first few centuries, Christians didn’t even have a finalized Bible. There were letters passed around, oral traditions, a few widely recognized Gospels, and a whole lot of discussion about what counted as authoritative. It wasn’t until the fourth century that anything close to our current canon was even solidified. And even then, it wasn’t set in stone across all branches of Christianity.

Church fathers like Origen, Clement of Alexandria, and Irenaeus viewed Scripture as spiritually inspired but full of metaphor and mystery. They weren’t demanding literal accuracy; they were mining the texts for deeper meanings. Allegory was considered a legitimate, even necessary, interpretive method. Scripture was read devotionally and theologically, not scientifically or historically. In other words, it wasn’t inerrancy that defined early Christian engagement with Scripture, it was curiosity and contemplation.

For a deeper dive, check out The Gnostic Informant’s incredible documentary that uncovers the first hundred years of Christianity, a period that has been systematically lied about and rewritten. It reveals how much of what we take for granted was shaped by political and theological agendas far removed from the original followers of Jesus.

If you’re serious about understanding the roots of your faith or just curious about how history gets reshaped, this documentary is essential viewing. It’s a reminder that truth often hides in plain sight and that digging beneath the surface is how we reclaim our own understanding.

Protestantism: A Heretical Offshoot Disguised as Tradition

The Protestant Reformation shook things up in undeniable ways. Reformers like Martin Luther and John Calvin challenged the Catholic Church’s abuses and rightly demanded reform. But what’s often missed (or swept under the rug) is how deeply Protestantism broke with the ancient, historic Church.

By insisting on sola scriptura—Scripture alone—as the sole authority, the Reformers rejected centuries of Church tradition, councils, and lived community discernment that shaped orthodox belief. They didn’t invent biblical inerrancy as we know it today, but their elevation of the Bible above all else cracked the door wide open for literalism and fundamentalism to storm in.

What began as a corrective movement turned into a theological minefield. Today, Protestantism isn’t a single coherent tradition; it’s a sprawling forest of over 45,000 different denominations, all claiming exclusive access to “the truth.”

This fragmentation isn’t accidental… it’s the logical outcome of rejecting historic continuity and embracing personal interpretation as the final authority.

Far from preserving the faith of the ancient Church, Protestantism represents a fractured offshoot: one that often contradicts the early Church’s beliefs and teachings. It trades the richness of lived tradition and community wisdom for a rigid, literalistic, and competitive approach to Scripture.

The 20th century saw this rigid framework perfected into a polished doctrine demanding total conformity and punishing doubt. Protestant fundamentalism turned into an ideological fortress, where questioning is treated as betrayal, and theological nuance is replaced by black-and-white dogma.

If you want to understand where so much of modern evangelical rigidity and end-times obsession comes from, look no further than this fractured legacy. Protestantism’s break with the ancient Church set the stage for the spiritual and intellectual starvation that Mark Noll so powerfully exposes.

Rethinking the Bible

Seeing the Bible as a collection of human writings about God rather than the literal word from God opens up space for critical thinking and compassion. It allows us to:

  • Study historical context and cultural influences.
  • Embrace the diversity of perspectives in Scripture.
  • Let go of rigid interpretations and seek core messages like love, justice, and humility.
  • Move away from proof-texting and toward spiritual growth.
  • Reconcile faith with science, reason, and modern ethics.

When we stop demanding that the Bible be perfect, we can finally appreciate what it actually is: a complex, messy, beautiful attempt by humans to understand the sacred.

This shift doesn’t weaken faith; I believe it strengthens it.

It moves us away from dogma disguised as certainty and into something deeper, something alive. It opens the door for real relationship, not just with the divine, but with each other. It makes space for growth, for disagreement, for honesty.

And in a world tearing itself apart over whose version of truth gets to rule, that kind of open-hearted spirituality isn’t just refreshing; it’s essential.

Because if your faith can’t stand up to questions, history, or accountability… maybe it was never built on truth to begin with.

Let’s stop worshiping the paper and start seeking the presence.

🔎 Resources Worth Exploring:

  • “The Jesus Hoax: How St. Paul’s Cabal Fooled the World for Two Thousand Years” by David Skrbina
  • “Christianity Before Christ” by John G. Jackson
  • “The Scandal of the Evangelical Mind” by Mark Noll – A scathing but sincere critique from within the evangelical tradition itself. Noll exposes how anti-intellectualism, biblical literalism, and cultural isolationism have gutted American Christianity’s ability to engage the world honestly.
  • Check out Adam Green’s work at Know More News on Rumble for more on the political and mythological implications of Christian Zionism
  • And don’t miss my interview with Dr. Mark Gregory Karris, author of The Diabolical Trinity: Wrathful God, Sinful Self, and Eternal Hell, where we dive deep into the psychological damage caused by toxic theology

Beneath the White Coats: Psychiatry, Eugenics, and the Forgotten Graves

Dogma in a Lab Coat

We like to believe science is self-correcting—that data drives discovery, that good ideas rise, and bad ones fall. But when it comes to mental health, modern society is still tethered to a deeply flawed framework—one that pathologizes human experience, medicalizes distress, and often does more harm than good.

Psychiatry has long promised progress, yet history tells a different story. From outdated treatments like bloodletting to today’s overprescription of SSRIs, we’ve traded one form of blind faith for another. These drugs—still experimental in many respects—carry serious risks, yet are handed out at staggering rates. And rather than healing root causes, they often reinforce a narrative of victimhood and chronic dysfunction.

The pharmaceutical industry now drives diagnosis rates, shaping public perception and clinical practice in ways that few understand. What’s marketed as care is often a system of control. In this episode, we revisit the dangers of consensus-driven science—how it silences dissent and rewards conformity.

Because science, like religion or politics, can become dogma. Paradigms harden. Institutions protect their power. And the costs are human lives.

But beneath this entire structure lies a deeper, more uncomfortable question—one we rarely ask:

What does it mean to be a person?

Are we just bodies and brains—repairable, programmable, replaceable? Or is there something more?

Is consciousness a glitch of chemistry, or is it a window into the soul?

Modern psychiatry doesn’t just treat symptoms—it defines the boundaries of personhood. It tells us who counts, who’s disordered, who can be trusted with autonomy—and who can’t.

But what if those definitions are wrong?

We’ve talked before about the risks of unquestioned paradigms—how ideas become dogma, and dogma becomes control. In a past episode, How Dogma Limits Progress in Fitness, Nutrition, and Spirituality, we explored Rupert Sheldrake’s challenge to the dominant scientific worldview—his argument that science itself had become a belief system, closing itself off to dissent. TED removed that talk, calling it “pseudoscience.” But many saw it as an attempt to protect the status quo—the high priests of data and empiricism silencing heresy in the name of progress. We will revisit his work later on in our conversation. 

We’ve also discussed how science, more than politics or religion, is often weaponized to control behavior, shape belief, and reinforce social hierarchies. And in a recent Taste Test Thursday episode, we dug into how the industrial food system was shaped not just by profit but by ideology—driven by a merger of science and faith.


This framework—that science is never truly neutral—becomes especially chilling when you look at the history of psychiatry.

To begin this conversation, we’re going back—not to Freud or Prozac, but further. To the roots of American psychiatry. To two early figures—John Galt and Benjamin Rush—whose ideas helped define the trajectory of an entire field. What we find there presents a choice: a path toward genuine hope, or a legacy of continued harm.

This story takes us into the forgotten corners of that history, a place where “normal” and “abnormal” were declared not by discovery, but by decree.

Clinical psychiatrist Paul Minot put it plainly:

“Psychiatry is so ashamed of its history that it has deleted much of it.”

And for good reason.

Psychiatry’s early roots weren’t just tangled with bad science—they were soaked in ideology. What passed for “treatment” was often social control, justified through a veneer of medical language. Institutions were built not to heal, but to hide. Lives were labeled defective. 

We would like to think that medicine is objective, that the white coat stands for healing. But behind those coats was a mission to save society from the so-called “abnormal.”
But who defined normal?
And who paid the price?


The Forgotten Legacy of Dr. John Galt

Lithograph, “Virginia Lunatic Asylum at Williamsburg, Va.” by Thomas Charles Millington, ca.1845. Block & Building Files – Public Hospital, Block 04, Box 07. Image citation: D2018-COPY-1104-001. Special Collections.

Long before DSM codes and Big Pharma, the first freestanding mental hospital in America, the Eastern Lunatic Asylum, opened its doors in 1773—just down the road from where I live, in Williamsburg, Virginia. Though officially declared a hospital, it was commonly known as “The Madhouse.” For most who entered, institutionalization meant isolation, dehumanization, and often treatment worse than what was afforded to livestock. Mental illness was framed as a threat to the social order—those deemed “abnormal” were removed from society and punished in the name of care.

But one man dared to imagine something different.

Dr. John Galt II, appointed as the first medical superintendent of the hospital (later known as Eastern State), came from a family of alienists—an old-fashioned term for early psychiatrists. The word comes from the Latin alienus, meaning “other” or “stranger,” and referred to those considered mentally “alienated” from themselves or society. Today, of course, the word alien has taken on very different connotations—especially in the heated political debates over immigration. It’s worth clarifying: the historical use of alienist had nothing to do with immigration or nationality. It was a clinical label tied to 19th-century psychiatry, not race or citizenship. But like many terms, it’s often misunderstood or manipulated in modern discourse.

Galt, notably, broke with the harsh legacy of many alienists of his time. Inspired by French psychiatrist Philippe Pinel—often credited as the first true psychiatrist—Galt embraced a radically compassionate model known as moral therapy. Where others saw madness as a threat to be controlled, Galt saw suffering that could be soothed. He believed the mentally ill deserved dignity, freedom, and individualized care—not chains or punishment. He refused to segregate patients by race. He treated enslaved people alongside the free. And he opposed the rising belief—already popular among his fellow psychiatrists—that madness was simply inherited, and the mad were unworthy of full personhood.

Credit: The Valentine
Original Author: Cook Collection
Created: Late nineteenth to early twentieth century

Rather than seeing madness as a biological defect to be subdued or “cured,” Galt and Pinel viewed it as a crisis of the soul. Their methods rejected medical manipulation and instead focused on restoring dignity. They believed that those struggling with mental affliction should be treated not as deviants but as ordinary people, worthy of love, freedom, and respect.

Dr. Marshall Ledger, founder and editor of Penn Medicine, once quoted historian Nancy Tomes to summarize this period:

“Medical science in this period contributed to the understanding of mental illness, but patient care improved less because of any medical advance than because of one simple factor: Christian charity and common sense.”

Galt’s asylum was one of the only institutions in the United States to treat enslaved people and free Black patients equally—and even to employ them as caregivers. He insisted that every person, regardless of race, had a soul of equal moral worth. His belief in equality and metaphysical healing put him at odds with nearly every other psychiatrist of his time.

And he paid the price.

The psychiatric establishment, closely allied with state power and emerging medical-industrial interests, rejected his human-centered model. Most psychiatrists of the era endorsed slavery and upheld racist pseudoscience. The prevailing consensus was rooted in hereditary determinism—that madness and criminality were genetically transmitted, particularly among the “unfit.”

This growing belief—that mental illness was a biological flaw to be medically managed—was not just a scientific view, but an ideological one. Had Galt’s model of moral therapy been embraced more broadly, it would have undermined the growing assumption that biology and state-run institutions offered the only path to sanity. It would have challenged the idea that human suffering could—and should—be controlled by external authorities.

Instead, psychiatry aligned with power.

Moral therapy was quietly abandoned. And the field moved steadily toward the medicalized, racialized, and state-controlled version of mental health that would pave the way for both eugenics and the modern pharmaceutical regime.

“The Father of American Psychiatry”

Long before Auschwitz. Long before the Eugenics Record Office. Long before sterilization laws and IQ tests, there was Dr. Benjamin Rush—signer of the Declaration of Independence, founder of the first American medical school, and the man still honored as the “father of American psychiatry.” His portrait hangs today in the headquarters of the American Psychiatric Association.

Though many historians point to Francis Galton as the father of eugenics, it was Rush—nearly a century earlier—who laid much of the ideological groundwork. He argued that mental illness was biologically determined and hereditary. And he didn’t stop there.

Rush infamously diagnosed Blackness itself as a form of disease—what he called “negritude.” He theorized that Black people suffered from a kind of leprosy, and that their skin color and behavior could, in theory, be “cured.” He also tied criminality, alcoholism, and madness to inherited degeneracy, particularly among poor and non-white populations.

These ideas found a troubling ally in Charles Darwin’s emerging theories of evolution and heredity. While Darwin’s work revolutionized biology, it was often misused to justify notions of racial hierarchy and biological determinism.

Rush’s medical theories were mainstream and deeply influential, shaping generations of physicians and psychiatrists. Together, these ideas reinforced the belief that social deviance and mental illness were rooted in faulty bloodlines—pseudoscientific reasoning that provided a veneer of legitimacy to racism and social control within medicine and psychiatry.

The tragic irony? While Rush advocated for the humane treatment of the mentally ill in certain respects, his racial theories helped pave the way for the pathologizing of entire populations—a mindset that would fuel both American and European eugenics movements in the next century.

American Eugenics: The Soil Psychiatry Grew From

Before Hitler, there was Cold Spring Harbor. Founded in 1910, the Eugenics Record Office (ERO) operated out of Cold Spring Harbor Laboratory in New York with major funding from the Carnegie Institution, later joined by Rockefeller Foundation money. It became the central hub for American eugenic research, gathering family pedigrees to trace so-called hereditary defects like “feeblemindedness,” “criminality,” and “pauperism.”

Between the early 1900s and 1970s, over 30 U.S. states passed forced sterilization laws targeting tens of thousands of people deemed unfit to reproduce. The justification? Traits like alcoholism, poverty, promiscuity, deafness, blindness, low IQ, and mental illness were cast as genetic liabilities that threatened the health of the nation.

The practice was upheld by the U.S. Supreme Court in 1927 in the infamous case of Buck v. Bell. In an 8–1 decision, Justice Oliver Wendell Holmes Jr. wrote, “Three generations of imbeciles are enough,” greenlighting the sterilization of Carrie Buck, a young woman institutionalized for being “feebleminded”—a label also applied to her mother and her infant daughter. The ruling led to an estimated 60,000+ sterilizations across the U.S.

And yes—those sterilizations disproportionately targeted African American, Native American, and Latina women, often without informed consent. In North Carolina alone, Black women made up nearly 65% of sterilizations by the 1960s, despite being a much smaller share of the population.

Eugenics wasn’t a fringe pseudoscience. It was mainstream policy—supported by elite universities, philanthropists, politicians, and the medical establishment.

And psychiatry was its institutional partner.

The American Journal of Psychiatry published favorable discussions of sterilization and even euthanasia for the mentally ill as early as the 1930s. American psychiatrists traveled to Nazi Germany to observe and advise, and German doctors openly cited U.S. laws and scholarship as inspiration for their own racial hygiene programs.

In some cases, the United States led—and Nazi Germany followed.

The International Congress of Eugenics’ Logo 1921

This isn’t conspiracy. It’s history. Documented, peer-reviewed, and disturbingly overlooked.


From Ideology to Institution

By the early 20th century, the groundwork had been laid. Psychiatry had evolved from a fringe field rooted in speculation and racial ideology into a powerful institutional force—backed by universities, governments, and the courts. But its foundation was still deeply compromised. What had begun with Benjamin Rush’s biologically deterministic theories and America’s eugenic policies now matured into a formalized doctrine—one that treated human suffering not as a relational or spiritual crisis, but as a defect to be categorized, corrected, or eliminated.

This is where the five core doctrines of modern psychiatry emerge.

The Five Doctrines That Shaped Modern Psychiatry

These five doctrines weren’t abandoned after World War II. They were rebranded, exported, and quietly absorbed into the foundations of American psychiatry.

1. The Elimination of Subjectivity

Patients were no longer seen as people with stories, pain, or meaning—they were seen as bundles of symptoms. Suffering was abstracted into clinical checklists. The Diagnostic and Statistical Manual of Mental Disorders (DSM) became the gold standard, not because it offered clear science, but because it offered utility: a standardized language that served pharmaceutical companies, insurance billing, and bureaucratic control. If you could name it, you could code it—and medicate it.

2. The Eradication of Spiritual and Moral Meaning

Struggles once understood through relational, existential, or moral frameworks were stripped of depth. Grief became depression. Anger became oppositional defiance. Existential despair was reduced to a neurotransmitter imbalance. The soul was erased from the conversation. As Berger notes, suffering was no longer something to be witnessed or explored—it became something to be treated, as quickly and quietly as possible.

3. Biological Determinism

Mental illness was redefined as the inevitable result of faulty genes or broken brain chemistry—even though no consistent biological markers have ever been found. The “chemical imbalance” theory, aggressively marketed throughout the late 20th century, was never scientifically validated. Yet it persists, in part because it sells. Selective serotonin reuptake inhibitors (SSRIs)—still widely prescribed—were promoted on this flawed premise, despite studies showing they often perform no better than placebo and come with serious side effects, including emotional blunting, dependence, and sexual dysfunction.

4. Population Control and Racial Hygiene

In Germany, this meant sterilizing and exterminating those labeled “life unworthy of life.” In the U.S., it meant forced sterilizations of African-American and Native American women, institutionalizing the poor, the disabled, and the nonconforming. These weren’t fringe policies—they were mainstream, upheld by law and supported by leading psychiatrists and journals. Even today, disproportionate diagnoses in communities of color, coercive treatments in prisons and state hospitals, and medicalization of poverty reflect these same logics of control.

5. The Use of Institutions for Social Order

Hospitals became tools for enforcing conformity. Psychiatry wasn’t just about healing—it was about managing the unmanageable, quieting the inconvenient, and keeping society orderly. From lobotomies to electroshock therapy to modern-day involuntary holds, psychiatry has long straddled the line between medicine and discipline. Coercive treatment continues under new names: community treatment orders, chemical restraints, and state-mandated compliance.

These doctrines weren’t discarded after the fall of Nazi Germany. They were imported. Adopted. Rebranded under the guise of “evidence-based medicine” and “public health.” But the same logic persists: reduce the person, erase the context, medicalize the soul, and reinforce the system.


Letchworth Village: The Human Cost

I didn’t simply read this in a textbook. I stood there—on the edge of those woods—next to rows of numbered graves.

In 2020, while waiting to close on our New York house, my husband and I were staying in an Airbnb in Rockland County. One morning, walking the dogs near the end of Call Hollow Road, where a wide path divides thick woodland, we came across a memorial stone:

“THOSE WHO SHALL NOT BE FORGOTTEN.”

We had stumbled upon the entrance to Old Letchworth Village Cemetery, and we instantly felt its somber history. Beyond it stood rows of T-shaped markers, each one a muted testament to the hundreds of nameless victims who perished at Letchworth. Situated just half a mile from the institution, these weathered grave markers reveal only the numbers once assigned to forgotten souls, a stark reminder that families once refused to let their names be known. This omission serves as a silent indictment of a system that institutionalized, dehumanized, and ultimately discarded these individuals.

When we researched the history, the truth was staggering.

Letchworth was supposed to be a progressive alternative to the horrors of 19th-century asylums. Instead, it became one of them. By the 1920s, reports described children and adults left unclothed, unbathed, overmedicated, and raped. Staff abused residents—and each other. The dormitories were overcrowded. Funding dried up. Buildings decayed.

Many residents lived in filth, unfed and unattended. Children were restrained for hours. Some were used in vaccine trials without consent. And when they died, they were buried behind the trees—nameless, marked only by small concrete stakes.

I stood among those graves. Over 900 of them. A long row of numbered markers, each representing a life once deemed unworthy of attention, of love, of dignity.

But the deeper horror is what Letchworth symbolized: the idea that certain people were better off warehoused than welcomed, that abnormality was a disease to be eradicated—not a difference to be understood.

This is the real history of psychiatric care in America.


The Problem of Purpose

But this history didn’t unfold in a vacuum. It was built on something deeper—an idea so foundational, it often goes unquestioned: that nature has no purpose. That life has no inherent meaning. That humans are complex machines—repairable, discardable, programmable.

This mechanistic worldview didn’t just shape medicine. It has shaped what we call reality itself.

As Dr. Rupert Sheldrake explains in Science Set Free, the denial of purpose in biology isn’t a scientific conclusion—it’s a philosophical assumption. Beginning in the 17th century, science removed soul and purpose from nature. Plants, animals, and human bodies were understood as nothing more than matter in motion, governed by fixed laws. No pull toward the good. No inner meaning.

By the time Darwin’s Origin of Species arrived in 1859, this mechanistic lens was fully established. Evolution wasn’t creative—it was random. Life wasn’t guided—it was accidental.

Psychiatry, emerging in this same cultural moment, absorbed this worldview. Suffering was pathologized, difference diagnosed, and the soul reduced to faulty genetics and broken wiring.

Today, that mindset is alive in the DSM’s ever-expanding labels, in the belief that trauma is a chemical imbalance, that identity issues must be solved with hormones and surgery, and in the reflex to medicate children who don’t conform.

But what if suffering isn’t a bug in the system?

What if it’s a signal?

What if these so-called “disorders” are cries for meaning in a world that pretends meaning doesn’t exist?

The graves at Letchworth aren’t just a warning about medical abuse. They are a mirror—reflecting what happens when we forget that people are not problems to be solved, but souls to be seen.

Sheldrake writes, “The materialist denial of purpose in evolution is not based on evidence, but is an assumption.” Modern science insists all change results from random mutations and blind forces—chance and necessity. But these claims are not just about biology. They influence how we see human beings: as broken machines to be repaired or discarded.

As we said, in the 17th century, the mechanistic revolution abolished soul and purpose from nature—except in humans. But as atheism and materialism rose in the 19th century, even divine and human purpose were dismissed, replaced by the ideal of scientific “progress.” Psychiatry emerged from this philosophical soup, fueled not by reverence for the human soul but by the desire to categorize, control, and “correct” behavior—by any mechanical means necessary.

What if that assumption is wrong? What if the people we label “disordered” are responding to something real? What if our suffering has meaning—and our biology is not destiny?

“Genetics” as the New Eugenics

Today, psychiatry no longer speaks in the language of race hygiene.

It speaks in the language of genes.

But the message is largely the same:

You are broken at the root.

Your biology is flawed.

And the only solution is lifelong medication—or medical intervention.

We now tell people their suffering is rooted in faulty wiring, inherited defects, or bad brain chemistry—despite decades of inconclusive or contradictory evidence.

We still medicalize behaviors that don’t conform.

We still pathologize pain that stems from trauma, poverty, or social disconnection.

We still market drugs for “chemical imbalances” that have never been biologically verified.

And we still pretend this is science—not ideology.

But as Dr. Rupert Sheldrake argues in Science Set Free, even the field of genetics rests on a fragile and often overstated foundation. In Chapter 6, he challenges one of modern biology’s core assumptions: that all heredity is purely material—that our traits, tendencies, and identities are completely locked in by our genes.

But this isn’t how people have understood inheritance for most of human history.

Long before Darwin or Mendel, breeders, farmers, and herders knew how to pass on traits. Proverbs like “like father, like son” weren’t based on lab results—they were based on generations of observation. Dogs were bred into dozens of varieties. Wild cabbage became broccoli, kale, and cauliflower. The principles of heredity weren’t discovered by science; they were named by science. They were already in practice across the world.

What Sheldrake points out is that modern biology took this folk knowledge, stripped it of its nuance, and then centralized it—until genes became the sole explanation for almost everything.

And that’s a problem.

Because genetics has been crowned the ultimate cause of everything from depression to addiction, from ADHD to schizophrenia. When the outcomes aren’t clear-cut, the answer is simply: “We haven’t mapped the genome enough yet.”

But what if the model is wrong?

What if suffering isn’t locked in our DNA?

What if genes are only part of the story—and not even the most important part?

By insisting that people are genetically flawed, psychiatry sidesteps the deeper questions:

  • What happened to you?
  • What story are you carrying?
  • What environments shaped your experience of the world?

It pathologizes people—and exonerates systems.

Instead of exploring trauma, we prescribe pills.

Instead of restoring dignity, we reduce people to diagnoses.

Instead of healing souls, we treat symptoms.

Modern genetics, like eugenics before it, promises answers. But too often, it delivers a verdict: you were born broken.

We can do better.

We must do better.

Because healing doesn’t come from blaming bloodlines or rebranding biology.

It comes from listening, loving, and refusing to reduce people to a diagnosis or a gene sequence.


The Hidden Truth About Trauma and Diagnosis

Pete Walker cites Dr. John Briere’s poignant observation: if Complex PTSD and the role of early trauma were fully acknowledged by psychiatry, the Diagnostic and Statistical Manual of Mental Disorders (DSM) could shrink from a massive textbook to something no larger than a simple pamphlet.

We’ve previously explored the crucial difference between PTSD and complex PTSD—topics like trauma, identity, neuroplasticity, stress, survival, and what it truly means to come home to yourself. This deeper understanding exposes a vast gap between real human experience and how mental health is often diagnosed and treated today.

Instead of addressing trauma with truth and compassion, the system expands diagnostic categories, medicalizes pain, and silences those who suffer.

The Cost of Our Silence

Many of us know someone who’s been diagnosed, hospitalized, or medicated into submission.

Some of us have been that person.

And we’re told this is progress. That this is compassion. That this is care.

But when I stood at the edge of those graves in Rockland County—row after row of anonymous markers—nothing about this history felt compassionate.

It felt buried. On purpose.

We must unearth it.

Not to deny mental suffering—but to reclaim the right to define it for ourselves.

To reimagine what healing could look like, if we dared to value dignity over diagnosis.

Because psychiatry hasn’t “saved” the abnormal.

It has often silenced, sterilized, and sacrificed them.

It has named pain as disorder.

Difference as defect.

Trauma as pathology.

The DSM is not a Bible.

The white coat is not a priesthood.

And genetics is not destiny.

We need better language, better questions, and better ways of relating to each other’s pain.

And that brings us full circle—to a man most people have never heard of: Dr. John Galt II.

Nearly 200 years ago, in Williamsburg, Virginia, Galt ran the first freestanding mental hospital in America. But unlike many of his peers, he rejected chains, cruelty, and coercion. He embraced what he called moral treatment—an approach rooted in truth, love, and human dignity. Galt didn’t see the “insane” as dangerous or defective. He saw them as souls.

He was influenced by Philippe Pinel, the French physician who famously removed shackles from asylum patients in Paris. Together, these early reformers dared to believe that healing began not with force, but with presence. With relationship. With care.

As we saw earlier, Galt refused to segregate patients by race, treated enslaved people alongside the free, and opposed the rising belief among his fellow psychiatrists that madness was simply inherited and the mad unworthy of full personhood.

But what does it mean to recognize someone’s personhood?

Personhood is more than just being alive or having a body. It’s about being seen as a full human being with inherent dignity, moral worth, and rights—someone whose inner life, choices, and experiences matter. Recognizing personhood means acknowledging the whole person beyond any diagnosis, disability, or social status.

This question isn’t just philosophical—it’s deeply practical and contested. It’s at the heart of debates over mental health care, disability rights, euthanasia and even abortion. When does a baby become a person? When does someone with a mental illness or cognitive difference gain full moral consideration? These debates all circle back to how we define humanity itself.

In Losing Our Dignity: How Secularized Medicine Is Undermining Fundamental Human Equality, Charles C. Camosy warns that secular, mechanistic medicine can strip people down to biological parts—genes, symptoms, behaviors—rather than seeing them as full persons. This reduction risks denying people their dignity and the respect that comes with being more than the sum of their medical conditions.

Galt’s approach stood against this reduction. He saw patients as complex individuals with stories and struggles, deserving compassion and respect—not just as “cases” to be categorized or “disorders” to be fixed.

To truly recognize personhood is to honor that complexity and to affirm that every individual, regardless of race, mental health, or social status, has an equal claim to dignity and care.

But… Galt’s approach was pushed aside.

Why?

Because it didn’t serve the state.

Because it didn’t serve power.

Because it didn’t make money.

Today, we see a similar rejection of truth and compassion.

When a child in distress is told they were “born in the wrong body,” we call it gender-affirming care.

When a woman, desperate to be understood, is handed a borderline personality disorder label instead, we call it a diagnosis.

When medications with severe side effects are pushed as the only solution, we call it science.

But are we healing the person—or managing the symptoms?

Are we meeting the soul—or erasing it?

We’ve medicalized the human condition—and too often, we’ve called that progress.

We’ve spoken before about the damage done by Biblical counseling programs when therapy is replaced with doctrine—how evangelical frameworks often dismiss pain as rebellion, frame anger as sin, and pressure survivors into premature forgiveness.

But the secular system is often no better. A model that sees people as nothing more than biology and brain chemistry may wear a lab coat instead of a collar—but it still demands submission.

Both systems can bypass the human being in front of them.

Both can serve control over compassion.

Both can silence pain in the name of order.

What we truly need is something deeper.

To be seen.

To be heard.

To be honored in our complexity—not reduced to a diagnosis or a moral failing.

It’s time to stop.

It’s time to remember that human suffering is not a clinical flaw. It’s time to remember the metaphysical soul, the psyche.

That our emotional pain is not a chemical defect.

That being different, distressed, or deeply wounded is not a disease.

It’s time to recover the wisdom of Dr. John Galt II.

To treat those in pain—not as problems to be solved—but as people to be seen.

To offer truth and love: not labels, not sterilizing surgeries, not lifelong prescriptions.

Because if we don’t, the graves will keep multiplying—quietly, behind institutions, beneath a silence we dare not disturb.

But we must disturb it.

Because they mattered.

And truth matters.

And the most powerful medicine has never been compliance or chemistry.

It’s being met with real humanity.

Being listened to. Believed.

Not pathologized. Not preached at. Not controlled.

But loved—in the deepest, most grounded sense of the word.

The kind of love that doesn’t look away.

The kind that tells the truth, even when it’s costly.

The kind that says: you are not broken—you are worth staying with.

Because to love someone like that…

is to recognize their personhood.

And maybe that’s the most radical act of all.

SOURCES:

  • “Director of the Kaiser Wilhelm Institute for Anthropology, Human Heredity, and Eugenics from 1927 to 1942, [Eugen] Fischer authored a 1913 study of the Mischlinge (racially mixed) children of Dutch men and Hottentot women in German southwest Africa. Fischer opposed ‘racial mixing,’ arguing that ‘negro blood’ was of ‘lesser value’ and that mixing it with ‘white blood’ would bring about the demise of European culture” (United States Holocaust Memorial Museum, “Deadly Medicine: Creating the Master Race,” USHMM Online: https://www.ushmm.org/exhibition/deadly-medicine/profiles/). See also Richard C. Lewontin, Steven Rose, and Leon J. Kamin, Not in Our Genes: Biology, Ideology, and Human Nature, 2nd edition (Chicago: Haymarket Books, 2017), 207.
  • Gonaver, Wendy. The Making of Modern Psychiatry
  • Berger, Daniel R., II. Saving Abnormal: The Disorder of Psychiatric Genetics
  • “Lost Architecture: Eastern State Hospital,” Colonial Williamsburg
  • 📘 General History of American Eugenics
    Lombardo, Paul A.
    Three Generations, No Imbeciles: Eugenics, the Supreme Court, and Buck v. Bell (2008)
    This book is the definitive account of Buck v. Bell and American eugenics law. It documents how widespread sterilizations were and provides legal and historical context.
    Black, Edwin.
    War Against the Weak: Eugenics and America’s Campaign to Create a Master Race (2003)
    Covers the U.S. eugenics movement in depth, including funding by Carnegie and Rockefeller, Cold Spring Harbor, and connections to Nazi Germany.
    Kevles, Daniel J.
    In the Name of Eugenics: Genetics and the Uses of Human Heredity (1985)
    A foundational academic history detailing how early American psychiatry and genetics were interwoven with eugenic ideology.

    🧬 Institutions & Funding
    Cold Spring Harbor Laboratory Archives
    https://www.cshl.edu
    Documents the history of the Eugenics Record Office (1910–1939), its funding by the Carnegie Institution, and its influence on U.S. and international eugenics.
    The Rockefeller Foundation Archives
    https://rockarch.org
    Shows how the foundation funded eugenics research both in the U.S. and abroad, including programs that influenced German racial hygiene policies.

    ⚖️ Sterilization Policies & Buck v. Bell
    Supreme Court Decision: Buck v. Bell, 274 U.S. 200 (1927)
    https://supreme.justia.com/cases/federal/us/274/200/
    Includes Justice Holmes’s infamous quote and the legal justification for forced sterilization.
    North Carolina Justice for Sterilization Victims Foundation
    https://www.ncdhhs.gov
    Reports the disproportionate targeting of Black women in 20th-century sterilization programs.
    Stern, Alexandra Minna.
    Eugenic Nation: Faults and Frontiers of Better Breeding in Modern America (2005)
    Explores race, sterilization, and medical ethics in eugenics programs, with data from states like California and North Carolina.

    🧠 Psychiatry’s Role & Nazi Connections
    Lifton, Robert Jay.
    The Nazi Doctors: Medical Killing and the Psychology of Genocide (1986)
    Shows how American eugenics—including psychiatric writings—helped shape Nazi ideology and policies like Aktion T-4 (the euthanasia program).
    Wahl, Otto F.
    “Eugenics, Genetics, and the Minority Group Mentality” in American Journal of Psychiatry, 1985.
    Traces how psychiatric institutions were complicit in promoting eugenic ideas.
    American Journal of Psychiatry Archives
    1920s–1930s issues include articles in support of sterilization and early euthanasia rhetoric.
    Available via https://ajp.psychiatryonline.org