Machiavelli 2.0: Why Ethics Is Now the Ultimate Competitive Strategy
- Andrea Mazingo
- Jun 7
- 8 min read
Rewriting The Operating System Of Legitimacy Itself
By Stephen Klein and Andi Mazingo

A recent Vanity Fair exposé revealed Meta’s quiet confession: yes, they took your data. No, they don’t believe it had value. This isn’t just a legal maneuver; it’s a worldview, one in which the creative commons is raw material and ethical justification is retrofitted PR.
Meta’s position wasn’t abstract. It was a direct response to one of the dozens of lawsuits now confronting Big AI. As of mid-2024, there were over 35 active cases, ranging from The New York Times to bestselling novelists, alleging systematic copyright infringement. These lawsuits target not just Meta, but also OpenAI, Google, Microsoft, Anthropic, and Stability AI. Together, these companies have vacuumed up and monetized the overwhelming majority of the world’s public internet, claiming everything from Wikipedia to Reddit threads to song lyrics as fair game.
The defense? The data had no value. Therefore, no harm was done.
That argument isn’t just false. It’s foundationally dangerous. And it proves the stakes of this article: we are no longer merely debating legality. We are rewriting the operating system of legitimacy itself.
This moment changes everything. It validates what this article argues: in a world where legal systems lag technological innovation, corporations are no longer simply market actors; they are becoming de facto regulators. As generative AI reshapes the foundations of value creation, the rules of competition are shifting. The strategic use of ethics and values, once dismissed as "soft," may now be the most Machiavellian move a company can make.
This article explores a provocative thesis: that in a post-IP, post-regulation, and post-tech world, values-driven governance becomes a fiduciary obligation. Not because leaders believe in it. Not because it’s moral. But because there may be no other sustainable path to competitive advantage.
The Premise: A Post-IP, Post-Regulation, Post-Fiduciary, and Post-Tech World
We begin with four core assumptions that define the new competitive landscape:
Post-IP: In the age of generative models trained on the internet’s collective output, intellectual property rights are functionally unenforceable at scale. The law has not caught up, and may never catch up, with the reality that nearly all foundational models contain embedded traces of copyrighted material. Legal recourse, even if technically viable, is economically and logistically out of reach for most creators. [1]
Post-Regulation: The pace of AI advancement has overwhelmed traditional legal systems. Agencies are under-resourced, politically fractured, and fundamentally reactive. In 2023 alone, 25 U.S. states proposed AI-related legislation, but a coherent federal strategy remains absent. [2] Meanwhile, regulators like the SEC have begun penalizing companies for "AI-washing"—a sign that enforcement is playing catch-up, not leading. [3]
Charitably, we can refer to this as Post-Regulation, but with additional context, it approaches Post-Democracy. Tech companies are functioning as sovereign entities, governing AI systems that affect the military, speech, elections, employment, and mental health without democratic accountability. Consider OpenAI’s partnership with the US military, or Altman’s Worldcoin project, framed as a path to corporate-instituted universal basic income.
As Adam Becker writes in More Everything Forever, rather than respond to valid criticism, tech billionaires increasingly cloak their antidemocratic ambitions in apocalyptic longtermism. Elon Musk, for instance, calls longtermism a “close match” to his philosophy. The result hauntingly echoes Supreme Court Justice Louis Brandeis’s warning from 1941: “We may have democracy, or we may have wealth concentrated in the hands of a few, but we can’t have both.” [9]
Post-Fiduciary: As Meta considers reincorporating from Delaware to Texas, arguably to eschew accountability to shareholders, public benefit corporations (PBCs) like xAI, Anthropic, and now OpenAI benefit from their corporate form’s comparative absence of stakeholder enforcement mechanisms. With minimally established shareholder inspection rights, and no inspection rights or legal standing for public stakeholders, the ethical compliance of PBCs is often relegated to self-reporting. To appreciate the potential for bias in self-reporting, consider Anthropic using its own AI model to audit itself during emergent-behavior tests. While shareholders may in theory enforce a PBC’s duties, public benefit goals are often too uncertain to litigate. Thus PBCs’ AIs, like xAI’s Grok, can surface misinformation such as Holocaust denial without accountability.
Post-Tech (The Closed-Source Collapse): The flood of open-source models has reduced technological advantage to a window of months. GitHub reported over 100,000 new AI-related repositories launched in 2023 alone. [4] Meta by itself manages dozens of active open-source projects, with thousands of contributors and no IP enforcement model capable of stemming derivative creation. [5]
The convergence of these four forces (post-IP, post-regulation, post-fiduciary, and post-tech) means that the traditional tools of competitive advantage are eroding. What remains is something harder to fake, easier to lose, and more essential to scale: trust.
Background: The Legitimacy Crisis Unfolding in Court
As of early 2025, more than 39 active copyright infringement lawsuits are pending in U.S. courts against the largest generative AI companies—including Meta, OpenAI, Microsoft, Google, Anthropic, and Stability AI. These suits have been brought by a diverse range of plaintiffs, including novelists, visual artists, news organizations, musicians, and independent content creators. The core allegation is consistent: these companies scraped vast swaths of the public internet, including copyrighted books, articles, lyrics, artwork, and posts, without permission or compensation, to train commercial AI models now valued in the billions.
Meta, for example, faces multiple lawsuits related to its Books3 dataset, a controversial compilation of more than 180,000 copyrighted books used to train its LLaMA models. In legal filings and public statements, Meta has defended its actions by arguing that the individual books had "no economic value," a claim that has sparked outrage among authors and rights groups.
This defense strategy may succeed in court, but it raises a deeper problem: If companies can justify appropriation on the grounds that the content was abundant or undervalued, then the very foundation of intellectual property law, and trust in the digital economy, begins to erode.
Meanwhile, AI companies avoid other categories of legal liability while the legal industry plays catch-up. Personal injury law has not adapted to AI-specific harms such as relational exploitation, therapy-chatbot malpractice, or the nervous-system dysregulation and psychosis that emotionally charged hallucinations can induce. Nor have employment protections caught up with the pretextual use of AI to justify potentially biased mass layoffs, or with bias-prone AI tools used to select which employees to cut.
Even where legal claims exist, arbitration provisions, safe harbors for business judgment, and a politically influenced judiciary make redress elusive. But reputational consequences remain unpredictable and potent. Consider the case of former OpenAI developer Suchir Balaji: following his purported suicide, his mother raised over $100K on GoFundMe under the theory that OpenAI contributed to his death. [10] Whatever the legal facts, the cultural narrative took root. Risk does not vanish when rights go unenforced; it mutates.
Machiavellian Ethics: Power by Legitimacy
Niccolò Machiavelli wrote that power is preserved through fear, manipulation, and control. But today, fear-based leadership no longer scales. In the AI age, where everything moves fast and little stays hidden, legitimacy, not force, is the new currency.
And legitimacy cannot be faked indefinitely.
What happens when Machiavelli, to remain Machiavellian, is forced to compete on ethics and values? He does what he’s always done: adapts.
Companies will begin to treat operationalized ethics, not as virtue signaling, but as competitive infrastructure. They will not need to believe in it. They will not need to mean it. But they will need to demonstrate it, measure it, and integrate it across product design, data usage and privacy practices, hiring and compensation systems, vendor and partner policies, and shareholder communications.
Here’s the challenge: today’s AI elite tend to share worldviews distinct from those of the general public. As observed in a 2021 study, elite technologists increasingly form an epistemic class of their own. [6] These cultural echo chambers ignore dissent and flatten nuance. But ethical intuition comes from the margins.
Some neurodiverse or autistic forms of cognition—marked by pattern sensitivity, resistance to groupthink, and rigid integrity—may be especially attuned to subtle moral incoherences. These modes of perception, long dismissed as too rigid or intense, may provide essential early warnings of misalignment. This is only true, however, when ethical principles are not collapsed into abstract reasoning.
Ethical foresight requires more than abstract reasoning; it requires embodied reflection—a felt capacity to question the shape of the problem itself. As Francisco Varela writes in The Embodied Mind, when reflection is embodied, “it can cut the chain of habitual thought patterns and preconceptions . . . open to possibilities other than those contained in one’s current representations of the life space.” [7] Ethical clarity comes not only from logic, but from sensing what context demands.
Similarly, Shannon Vallor modernizes the ancient Aristotelian wisdom that “the magnanimous are those who have rightly earned the moral trust of others,” reframing greatness as “founded in a life-time of moral and social efforts rather than relatively meaningless zero-sum contests of ego.” [11]
Stakeholder Capitalism as Competitive Reality
This is where the fiduciary shift takes on added complexity. As more employees become shareholders, through equity, options, and 401(k) participation, the line between "internal culture" and "market accountability" begins to blur.
In this context, values alignment isn’t soft. It’s economic. It’s risk management. It’s brand durability. It’s talent retention.
And AI, far from replacing ethics, can become the instrument through which it is enforced:
- Internal LLMs that guide ethical decision-making
- Generative systems that flag bias or hallucination risk in content and code
- Distributed ledger technology to record model transparency and accountability
The future isn’t just AI-powered. It’s ethically operationalized AI. Not because it’s better. But because it’s more likely to win.
Along with an ethically attuned AI, companies can build legitimacy iteratively through:
- Transparent correction (e.g., not Altman joking about sycophancy, but publishing public audits);
- Stakeholder co-design (e.g., not firing ethicists, but elevating dissent);
- Restorative alignment (e.g., not issuing generic apologies, but engaging harmed communities, correcting processes, and demonstrating change).
Forward-looking fiduciaries ask not only "Is this legal?" but: "Who could credibly claim harm—and how might that shape trust, investor confidence, or litigation risk?"
A Call to Boards and Executives
In many ways, the boardroom is now the arbiter of law and ethics. Fiduciary duty must evolve with the landscape it governs. The strategic mandate is no longer only about growth, efficiency, or disruption. It’s about legitimacy engineering.
If corporations become the new sovereigns, then trust is the new GDP.
And the companies that master it, not through rhetoric but through repeatable design, will define the next era of economic leadership.
Stephen Klein is the Founder and CEO of Curiouser.AI and teaches AI Ethics at UC Berkeley. Andi Mazingo is the Founding Attorney of Lumen Law Center focusing on employment and corporate governance law. They are collaborating on the future of ethics, law, and legitimacy in the age of Generative AI.
References
Wired, "Tracking AI Copyright Lawsuits" — https://www.wired.com/story/ai-copyright-case-tracker/
National Conference of State Legislatures, "AI 2023 Legislation" — https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation
Barron’s, "SEC Cracks Down on AI-Washing" — https://www.barrons.com/articles/sec-penalizes-investment-firms-for-ai-washing-49e17971
The Verge, "GitHub’s Explosion in AI Repos" — https://www.theverge.com/24221978/github-thomas-dohmke-ai-copilot-microsoft-openai-open-source
Meta Engineering Blog, "Meta Open Source by the Numbers" — https://engineering.fb.com/2025/04/02/open-source/meta-open-source-by-the-numbers
Brockmann, H., Drews, W., & Torpey, J. (2021). A class for itself? On the worldviews of the new tech elite. PloS one, 16(1), e0244071, https://doi.org/10.1371/journal.pone.0244071.
Francisco J. Varela et. Al, The Embodied Mind: Cognitive Science and Human Experience (2016 Massachusetts Institute of Technology) at 27, 145.
Douglas R. Hofstadter, Godel, Escher, Bach: an Eternal Golden Braid (1999) at 661.
Adam Becker, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, (2025 Basic Books) at 288-89.
Justice for Suchir Balaji — https://www.gofundme.com/f/justice-for-suchir-balaji?attribution_id=sl:aca98d09-f8a9-4f30-9c43-eadea723c01c&utm_campaign=unknown&utm_medium=undefined&utm_source=undefined
Shannon Valor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press 2016) at 152-54.