This is Part 2 of a three-part series. Part 1—The Blast Radius—mapped how AI militarization, governance lag, and economic displacement converge on the common person. This part builds the response.
In Part 1, I described a pattern: AI capabilities advance, military and institutional adoption accelerates, the law lags, and civilians absorb the risk. The Anthropic-Pentagon confrontation of February 2026 demonstrated that pattern in real time—a company was designated a national security threat for refusing to remove safety guardrails, and a competitor stepped in within hours to fill the gap.
That pattern will not be broken by ethics statements, voluntary principles, or corporate responsibility branding. Those are cosmetic. They operate at the discretion of the institution deploying them and can be withdrawn when they become inconvenient—which is exactly what happens when state pressure meets corporate survival instincts.
What is needed instead is structural: governance mechanisms that impose friction by design, and rights protections that give individuals standing to push back. Neither layer works alone. Governance without rights becomes bureaucratic theater. Rights without governance become unenforceable aspirations.
This part presents both layers: a governance blueprint with five mechanisms, and a civilian and worker rights charter with seven articles. Together, they form a dual-layer shield—top-down constraints and bottom-up protections—designed to function even when the political environment is hostile to oversight.
Governance Blueprint
1. High-risk AI designation
Principle: Certain uses of AI are structurally dangerous and must be treated as high-risk by default—not on a case-by-case basis, not after an incident, but as a precondition of deployment.
The designation applies to three broad categories.
State power. Any AI system used in defense, military operations, intelligence, national security, policing, border control, or elections and core democratic processes.
Critical infrastructure. Any AI system embedded in power grids, water systems, transportation networks, communications infrastructure, or healthcare systems.
Population-level labor governance. Any AI system used for hiring, firing, scheduling, performance scoring, or deactivation at scale—including gig and platform work governance systems. This category is often excluded from AI governance proposals. That exclusion is a mistake. When an algorithm can terminate a worker’s income with no human review and no appeal, it is exercising power over livelihood. That is a high-risk use by any reasonable definition.
High-risk designation triggers four obligations.
Independent red-teaming, covering not just technical robustness but abuse scenarios, discrimination risks, labor impacts, and security vulnerabilities.
Pre-deployment risk assessment with documented mitigations—not a checkbox exercise, but a substantive analysis reviewed by someone with authority to halt deployment.
Periodic reporting to an oversight body. This could be a parliamentary committee, a regulatory agency, or an institutional board, depending on context. The point is that the deploying entity does not get to be its own auditor.
Training data documentation at least at the category level—what kinds of data were used, from what sources, with what known gaps. Full transparency of training data may not always be feasible. Categorical documentation is a minimum.
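To make "category-level documentation" concrete, here is a minimal sketch of what such a record might contain, written as a plain Python structure. Every field name and category label is an illustrative assumption, not a prescribed schema.

```python
# Minimal sketch of category-level training data documentation.
# Field names, system name, and categories are illustrative assumptions.
training_data_documentation = {
    "system": "risk-scoring-model-v3",  # hypothetical system name
    "data_categories": [
        {
            "category": "public web text",
            "sources": ["news archives", "government publications"],
            "collection_period": "2015-2023",
            "known_gaps": ["underrepresents non-English sources"],
        },
        {
            "category": "historical case records",
            "sources": ["agency case management exports"],
            "collection_period": "2010-2022",
            "known_gaps": ["missing outcomes for dismissed cases"],
        },
    ],
    "excluded_categories": ["biometric data", "records of minors"],
    "last_reviewed": "2026-01-15",
    "reviewed_by": "independent audit team",
}
```

The format matters far less than the fact that the categories, sources, and known gaps are written down somewhere an oversight body can actually read them.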
2. Model-and-API-level hard boundaries
Principle: Some capabilities must be technically and contractually out of reach, even for states.
This is the mechanism Anthropic was trying to enforce when the Pentagon designated it a supply chain risk. The fact that the mechanism was punished does not mean it was wrong. It means it is urgent.
Three capabilities require non-overrideable prohibition.
Bulk domestic surveillance of a population. Not targeted, court-authorized surveillance of specific individuals—that is a different legal question. Bulk, suspicionless, population-scale monitoring.
Population-scale biometric identification in public spaces. Real-time face recognition, gait analysis, voiceprint matching, or similar technologies deployed against a general population rather than specific, legally authorized targets.
Autonomous selection and engagement of targets for lethal force. AI may inform targeting decisions. It may not make them. The distinction between “informing” and “deciding” is where the governance boundary must be drawn and enforced.
Implementation requires two things beyond contractual language.
First, separate deployment configurations for government and defense tenants where prohibited capabilities are simply not exposed. This is an architecture decision, not a policy decision. If the API endpoint does not exist, the capability cannot be misused regardless of what the contract says.
Second, immutable, access-controlled audit logs for all high-risk endpoints. “Immutable” means the deploying entity cannot edit, delete, or selectively redact the logs. “Access-controlled” means a regulator, court, or inspector general can later verify whether the provider honored its own red lines.
This is one of the few governance mechanisms that works even when the law is slow—because it is enforced at the infrastructure layer, not the legal layer.
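To illustrate what infrastructure-layer enforcement might look like, here is a minimal Python sketch under two assumptions: prohibited endpoints are simply never registered for defense-tier tenants, and every high-risk call is appended to a hash-chained log so later tampering is detectable. The tier names, endpoint names, and service shape are invented for illustration, not a description of any provider's actual architecture.

```python
import hashlib
import json
import time

# Hypothetical endpoint catalogs per tenant tier. Prohibited capabilities are
# never registered for the defense tier, so there is nothing to "unlock".
ENDPOINTS_BY_TIER = {
    "commercial": {"summarize", "translate", "classify"},
    "defense": {"summarize", "translate"},  # no bulk-analysis or targeting endpoints exist here
}

class AuditLog:
    """Append-only, hash-chained log: each entry commits to the previous one,
    so any edit, deletion, or redaction breaks the chain and is detectable."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64

    def append(self, record: dict) -> None:
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self._entries.append({"record": record, "hash": entry_hash})
        self._last_hash = entry_hash

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self._entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

audit_log = AuditLog()

def handle_request(tenant_tier: str, endpoint: str, request_id: str) -> str:
    # Architecture-level gate: if the endpoint was never exposed for this tier,
    # the call fails regardless of what any contract says.
    if endpoint not in ENDPOINTS_BY_TIER.get(tenant_tier, set()):
        audit_log.append({"ts": time.time(), "tier": tenant_tier,
                          "endpoint": endpoint, "id": request_id, "outcome": "rejected"})
        raise PermissionError(f"endpoint '{endpoint}' is not available for tier '{tenant_tier}'")
    audit_log.append({"ts": time.time(), "tier": tenant_tier,
                      "endpoint": endpoint, "id": request_id, "outcome": "served"})
    return f"served {endpoint} for {request_id}"
```

In a real deployment the log would live in separate, access-controlled storage (write access for the provider, read access for a regulator or inspector general), but the hash chain is what makes selective editing or deletion detectable after the fact.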
3. Human accountability for life, liberty, and livelihood
Principle: AI may inform decisions, but humans remain accountable for decisions that can destroy a life.
This principle prevents “the model said so” from becoming a shield for state or institutional action.
AI may not be the sole basis for any of the following:
Arrest, detention, or criminal charging.
Watchlisting or no-fly decisions.
Immigration denial or deportation.
Child removal or family separation.
Use of lethal force.
Termination, blacklisting, or major income-impacting employment decisions.
Each of these categories represents a decision with the potential to permanently alter a human life. Delegating that decision entirely to an automated system—even a sophisticated one—eliminates the human judgment that due process, employment law, and basic accountability require.
Three requirements enforce this principle.
A named human decision-maker must be responsible for the outcome. Not “the system” or “the process” or “the algorithm”—a person, with a name, who can be held to account.
A recorded rationale must exist that can be challenged in court or an equivalent forum. If the decision cannot be explained, it cannot be defended. If it cannot be defended, it should not be made.
Disclosure that AI was used in the decision chain. The person affected has a right to know that their outcome was shaped by an automated system. Without this knowledge, they cannot meaningfully contest the result.
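If those three requirements were encoded as data, a minimum decision record might look like the sketch below, with the named decision-maker, the recorded rationale, and the AI disclosure in one reviewable object. All field names and values here are assumptions for illustration, not a mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class HighStakesDecisionRecord:
    # Requirement 1: a named human decision-maker, not "the system".
    decision_maker_name: str
    decision_maker_role: str
    # Requirement 2: a recorded rationale that can be challenged later.
    rationale: str
    # Requirement 3: disclosure that AI informed the decision chain.
    ai_systems_used: list = field(default_factory=list)
    ai_output_summary: str = ""
    disclosed_to_affected_person: bool = False

# Illustrative usage: a record is contestable on its face if the rationale
# is empty or the AI involvement was never disclosed.
record = HighStakesDecisionRecord(
    decision_maker_name="J. Rivera",  # hypothetical
    decision_maker_role="benefits adjudicator",
    rationale="Eligibility denied: income above threshold; AI flag reviewed and confirmed.",
    ai_systems_used=["eligibility-screening-model-v2"],
    ai_output_summary="flagged as likely ineligible",
    disclosed_to_affected_person=True,
)
```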
4. Procurement-based governance
Principle: If the state—or any large institution—buys it, it must be governable.
Procurement is one of the fastest-moving governance levers available because procurement rules already exist. They do not require new legislation or new agencies. They require AI-specific clauses in processes that are already operational.
For any AI system procured by government or critical institutions, three requirements apply.
The system must support independent audit. “Trust us” is not an audit. A meaningful audit requires access to model behavior, decision outputs, and performance metrics by someone outside the deploying organization.
The system must provide data source documentation at least at the categorical level. What kinds of data were used? From what types of sources? With what known limitations? This is not a demand for trade secrets. It is a demand for enough transparency to assess risk.
The system must include a public or board-visible impact assessment, with sections covering civil liberties implications, discrimination and equity risks, and labor and workforce impacts.
Three things should be prohibited in procurement.
Systems relying on undisclosed data brokers—entities whose data collection practices are opaque and unaccountable.
Contracts that forbid meaningful audit or public disclosure of findings.
Systems that embed non-contestable automated employment decisions—meaning systems where an algorithm can terminate someone’s income and the affected person has no pathway to challenge the outcome.
The Anthropic-Pentagon dispute demonstrated what happens when procurement governance is weak. The contracts were structured as Other Transactions, exempt from standard acquisition regulations. The terms were whatever the parties negotiated. When one party demanded safety restrictions the other did not want, the state had the leverage to walk away, designate the company a risk, and find a more compliant supplier within hours. Stronger procurement rules would not have prevented this entirely. But they would have made the process more visible, more constrained, and more subject to oversight.
5. International and sectoral minimum norms
Principle: Even soft norms create leverage.
In the next three to five years, a comprehensive binding international treaty on military AI is not realistic. The geopolitical dynamics described in Part 1 make that clear. But what is achievable—and what matters—is a set of soft-law instruments that establish a baseline of expectations.
Baseline norms to endorse and propagate include four commitments.
Human control in lethal decision-making. AI may support the human in the loop. It may not replace the human in the loop when the decision involves killing.
Rejection of fully autonomous weapons systems. A weapon that selects and engages targets without human authorization is not a tool. It is a delegation of the power to kill. That delegation should be prohibited.
Rejection of population-scale biometric surveillance. States should not build—and companies should not supply—the infrastructure for monitoring entire populations in real time.
Rejection of fully opaque, automated workforce governance. Black-box systems that fire, deactivate, or blacklist workers at scale with no explanation and no appeal should not be normalized as acceptable labor practice.
These norms can live in UN resolutions, OECD principles, G7 and G20 statements, industry codes, trade agreements, and sectoral standards. They do not magically bind. But they shape expectations, give civil society and courts language to point to, and make egregious violations reputationally costly. Soft power is still power—especially when it creates the vocabulary that later becomes hard law.
Civilian and Worker AI Rights Charter
The following seven articles are written so they can stand alone as a charter—a document that could be embedded in a policy preamble, a statutory findings section, an institutional code of conduct, or a public declaration. Each article states a principle, defines its scope, and specifies minimum guarantees.
Article 1—Freedom from AI-mediated mass surveillance
Every person has the right not to be subject to generalized, suspicionless, AI-mediated surveillance.
Population-scale, real-time biometric identification in public spaces—face recognition, gait analysis, voiceprints, or similar—is prohibited absent individualized legal authorization.
Targeted surveillance using AI requires a specific legal basis equivalent to a warrant, narrow scope defining who may be monitored, where, and for how long, and strict retention and deletion rules governing what is collected and when it must be destroyed.
This is the single largest buffer between an AI-enabled state and a totalizing panopticon. Once mass surveillance becomes normalized, it is nearly impossible to roll back. The infrastructure, the institutional habits, and the political incentives all favor expansion. The time to establish this right is before the default hardens—not after.
Article 2—Right to contest AI-influenced decisions
Every person has the right to know, understand, and challenge AI-influenced decisions that materially affect their life.
This right covers three domains: liberty, including criminal justice, policing, and immigration; livelihood, including employment, gig work, credit, and housing; and essential services and public benefits.
Four minimum guarantees apply. Notice that AI was used in the decision. Access to an explanation in human-readable terms—not a technical readout, but an account a reasonable person can understand and evaluate. Right to human review by a qualified decision-maker, not a rubber stamp. Right to appeal to an independent body—a court, tribunal, or regulator with the authority to reverse the outcome.
Without these guarantees, AI becomes an unchallengeable oracle. A system that can deny you a job, a loan, a benefit, or your freedom—and that you cannot question, cannot understand, and cannot appeal—is not governance. It is domination wearing an interface.
Article 3—Data minimization and purpose limitation
Every person has the right not to have their data hoarded and repurposed indefinitely.
Personal data used in AI systems must be limited to what is necessary for a clearly defined purpose. Data collected for one purpose—health records, educational assessments, welfare eligibility, employment performance—may not be silently repurposed for policing, immigration enforcement, or intelligence without new, explicit legal authorization.
Retention schedules and deletion obligations must be defined at the point of collection and enforced throughout the data lifecycle.
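Read in engineering terms, Article 3 binds the authorized purpose and the deletion deadline to the data at the moment of collection, and makes access for any other purpose fail unless a new authorization is explicitly recorded. The sketch below is one assumed way to express that binding, not a reference implementation.

```python
from datetime import datetime, timedelta

class PurposeBoundRecord:
    """Personal data tagged at collection with its purpose and retention deadline."""

    def __init__(self, data: dict, purpose: str, retention_days: int):
        self.data = data
        self.authorized_purposes = {purpose}  # fixed at the point of collection
        self.delete_after = datetime.now() + timedelta(days=retention_days)

    def add_authorization(self, purpose: str, legal_basis: str) -> None:
        # Repurposing requires a new, explicit, recorded authorization.
        print(f"authorization recorded for '{purpose}' under: {legal_basis}")
        self.authorized_purposes.add(purpose)

    def access(self, purpose: str) -> dict:
        if datetime.now() > self.delete_after:
            raise PermissionError("retention period expired; data must be deleted")
        if purpose not in self.authorized_purposes:
            raise PermissionError(f"'{purpose}' is not an authorized purpose for this record")
        return self.data

# A welfare-eligibility record cannot be silently reused as a policing lead.
record = PurposeBoundRecord({"applicant_id": "A-1042"}, purpose="welfare_eligibility", retention_days=365)
record.access("welfare_eligibility")    # allowed
# record.access("predictive_policing")  # would raise PermissionError
```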
This is where most abuses historically emerge. Not in the original collection—which often has a legitimate basis—but in the quiet repurposing. A health record becomes a risk score. An employment file becomes a watchlist input. A welfare application becomes a policing lead. Function creep—the silent expansion of data use beyond its original purpose—is one of the oldest patterns in institutional surveillance. AI accelerates it by making the connections faster, cheaper, and less visible.
Article 4—Right to know when interacting with AI
Every person has the right to know when they are interacting with an AI system rather than a human, particularly in high-stakes contexts.
Mandatory disclosure applies in government services and benefits administration, law enforcement and border interactions, healthcare and mental health services, education and assessment, and employment and platform work—including screening, scheduling, rating, and deactivation.
This right enables three things that the other articles depend on. Informed consent—you cannot consent to a process you do not know is automated. Contestation—you cannot exercise your right to human review if you do not know a machine made the decision. And future accountability—if the AI system was defective, you need to know it was involved in order to have standing to challenge the outcome.
Article 5—Protection against AI-driven discrimination and economic exclusion
Every person has the right not to be unfairly discriminated against by AI systems.
Anti-discrimination protections extend to AI-mediated profiling and risk scoring, predictive policing, and hiring, promotion, pay, scheduling, and deactivation algorithms.
High-risk AI systems must undergo bias and disparate-impact testing before deployment and periodically thereafter. When systemic bias is found, the response must include remedial action—not just a note in the audit log. Remediation means compensation for affected individuals, redesign of the system, and in severe cases, suspension of deployment until the problem is resolved.
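As one concrete instance of disparate-impact testing, the sketch below compares selection rates across groups and flags a ratio below the commonly cited four-fifths threshold. The threshold, group labels, and numbers are illustrative assumptions; a real testing program would rely on multiple metrics and legal review, not a single ratio.

```python
def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> dict:
    """outcomes maps group -> (selected, total). Returns per-group selection rates
    and the ratio of the lowest rate to the highest; below ~0.8 is a common warning flag."""
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    highest = max(rates.values())
    lowest = min(rates.values())
    ratio = lowest / highest if highest > 0 else 0.0
    return {"selection_rates": rates, "impact_ratio": ratio, "flagged": ratio < 0.8}

# Illustrative numbers only: group B is selected at roughly half the rate of group A,
# which this test would flag for remediation review before deployment.
result = disparate_impact_ratio({"group_a": (60, 100), "group_b": (32, 100)})
print(result)  # impact_ratio ~0.53, flagged=True
```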
AI does not create new forms of discrimination. It accelerates and scales existing ones. A biased hiring algorithm does not invent prejudice—it operationalizes it at a speed and volume no human recruiter could match. A discriminatory risk-scoring model does not create inequality—it encodes it into infrastructure and makes it look objective. The danger is not that AI is biased. The danger is that AI makes bias look like math.
Article 6—Protection for whistleblowers and independent researchers
Society has a right to know when AI systems are dangerous, unlawful, or abusive. That right depends on people who are willing to say “this is broken”—and who are protected when they do.
Employees who disclose unlawful, unsafe, or grossly unethical AI uses are protected from retaliation, including termination, demotion, reassignment, and informal blacklisting.
Independent researchers who probe AI systems for bias, security flaws, or misuse are protected from overbroad anti-hacking claims, retaliatory lawsuits, and professional blacklisting.
Without these protections, the only information the public receives about AI systems is the information institutions choose to release. History is clear about what happens when oversight depends entirely on institutional self-reporting: the worst abuses stay hidden until they are too large to conceal, and by then the damage is done.
Article 7—Economic security as a precondition for meaningful AI rights
AI-era rights are hollow without a basic level of material security. Every right in this charter—the right to contest, to appeal, to opt out, to participate in governance—requires resources that economic precarity destroys.
This article bridges the gap between abstract rights and lived material conditions.
Large-scale AI-driven displacement or degradation of work should trigger strengthened social safety nets, access to legal aid for AI-related disputes, and support for collective bargaining and worker representation in AI deployment decisions.
Institutions deploying AI at scale in labor contexts must assess and disclose workforce impact before deployment, engage with worker representatives in design and implementation, and avoid system designs that make contestation practically impossible—such as instant, opaque deactivation with no appeal pathway.
A person who cannot afford to miss a day of work cannot afford to challenge an algorithmic denial. A person who depends on a platform algorithm for income cannot afford to report that the algorithm is biased. A person living paycheck to paycheck does not have the bandwidth to navigate an appeal process designed for people with lawyers.
Economic security is not a separate issue from AI governance. It is the foundation on which every other protection rests. Without it, this entire charter becomes a document that protects people who were never the most vulnerable in the first place.
How the two layers reinforce each other
Governance mechanisms without rights protections become hollow—administrative processes that check boxes but leave individuals with no standing to challenge outcomes.
Rights protections without governance mechanisms become unenforceable—legal entitlements on paper with no institutional infrastructure to deliver them.
The combination creates a dual-layer shield.
Top-down constraints: procurement rules, model-level guardrails, high-risk classification, international norms. These impose friction on institutions before harm occurs.
Bottom-up protections: civil liberties, contestability, transparency, anti-surveillance rights, economic security. These give individuals standing and resources after harm occurs.
This is the only configuration that can realistically protect civilians while nations embed AI into defense infrastructure, corporations embed it into labor markets, and the law struggles to keep pace with both.
But a shield is only as strong as the institutions enforcing it. The governance blueprint needs regulators. The rights charter needs enforcement mechanisms. The question of who holds the levers—and what tools they have—is Part 3.

