The Death of the Republic and the Conflict Over AI
Through his father's death, the author came to recognize the gradual decline of the United States.
In recent years, a conflict has arisen between the AI company Anthropic and the U.S. government over contractual restrictions, an event that symbolizes the death of the existing republic.
Anthropic's contract for classified use of its AI included usage restrictions, but the government objects to the very idea of restrictions on its use of technology.
Within the cycle of a republic's death and rebirth, this episode may also herald the arrival of a new era.
This article discusses the conflict between an AI company and the U.S. government, drawing on an essay in which an American writer reflects on the nation's decline through his own experiences. Having lived through his father's death and his son's birth, the writer came to feel that death and birth are not moments but processes. From that vantage point, he binds together two themes: the gradual decline of the United States and the advance of AI technology.
A Father's Death and a Son's Birth: Life and Death as Processes
The writer describes his father's death at close range and in detail: the gradual weakening after heart surgery, the hours before the end, the slowing breath, the faintly audible sound of death. Through that experience, he came to understand that death, too, is a gradual process rather than a single moment. At the same time, he experienced the beginning of a new life in the birth of his son, and felt keenly that life and death are two sides of the same coin. These personal experiences lead into the essay's theme of national decline.
The "Death" of the United States: Interwoven Causes
The writer argues that the United States, too, is slowly dying. The cause is not singular: political, economic, social, and technological factors are intricately intertwined. No single event, person, or policy can be identified as the direct cause, yet all of them have contributed to the nation's decline. The writer likens the present moment to being in hospice, the setting of end-of-life care, and says he is watching the situation with a steady eye.
AI Technology and National Decline: Looking to the Future
The writer ties the advance of AI technology closely to the decline of the United States. He argues that the state of the nation cannot be ignored when discussing the progress of AI, and suggests that we must carefully weigh how the use of AI will shape America's future. The possibility remains that the United States will undergo a new "founding" and start over, but he also voices the concern that the "virtue" and "wisdom" needed for such a founding may already be lost.
Summary
Taking the conflict between an AI company and the U.S. government as a concrete case, the writer reflects deeply on the challenges facing the United States and its prospects for the future. By overlaying the weighty theme of national decline with personal experience, the essay gives readers a way to grasp the problem more deeply and a prompt to think about what comes next.
Excerpt from the original essay (English)
I.

A little more than a decade ago, I sat with my father and watched him die. Six months prior, he had been a vigorous man, stronger than I am today, faster and more resilient on a bike than most 20-somethings. Then one day he got heart surgery and he was never the same. His soul had been sucked out of him, the life gone from his eyes. He had moments of vivacity, when my father came back into his aging body, but these became rarer with time. His coherence faded, his voice grew quieter.

He spent those six months in and out of the hospital. And then on his last day he went into hospice. That day he barely uttered any words at all. In the final hours of his life, my father was practically already dead. He laid on the hospital bed. His breathing gradually slowed and became less audible. Eventually you could barely hear him at all, save for the eerie death rattle, a product of a body no longer able even to swallow. A body that cannot swallow also cannot eat or drink, and in that sense it has already thrown in the towel. My mother and I exchanged knowing glances, but we never said the obvious nor asked any questions on both of our minds. We knew there would not be much longer. There was nothing to say or ask that would furnish any useful information; inquiry, at that stage, can only inflict pain.

I spoke with him, more than once, in private. I held his hand and tried to say goodbye. My mother came back into the room, and all three of us held hands. Eventually a machine declared with a long beep that he had crossed some line, though it was an invisible one for the humans in the room. My father died in the late afternoon of December 26, 2014.

A few days and eleven years later, on December 30, 2025, my son was born. I have watched death as it happens, and I have watched birth. What I learned is that neither are discrete events. They are both processes, things that unfold. Birth is a series of awakenings, and death is a series of sleepenings.
My son will take years to be born, and my father took six months to die. Some people spend decades dying.

II.

At some point during my lifetime—I am not sure when—the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary “caused” death to begin, though all those things and more contributed. I don’t know where we are in the death process, but I know we are in the hospice room. I’ve known it for a while, though I have sometimes been in denial, as all mourners are wont to do. I don’t like to talk about it; I am at the stage where talking about it usually only inflicts pain.

Unfortunately, however, I cannot carry out my job as a writer today with the level of analytic rigor you expect from me without acknowledging that we are sitting in hospice. It is increasingly difficult to honestly discuss the developments of frontier AI, and what kind of futures we should aim to build, without acknowledging our place at the deathbed of the republic as we know it. Except there is no convenient machine to decide for us that the patient has died. We just have to sit and watch.

Our republic has died and been reborn again more than once in America’s history. America has had multiple “foundings.” Perhaps we are on the verge of another rebirth of the American republic, another chapter in America’s continual reinvention of itself. I hope so. But it may be that we have no more virtue or wisdom to fuel such a founding, and that it is better to think of ourselves as transitioning gradually into an era of post-republic American statecraft and policymaking. I do not pretend to know.

I am now going to write about a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it.
The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be halfway satisfyingly resolved within a day. I am not saying this incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.” If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally. I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.

III.

Here are the facts as I understand them: during the Biden Administration, the AI company Anthropic negotiated a deal with the Department of Defense (now known as the Department of War, hereafter referred to as DoW) for the use of the AI system Claude in classified contexts. That deal was expanded by the Trump Administration in July 2025 (full disclosure: I worked in the Trump Administration at that time, though did not work on this deal). Other language models are available in unclassified settings, but until very recently, only Claude could be used for classified work, which is where the things that involve intelligence gathering, active combat operations, and the like occur.

The deal, first negotiated between the Biden team and Anthropic—and it is worth noting here that several of the core architects of Biden’s AI policy joined Anthropic immediately after Biden’s term ended—included two usage restrictions. First, Claude could not be used for mass surveillance on Americans. Second, Claude could not be used to control lethal autonomous weapons, which are weapons that can identify, track, and kill targets with no human in the loop at any point in the process. When it negotiated the expanded deal, the Trump Administration had the opportunity to review these terms.
It did, and it accepted them.

Trump officials claim to have changed their mind not so much because they want to do mass surveillance on Americans or use autonomous lethal weapons imminently, but because they object altogether to the notion of privately imposed limitations on the military’s use of technology. The Administration’s change of heart on the terms of this deal has caused them to commit to a policy decision intended to harm or even destroy Anthropic, one of the fastest-growing firms in the history of capitalism, and arguably the current world leader in AI, an industry the Administration claims to believe is crucial to our country’s future. But we’ll get to that in due time.

IV.

The Trump Administration has a point: it does not sound right that private corporations can impose limitations on the military’s use of technology. Yet of course, thousands of private corporations do just that. Every transaction of technology between a private firm and the military involves a contract (indeed, the companies that do this are called defense contractors for a reason), and these contracts routinely contain operational use restrictions (“system X cannot be used in countries Y,” a common restriction with telecommunications technology such as Elon Musk’s Starlink), technological limitations (“this fighter jet is only certified for uses in X conditions and use of it outside those conditions is a breach of warranty”), and intellectual-property restrictions (“the contractor owns, and may repurpose and resell, the knowhow and IP associated with X weapon system developed with public funds”).

In some ways, Anthropic’s terms resemble these traditional examples of privately imposed contractual limits on the military’s use of technology.
The company’s position on autonomous lethal weapons, for example, is not one of outright opposition to the use of such weapons but instead a judgment that today’s frontier AI systems are not capable enough to autonomously make decisions about human life or death. This seems similar to the second example above (the limitations on the fighter jet’s use).

The big difference, however, is that Anthropic is essentially using the contractual vehicle to impose what feel less like technical constraints and more like policy constraints on the military. Think of the difference between “this fighter jet is not certified for flight above such-and-such an altitude, and if you fly above that altitude, you’ve breached your warranty,” and “you may not fly this jet above such-and-such an altitude.” It is probably the case that the military should not agree to terms like this, and private firms should not try to set them.

But the Biden Administration did agree to those terms, and so did the Trump Administration, until it changed its mind. That alone should make one thing clear: terms like this are not some ridiculous violation of the norms of defense contracting. Anyone attempting to convince you otherwise is misinformed or lying. It is that simple.

There is no law that says “contractual terms between the military and the private sector can have technical limitations, but not policy limitations,” in part because the line between those things is awfully hard to draw in timeless and universally applicable words (i.e., in a statute). The contract was not illegal, just perhaps unwise, and even that probably only in retrospect. Note that this is true even if you agree with the underlying substance of the limitations. You can support restrictions on mass domestic surveillance and lethal autonomous weapons, but disagree that a defense contract is the optimal vehicle to achieve that policy outcome.
The way you achieve new policy outcomes, under the usual rules of our republic, is to pass a law.

Except the notion of “passing a law” is increasingly a joke in contemporary America. If you are serious about the outcome in question, “passing a law” is no longer Plan A; the dynamic is more like “well of course, one day, we’ll get a law passed, but since we actually care about doing this sometime soon, as opposed to in 15 years, we’ll accomplish our objective through [some other procedure or legal vehicle].” With this, governance has become more and more informal and ad hoc, power more dependent on the executive (whose incentive is to jam every goal he has through his existing power in as little time as possible, since he only has the length of his term guaranteed to him), and the policy vehicles in question more and more unsuited to the circumstances of their deployment, or the objectives they are being deployed to accomplish.

There are two concerns that the Trump Administration says caused it to change its mind: number one, that Anthropic may impose these policy restrictions on them, by, say, pulling Claude from military use during active military operations. Number two, that these policy restrictions would be imposed by Anthropic in its capacity as a subcontractor for other DoW contractors. In other words, DoW could come to rely upon some other company’s technology, which is in turn enabled by Claude and governed by the same terms of use that restrict domestic mass surveillance and autonomous lethal weapons (or, in the DoW’s mind, arbitrary new restrictions Anthropic could add at any time).
Add to this the reality that the Trump Administration perceives Anthropic to be its political enemy (they are probably right about this), and you have a situation in which the military suddenly realizes it is building reliance upon a firm it does not trust.

The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable. They could also have dealt with the above-mentioned subcontractor problem using a variety of tools, such as:

- Issuing guidance advising contractors to avoid agreeing to terms with subcontractors that constitute policy/operational constraints as opposed to technical or IP constraints;
- A new DFARS (Defense Federal Acquisition Regulation Supplement) clause pertaining specifically to the procurement of AI systems in classified settings that prevents primes both from imposing such constraints directly and from accepting such constraints from their subcontractors, along with a procedure for requiring subcontractors with non-compliant terms to waive such terms within a prescribed time period.

These are the least-restrictive means to accomplishing the end in question. If Anthropic refused to compromise on its red lines for the military’s use of AI, the execution of these policies would mean that Anthropic would be restricted from business with DoW or any of its contractors in those contractors’ fulfillment of their classified DoW work.

But this is not what DoW did. Instead, DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk.
This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.

War Secretary Pete Hegseth has gone even further, saying he would prevent all military contractors from having “any commercial relations” with Anthropic. He almost surely lacks this power, but a plain reading of this would suggest that Anthropic would not be able to use any cloud computing nor purchase chips of its own (since all relevant companies do business with the military), and that several of Anthropic’s largest investors (Nvidia, Google, and Amazon) would be forced to divest. Essentially, the United States Secretary of War announced his intention to commit corporate murder. The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business.

This strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property. Suppose, for example, that the military approached Google and said “we would like to purchase individualized worldwide Google search data to do with whatever we want, and if you object, we will designate you a supply chain risk.” I don’t think they are going to do that, but there is no difference in principle between this and the message DoW is sending. There is no such thing as private property. If we need to use it for national security, we simply will.
The government won’t quite “steal” it from you—they’ll compensate you—but you cannot set the terms, and you cannot simply exit from the transaction, lest you be deemed a “supply chain risk,” not to mention face the other litany of policy obstacles the government can throw at you.

This threat will now hover over anyone who does business with the government, not just in the sense that you may be deemed a supply chain risk but also in the sense that any piece of technology you use could be as well. Though Chinese AI providers like DeepSeek have not been labeled supply chain risks (yes, really; this government says Anthropic, an American company whose services it used in military strikes as recently as this past weekend, is more of a threat than a Chinese firm linked to the Chinese military), that implicit threat was always there.

No entity with meaningful ties to government business would use DeepSeek, simply because the regulatory risk was too high. Now that the government has applied this regulation to an American company, the regulatory risk simply exists for all software. In a sense, DeepSeek is now somewhat less risky to use (since it’s almost as risky from a regulatory perspective as any American AI), and American AI is profoundly riskier than it was last week. This, combined with the broader political risk the government has created, will increase the cost of capital for the AI industry. Put more simply, this will mean less AI infrastructure and associated energy generation capacity.

Stepping back even further, this could end up making AI less viable as a profitable industry. If corporations and foreign governments just cannot trust what the U.S. government might do next with the frontier AI companies, it means they cannot rely on that U.S. AI at all.
Abroad, this will only increase the mostly pointless drive to develop home-grown models within Middle Powers (which I covered last week), and we can probably declare the American AI Exports Program (which I worked on while in the Trump Administration) dead on arrival.

The only thing that would alleviate these self-imposed consequences is if we are really living through a rapid “takeoff” to transformative AI. There is some chance, in that world, that the capabilities of the leading American AI systems are just too significant for corporations or governments to pass up, and that the regulatory risk is worth it. This is the world I think we live in, it is worth noting. But consider the following:

- Even if I am right that we live in the “rapid capabilities growth” world, it will still be the case that the adoption of U.S. AI will be seen as especially risky—a vulnerability to be corrected once viable alternatives are available;
- The Trump Administration does not think we live in that world, and instead thinks that AI capabilities began to plateau around GPT-5 last summer. Thus, on the logic of the Trump Administration—where AI is a “normal” technology—this was an especially bad move that we did not have the leverage to pull off, since AI is about to become a commodity.
- If we do live in that world, on the other hand, the Trump Administration just cast itself as the enemy of the industry that is about to birth the most powerful technology ever conceived—as well as an enemy of the technology itself.

In short, I can see only downsides to the Trump Administration’s decision to designate Anthropic a supply chain risk, particularly considering the far less costly policy alternatives it could have employed.
One gets the sense that the people making these decisions at DoW are not acting with strategic clarity nor any respect for the basic principles of the American republic—not to mention in stark contrast to President Trump’s own stated vision of letting AI thrive in America.

V.

With each passing presidential administration, American policymaking becomes yet more unpredictable, thuggish, arbitrary, and capricious—a gradual descent into madness. It is hard to know at what point ordered liberty itself simply evaporates and we fall into the purely tribal world. Even if Secretary Hegseth backs down and narrows his extremely broad threat against Anthropic, great damage has been done. Even in the narrowest supply-chain risk designation, the government has still said that they will treat you like a foreign adversary—indeed, they will treat you in some ways worse than a foreign adversary—simply for refusing to capitulate to their terms of business. Simply for having different ideas, expressing those ideas in speech, and actualizing that speech in decisions about how to deploy and not deploy one’s property. Each of these things is fundamental to our republic, and each was assaulted—not anything like for the first time but nonetheless in novel ways—by the Department of War last week. Most corporations, political actors, and others will have to operate under the assumption that the logic of the tribe will now reign.

There is something deeper about the damage done by the government, too. The Anthropic-DoW skirmish is the first major public debate that is truly about where the proper locus of control over frontier AI should be. Our public institutions behaved erratically, maliciously, and without strategic clarity. Our political leaders conveyed little understanding of their own actions, to say nothing of the technology and its stakes. They got off on an extraordinarily bad footing, and it is hard to imagine them ever recovering, because they do not seem to care about improvement.
They are a cartoonish depiction of the American political elite, but sadly their failings
※ Out of consideration for copyright, only an excerpt of the essay's opening is quoted here. Please see the original article for the rest.