Academia Needs to Wake Up to AI
The rapid advance of AI is transforming social science research and academic publishing.
AI makes it possible to produce conventional papers at low cost and could overwhelm the peer-review system, leaving academia unable to sustain the status quo.
Many scholars still resist adopting AI, but that resistance may partly reflect a desire to protect their own status.
Going forward, the priorities should be AI security and verification, and a redefinition of the human role in the research process.
An expert is sounding the alarm that the evolution of AI is fundamentally reshaping academia. Writing as a social science researcher, the author analyzes how AI has affected his own research process, and argues that the existing academic paper format and the publishing system itself are in an "unsustainable state," unable to keep pace with current AI progress.
AI Is Raising the Quality of Research Output
AI is said to be delivering strong results not just in data processing and code execution but in core parts of research, such as sophisticated literature reviews and the recombination of existing ideas. Some experts report that, from prompts alone, AI can generate papers of a quality publishable in top journals in a short time, which promises to cut the cost of producing a paper dramatically. The author adds, however, that using AI effectively requires the "skill" of specifying one's research workflow in detail.
The Limits of the Traditional Academic Paper Format
The advance of AI suggests that the traditional "30-page academic paper" format may itself be obsolete. As AI takes over literature reviews and assists with peer review, the paper's role is expected to shift toward the "real scientific question" and the "pre-analysis plan." The commercial journal system is also at risk: one estimate suggests that as papers become fast and cheap to produce, submissions will explode beyond what the current peer-review system can process, and the revenue model may collapse.
Criticism of Academia's Double Standard
While acknowledging concerns about AI misinformation (hallucinations), the author criticizes academia for applying a "double standard" to AI. Human-written papers are routinely published with data errors and non-replicable findings, and the author argues that this low quality is serious even compared with the risk of AI miscitations. The skepticism applied to AI outputs, he contends, should be applied to human-produced research as well.
Summary
AI offers a major opportunity by shifting the bottleneck in junior researchers' careers from "execution" to "original thinking." At the same time, the author strongly urges that academia as a whole recognize this technological transformation, and that fundamentally rethinking the definition of a paper and the criteria for evaluating research is an urgent task.
Excerpt from the original article (English)
Update: I’ve written a Part II with reflections on the responses to this piece and a Part III with ideas about what academics can do next.

Please like, share, comment, and subscribe. It helps grow the newsletter without a financial contribution on your part. Thank you for reading.

This piece is inspired by a wave of recent AI-related writing from people I respect: Dan Williams, Alex Imas, Ben Ansell, Tibor Rutar, scott cunningham, Kevin Munger, Hollis Robbins, Claude (yes!) Blattman, Kevin Bryan, Andy Hall, Kelsey Piper, Sean Westwood, and many others. So here, I’m continuing the tradition of writing the takes that are upsetting but needed.

I study immigration and public opinion, not AI. But I’ve spent the last few months watching AI transform my own research workflow, and I have some things to say to my colleagues. For the first time in my life, I genuinely do not know what academia will look like in five years.[1] Even if progress stalls completely and we are stuck with the current models forever, the changes already in motion will transform my field of academic research and publishing beyond recognition. The status quo is unsustainable. It may take time, because academia is the most dispositionally conservative institution on the planet. But it will change.

Here are ten theses for my colleagues, most of whom still seem oblivious.

1. AI can already do social science research better than most professors.

This is not hyperbole. Tibor Rutar recently described generating a full research paper using AI prompts alone, producing work he considers publishable in first-quartile journals. Paul Novosad reportedly accomplished similar results in 2-3 hours. Yascha Mounk claims that Claude can produce a publishable-quality political theory paper in under two hours with minimal feedback. Scott Cunningham estimates that manuscript creation now basically costs roughly $100 in editing services plus a Claude subscription.

And this goes well beyond crunching numbers or running pre-existing Stata code. Yes, what I’m claiming here is that LLMs produce excellent literature reviews and generate fruitful recombinations of existing ideas. Let’s be honest: academics haven’t been particularly great at writing either, and AI can make your ideas far more accessible to the people who actually need them. But effective use requires investment: Aziz Sunderji describes building a ~200-line instruction file encoding his research workflow, judgment calls, and behavioral guardrails. This takes a skill.

2. The academic paper is a dead format walking.

Sean Westwood put it bluntly: “AI does lit reviews better. AI will do peer review. Users will skim AI summaries. The real science is the question, the pre-analysis plan, and the analysis. The 30-page paper is just vestigial wrapping paper.” He got roasted on Bluesky for saying this. But he’s absolutely right, and the backlash proves his point: the field can’t even discuss the obvious without circling the wagons. Arthur Spirling is also right that we need conversations about what a paper is, what “review” means, and the correct role of generative AI. Perhaps it’d be a good thing if AI finally pushes us to move on from a system where universities spend taxpayer money to pay commercial publishers to very slowly produce paywalled PDFs[2] with outdated results of publicly funded research.

3. The commercial journal system may not survive this.

Cunningham’s latest piece models the math. If manuscript creation drops to a couple of hours and ~$100, submissions could increase fivefold while journal slots stay fixed. Desk rejection rates would go from ~50% to ~90%. The revenue model collapses. Peer review, already strained, becomes impossible at scale. Kevin Munger makes the case for submission fees, paid reviewers, post-publication review, and LLM-assisted screening. The question is whether journals adapt or get bypassed. My bet is most get bypassed.

4. Academics hold AI to absurd double standards.

Hallucinating content is concerning, and researchers should always verify their sources. But just like with self-driving cars, we need a reference point: human writers have been superficially citing papers based on the abstract for ages. Journals already publish studies with data errors, p-hacked results, and non-replicable findings at alarming rates. One estimate puts the share of genuinely useful published papers at around 4%. An LLM that occasionally hallucinates a citation is competing against a system that routinely produces junk science dressed in enough jargon to pass review. If we applied the same skepticism to human-produced research that we apply to AI outputs, we’d shut down half the journals tomorrow.

5. Junior scholars face the biggest disruption and opportunity.

This is probably bad news for junior academics trying to advance their careers in the middle of this shake-up. Jason Fletcher argues that the strategic logic of tenure hasn’t changed—survive the gate first—but AI fundamentally alters how you get there. Teaching prep costs drop. Data cleaning and debugging get delegated to AI. The bottleneck shifts from execution to verification and original thinking. Gauti Eggertsson observes that the returns on conceptual thinking and original ideas are now relatively higher compared to technical grunt work. A junior scholar with good ideas and Claude Code can now produce research at a pace that would have required a full lab a few years ago. But so can everyone else, and the evaluation criteria haven’t caught up.[3]

6. I don’t envision a research assistant role in my workflow anymore.

I still think it’s invaluable to have mentees and co-authors. But their role is changing fast. I’m not going to hire someone to clean data, run regressions, or draft literature reviews when AI does all of it faster and at negligible cost. What I want from collaborators is original thinking, domain expertise, and intellectual challenge. This is a genuine loss for the traditional apprenticeship model, and I don’t have a clean answer for how to replace it. Fletcher’s complementary framework—AI produces initial analyses, human researchers independently replicate from scratch—points in a promising direction. But it’s clear that the trend for increased co-authorship in social sciences, for instance, may reverse very soon.

7. Much of the opposition to AI is status protection dressed up as principle.

I recently wondered on Twitter how much of the distaste for AI telltale signs is basically a new version of grammar policing—people enforcing status markers through language gatekeeping. Kevin Bryan said it plainly: “I get the desire for artisanal, hand-crafted research, with the matrices hand-inverted. But our job is to move the frontier of knowledge, not self-actualization.”

Dan Williams has written persuasively about how highbrow misinformation flourishes inside institutions where nearly everyone shares the same biases. I think something similar is happening with AI denial. Many academics—especially those concentrated on Bluesky[4] and, I suspect, those who are completely offline—are in complete denial about what’s already happening. Chris Blattman went from a Claude Code skeptic to building an entire AI workflow toolkit in a matter of weeks. Robert Wright recently hosted Alex Hanna and Emily Bender arguing that LLMs are useless. Smart people claiming that a tool millions find useful is fundamentally broken. This smug attitude is exactly why populists are winning, and it applies to AI denial just as much as to politics.

8. The productive worries are about security and verification.

My challenge for anyone who dismisses AI capabilities: spend one week alone in a room with Claude Code or Codex. Not the chatbot—the agent. Most people still think of AI as a search engine that sometimes makes stuff up. They have no idea what agentic AI systems can do.

Focusing on whether LLMs “truly understand” or produce “real” knowledge is a philosophical indulgence that takes away from the things worth worrying about. How do we verify AI-generated claims at scale? How do we prevent p-hacking? (Andy Hall’s team found that AI agents are surprisingly resistant to sycophantic p-hacking—but can be jailbroken with modest effort.) How do we protect sensitive data when AI tools access institutional repositories? How do we ensure that online survey respondents are real? These are solvable engineering and institutional design problems, the kind that Hollis Robbins calls “last mile” challenges—things that live in the edges of expertise, in the contextual and the unsettled. Debating whether Claude is “really” intelligent is like debating whether a calculator “really” does math while your competitor finishes the problem set.

9. We are about to get much better science.

There are some silver linings, however. On my own turf, immigration: we can now automatically catalogue policy and opinion changes across countries and suggest fixes in real time. We can build algorithms to better match refugees and migrants to destination communities. We can make sure research and evidence are accessible to policymakers and voters who never read an academic journal.

More concretely, Yamil Velez and Patrick Liu have been building AI-generated experimental designs since 2022; tailored Qualtrics experiments can now be created in 15 minutes via prompts. Velez’s work points to something even bigger: AI doesn’t just speed up existing survey methods, it makes entirely new forms of interactive, adaptive surveys possible—designs that would have been impractical to program manually. David Yanagizawa-Drott has taken things further still, launching a project to produce 1,000 economics papers with AI—not as a stunt, but as a stress test of what happens when the cost of generating research drops to near zero.

Non-native English speakers also stand to benefit enormously: researchers in Cairo, São Paulo, and Jakarta can now produce prose that reads as well as anything coming out of Cambridge or Stanford. Eggertsson suspects AI will erode the monopoly that top US schools have long enjoyed, since their advantage rested partly on knowledge transmission that is now nearly instantaneous. If you care about democratizing science, this matters more than most of the things universities spend money on.

10. Apart from the doomsday scenarios, AI is genuinely exciting.

Yes, there are real risks. Job displacement for some academics (and most other folks) is not hypothetical. The alignment and safety concerns are genuine, even if unlikely to play out in the worst-case scenarios. I take those seriously and I fear our uncertain future somewhat.

But here’s what I keep coming back to: AI is useful and fun. My sense is the “agentic AI is making us dumb” crowd is probably right about some things. But I’ve also noticed my procrastination bar going up. Instead of doomscrolling, I now slack off by trying side projects in Claude Code. May be the most productive form of non-work there is. I’ve been vibecoding a few pretty exciting projects over the past few weeks. Stay tuned.

The wise Yiqing Xu advises that we should all pause for a month to reassess and redesign our workflow, then resume. I agree. The payoff will be large. Lock yourself in a room with Claude Code and see what happens.

P.S. This post was entirely generated and posted on Substack by agentic AI using my new Claude Code (Opus 4.6) workflow. Make of that what you will.

P.P.S. That is, entirely generated based on my artisanal, hand-crafted human social media posts and thoughts on the topic. So who wrote it, really? You tell me.

Thanks for reading Popular by Design! This post is public so feel free to share it.

[1] Matthew Yglesias recently described how AI uncertainty has given him writer’s block, because every medium-run policy analysis now collapses into arguments about AI’s trajectory. I recognize the feeling.

[2] Of course, now we know that we need to use Markdown, not PDF.

[3] On a related note: I’m currently hiring a postdoc at Notre Dame. The ad explicitly asks for interest in agentic AI tools. I suspect this will become standard in hiring criteria within a few years.
※ Out of consideration for copyright, only an excerpt of the original is reproduced here. Please see the original article for the full piece.