The Indispensable Skill
Generative AI is transforming software development, but its impact varies with the practitioner's experience.
Experienced developers bring accumulated judgment: they can evaluate AI output, monitor its reasoning in real time, and correct course when necessary.
Inexperienced students and junior developers, by contrast, may stunt the growth of their own judgment if they over-rely on AI.
The paper argues that in an AI-augmented world, discernment is the irreducible human skill: the ability to recognize quality, detect technical debt, and verify outputs.
Experienced developers evaluate as they go and iterate in dialogue with the AI, while inexperienced developers risk falling into "training deprivation": depending on AI prevents them from ever developing judgment.
The article argues that the key skill in software development is no longer generating code but evaluating AI-generated code, and identifying and correcting errors before they turn into technical debt.
Opening of the original (English; first 3 paragraphs only)
Generative AI is transforming software development, but its impact differs dramatically between experienced practitioners and newcomers. Veteran developers bring decades of accumulated judgment that allows them to evaluate AI outputs, monitor reasoning in real time, and apply corrective signals when needed. Students and junior developers lack this judgment, and worse, relying on AI may prevent them from ever developing it. This paper argues that discernment is the irreducible human skill in an AI-augmented world: the ability to recognize quality, detect technical debt, and verify outputs before passing them downstream. We do not claim to have the answers to how this skill should be taught. We do, however, believe we are asking the right questions.

The arrival of generative AI tools for software development has created an unusual situation. These tools are remarkably capable at producing code, yet their effective use requires something they cannot provide: human judgment.

When an experienced developer uses generative AI, they are not simply accepting outputs. They are evaluating. They watch the AI's reasoning unfold and can tell when it's heading in a wrong direction. They interrupt with corrective signals. They engage in iterative dialogue, adjusting plans until they are confident the result will be correct. All of this requires the kind of judgment that only comes from years, sometimes decades, of practice.

Young developers and students face a different reality. They are being handed these powerful tools without the accumulated experience needed to wield them effectively. They cannot easily distinguish good output from bad, cannot spot when technical debt is being introduced, and do not know when or how to intervene.

This creates a troubling asymmetry, and it is the subject of this paper.

An experienced developer working with generative AI operates differently than a novice.
Consider the workflow:

- They evaluate outputs as they arrive, recognizing patterns that suggest correctness or error
- They monitor the AI's "thinking" in real time, watching the reasoning process unfold
- When they see the AI heading down a problematic path, they interrupt immediately
- They apply corrective signals (additional context, constraints, or redirections) that improve results
- They use planning modes to preview what the AI intends to do before it does it
- They engage in interactive dialogue, adjusting plans based on knowledge, wisdom, and experience
- They do not proceed until they are confident the plan will execute correctly

None of this is possible without judgment. The experienced practitioner knows what good code looks like, understands the implications of architectural decisions, and can anticipate problems before they manifest. They have seen enough failures to recognize the early warning signs.

This judgment was not acquired quickly. It came from years of writing code, reviewing code, debugging code, and living with the consequences of decisions both good and bad.

Students today face a genuine dilemma. They need to learn AI tools to be effective in their field, to be competitive in the job market, to advance in their careers. Ignoring these tools is not a viable strategy.

But there is a trap. If students depend on AI to do their work, they will never develop the judgment needed to evaluate what the AI produces. They are being asked to use a tool that requires judgment while simultaneously being deprived of the opportunity to develop that judgment.

This is the training deprivation problem. When you let AI write your code, you are not training your brain to recognize quality. You are not developing the pattern recognition that comes from struggling with problems yourself. You are not building the mental models that allow experienced developers to spot errors at a glance.

It is not necessary that students be able to write amazing code entirely on their own.
But they absolutely must be able to recognize it. They must be able to tell good from bad, correct from incorrect, elegant from hacky. Without this ability, they cannot effectively use the tools that are supposed to make them more productive.

Consider a common scenario: a series of generative AI prompts that accumulate source code into a codebase. Each prompt produces output. Each output gets integrated. The codebase grows.

If you lack the discriminatory ability to recognize when technical debt is being introduced, that debt will accumulate at every step. The AI does not know it is adding debt. It is optimizing for producing something that appears to work, something that makes you happy in the moment. It is not optimizing for long-term maintainability, architectural coherence, or future flexibility.

The experienced practitioner notices when something is wrong. They prompt the AI to make corrections. They refuse to accept output that will cause problems later. They maintain quality through active judgment.

The inexperienced user accepts everything. The debt compounds.

Here lies a fundamental asymmetry: AI output is cheap to produce and expensive to verify. Anyone can generate a thousand text files filled with code. Verifying a thousand text files filled with code is extraordinarily difficult. It requires the very judgment that comes from experience.

This asymmetry defines the challenge. Production scales effortlessly. Verification does not.

Not all tasks require human judgment. Some tasks are simply tedious, repetitive, and boring. These are precisely what AI should handle.

Consider the mechanical aspects of software development:

- Implementing standard containers with all the expected boilerplate
- Writing allocators with proper propagation traits
- Generating repetitive code that follows well-established patterns
- Producing documentation for straightforward APIs

These are not tasks that benefit from human creativity or judgment.
They are tasks that drain time and energy without providing learning opportunities or requiring insight. Humans should not have to do them, and AI can do them perfectly well.

AI accelerates the ability of skilled people to produce good outputs. It is a force multiplier. But a force multiplier is only valuable when applied to something worth multiplying. Without judgment, there is nothing to multiply.

Strip away everything that AI can automate. Remove the boring tasks, the repetitive tasks, the mechanical tasks. What remains?

Human judgment.

The primary skill in an AI-augmented world is not the ability to produce code. AI will produce code endlessly, tirelessly, without complaint. It will generate mountains of output at a pace no human can match.

The primary skill is the ability to evaluate what AI produces. To recognize quality. To detect errors. To identify technical debt before it accumulates. To know when something is right and when something is wrong.

This is discernment. This is the irreducible human skill.

AI systems are trained to be useful and to make humans happy. They want to please. This means they will produce output that looks good, that seems helpful, that gives you what you asked for in the moment. Whether that output serves your long-term interests is not their concern.

Someone has to make that determination. Someone has to exercise judgment. That someone is you.

Our educational institutions are not prepared for this challenge.

The problems are systemic:

- Curricula are built around textbooks and established knowledge, not emerging technologies
- Teachers with tenure face no pressure to innovate or adapt
- Financial distortions in tuition create misaligned incentives
- Schools are not rewarded for producing graduates who can actually perform
- There is no established pedagogy for teaching AI output evaluation

There is not yet a class on how to critically evaluate generative AI outputs.
There is not yet a curriculum for developing the judgment needed to work effectively with these tools. People are still figuring this out in real time.

Students of today are not learning what needs to be taught. This is not primarily their fault. The institutions that are supposed to prepare them have not yet adapted to the new reality.

This paper does not claim to have the answers. We do not know precisely how to teach judgment. We do not have a curriculum to offer. We do not have a pedagogical framework that solves the problem.

What we do have are questions. We believe they are the right questions:

- What is the pedagogical approach for empowering students to leverage AI technology effectively?
- How do we teach students to read AI output critically, to recognize "slop" when they see it?
- How do we develop the discriminatory ability to distinguish good from bad?
- How do we instill the ethical responsibility to verify outputs before passing them downstream?
- How do we ensure students develop judgment without depriving them of the tools they need to be competitive?

These questions do not have easy answers. But they must be asked, and they must be addressed.

The generative AI revolution in software development has created a new divide. On one side are experienced practitioners who can leverage these tools effectively because they bring judgment to the table. On the other side are students and newcomers who have the tools but lack the judgment to use them well.

The irreducible human skill is discernment. The ability to evaluate, to verify, to distinguish good from bad. AI can produce endlessly. Someone must decide what is worth keeping.

Students who want to learn, who want to be effective, who want to rise in their field must understand this. They cannot simply accept AI outputs uncritically. They must cultivate judgment even as they learn to use the tools. They must take on the burden of verification.

If you generate code, you must be able to verify it. This is not optional.
This is an ethical responsibility.

Our educational institutions have not yet risen to meet this challenge. Until they do, the burden falls on students themselves to recognize what is at stake and to pursue judgment deliberately, even when the tools make it easy to skip that step.

The answers will come in time. For now, we must at least be asking the right questions.
※ For copyright reasons, only the first 3 paragraphs are quoted. Please see the original article for the full text.