What do vibe coding and coding agents mean for learning to program? An open letter to INST126 students, Spring 2026
Dear INST126 students,
2025 was a wild year for software engineers.
This was the year that “vibe coding” was coined as a term. LLMs got good enough that some people started experimenting with using them to “one-shot” substantial coding projects. This was interesting, but not yet transformative in practice. For professional work, the basic pattern of LLM adoption seemed to follow the same trajectory as IDEs: mostly using LLMs to speed up boilerplate or well-defined tasks via “autocomplete” in coding editors like VSCode and Cursor. I think of this as autocomplete on steroids.
That changed at the end of 2025. I started noticing this on social media in ~Nov/Dec and also experienced it myself: many devs took advantage of some downtime over the winter to experiment with Claude Code — and other coding agent systems like Codex and Antigravity — on their dormant side projects, and were blown away by what they were able to accomplish. Now I am starting to see prominent cases of software engineers writing proportionally less of their own code for production-grade software, delegating more and more to coding agents that they manage. This shift isn’t industry-wide yet, but it’s now a significant trend. Even Linus Torvalds (the famously cranky creator and lead maintainer of the Linux operating system) is getting into it!! If you’re interested in a grounded, balanced, and up-to-date view of these industry trends, I recommend following Simon Willison’s blog, Gergely Orosz’s “Pragmatic Engineer” newsletter1, and Geoffrey Litt. If you’re curious about why this shift happened, you can jump down to the Appendix to this letter.
What do “vibe coding” and “coding agents” mean for learning to program?
Ok enough preamble, let’s move on to the real question that concerns all of us this semester: What does all this AI stuff mean for me, the INST126 student? Is it still worth my while to take this class? What should I be learning?
The short answer is that we’ll still be learning to write code, but the class will have proportionally more emphasis on engineering practices of problem specification, debugging, and testing. For example, I’ll be asking you to practice writing specifications, updating them, using them to develop tests, and so on. I’ll expose you to testing practices in your problem sets.
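To make that concrete, here’s a minimal sketch (in Python, using the pytest convention of plain assert-based test functions) of what I mean by turning a plain-language spec into tests. The function and the rules here are made up for illustration; they’re not from an actual problem set.

```python
# Spec (in plain language, written *before* the code):
#   clean_score(raw) takes a raw exam score and returns a number between 0 and 100.
#   - If raw is below 0, return 0.
#   - If raw is above 100, return 100.
#   - Otherwise, return raw unchanged.

def clean_score(raw):
    """Clamp a raw exam score to the range 0-100, per the spec above."""
    if raw < 0:
        return 0
    if raw > 100:
        return 100
    return raw

# Tests: each line of the spec becomes at least one check we can run automatically.
def test_below_range_is_clamped_to_zero():
    assert clean_score(-5) == 0

def test_above_range_is_clamped_to_hundred():
    assert clean_score(104) == 100

def test_in_range_is_unchanged():
    assert clean_score(88) == 88
```

If you save something like this as test_clean_score.py and run pytest, it will tell you which checks pass and which fail; that loop of spec, code, and automated checks is the habit we’re after.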
You’ll also notice a shift in emphasis away from syntax: we want to emphasize internalizing the concepts (data structures, functional decomposition, control structures, etc.). This is part of the motivation for our in-class exams, which emphasize not memorization but reasoning with/about code solutions to problems.
There are two broad reasons why I’m taking this route for our class this semester.
Computational thinking skills (still) matter (a LOT!)
First, it turns out that a huge chunk of the skills and practices needed to make coding agents work well sound a lot like standard software engineering expertise: great specifications, careful problem decomposition, test-driven development, and so on. Engineers who get a lot out of Claude Code are often senior/experienced engineers who are used to reviewing code written by others they only sort of trust (including themselves!), so swapping in coding agents isn’t actually that big of a change.2 This means that even as the specific value of being able to write code is dropping in software engineering practice, other core computational thinking skills are increasing in value3: decomposing a problem into clear requirements and specifications, defining clear specifications of desired outcomes in terms of tests, and understanding an existing codebase and debugging it systematically. These were always best practices, but engineers were often able to skip them and be kind of ok as long as they were shipping code (since they were the only ones who could do it). Now, they’re the skills that separate those who can solve problems effectively in practice from those who can’t (and are thus worth hiring/promoting)4.
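To give one small, made-up illustration of what decomposition looks like in Python (a toy gradebook task, not anything from our assignments): instead of one big blob of code, you break the problem into small functions that each do one nameable thing and can be specified and checked on their own.

```python
# A made-up task: compute a course grade from homework and exam scores.
# Decomposed into small pieces, each easy to specify, test, and (if you like)
# hand off to a coding agent one at a time.

def average(scores):
    """Return the mean of a non-empty list of numbers."""
    return sum(scores) / len(scores)

def weighted_grade(homework_avg, exam_avg, homework_weight=0.4):
    """Combine homework and exam averages using the given homework weight."""
    return homework_weight * homework_avg + (1 - homework_weight) * exam_avg

def letter_grade(score):
    """Map a 0-100 score to a letter grade."""
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    else:
        return "F"

def course_grade(homework_scores, exam_scores):
    """The top-level function just glues the small pieces together."""
    return letter_grade(weighted_grade(average(homework_scores), average(exam_scores)))

print(course_grade([90, 100, 85], [80, 88]))  # -> "B"
```

Each of those little functions is something you could hand to a coding agent (or a classmate) with a clear spec, and then verify on its own.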
If LLM coding agents can produce code (given a good spec) way faster than a human, the value is now coming not from the writing of the code, but from the engineering. This is a big reason why I see software engineering transforming, but not dying, as a profession5. Perhaps one way it’ll change is that folks who enter the industry will get to do more creative, senior engineering work faster? That’s for us as a society to work out, but I’m pretty comfortable betting this is a plausible way things could go, and worth our while to prepare for. Happily for me, a big chunk of these engineering skills are “computational thinking” skills that we already have in our syllabus, so I’m going to lean into them more.
You (probably) still need to have written some code yourself to do programming work in mission-critical domains
Second, even though coding agents are really good, they still require pretty tight steering if you care about correctness, and the jury is still very much out on whether you can develop proper steering expertise for mission-critical applications without ever having written any task-relevant code. There’s a level of tacit expertise that great engineers have in knowing when the agent is bullshitting you, and in giving really good guidance. As one writer memorably put it, “linus can vibe code. you probably can’t.” Plus, if correctness is important, you’re often writing your test suite yourself (or at least very closely supervising it, definitely not “vibe coding” it).
How do you get all this expertise? And can you get it without at least some critical mass of time writing code, and experiencing the bumps and bruises and errors? I think that’s actually kind of an open research question now! Many universities now offer LLM-forward courses, but many of them require that you’ve already taken a basic CS / programming class6. And the research I’m tracking is still inconclusive on whether you can get this level of expertise to guide coding agents without having written any code in the first place7. Some places like UCSD are making a bet that you can do this8. You’re welcome to check out their courses to see how it feels for you. I could be wrong, and would love to hear about it! But here, we’ll take the bet that it certainly won’t hurt to still learn to write code, at least in INST126, while you don’t yet have pressure to produce a lot of working code for money.
Conclusion
Now, I know not all of you want to be software engineers. But I also know that pretty much all of you will at some point wish to use code to solve an important problem, with a “blast radius” that includes someone else9. Correctness and robustness and maintainability still matter a lot. For many of us whose mission includes building appropriate technology in highly complex, sociotechnical settings with vulnerable populations, correctness matters even more! This is very different from vibe coding a toy project or little program that is just for you and is sandboxed appropriately. By all means, feel free to explore that on your own time, outside of classwork. I’d be delighted to discuss those projects with you, especially as a way for you to explore the intersection of your unique interests and programming! But for most of the use cases we’re concerned with preparing you for, the practices and skills above matter, even if you end up telling a computer what to do in 1s and 0s via a coding agent.
So I hope you take this seriously. It’s honestly a wild time to be alive: there’s a lot of possibility for good, but also serious possibility for harm. Please take advantage of the space you have to learn, insulated at least for now from the pressure to produce. I’m here for you this semester to navigate this with you. We’re in this together, we’ll figure it out, and keep working towards a better, more just world, with the skills and knowledge we’ll hone together.
Sincerely,
Your instructor, Joel Chan
Appendix: What’s behind the vibe shift with coding agents?
The last time I taught INST126 was in Spring 2025. I had written up a document about what LLMs mean for learning to program in ~Fall 2024, and I think much of it is still valid.
But it is missing at least two big developments that are the reason why Claude Code is a thing, and SWE is transforming: 1) reinforcement learning from verifiable rewards (RLVR), and 2) tool use. I’ll try to share some intuition about each of these advances, since they’re pretty cool! If you want to dive a bit deeper, I recommend this review of AI advances in 2025 (RLVR is mentioned in point 1, and tool use in point 4). Simon Willison’s 2025 in review from the POV of a developer is also quite good and covers similar ground.
RLVR
In general, LLMs still work by “hallucinating” (i.e., using a LOT of complicated math to produce a sequence of tokens that is most “likely” to fit the preceding sequence of tokens). LLM architecture by itself has no mechanism for actually verifying that the output sequence is correct (or useful, etc.). Verification falls on the user of the LLM’s outputs, so you need a way to verify the output, and in most professional use cases that means you need to have relevant expertise. LLMs are great at generating answer-shaped things, but you still need to verify that what you got is actually an answer. So an intuition about the fundamental gap between likely and correct is still a good place to start.
But! For some domains, like math and programming, we can actually verify correctness in a straightforward and repeatable way. This allows us to do reinforcement learning to help shape the probabilities in the LLM: reward pathways that produce answer-shaped outputs that are verified to be correct, and prune away pathways that produce answer-shaped outputs that are verified to be incorrect. RLVR has turned out to be a powerful way to shape LLMs’ predictions, to the point where the likelihood of outputs that turn out to be correct can go up dramatically. This doesn’t eliminate the need for verification, but it does increase the “hit rate” of model outputs for math/code a lot.
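Here’s a deliberately toy Python sketch of the “verifiable reward” part of that idea. This is just the scoring step (real RLVR training is far more involved), and it’s my illustration, not any lab’s actual code; the assumption that the candidate defines a function named solution is made up for the example.

```python
# A toy illustration of a "verifiable reward" for code: we can automatically
# check a candidate solution against tests, so correctness becomes a signal
# we can score (and, in real RLVR, use to update the model).

def verifiable_reward(candidate_source, test_cases):
    """Return 1 if the candidate function passes every test case, else 0."""
    namespace = {}
    try:
        exec(candidate_source, namespace)   # define the candidate function
        solution = namespace["solution"]    # assumption: candidate is named `solution`
        for args, expected in test_cases:
            if solution(*args) != expected:
                return 0
    except Exception:
        return 0                            # crashes count as failures too
    return 1

# Two "answer-shaped" candidates for "add two numbers": only one is correct.
correct = "def solution(a, b):\n    return a + b"
plausible_but_wrong = "def solution(a, b):\n    return a - b"
tests = [((2, 3), 5), ((0, 0), 0), ((-1, 4), 3)]

print(verifiable_reward(correct, tests))              # -> 1
print(verifiable_reward(plausible_but_wrong, tests))  # -> 0
```

The key point: unlike an essay, the plausible-but-wrong candidate gets caught automatically, and that automatic signal is what makes reinforcement learning practical here.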
RLVR being a key driver of performance for coding means we should be very cautious about extrapolating the gains in performance for LLMs in RLVR-able domains like coding and programming to other, more open-ended domains like writing or creative problem solving. In fact, there is some quite reasonable theorizing that RLVR can make models worse at creative tasks!
And the fundamental intuition that generalization is still largely unsolved remains solid: if you’re working on a fairly standard sort of project with a well-documented and stable set of architectures and patterns (e.g., a standard React-driven Todos app), you’re far more likely to get useful outputs than if you’re fiddling around at the edges of obscure programming architectures or languages.
Tool use
The second big shift is “tool use”, which basically means allowing LLMs to generate and run commands that can control things on your computer.
For math, this can look like generating code to run formal verification programs, which, in combination with human expert verification, has led to some verified solutions to well-scoped but still open math research problems10. For programming, this can look like using bash to search for appropriate files to add to the context window, using the git command-line interface to do version control, and, perhaps most importantly, verifying program behavior and correctness by running test suites or even checking functionality in a live browser!
Tool use is what powers a system like Claude Code: you can think of it as a “harness” that hooks up an LLM to a bunch of tools (in this case, via your terminal, which is the swiss army knife of doing stuff on your computer).
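Here’s a stripped-down Python sketch of that harness idea. The ask_llm function is a placeholder I made up (a real harness would call a model API), but the loop structure is the core pattern: the model proposes a command, the harness runs it, and the output goes back into the conversation.

```python
# A stripped-down sketch of a tool-use "harness": the model proposes a shell
# command, the harness runs it and feeds the output back, and the loop repeats.
# ask_llm() is a stand-in for a real model API call; real agents like Claude Code
# also add safety checks, permissions, richer tools, and much better prompting.
import subprocess

def ask_llm(conversation):
    """Placeholder for a real LLM call; should return the next command (or 'DONE')."""
    raise NotImplementedError("swap in a real model API here")

def run_agent(task, max_steps=5):
    conversation = [f"Task: {task}. Reply with one shell command, or DONE."]
    for _ in range(max_steps):
        command = ask_llm(conversation)
        if command.strip() == "DONE":
            break
        # Run the proposed command and capture its output (the "tool result").
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        conversation.append(f"$ {command}\n{result.stdout}{result.stderr}")
    return conversation
```

Note that blindly running whatever the model proposes is exactly the kind of “blast radius” risk real harnesses manage carefully, for example by asking you for permission before each command.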
Footnotes
1. I do not recommend trying to follow trends about AI and the future of work on LinkedIn or X (too infested with influencers!! Tough to separate reality from hype). Bluesky is still pretty good for now, feel free to follow me there: I repost stuff sometimes that passes my BS filter, and there is now good thoughtful engagement from humanists, designers, etc. on what this all means for the future of work.
2. See Simon’s discussion of “vibe engineering”. Code velocity is a big potential change that is leading to broader reimagining of what maintenance means, how to ship software, etc. This is a fast-moving space, and I’m tracking some stuff here; you’re welcome to follow and chat me up about it anytime.
3. Some good discussion of this here: https://newsletter.pragmaticengineer.com/p/tdd-ai-agents-and-coding-with-kent (h/t Simon Willison: https://simonwillison.net/2025/Jun/22/kent-beck/)
4. See here: https://bits.logic.inc/p/ai-is-forcing-us-to-write-good-code. The industry at large is also actively trying to figure out what else might be needed to make good on the possibilities of coding agents. I like this piece on testing/verification: https://alperenkeles.com/posts/test-dont-verify/, this piece on emerging convergence around best practices for specification (which look suspiciously like regular specification best practices to me!): https://addyo.substack.com/p/how-to-write-a-good-spec-for-ai-agents?, and LOVE this blog on “regenerative software” as a new paradigm for software engineering: https://aicoding.leaflet.pub/
5. See some discussion of this here: https://github.blog/news-insights/octoverse/the-new-identity-of-a-developer-what-changes-and-what-doesnt-in-the-ai-era/ and here: https://newsletter.pragmaticengineer.com/p/when-ai-writes-almost-all-code-what?ref=blog.pragmaticengineer.com
6. For instance, Stanford’s new AI-forward software engineering course “assumes you have foundational software engineering knowledge and builds upon it with contemporary practices.”
7. Accessible summary here: https://www.teachcswithai.org/courses/research. There are also some beautiful arguments about other reasons to still write code, for expression/discovery/as a way of thinking: https://medium.com/@markguzdial/learning-to-program-matters-for-liberal-arts-and-sciences-students-in-the-age-of-ai-bcfd3d2a7c6e
8. You can browse a curated list here: https://www.teachcswithai.org/courses
9. See discussion of blast radius here, and how it relates to the appropriateness of AI-generated code: https://aicoding.leaflet.pub/3maob46kbz22v
10. https://github.com/teorth/erdosproblems/wiki/AI-contributions-to-Erd%C5%91s-problems (tracked by Fields Medalist Terence Tao!)