Are you ready for the Advent of Agentic Coding?

📅 Monday, Dec 29, 2025

📖 Reading time: 8 min


We stand at a pivotal moment in software engineering history. The transition from manual coding to Agentic Coding, where developers direct AI agents to build software, is already underway.

The real question is no longer if this shift will happen, but how quickly and whether we are prepared for it.

Trying to answer those questions led me to approach this year’s Advent of Code (AoC) differently: not just as a way to collect stars, but as a training ground, because I realized that if I don’t master this new workflow, I risk being left behind.

The industry is moving fast, and the definition of a “competent engineer” is evolving: from someone who knows programming languages and implementation techniques to someone who can effectively direct, review, and integrate the work of AI tools such as coding agents. I don’t want to miss this train.

[Image: Advent of Code 2025, Agentic Coding]

What is Agentic Coding?

Unlike traditional AI coding assistants that focus on autocomplete or line-by-line suggestions, Agentic Coding introduces autonomous AI agents that take an active, goal-driven role in the software development lifecycle. These systems can autonomously plan, execute, test, and refine complex tasks with minimal human intervention.

Some key characteristics of Agentic Coding include:

  • Autonomy: Agents independently decide how to approach a task and iterate on solutions based on feedback or test failures.
  • Goal-Driven Work: You provide a high-level goal (e.g., “Analyze requirements, find algorithms to solve the task, and suggest an implementation”), and the agent breaks it down into discrete steps.
  • Tool Integration: Agents can interact with compilers, debuggers, terminal environments, version control systems, MCP servers, and much more. This allows them to analyze problems, plan solutions, implement code, and verify their own work.
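
To make that loop concrete, here is a minimal sketch of the plan → execute → verify cycle in Python. This is purely illustrative: plan_step, execute, and run_tests are made-up placeholders, not the API of Gemini CLI or any real agent framework.

```python
# A minimal, hypothetical sketch of an agentic coding loop.
# plan_step, execute, and run_tests are stand-ins for what a real
# agent does with its tools (terminal, compiler, test runner, ...).

def plan_step(goal: str) -> list[str]:
    """Break a high-level goal into discrete steps (stubbed)."""
    return ["parse the input", "implement the algorithm", "verify with tests"]

def execute(step: str, feedback: str) -> str:
    """Carry out one step; a real agent would write code or run commands here."""
    suffix = f" (revised after: {feedback})" if feedback else ""
    return f"attempt at '{step}'" + suffix

def run_tests(result: str) -> bool:
    """Check the work; a real agent would invoke an actual test suite."""
    return True  # stubbed so the sketch terminates

def agent(goal: str, max_attempts: int = 5) -> None:
    for step in plan_step(goal):
        feedback = ""
        for _ in range(max_attempts):
            result = execute(step, feedback)
            if run_tests(result):
                break  # step verified, move on to the next one
            # On failure, a real agent feeds the error details back
            # into its next attempt and refines the solution.
            feedback = "test failure details"

agent("solve today's puzzle")
```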

For readers who want to go deeper into this topic, there are plenty of excellent overviews and perspectives available online.

My Experiment

For Advent of Code 2025, I established a strict operational rule:

I am the Senior Engineer. Gemini is the Junior, a very capable one.

I used the Gemini CLI not merely as a helper, but as my primary interface for software creation. My role shifted from “writing syntax” to “defining specifications.” I treated the AI as a capable but junior engineer under my mentorship.

The workflow was rigorous:

  1. Strategy First: I would not allow a single line of code to be written until we had discussed and agreed upon an algorithmic approach in plain English.
  2. Code Generation: I would instruct the agent to implement the solution based on our agreed strategy.
  3. Review & Refine: I would review the generated code and run tests. If something failed, I wouldn’t fix it manually. Instead, I would explain the error to the agent, which forced me to understand the root cause well enough to articulate it clearly.
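
To make this concrete, here is roughly what a single round of that workflow produces. The puzzle and code below are invented for illustration; the point is that the plain-English strategy is fixed before any implementation exists.

```python
# Agreed strategy (plain English, settled before any code was written):
#   "Each line of the input holds a comma-separated list of integers.
#    For part 1, sum the largest value of each line.
#    A single pass per line is enough; no sorting needed."

def solve_part1(raw_input: str) -> int:
    """Sum the maximum value of each line of the puzzle input."""
    total = 0
    for line in raw_input.strip().splitlines():
        values = [int(x) for x in line.split(",")]
        total += max(values)
    return total

# Quick sanity check from the review step.
assert solve_part1("1,9,3\n4,2,8") == 17
```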

Even with this workflow, I almost always stepped into the code to improve structure, organization, or documentation.

How Did It Go?

As of today, the experiment has been a resounding success. I didn’t just solve the puzzles; I built robust solutions that I’m genuinely proud of. Along the way, I’ve learned Python syntax and some new algorithms.

The initial days required a significant change in habits. I fought the urge to just “grab the keyboard” when the agent misunderstood a nuance of the puzzle. However, as I refined my ability to prompt, treating it less like a search query and more like a technical spec, my efficiency skyrocketed.

One lesson became very clear: context is king. Providing the agent with the full picture (description, input format, constraints, and architectural expectations) leads to near-perfect implementations. When I was lazy with my context, the code was lazy with its logic.

However, it wasn’t all smooth sailing. There were moments of genuine frustration when the agent would get “stuck” on a wrong approach. Even when explicitly instructed to change tactics (e.g., “Stop trying to brute force it, use xyz!”), it would sometimes hallucinate a fix that was just a variation of the same broken logic. Other times, a simple request to change one detail in a small part of the code would make the agent rewrite lines across the whole file. In those moments, I had to step in, wipe the slate clean, and force a hard reset of the strategy. It reinforced that while the agent is powerful, it lacks the high-level intuition to know when it’s digging itself into a hole.

Most importantly, I’ve gained a level of fluency in Python that would have taken months of traditional study. By reading and reviewing the agent’s high-quality output, I’ve internalized the language’s best practices through osmosis.

The Advantages: Velocity and Mentorship

1. Breaking the Syntax Barrier

As an Embedded Software Engineer, my brain is wired for C and C++. I wanted to use this opportunity to master Python. In a traditional setting, I would spend hours looking up “how to map a list in Python” or “idiomatic way to parse strings.” With Agentic Coding, I simply describe the logic. The AI handles the syntax, often teaching me the most “Pythonic” way to do it. It’s learning on steroids.
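
For instance, instead of me hunting through documentation, the agent would hand me idioms like these (illustrative snippets of the kind of “Pythonic” patterns it taught me, not code from any specific puzzle):

```python
# Mapping a list: a comprehension instead of a C-style indexed loop.
numbers = [int(x) for x in "10 20 30".split()]

# Parsing a structured line in one step with tuple unpacking.
name, _, score = "Blitzen -> 42".split()
score = int(score)

# Iterating with indices the Pythonic way, via enumerate.
for row, line in enumerate(["#.#", "..#"]):
    for col, char in enumerate(line):
        if char == "#":
            print(f"obstacle at ({row}, {col})")
```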

2. Focus on Architecture

By offloading the implementation details, my mental energy stays on the algorithm and the system design. I think more about data structures, edge cases, and complexity (Big O notation) because I’m not bogged down by missing semicolons or indentation errors.
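
A typical payoff: freed from syntax, I can ask for the right complexity up front instead of discovering too late that the naive version is too slow. A hypothetical example of the trade-off I now think about first, using the classic “find a pair summing to a target” problem:

```python
# O(n^2): nested loops checking every pair.
def has_pair_quadratic(values: list[int], target: int) -> bool:
    return any(a + b == target
               for i, a in enumerate(values)
               for b in values[i + 1:])

# O(n): a set of seen values, one pass.
def has_pair_linear(values: list[int], target: int) -> bool:
    seen = set()
    for v in values:
        if target - v in seen:
            return True
        seen.add(v)
    return False

assert has_pair_quadratic([1, 5, 9], 14)
assert has_pair_linear([1, 5, 9], 14)
```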

3. Engineering Excellence: Clean Code, Docs, and Tests

One of the most surprising benefits of this workflow is the consistency of the output. When coding manually under pressure (like in AoC), it’s tempting to cut corners: naming variables x, y, or temp, skipping comments, or ignoring unit tests.

The agent, however, is a relentless enforcer of good engineering practices:

  • Coding Principles: I can explicitly instruct the agent to use descriptive naming, follow SOLID principles, and apply appropriate design patterns, which consistently results in cleaner, more maintainable code.
  • Documentation on Demand: Generating docstrings, explanatory READMEs, and high-level architectural summaries is no longer a chore; it’s a zero-friction part of the process.
  • Instant Sanity Checks: Creating simple sanity tests for input parsing or basic logic flows is incredibly easy. I can ask the agent to “write five test cases for this grid navigation,” and it provides them in seconds (see the sketch after this list). It turns the “it works on my machine” gamble into a verifiable engineering process.
  • Automated Verification: I’ve adopted a “Test-First” mentality. For every challenge, the agent generates a suite of unit tests. This ensures that even when we refactor the logic to optimize performance, we have a safety net to prevent regressions.
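
Here is the shape of what such a request might return. parse_grid is a made-up stand-in for a day’s parsing logic, and the tests are deliberately small sanity checks rather than a full suite:

```python
# Illustrative sanity tests for a hypothetical grid parser.

def parse_grid(text: str) -> dict[tuple[int, int], str]:
    """Map (row, col) -> character for every cell of the grid."""
    return {(r, c): ch
            for r, line in enumerate(text.strip().splitlines())
            for c, ch in enumerate(line)}

def test_single_cell():
    assert parse_grid("#") == {(0, 0): "#"}

def test_dimensions():
    grid = parse_grid("..#\n#..")
    assert max(r for r, _ in grid) == 1  # two rows
    assert max(c for _, c in grid) == 2  # three columns

def test_obstacle_positions():
    grid = parse_grid("..#\n#..")
    assert grid[(0, 2)] == "#" and grid[(1, 0)] == "#"

if __name__ == "__main__":
    test_single_cell(); test_dimensions(); test_obstacle_positions()
    print("all sanity checks passed")
```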

The Challenges: The “Human in the Loop” Problem

1. The Illusion of Correctness

The biggest danger is complacency. The AI writes code so confidently that it’s easy to assume it works. But when it fails, it fails subtly. Debugging code you didn’t write is a distinct skill, and one that is becoming both harder and more necessary. You cannot abdicate responsibility; you must review every line with a critical eye.

2. The Ambiguity Tax

AI agents are literal. If your requirements are poorly defined or contradictory, the agent will still produce code, but it will be a perfect implementation of a flawed idea. In traditional coding, you might catch a requirement gap as you type; in Agentic Coding, the speed of implementation can mask these gaps until you’re deep into testing.

“Prompt Engineering” is just a fancy term for clear technical communication. If I can’t articulate the problem precisely, the agent will solve the wrong problem perfectly. I’ve found that my ability to describe a technical challenge has improved significantly, simply because the AI forces me to be unambiguous.
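
A toy illustration of the ambiguity tax, with both functions invented for the example. The vague request is implemented “perfectly,” and it is the spec, not the code, that is wrong:

```python
# Vague request: "split the line on commas and convert to integers".
# A perfect implementation of exactly that request, and it crashes
# the moment the real input has a trailing comma or an empty field.
def parse_naive(line: str) -> list[int]:
    return [int(x) for x in line.split(",")]

# Precise spec: "split on commas, strip whitespace, skip empty fields".
def parse_robust(line: str) -> list[int]:
    return [int(p) for p in (x.strip() for x in line.split(",")) if p]

assert parse_robust("1, 2,  3,") == [1, 2, 3]
```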

3. Dependency Risk

There is a fear that my raw coding skills might atrophy. If I rely on the agent for every for-loop, will I forget how to write one?

Conclusion

Agentic Coding is not about doing less work. It’s about doing higher-value work.

It turns the solitary act of coding into a managerial and architectural discipline. Programming can be lonely, and getting stuck on a bug for hours is draining. AI acts as a tireless pair programmer, offering alternative perspectives, catching logic errors I missed, and suggesting refactors that improve readability.

I see this shift the same way I see the transition from Assembly to C. We are moving up a layer of abstraction, and that raises the bar for what “writing code” means. With AI, it is more important to have “good taste” (experience plus judgement) than to remember all the syntax of a programming language or framework.

Mastering these tools is a career accelerator. It allows me to punch above my weight class, delivering the output and quality of a senior team in a fraction of the time. It shifts the value proposition from “how fast can I type” to “how well can I solve business problems.” In a future where AI handles the implementation, the engineers who thrive will be the ones who can orchestrate these agents to build complex, reliable systems.

This journey through Advent of Code convinced me that this future isn’t coming… It’s already here!

The train is leaving the station, and I intend to be driving it.

Check out my progress on GitHub ☃️🎄🎁