ChatGPT is Killing your Developer Potential


The rise of AI and LLMs

I was thoroughly impressed when AI (Artificial Intelligence) and LLMs (Large Language Models) first became accessible to the public. It was the first time I had ever had a discussion with a machine that truly resembled one with a real human. Long gone are the days of AI bots like Cleverbot which, contrary to its name, isn't quite so clever:

[Image: a Cleverbot conversation in which it forgets the question it was just asked]

The AI boom, particularly around large-scale models like ChatGPT, started gaining significant momentum when OpenAI released ChatGPT (based on GPT-3.5) to the public in November 2022. The AI market has continued to grow rapidly since, with the most popular tool, ChatGPT, leading the charge in monthly visits.

It was not long before developers were using these tools daily when programming, both for personal projects and professionally at work. Admittedly, I was a very slow adopter of AI. I didn't envision it particularly improving my output or productivity, so I mostly ignored it. However, this all changed when I decided to begin slowly incorporating AI (namely ChatGPT) into my day-to-day life over the past year or so. I have gone from not using AI tools at all to using them regularly for a myriad of tasks.

In this post I will share with you the good, the bad, and the downright ugly side of using AI from a developer's perspective.

Benefits of AI for developers

So why would a developer want to use AI?

To answer this question, we should first explore how our ancient ancestors from the Stone Age compiled their code in a pre-AI world. They used the Software Development Life Cycle (SDLC) to get things done.

The Software Development Life Cycle (SDLC)

⚡Lightning quick breakdown:

  1. Planning: Determine the overall goals, scope, and feasibility of the project.
  2. Analysis: Finalise requirements that determine what the software must achieve.
  3. Design: Creating a design document for what you are building (UI mockups, examples, architecture diagrams).
  4. Implementation: Writing the actual code.
  5. Testing & Integration: Checking that your implementation actually works and integrates with other systems before you ship it to prod.
  6. Maintenance: Keeping the lights on. Making sure the thing you built continues to work as the ecosystem around it evolves.

Now it becomes clearer where AI fits in the modern age. AI will now completely dominate the Implementation phase. Developers feed the AI a prompt that includes their requirements from the Analysis phase and any relevant design choices from the Design phase. After some back-and-forth iteration, the output is hastily copy-pasted into an IDE before a commit is signed and sent upstream. Oftentimes the Testing & Integration phase is skipped or poorly executed.

The main benefits of using AI are convenience and development speed. It's much easier to ask someone else to do something than to do it yourself, especially if that someone is extremely competent at the task. It is also a major plus that these AI models are very quick at generating output (and thus code). They can generate hundreds of lines of code in mere seconds. The best part is that this is code that actually "works" (note the quotation marks; we'll touch on this later).

Key Factors in Effective Software Development

When writing a function on your own, you typically have to consider:

  • requirements
  • your inputs & outputs
  • edge cases
  • error cases
  • documentation
  • logging
  • security concerns
  • performance
  • accessibility (a11y) and internationalization (i18n)
  • licensing of the code
  • extensibility in the future

When writing a prompt for AI, you typically consider:

  • requirements
  • your inputs & outputs

Yeah, that's it. Think about the last time you asked AI to generate a function for you.
Beyond stating the requirements for the function and the inputs & outputs, did you mention anything else in your prompt?

Did you consider any edge cases? Error cases?
Did you add any documentation that is actually useful and worthy of the bytes it takes up, rather than boilerplate?
Did you ask it to add useful logs to help future you (or a co-worker) debug that NullPointerException?
Did you consider security? (Or are you really going to execute that SQL query built from user input directly as a string? A safer sketch follows these questions.)
Did you do a Big O analysis for time and space complexity?
Did you consider accessibility and internationalization?
Did you consider the licensing of the code? Is it open source and free to use, or was your left-pad pulled straight from Oracle's proprietary software repository, meaning you are now in violation of 34,921 patents, including 7 that were registered as I was writing this blog post?
Did you consider extensibility, and how the code might mutate and evolve in the future?
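
To make the contrast concrete, here is a rough sketch of a function where a few of those boxes are actually ticked. It's illustrative only: I'm assuming the pg client here, and the table, column, and function names are made up.

import { Pool } from 'pg';

// Connection settings come from the standard PG* environment variables.
const pool = new Pool();

/**
 * Looks up a user by their email address.
 *
 * @param email - The address to search for. Must be non-blank.
 * @returns The matching user row, or null if no user exists.
 * @throws {Error} If `email` is blank.
 */
export const findUserByEmail = async (email: string) => {
  // Error case: reject obviously invalid input early.
  if (email.trim() === '') {
    throw new Error('findUserByEmail: email must not be blank');
  }

  // Security: a parameterised query; user input is never concatenated
  // into the SQL string.
  const result = await pool.query(
    'SELECT id, email, name FROM users WHERE email = $1',
    [email],
  );

  // Edge case: no match is a legitimate outcome, not an error.
  if (result.rows.length === 0) {
    // Logging: give future-you something to grep for.
    console.info(`findUserByEmail: no user found for ${email}`);
    return null;
  }

  return result.rows[0];
};

Requirements and inputs & outputs are the easy part; everything else in that sketch is exactly what a one-line prompt never asks for.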

If you answered No to any of the above questions, you might be at risk of, *drum roll please*, killing your developer potential.

Developer Potential?

What is developer potential?
Don't ask me, I just coined the term 30 seconds ago, but let's ask our good friend ChatGPT.

ChatGPT Prompt

What is developer potential?

ChatGPT Response

Developer potential refers to the untapped skills, capabilities, and growth opportunities a software developer possesses, which can lead to greater productivity, innovation, and career advancement over time.

Basically, developer potential is how good you can become at developing over time. To become a better developer, you have to practice writing code. You have to be challenged and figure out how to overcome those challenges. This comes naturally when you solve problems and write actual code on a day-to-day basis. However, it does not happen when you write prompts and copy-paste code that you don't understand.

I have noticed that the use of ChatGPT has had a negative impact on my personal developer potential.

Case Studies in AI Fumbles

Let's go through some of my recent projects and explore how I used ChatGPT to solve problems. I want to demonstrate the scenarios in which the AI negatively impacted my developer potential.

Forgotten chess moves

I asked ChatGPT to generate a function for my chess game Chessort. I wanted a function that could look up a chess move in SAN notation and return a nice SVG to display for the piece being moved. For example, if provided with the move Ke2 and the color Color.Black, it should produce an SVG URL for a black king piece.

ChatGPT Prompt

Generate a function that takes in the SAN notation for a chess move and a Color (either Color.White, Color.Black or Color.Neutral) and returns a SVG string for the piece being moved. Providing a color of neutral should result in an error. Use the getPieceSvg() function from the current code.

ChatGPT Response

export type PieceChar = 'K' | 'Q' | 'R' | 'B' | 'N' | 'P';

/**
 * Returns the SVG for the piece being moved in SAN notation.
 *
 * @param san - The SAN notation of the move.
 *              This can be in the form of a piece move 
 *              (e.g., "Nf3" for knight move), or
 *              a pawn move (e.g., "e5").
 * @param color - The color of the piece.
 *                Must be either Color.White or Color.Black.
 *                Color.Neutral is not supported and will throw an error.
 * @returns The SVG for the piece being moved.
 *          It returns the SVG for the piece specified in the SAN notation.
 */
export const getPieceSvgBySan = (san: string, color: Color) => {
  if (color === Color.Neutral) {
    throw new Error('Neutral color does not have associated piece SVGs.');
  }

  // Determine the piece from the SAN notation
  const pieceChar: PieceChar = san[0] >= 'A' && san[0] <= 'Z' 
    ? (san[0] as PieceChar) : 'P';

  return getPieceSvg(pieceChar, color);
};

With little iteration, the produced code works well! It is well documented and even handles the early exit for the Color.Neutral case I mentioned. I do some manual testing and the SVGs appear correctly in the UI during games. Brilliant! I move on and forget about this code.

Some time later, I discovered a bug in this code. ChatGPT (and myself by extension) had failed to handle a very important edge case. In chess we have the short castling O-O and long castling O-O-O moves. These moves involve the king and one of the two rooks. Therefore, the following additional code is needed (before the piece letter is inspected) to handle this case:

// Handle castling
if (san === 'O-O' || san === 'O-O-O') {
  const kingSvg = getPieceSvg('K', color);
  return kingSvg;
}

This special case is blatantly obvious if you spend even 10 seconds thinking about legal chess moves, but because I relied on my prompt to generate the working code, I completely missed it. A rookie mistake I would never have made had I used my own brain instead of the AI's. One could argue I should have caught this in unit tests, but don't forget that many end up using AI to generate their unit tests too. If the AI's implementation didn't handle this case, you can be sure its unit tests will not catch it either.
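
For what it's worth, even a couple of hand-written test cases would have caught this immediately. A minimal sketch, assuming a Jest/Vitest-style runner and hypothetical module paths:

import { getPieceSvgBySan, getPieceSvg } from './pieces'; // hypothetical path
import { Color } from './color'; // hypothetical path

describe('getPieceSvgBySan', () => {
  it('returns the king SVG for castling moves', () => {
    // The missed edge case: castling is written O-O / O-O-O, and the
    // naive san[0] check treats the capital 'O' as a piece letter.
    expect(getPieceSvgBySan('O-O', Color.White)).toBe(getPieceSvg('K', Color.White));
    expect(getPieceSvgBySan('O-O-O', Color.Black)).toBe(getPieceSvg('K', Color.Black));
  });

  it('throws for the neutral color', () => {
    expect(() => getPieceSvgBySan('e4', Color.Neutral)).toThrow();
  });
});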

In this scenario, we've limited our developer potential by not identifying special cases and not thoroughly testing the code generated by the AI.

Cycle of wrong solutions

Sometimes you run into a rare problem that hasn't been solved before. This is precisely what happened to me as I was developing my new website using canary builds of Next.js 15 and React 19. I ran into an issue where using two or more distinct SVGs in the same server component (via the svgr webpack integration) would lead to an uncaught runtime error. At first, I assumed this was a misconfiguration on my part, so I asked my AI helper to fix the issue for me:

ChatGPT Prompt

In my Next.js 15, React 19 canary project, if I use 2 or more distinct SVG's in the same server component I get a runtime error.

Here is the exception:
TypeError: Cannot read properties of undefined (reading '$$typeof')
    at getOutlinedModel (react-server-dom-webpack-client.browser.development.js:1056:1)
    at parseModelString (react-server-dom-webpack-client.browser.development.js:1249:1)
    at Object.eval (react-server-dom-webpack-client.browser.development.js:1833:1)
    at JSON.parse (<anonymous>)
    at initializeModelChunk (react-server-dom-webpack-client.browser.development.js:881:1)
    at getOutlinedModel (react-server-dom-webpack-client.browser.development.js:1046:1)
    at parseModelString (react-server-dom-webpack-client.browser.development.js:1249:1)
    at Array.eval (react-server-dom-webpack-client.browser.development.js:1833:1)
    at JSON.parse (<anonymous>)
    at initializeModelChunk (react-server-dom-webpack-client.browser.development.js:881:1)
    at getOutlinedModel (react-server-dom-webpack-client.browser.development.js:1046:1)
    at parseModelString (react-server-dom-webpack-client.browser.development.js:1249:1)
    at Array.eval (react-server-dom-webpack-client.browser.development.js:1833:1)
    at JSON.parse (<anonymous>)
    at initializeModelChunk (react-server-dom-webpack-client.browser.development.js:881:1)
    at resolveModelChunk (react-server-dom-webpack-client.browser.development.js:858:1)
    at processFullBinaryRow (react-server-dom-webpack-client.browser.development.js:1823:1)
    at progress (react-server-dom-webpack-client.browser.development.js:1974:1)

The above error occurred in the <NotFoundErrorBoundary> component.

React will try to recreate this component tree from scratch using the error boundary you provided, ReactDevOverlay.

How can I fix this?

ChatGPT Response

The runtime error you're seeing is likely due to React Server Components' limitations with handling static assets like SVGs. This happens when multiple SVGs are used directly in server components.

To resolve this:

- Move SVG usage to Client Components, as server components are not ideal for rendering static assets like SVGs.
- Use static URLs for SVGs instead of importing them directly in server components.
- Ensure proper SVG handling in your build configuration (like using @svgr/webpack).

This will prevent server-side processing issues with SVGs.

This answer is not useful; it merely suggests some basic diagnostic steps. After telling ChatGPT the solution did not work, we went into a long spiral of back-and-forth messages in which ChatGPT continued to offer one wrong solution after another. It then began to repeat itself as it desperately tried to find a solution. At no point did I personally stop to think deeply about the issue and what might be causing it. My sole focus was to keep prompting the AI until it magically spat out the right answer.

Finally, I closed the AI tool (shamefully, several hours later) and tackled the issue on my own. In little time I was able to set up a test project and reproduce the issue with a minimal example. I then tweaked the various canary build versions and discovered the bug only occurred on certain combinations of Next.js and React versions. Shortly after, I raised a legitimate bug issue on the Next.js GitHub project, which was verified and fixed a few weeks later by a Vercel engineer.
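
For the curious, the minimal reproduction was roughly this shape (the file and icon names here are illustrative, not the actual repro):

// app/page.tsx: a React Server Component in the Next.js App Router.
// SVGs are imported as components via @svgr/webpack.
import AlphaIcon from '@/icons/alpha.svg';
import BetaIcon from '@/icons/beta.svg';

export default function Page() {
  // A single SVGR import rendered fine; rendering two distinct ones from
  // the same server component triggered the "$$typeof" runtime error on
  // the affected canary builds.
  return (
    <main>
      <AlphaIcon />
      <BetaIcon />
    </main>
  );
}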

ChatGPT never suspected that the use of canary builds could be the problem. I did not consider it either, even though it's blindingly obvious as a possible culprit if you take a step back and look at the project's dependencies. Instead I wasted several hours mindlessly tweaking prompts in the hope that the AI would help me. It was frustrating. It wasn't until I tackled the problem solo that I was able to put my engineering skills to use, track down the root cause, and action it.

I was only able to track down the root cause because I had analysed many different simpler bugs over the years and developed the skills needed to tackle bigger ones like this. Otherwise, I'd have been helpless, not knowing what to do next after the AI failed me.

In this scenario, we've limited our developer potential by not critically thinking and analysing a bug before sending countless prompts to the AI.

The sort function from hell

I needed a reusable function I could use to sort BlogPostMeta objects, which contain metadata about the posts on my blog. I asked ChatGPT if it could generate this for me. Now, it's important to note that this is extremely trivial code. Why did I even bother asking ChatGPT to generate something so simple? It was due to a developing bad habit of relying on AI for everything.

ChatGPT Prompt

Generate a comparison sort function called sortBlogPostMetaByPublishedAtDate that takes two BlogPostMeta objects and sorts them by the publishedAt date.

ChatGPT Response

import { BlogPostMeta } from '@/types/blog';

export const sortBlogPostMetaByPublishedAtDate = (a: BlogPostMeta, b: BlogPostMeta) => {
  // Convert to date objects
  const dateA = new Date(a.post.publishedAt);
  const dateB = new Date(b.post.publishedAt);
  
  // Extract year, month, and date separately
  const yearDiff = dateB.getFullYear() - dateA.getFullYear();
  if (yearDiff !== 0) return yearDiff;

  const monthDiff = dateB.getMonth() - dateA.getMonth();
  if (monthDiff !== 0) return monthDiff;

  const dayDiff = dateB.getDate() - dateA.getDate();
  if (dayDiff !== 0) return dayDiff;

  // Fallback to comparing the full time (including hours, minutes, and seconds)
  return dateB.getTime() - dateA.getTime();
};

What complete garbage is this? Who contributed this to the AI model? (and where do they live, asking for a friend). This code "works" from a functionality perspective but it's poorly written to say the least.

Issues:

  • The comments aren't particularly insightful.
  • It needlessly converts a Date to a Date. This code is already type safe since we are using TypeScript; publishedAt is guaranteed to be a Date.
  • It compares years, months, and days as special cases before falling back to a getTime() comparison anyway, which alone would have sufficed.

After considering all life choices made since birth to date, I swiftly deleted this monstrosity and rewrote it myself:

import { BlogPostMeta } from '@/types/blog';

export const sortBlogPostMetaByPublishedAtDate = 
  (a: BlogPostMeta, b: BlogPostMeta) =>
    b.post.publishedAt.getTime() - a.post.publishedAt.getTime();
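
A quick usage note: the comparator plugs straight into Array.prototype.sort and, because it subtracts a's timestamp from b's, yields a newest-first ordering (posts here is a hypothetical array of BlogPostMeta):

// Newest posts first; spread first to avoid mutating the original array.
const newestFirst = [...posts].sort(sortBlogPostMetaByPublishedAtDate);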

The scariest part is that I almost intuitively copied over this code and moved on without considering what I was doing. It was the length of the code that raised alarm bells and prompted me to challenge the quality of what I was putting into my codebase.

In this scenario, we've limited our developer potential simply by not creating quality code ourselves and blindly accepting whatever the AI vomits in our direction.

Killing your Developer Potential

[Image: a sloth copy-pasting from ChatGPT]

By using ChatGPT regularly for various software projects over the past year, I have noticed that I am coding and critically thinking far less and prompt engineering far more. I have become more stupid, lazy, and slothlike. I have developed a reliance on ChatGPT. My first instinct when running into a problem is not to explore official documentation, think it out on pen & paper, or chat with a colleague, but to ask an AI robot to do all the hard work for me and fix it. I want instant gratification with zero effort on my part.

AI is not good at solving very hard or specific problems that are only applicable to your project. This is to be expected. When you are unable to solve a hard problem with AI, you will have to solve it manually using your brain. That will be challenging if you have not been developing those skills on easier problems because of your reliance on AI.

Using AI has led to an increase in development speed but at the expense of crucial learning and self-improvement. Is that a trade-off you or I want to make long term? I don't think it is.

I believe the problem is notably worse for junior developers in the industry. I have years of experience in a pre-AI world and consider myself to be a fundamentally well-rounded and strong developer. I attribute all of this to the hard work involved in solving hard problems. If I had relied heavily on AI when I first started programming, I would be 1/10th of the developer I am today. If you are a junior developer, I would highly recommend limiting your AI usage to basic prompts like "what is a good tool to solve X problem?" or "this is some code I wrote to do X, does it have any issues?". Perhaps consider using AI for one day and not using it the next to see which day is more productive and which is more valuable from a learning perspective. Ignoring this advice might lead to little career progression long term. Don't forget that you can't use AI to solve your interview questions (though some have tried).

I've personally cut down my reliance on AI quite a bit, now using it more as a reference or guide while I handle all the coding, validation, and testing myself. While AI can still provide helpful suggestions or quick solutions for smaller tasks, I've found that taking the time to think through problems on my own has not only improved the quality of my work but also deepened my understanding of the code I write.

By no means am I saying AI is bad and should be avoided. I believe it is a useful tool that can enhance productivity. AI will often find a great solution to the problem it is presented with. However, it is clear to me that depending too heavily on AI is a detriment to your own professional development. One must audit and manage the way they use these tools regularly to avoid being negatively affected. Striking the right balance between using AI as an aid and continuing to develop critical thinking skills is crucial for long-term success.

