16 comments
  • The problem with this article is that he stresses you need to check the code and step in when needed - yet relying heavily on LLMs will invariably erode your ability to tell what's wrong, and eventually even to read the code, since it will produce code using libraries you've never experimented with (why experiment, when the LLM can just write the code?).

    Also "vibe-coding" is stupid af. You take out the human element altogether because you just accept all changes without reading them and then copy/paste errors back in without any context.

  • If I'm doing something in a language I only halfway know and rarely use in depth, I'll use them more. For Bash scripting, for example, I use them all the time. For Java I basically never touch them, because I don't need them.

  • I'm on the fence.

    I've used Perplexity to take a JavaScript fragment, identify the language it was written in, and describe what it was doing. I then asked it to refactor the code into something a human could understand. It nailed both tasks; even the variable names were meaningful (the originals were just single letters). I then asked it to port the result to C with SDL, which it did a pretty good job of.

    I also used it to "untangle" some really gnarly, mathy JavaScript and port it to C so I could understand it better. That's still a work in progress, and I don't know enough math to tell whether it's doing a good job, but it gives me some ability to work with the codebase.

    I've also used it to create some nice Python helper scripts, like pulling all the repositories from a GitHub user account, or using YouTube's API to pull a video's title and author given a URL. It also wrote the skeletons of some Python scripts that interact with a RESTful API. It excelled at these kinds of things.
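
    For a sense of what that kind of helper looks like, the GitHub one boils down to paging through the public API. A rough sketch of that sort of script (not my exact one; it's unauthenticated, so rate limits apply, and "octocat" is just a placeholder account):

    ```python
    import requests

    def list_repos(user):
        """Yield (full_name, clone_url) for a user's public repos, one API page at a time."""
        page = 1
        while True:
            resp = requests.get(
                f"https://api.github.com/users/{user}/repos",
                params={"page": page, "per_page": 100},
                timeout=10,
            )
            resp.raise_for_status()
            repos = resp.json()
            if not repos:
                break  # an empty page means we've seen everything
            for repo in repos:
                yield repo["full_name"], repo["clone_url"]
            page += 1

    # "octocat" is a placeholder account name.
    for name, url in list_repos("octocat"):
        print(name, url)
    ```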

    My most recent success was using it to decode DTMF in a .WAV file, then create a new .WAV file with cue points at the DTMF start/end times, to visually show me what it saw and where. This was a mixed bag. I started out in Python, and it reached for an FFT (the obvious but wrong choice), so I had it implement a Goertzel filter instead, which it did flawlessly. It even ported that to C without any real trouble. Where it utterly failed was the WAV file creation and cue points. Part of that is because cue points are rather poorly described in any RIFF documentation, the Python wrapper for the C wave-processing library was incomplete, and various audio editors want the cue data in different ways - but none of that stopped the LLM from lying through its damn teeth, not only claiming to know how to implement it but assuring me that the slop it created functioned as expected.
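
    For the curious, the Goertzel core really is tiny. Here's a rough Python sketch of the idea (not the actual generated code; it assumes mono 16-bit PCM, the filename and block size are made up, and there's no power threshold, so a real decoder would also gate on tone energy so silence doesn't register as a digit):

    ```python
    import math
    import struct
    import wave

    # Standard DTMF row/column frequencies (Hz) and keypad layout.
    ROWS = [697, 770, 852, 941]
    COLS = [1209, 1336, 1477, 1633]
    KEYS = ["123A", "456B", "789C", "*0#D"]

    def goertzel_power(samples, rate, freq):
        """Signal power at one frequency, via the Goertzel recurrence."""
        coeff = 2.0 * math.cos(2.0 * math.pi * freq / rate)
        s_prev = s_prev2 = 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev2 * s_prev2 + s_prev * s_prev - coeff * s_prev * s_prev2

    def detect_digit(samples, rate):
        """Pick the strongest row and column tone and map them to a key."""
        row = max(ROWS, key=lambda f: goertzel_power(samples, rate, f))
        col = max(COLS, key=lambda f: goertzel_power(samples, rate, f))
        return KEYS[ROWS.index(row)][COLS.index(col)]

    # Read a mono 16-bit PCM file ("tones.wav" is hypothetical) and
    # scan it in ~25 ms blocks; every block reports a digit since
    # there's no silence gating here.
    with wave.open("tones.wav", "rb") as wf:
        rate = wf.getframerate()
        raw = wf.readframes(wf.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    block = int(rate * 0.025)
    for i in range(0, len(samples) - block, block):
        print(detect_digit(samples[i:i + block], rate), end="")
    print()
    ```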

    I've found that it tends to come apart at the seams in longer sessions. When its answers start turning nonsensical, I sometimes get a bit of benefit from starting over without all the work leading up to that point. LLMs are really good at churning out basic frameworks, which aren't exactly difficult but can be tedious. I take that skeleton and start hanging the meat on it, occasionally getting help from the LLM, but that's usually the stuff I need to think about and implement myself. This is where LLMs really struggle, and I waste more time trying to explain what I want to the LLM than if I just wrote it myself.

  • I was going to say "Who?" until I looked at his bio, he helped start Django which I use. I need to go lay down.

  • I mainly use it to create boilerplate (like adding a new REST API endpoint), or where I'm experimenting in a standalone project and am not sure how to do something (odd WebGL shaders), or when creating basic unit tests.
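
    By boilerplate I mean the kind of thing below - a minimal sketch using Flask (an arbitrary choice of framework on my part; the endpoint names and the in-memory dict standing in for a database are made up):

    ```python
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    ITEMS = {}  # in-memory stand-in for a real database

    @app.post("/api/items")
    def create_item():
        body = request.get_json(force=True)
        item_id = len(ITEMS) + 1
        ITEMS[item_id] = {"id": item_id, "name": body.get("name", "")}
        return jsonify(ITEMS[item_id]), 201

    @app.get("/api/items/<int:item_id>")
    def get_item(item_id):
        item = ITEMS.get(item_id)
        if item is None:
            return jsonify(error="not found"), 404
        return jsonify(item)

    if __name__ == "__main__":
        app.run(debug=True)
    ```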

    But letting it write, or rewrite existing code is very risky. It confidently makes mistakes, and rewrites entire sections of working code, which then breaks. It often goes into a "doom loop" making the same mistakes over and over. And if you tell it something it did was wrong and it should revert, it may not go back to exactly where you were. That's where frequently snapshotting your working code into git is essential, and being able to reset multiple files back to a known state will save your butt.

    Just yesterday I had an idea for a WebGL experiment and told it to add a panel to an existing testing app I run locally. It did, and after a few iterations got it working - but three other panels stopped working, because it had decided to completely change some unrelated upstream declarations. It took twice as long to put everything back the way it was.

    Another thing to consider: every X units of time, you'll want to go back and hand-edit the generated material to clean up sloppy code - inefficient data structures, duplicate functions in separate sections, unnecessarily verbose and obvious comments, etc. You'll also do better with mature tech (lots of training examples) than with a new library or language.

    If just starting out, I would not trust AI or vibe coding. Build things by hand and learn the fundamentals. There are no shortcuts. These things may look like super tools, but they give you a false sense of confidence. Get the slightest bit complex, and they fall apart and you will not know why.

    Mainly using Cursor. Better results with Claude vs other LLMs, but still not perfect. Paid versions of both. Have also tried Cline with local codegen through Llama and Qwen. Not as good. Claude Code looks decent, but the open-ended cost is too scary for indie devs, unless you work for a company with deep pockets.
