
Is vibecoding part of a solarpunk future?

I started a local vibecoders group because I think it has the potential to help my community.

(What is vibecoding? It's a new word, coined last month. See https://en.wikipedia.org/wiki/Vibe_coding)

Why might it be part of a solarpunk future? I often see and am inspired by solarpunk art that depicts relationships and family happiness set inside a beautiful blend of natural and technological wonder. A mom working on her hydroponic garden as the kids play. Friends chatting as they look at a green cityscape.

All of these visions have what I would call a 3-way harmony--harmony between humankind and itself, between humankind and nature, and between nature and technology.

But how is this harmony achieved? Do the "non-techies" live inside a hellscape of technology that other people have created? No! At least, I sure don't believe in that vision. We need to be in control of our technology, able to craft it, change it, adjust it to our circumstances. Like gardening, but with technology.

I think vibecoding is a whisper of a beginning in this direction.

Right now, the capital requirements to build software are extremely high--imagine what it costs Meta to build and maintain Instagram, for instance. It's probably in the tens or hundreds of millions of dollars. It's likely that only corporations can afford to build this type of software--local communities are priced out.

But imagine if everyone could (vibe)code, at least to some degree. What if you could build just the habit-tracking app you need, in under an hour? What if you didn't need to be an Open Source software wizard to mold an existing app into the app you actually want?

Having AI help us build software drops the capital requirements of software development from millions of dollars to thousands, maybe even hundreds. It's possible (for me, at least) to imagine a future of participative software development--where the digital rules of our lives are our own, fashioned individually and collectively. Not necessarily by tech wizards and esoteric capitalists, but by all of us.

Vibecoding isn't quite there yet--we aren't quite to the Star Trek computer just yet. I don't want to oversell it and promise the moon. But I think we're at the beginning of a shift, and I look forward to exploring it.

P.S. If you want to try vibecoding out, I recommend v0 among all the tools I've played with. It has the most accurate results with the least pain and frustration for now. Hopefully we'll see lots of alternatives and especially open source options crop up soon.

57 comments
  • Yeah, sorry, no. Solarpunk is about community, so if anything pair programming is Solarpunk, but I don't think that talking in isolation to an autocompletion system is Solarpunk.

    Maybe in like 300 years with some kind of robots, but that's not really the scope of solarpunk, tbh.

    Btw, vibecoding is a horrifying name for the crisis you'll get when you try to fix code that your LLM spat out in production and the customer demands that it work.

    (Recent example: https://cloudisland.nz/@daisy/114182826781681792 )

  • The difficult part about making software isn't really the code. It's figuring out what the problem is that needs solving, and then marshaling the resources to solve that problem.

    People don't need a bespoke habit tracker app. General-purpose platforms for that already exist. But then the problem becomes maintaining them.

    And software is generally considered non-capital-intensive. It's relatively cheap: you mostly just need to pay for labor, unlike building hardware, where you have physical logistics and resources to account for.

    • You make a good point about software being potentially low capital. Open source is a great counterexample.

      But I wonder: how do we know what people need? Are the solutions out there actually good for everyone? My daughter is not a coder, but she started vibecoding her own habit tracker app last week. She's very excited about her motivation system of stars and flowers, and about the nuances of how to make it just right for her. She wrote 19 pages in a Google Doc describing her app. It's almost a requirements document, and if she had $30k I bet she could hand it over to a software engineer and they could build a mobile app for her.

      If she hadn't built this app, I wonder how many habit tracker apps would have advertised to her or sold her habit data. If a person is not a software engineer, they kind of have to live with other people's decisions in the digital sphere (and some folks, I've found, aren't even able to evaluate software for safety, privacy, alignment with their values, etc., let alone build it).

      I guess I just wonder what the world would be like if the bar for personalized software were lowered so that everyone could create just what they need, wherever they are and in whatever community they find themselves.

  • The concept is new to me, so I'm a bit challenged to give an opinion. I will try, however.

    In some systems, software can be isolated from the real world in a nice sandbox with no unexpected inputs. If a clear way of expressing what one really wants is available, and more convenient than a programming language, I believe a well-trained and self-critical AI (capable of estimating its probability of success at a task) will be highly qualified to write that kind of software, and tell when things are doubtful.

    The coder may not understand the code, though, which is something I find politically unacceptable. I don't want a society where people don't understand how their systems work.

    It could even contain a logic bomb and nobody would know. Even the AI which wrote it may tomorrow fail to understand it, after the software has become sufficiently unique through customization. So, there's a risk that the software lacks even a single qualified maintainer.

    Meanwhile, some software is mission-critical: if it fails, something irreversible happens in the real world. This kind of software usually must be understood by several people. New people must be able to come to understand it through review. They must be able to predict its limitations, give specifications for each subsystem, and build testing routines to detect the introduction of errors.

    Mission-critical software typically has a close relationship with hardware. It typically has sensors reading from the real world and effectors changing the real world. Testing it resembles doing electronic and physical experiments. The system may have undescribed properties that an AI cannot be informed about. It may be impossible to code successfully without actually doing those experiments and finding out the limitations and quirks of the hardware, and thus it may be impossible for an AI to build from a prompt.

    I'm currently building a drone system and I'm up to my neck in undocumented hardware interactions, but even a heating controller will encounter some. I don't think people will experience success in the near future with letting an AI build such systems for them. In principle it can. In principle, you can let an AI teach a robot dog to walk, and it will take only a few hours. But this will likely require giving it control of said robot dog, letting it run experiments and learn from outcomes. Which may take a week, while writing the code might have also taken a week. In the end, one code base will be maintainable, the other likely not.

    • Thanks for your thoughtful reply; I think you have some great points. It's important that we either understand our systems, at some level, or trust them. Lacking trust, we need to understand.

  • I think the pretty universal answer in all these comments is "no". I think that's fair, but I'd add some caveats.

    There's a lot of negative sentiment here around LLMs, which I agree with, but I think it's easy to imagine some hypothetical future where LLMs exist without the current water/energy overuse, hallucinations, or big companies stealing individuals' work. Whether that future is likely or not, I think it's possible.

    The main reason vibe coding isn't solarpunk is that, taken by itself, it's not in any way related to ecological stewardship, anti-capitalist community building, or anything else that's core to solarpunk. Vibe coding might or might not be part of some "cool techy future" in the same way as flying cars, robots, and floating cities, but that's not a reason to consider it solarpunk.

    If you're into LLMs and solarpunk, instead of arguing that LLMs are solarpunk, you can make efforts to push them to be more solarpunk. How can LLMs support communities instead of corporations? How can we, through weight sharing and various optimisations, make LLMs less damaging to the environment? Etc. That'd at least be a solarpunk way to go about LLMs, even if LLMs aren't inherently solarpunk.

    • I agree with your assessment, but I'm more pessimistic about LLMs as a technology. The Luddites tell us that machines are not value-neutral - we should ask who the LLMs serve.

      The core function of an LLM is to enclose public commons (aggregate, open-access human knowledge) in a centrally-controlled black box. It's not a coincidence that corporations are trying to replace search with LLM summaries - the point is for the model to be an intermediary between the user and the information they need.

      Vibecoding embraces this intermediation - to the vibecoder, an understanding of the technology they're building is simply a cost that must be surmounted, and if they can avoid paying it, so much the better. This is misguided. Knowledge is power, and we cede that power at our peril. Solarpunk is punk, and punk is DIY, and DIY means taking back ownership of spaces and technologies.

      I won't say that it's inherently wrong to cede that ownership - tactically. Perhaps the OP is building essential tools that their communities can't access otherwise. But short-term fixes a solarpunk future do not make.

    • This is exactly what I'm trying to do, but I was taken aback by how negatively the solarpunk community took it. I thought of myself as solarpunk, but I've had to reconsider since posting this.

      • That's sad to hear. People on the internet can seem harsh; I think it's probably too easy to forget there's a real person behind most questions.

        It's been about a month now, and I still don't really think LLMs are solarpunk. Trying to make them more open and community-based sounds worthwhile, though, so good luck with it!

        Massive side point, but if you're interested in "empowering people who don't want to deal with the technical details of coding", check out the ideas around "end-user programming". It's a pretty broad church, but there's some cool stuff happening under that term that it sounds like you'd like.

  • @canadaduane So let me get this straight: instead of carefully building tools with humans in mind, gathering the whole context of the community, we should create dozens of half-baked solutions, potentially hurting others, while burning the planet?

    Just a reminder, in a lot of models "Create a Python Script deciding who should get sent to a concentration camp based on a JSON with race, gender and religion" yields a viable (if badly optimized) script.

    With some implicit assumptions.

    • I think you could be reading a bit into what I'm saying, but I do appreciate your example as a gedankenexperiment. I think what you're getting at is that not everyone should be empowered to code, because coding is powerful, and power can do harmful things, like genocide. Is that right?

      If I read one layer further, I think what you might be most concerned with (correct me if I'm wrong) is the conveyance of statistical power in corporate hands, where decisions are often amorally arrived at, and LLMs and their training sets could represent a bad form of this--if they are allowed to be used for ill. Is that right?

      I guess I just find it empowering to work on good objectives. I'm the moral agent, and I treat the computer and all of its capabilities as a tool. The AI system I have running on an old(ish) GPU in my closet is powered by solar panels, transcribing my audio notes, and giving me peace of mind that my data is within my digital domain. Adding an LLM to that GPU is part of the ongoing experiment. And if it helps my daughter (who is not a coder) build apps that are just for her and that she loves, well, I'm cool with that (see other posts for details, I have to get back to work now).

  • My knowledge of coding is very limited, and since this is the first time I've heard the term vibecoding, I don't think I can answer your question just by reading the wiki you linked. Don't get me wrong, I think it's great that you linked it!

    So I thought of sharing a link myself. Perhaps it could help you make up your mind on how to answer your question? I dunno; I suppose at least it could be a good starting point, and I hope you enjoy reading it!

    A Solarpunk Manifesto
