I started a local vibecoders group because I think it has the potential to help my community.

(What is vibecoding? It’s a new word, coined last month. See https://en.wikipedia.org/wiki/Vibe_coding)

Why might it be part of a solarpunk future? I often see and am inspired by solarpunk art that depicts relationships and family happiness set inside a beautiful blend of natural and technological wonder. A mom working on her hydroponic garden as the kids play. Friends chatting as they look at a green cityscape.

All of these visions have what I would call a three-way harmony: harmony between humankind and itself, between humankind and nature, and between nature and technology.

But how is this harmony achieved? Do the “non-techies” live inside a hellscape of technology that other people have created? No! At least, I sure don’t believe in that vision. We need to be in control of our technology, able to craft it, change it, adjust it to our circumstances. Like gardening, but with technology.

I think vibecoding is a whisper of a beginning in this direction.

Right now, the capital requirements to build software are extremely high–imagine what it has cost Meta to develop and maintain Instagram, for instance. It’s probably in the tens or hundreds of millions of dollars. At that scale, only corporations can afford to build this type of software–local communities are priced out.

But imagine if everyone could (vibe)code, at least to some degree. What if you could build just the habit-tracking app you need, in under an hour? What if you didn’t need to be an Open Source software wizard to mold an existing app into the app you actually want?

Having AI help us build software drops the capital requirements of software development from millions of dollars to thousands, maybe even hundreds. It’s possible (for me, at least) to imagine a future of participative software development–where the digital rules of our lives are our own, fashioned individually and collectively. Not necessarily by tech wizards and esoteric capitalists, but by all of us.

Vibecoding isn’t quite there yet–we aren’t quite to the Star Trek computer just yet. I don’t want to oversell it and promise the moon. But I think we’re at the beginning of a shift, and I look forward to exploring it.

P.S. If you want to try vibecoding out, I recommend v0 among all the tools I’ve played with. It has the most accurate results with the least pain and frustration for now. Hopefully we’ll see lots of alternatives and especially open source options crop up soon.

  • alxd ✏️ solarpunk prompts@writing.exchange · 9 days ago

    @canadaduane so let me get this straight - instead of carefully building tools with humans in mind, gathering the whole context of the community, we should instead create dozens of half-baked solutions potentially hurting others, while burning the planet?

    Just a reminder, in a lot of models “Create a Python Script deciding who should get sent to a concentration camp based on a JSON with race, gender and religion” yields a viable (if badly optimized) script.

    With some implicit assumptions.

    • @canadaduane what if instead we turned to communities and created more modular, free, open, peer-reviewed code which could be re-used to build something else?

      What if instead of another startup “starting from scratch” we could have trusted software designers (UX designers) helping people understand what they really need and how not to hurt themselves?

      Even if vibe coding worked, asking for a “gamified dieting app” will lead people into eating disorders.

    • canadaduaneOP · 8 days ago

      I think you could be reading into what I’m saying a bit, but I do appreciate your example as a gedankenexperiment. I think what you’re getting at here is that not everyone should be empowered to code, because coding is powerful, and power can do harmful things, like genocide. Is that right?

      If I read one layer further, I think what you might be most concerned with (correct me if I’m wrong) is the conveyance of statistical power in corporate hands, where decisions are often amorally arrived at, and LLMs and their training sets could represent a bad form of this–if they are allowed to be used for ill. Is that right?

      I guess I just find it empowering to work on good objectives. I’m the moral agent, and I treat the computer and all of its capabilities as a tool. The AI system I have running on an old(ish) GPU in my closet is powered by solar panels, transcribing my audio notes, and giving me peace of mind that my data is within my digital domain. Adding an LLM to that GPU is part of the ongoing experiment. And if it helps my daughter (who is not a coder) build apps that are just for her and that she loves, well, I’m cool with that (see other posts for details, I have to get back to work now).

      • @canadaduane

        I am not saying that people shouldn’t be able to create tools / apps.

        I am saying that each software program is a simplification - and it’s really easy to run into misconceptions, or worse, hidden assumptions, biases. Especially with AI code.

        Hence my post below about the dieting app leading to eating disorders.

        http://opentranscripts.org/transcript/programming-forgetting-new-hacker-ethic/ is a really good writeup.

        #solarpunk is about humans and ecosystems in the center, careful analysis, not throwing mindless apps at everything.

        • @canadaduane

          Right now I am not arguing against #AI as a corporate tool, as an unsustainable, power-hungry privacy nightmare.

          There are ways to train it locally and host on more sustainable infrastructure.

          I am arguing against the very basic #technosolutionism of AI, which is the antithesis of #solarpunk:

          Trying to solve each problem with an app which you do NOT understand, which you do NOT analyze, which can lead to terrible consequences down the road.

          • @canadaduane your daughter did THE worthwhile work by creating a product description. Being able to make it into an app looks shiny, sure, but is she now able to analyze what the app is doing versus what she designed it to do?

            I’m afraid that if we teach young people to trust the machines to “do the work for them” instead of constantly questioning and double-checking, we will set them up to be manipulated by whoever is controlling the machines.