New development policy: code generated by a large language model or similar technology (e.g. ChatGPT, GitHub Copilot) is presumed to be tainted (i.e. of unclear copyright, not fitting NetBSD’s licensing goals) and cannot be committed to NetBSD.

https://www.NetBSD.org/developers/commit-guidelines.html

    • Terces@lemmy.world · 1 month ago

      How do they know that you wrote it yourself and didn’t just steal it?

      This is a rule to protect themselves. If there is ever a case around this, they can push the blame to the person that committed the code for breaking that rule.

      • Destide@feddit.uk · 1 month ago

        This is the only reason rules exist: not to stop people from doing a thing, but to be able to enforce consequences or deflect responsibility when they do.

        • ripcord@lemmy.world · 1 month ago

          I mean, generally rules at least are to strongly discourage people from doing a thing, or to lead to things that WOULD prevent people from doing a thing.

          A purely conceptual rule by itself would not magically stop someone from doing a thing, but that’s kind of a weird way to think about it.

    • Zos_Kia@lemmynsfw.com · 1 month ago

      I’m saddened to use this phrase but it is literally virtue signalling. They have no way of knowing lmao

      • best_username_ever@sh.itjust.works · 1 month ago

        It’s actually simple to detect: if the code sucks or is written by a bad programmer, and the docstrings are perfect, it’s AI. I’ve seen this more than once and it never fails.

        • Zos_Kia@lemmynsfw.com · 1 month ago

          I’m confused. Do people really use Copilot to write the whole thing and ship it without re-reading it?

          • sugar_in_your_tea@sh.itjust.works · 1 month ago

            I literally did an interview that went like this:

            1. Applicant used copilot to generate nontrivial amounts of the code
            2. Copilot generated the wrong code for a key part of the algorithm; applicant didn’t notice
            3. We pointed it out, they fixed it
            4. They had to refactor the code a bit, and ended up making the same exact mistake again
            5. We pointed out the error again…

            And that’s in an interview, where you should be extra careful to make a good impression…

          • neclimdul@lemmy.world · 1 month ago

            Not specific to AI but someone flat out told me they didn’t even run the code to see it work. They didn’t understand why I would or expect that before accepting code. This was someone submitting code to a widely deployed open source project.

            So, I would expect the answer is yes, or will very soon be yes.

          • best_username_ever@sh.itjust.works · edited · 1 month ago

            Around me, most beginners who use that don’t have the skills to understand or even test what they get. They don’t want to learn I guess, ChatGPT is easier.

            I recently suspected a new guy was using ChatGPT because everything seemed perfect (grammar, code formatting, classes made with design patterns, etc.) but the code was very wrong. So I did some pair programming with him and asked if we could debug his simple application. He didn’t know where the debug button was.

            • Zos_Kia@lemmynsfw.com · 1 month ago

              Guilty as charged, ten years into the job and I never learned to use a debugger lol.

              Seriously though, that’s amazing to me. I’ve never met one of those… I guess 95% of them will churn out of the industry in less than five years…

              • Tja@programming.dev · edited · 1 month ago

                Debug button? There is a button that inserts `printf("%s:%d boop!\n", __FUNCTION__, __LINE__);`?

        • TimeSquirrel@kbin.social · edited · 1 month ago

          So your results are biased, because you’re not going to see the decent programmers who are just using it to take mundane tasks off their back (like generating boilerplate functions) while staying in control of the logic. You’re only ever going to catch the noobs trying to cheat without fully understanding what it is they’re doing.

          • best_username_ever@sh.itjust.works · 1 month ago

            You’re only ever going to catch the noobs.

            That’s the fucking point. Juniors must learn, not copy-paste random stuff. I don’t care what seniors do.

    • ceasarlegsvin@kbin.social · 1 month ago

      Because they’ll be shit?

      Docstrings based on the method signature and literal contents of a method or class are completely pointless, and that’s all Copilot can do. It can’t intuit any of the things docstrings are actually there for.

      • Zos_Kia@lemmynsfw.com · 1 month ago

        Definitely not my experience. With a well-structured code base it can be pretty uncanny. I think its context is limited to the files currently open in the editor, so that may be your issue if you’re coding with just one file open?

        • TimeSquirrel@kbin.social · edited · 1 month ago

          GitHub Copilot introduced a new keyword a little while ago, “@workspace”, where it can see everything in your project. The code it generates uses all your own functions and variables in your libraries and it figures out how to use them correctly.

          There was one time where I totally went “WTF”, because it spat out Python. In a C++ project. But those kinds of hallucinations are getting rarer and rarer. The more code you write, the better it gets. It really does become sort of like a “copilot”, sitting there coding alongside you. The mistake people make is assuming it’s going to come up with ideas and algorithms for them without their spending any mental energy at all.

          I’m not trying to shill. I’m not a programmer by trade, just a hobbyist who started on QBasic in the ancient times. But I’ve been trying to learn off and on for the past 30 years, and I’ve never learned so much, or had so much fun, as in the last 1.5 years with AI help. I can just think of stuff to do, and shit will just flow out now.

  • Todd Bonzalez@lemm.ee · 1 month ago

    Lots of stupid people asking “how would they know?”

    That’s not the fucking point. The point is that if they catch you, they can block future commits and review your past commits for poor-quality code. They’re setting a quality standard and establishing consequences for violating it.

    If your AI generated code isn’t setting off red flags, you’re probably fine, but if something stupid slips through and the maintainers believe it to be the result of Generative AI, they will remove your code from the codebase and you from the project.

    It’s like laws against weapons. If you have a concealed gun on your person and enter a public school, chances are that nobody will know and you’ll get away with it over and over again. But if anyone ever notices, you’re going to jail, you’re getting permanently trespassed from school grounds, and you’re probably not going to be allowed to own guns for a while.

    And, it’s a message to everyone else quietly breaking the rules that they have something to lose if they don’t stop.

    • Optional@lemmy.world · 1 month ago

      Lots of stupid people asking “how would they know?”

      That’s not the fucking point.

      Okay, easy there, Chief. We were just trying to figure out how it worked. Sorry.

      • NotMyOldRedditName@lemmy.world · edited · 1 month ago

        It was a fair question, but this is just going to turn out like universities failing or expelling people over alleged AI content in papers.

        They can’t prove it. They try to use AI-detection tools to prove it, but those same tools will say a thesis paper from a decade ago is AI-generated. I’m pretty sure I saw a story about a professor who accused a student based on such a tool, only to have his own past paper fail the same tool.

        Short of an admission of guilt, it’s a witch hunt.

  • WalnutLum@lemmy.ml · 1 month ago

    This is a good move for international open source projects. With multiple lawsuits currently ongoing in multiple countries around the globe, the intellectual-property status of code made using AI isn’t really settled enough to open yourself up to the liability.

    I’ve done the same internally at our company. You’re free to use whatever tool you want, but if the tool you use spits out copyrighted code, and the law eventually decides that model users rather than model trainers are liable for model output, then that’s on you, buddy.

    • sugar_in_your_tea@sh.itjust.works · 1 month ago

      Yup. We don’t allow AI tools on our codebase, but I allow them in interviews. I honestly haven’t been impressed by them at all; they just encourage not understanding the code.

    • brbposting@sh.itjust.works · 1 month ago

      Does this mean you have indicated to your employees and/or contractors that you intend to hold them legally liable in the event someone launches litigation against you?

  • Elias Griffin@lemmy.world · edited · 1 month ago

    So proud of you, NetBSD; this is why I sponsor you. Slam dunk for the future. I’m working on a NetBSD hardening script and rice as we speak. Great OS with some fantastically valuable niche applications and, I think, a new broad approach I’m cooking up: a University Edition. I did hardening for all the other BSDs; I saved the best for last!

    [EDIT 5/16/2024 15:04 GMT -7] NetBSD got Odin lang support yesterday. That totally seals the NetBSD deal for me if I can come up with something cool for my workstation with Odin.

    If you would like to vote on whether, or by what year, AI will be in the Linux kernel, there’s a poll on Infosec.space:

    https://infosec.space/@wravoc/112441828127082611

  • jaybone@lemmy.world · 1 month ago

    I was hoping they’d ban it because it’s shit, but banning it for copyright reasons is fine too.

  • bamboo@lemm.ee · 1 month ago

    I can understand why a project might want to do this until the law is fully settled and tested in court, but I can tell most of the people in this thread haven’t actually figured out how to use LLMs productively. They’re not about to replace software engineers, but as a software engineer, tools like GitHub Copilot and ChatGPT are excellent at speeding up a workflow. ChatGPT, for example, is an excellent search engine that can give you a quick understanding of a topic. It’ll generate small amounts of code more quickly than I could write them by hand. Of course I’m still going to review that code to ensure it’s of the same quality hand-written code would be, but overall this is still a much faster process.

    The luddites who hate on LLMs would have complained about the first compilers too, because they could write marginally faster assembly by hand.

    • Tja@programming.dev · 1 month ago

      Same for IntelliSense, IDEs, debuggers, linters, static analyzers, dynamic languages, garbage collection, NoSQL databases…

  • hperrin@lemmy.world · edited · 1 month ago

    Hell yeah! Get that shit… OUTTA HERE!!!

    Ok but seriously, that is a very good reason to ban it. Who knows what would happen if the AI fully ripped off someone else’s code that’s supposed to be GPL-licensed or something. If humans can plagiarize, then AIs can plagiarize.

    But also, how are they still using CVS? CVS is so slow and so bad. Even Subversion would be an upgrade.

  • cum@lemmy.cafe · 1 month ago

    I get banning it for quality, but banning it over potential copyright issues is pretty stupid.