• jadero · 1 year ago

    I agree with most of what you said, but I think I was not clear in my presentation of the domain of operations. I was not speaking of rewriting an existing system, but of gathering requirements for a system that is intended to replace existing manual systems, or of creating systems for brand new tasks.

    That is, there is no existing code to work with, or at least nothing that is fit for purpose. Thus, you are starting at the beginning, where people have no choice but to describe something they would like to have.

    Your reference to hallucination leads me to think that you are limiting your concept of AI to the generative large language models. There are other AI systems that operate on different principles. I was not suggesting that a G-LLM was the right tool for the job, only that AI could be brought to bear in analyzing requirements and specifications.

    • Iteria@sh.itjust.works · 1 year ago

      I wasn’t talking about rewriting an existing system either. I’m talking about adding to a system. In order to do that effectively, you need to understand the system as it stands and consider how any new requirement could clash with, or be impossible under, the current set of requirements. This is why I bring up the AI needing to pull a set of requirements from the existing code. You cannot add requirements without knowing the requirements that already exist.

      I think that hallucination is still a massive issue. I don’t even like to call it hallucination, because what it really is is bad guessing. We should never forget that all any AI does is guess. It doesn’t reason about anything or connect information together, and it will hold contradictory positions because of this.

      Currently we have no way to make an AI declare that it just doesn’t know, or even, very often, ask for more information before making a decision, because the method of training an AI is literally guess and check.
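
      To put “guess and check” concretely, here is a rough toy sketch (made-up vocabulary and numbers, not any real model): the model scores every candidate answer, normalizes the scores into probabilities, and samples one. There is no built-in “I don’t know” outcome; even when nothing scores well, something still gets picked.

          # Minimal toy sketch of sampling-as-guessing (hypothetical values).
          import math
          import random

          def softmax(scores):
              # Normalize raw scores into probabilities that sum to 1.
              exps = [math.exp(s) for s in scores]
              total = sum(exps)
              return [e / total for e in exps]

          vocab = ["yes", "no", "maybe", "42"]   # made-up candidate answers
          logits = [0.1, 0.05, 0.0, -0.1]        # weak, nearly flat scores

          probs = softmax(logits)
          guess = random.choices(vocab, weights=probs, k=1)[0]
          print(dict(zip(vocab, [round(p, 2) for p in probs])), "->", guess)
          # Because the probabilities must sum to 1, some answer is always
          # produced; abstaining isn't an option unless "I don't know" is
          # itself just another token the model was trained to guess.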

      For that reason, I don’t think that AI will ever be the tool for the job when it comes to any kind of requirements gathering. I mean, I guess you could use it, but you always run the risk of ending up like that lawyer who had made-up cases in his results. The AI made things up because all it does is make its best guess, and it doesn’t care if that guess is grounded in much of anything at all.

      • jadero · 1 year ago

        Ah, I understand now. Yes, I think that maybe I agree with you in general.

        I still think that AI operated by ethical experts has much to offer when used not as an automated replacement, but as a tool that can save time and help verify accuracy. I’m thinking in terms of a kind of teamwork where one member of the team is an AI system or assistant.