• Lumisal@lemmy.world

    The sandwich analogy doesn’t work, because there aren’t enough variables to cause significant chaos to the point where a will can be demonstrated. Will implies thinking and decision-making in a chaotic environment, such that intelligence can be assumed; being able to choose among only three options, two of which you start with, demonstrates no more intelligence than random chance.

    Intelligent choice is part of free will, because otherwise it is only instinctual choice. But intelligence by nature allows malevolence, because it allows you to create choices where there were none.

    Also, a paradox doesn’t disprove the existence of a god - if anything, any omnipotent being of any sort would be paradoxical by nature, as omnipotence can only exist in a paradoxical state. If you’re wondering how that could be possible, light is a good example - it is both a wave and a particle, and yet it exists. Being a paradox doesn’t exclude the possibility of something existing.

    Lastly, omnipotence doesn’t exclude desire. For example, if you suddenly gained omnipotent abilities, would you actively use them all? Would you change certain things? Would you change yourself? Would you create something?

    Why?

    The same questions could be asked of any omnipotent being.

    All that said, this simplified chart is missing some options, but then condensing philosophy into a simplified chart is already quite reductive anyhow.

    • KoboldOfArtifice@ttrpg.network

      You make the claim that a will relies on some idea of chaos, which definitely requires some actual explanation.

      The number of choices one has is irrelevant in the comparison to random chance. If the person uses reason to decide on one of several options, they have, in the most common sense, clearly acted out of free will. That assumes free will can exist in a physical universe at all, but we’re in metaphysics anyways.

      I am not sure what it even means to create choices where there were none. If you end up making a decision, then it clearly was an option to begin with, by the definition of what that word means.

      What pointing out the paradox here entails is that amongst the presumptions we made, at least one of them must be false. The argument used in the OP does not disprove the existence of some divine being at all and it’s not trying to. It’s trying to disprove the concept of a deity that has the three attributes of being all-powerful, all-loving and all-knowing. In the argument given, it is shown that at least one of these attributes is not present, given the observation of evil in the world.

      Your comparison to light being described as a particle and a wave works to your own detriment. That duality arose in the first place because our classical, particle-based models of the universe became insufficient to correctly predict newly observed behaviours, so a new model was created that could handle the problem. The reason this is a weak argument here is that no physicist would ever claim that the models describe the world precisely. Physical models are analogies that attempt to explain the world around us in terms humans can understand.

      In your last question, you make the mistake of misunderstanding the argument once again. You grant the person omnipotence and leave it at that. The argument is arguing about the combination of omnipotence, omniscience and all-lovingness. The last of these deals with your question directly, explaining the drive to make the changes in question. The other two grant the ability to do so without limitation.

      This chart isn’t reducing that much at all. It’s explaining a precise chain of reasoning. It may or may not be missing some options, but you haven’t named any so far that weren’t fallacies.

      • Lumisal@lemmy.world

        Ah, you’re right, I did forget the “all-loving” part actually. My bad. I thought you were talking about the Christian Trinity paradox.

        As for chaos being needed for a determination of will, that’s because will requires intelligence. A controlled environment doesn’t lead to intelligent choice but rather to a patterned outcome. ChatGPT is a good example of this.

        As for the “all-loving” part, an argument could only be made for that, from my perspective at least, depending on how you define “love” here. If they see us the same way we see creations we make and love, then it would explain to some degree why the suffering is still allowed. If you build a rugged all-terrain vehicle, you might love what you made, but its purpose would still be to go out there and get scuffed up. I know it’s not the same for us - a vehicle ≠ a person - but to an omnipotent creator being, it could be the same point of view that we have towards a vehicle. In which case it would fit that condition on a technicality.

        I do have a question though - what would it mean if he made both a universe where suffering exists, and one where none does, simultaneously? What would that entail?

        • Olgratin_Magmatoe@lemmy.world

          As for chaos being needed for a determination of will, that’s because will requires intelligence. A controlled environment doesn’t lead to intelligent choice but rather to a patterned outcome. ChatGPT is a good example of this.

          So what turns a controlled environment into a chaotic environment? And what is the problem with a patterned outcome? Intelligence was still used, so what do the results matter?

          This all seems quite arbitrary.

          As for the “all-loving” part, an argument could only be made for that, from my perspective at least, depending on how you define “love” here. If they see us the same way we see creations we make and love, then it would explain to some degree why the suffering is still allowed.

          The problem with this is that an all-loving, omni-benevolent being has not just love for all, but maximal love for all, which contradicts the notion of willingly allowing suffering to exist in any form.

          it could be the same point of view that we have towards a vehicle.

          “You are so lowly that it is permissible to harm you” is not the point of view of an omni-benevolent being.

          • Lumisal@lemmy.world

            So what turns a controlled environment into a chaotic environment?

            Honestly, I don’t know. Maybe mathematicians do, but I imagine it’s a philosophical question. The only agreed-upon thing would be that significant, varied complexity is what is needed for an environment to be deemed chaotic, philosophically. How significant is where the disagreement would be.

            And what is the problem with a patterned outcome? Intelligence was still used, so what do the results matter?

            Well, we’re still trying to determine what, exactly and precisely, “intelligence” is. But ChatGPT is definitely not intelligent, that I do know. I think Google really helped elucidate that point to Americans recently.

            The problem with this is that an all-loving, omni-benevolent being has not just love for all, but maximal love for all, which contradicts the notion of willingly allowing suffering to exist in any form.

            Again, that depends on what kind of “maximal” love. You have maximal love for your parents, for example (assuming you had good parents), but that’s definitely not the same as maximal romantic love.

            If there’s a God and they created everything, well, I assume the “maximal love” would be akin to a human creating something and loving that creation. Considering the massive difference between an omnipotent being and a mortal human, I’m hesitant to even say it’s similar to a human and a self-aware robot.

            Maybe the old Honda bots?

            • Olgratin_Magmatoe@lemmy.world

              The only agreed-upon thing would be that significant, varied complexity is what is needed for an environment to be deemed chaotic, philosophically. How significant is where the disagreement would be.

              Ok, then let’s assume there is a sufficient number of choices to be deemed chaotic. You have 1000 condiments for the sandwich at your disposal; it’s chaotic. However, none of the options are evil.

              The rather arbitrary requirement of chaos is met, a choice is still at hand, meaning free will is still present, all without evil.

              Well, we’re still trying to determine what, exactly and precisely, “intelligence” is. But ChatGPT is definitely not intelligent, that I do know. I think Google really helped elucidate that point to Americans recently.

              So do humans who play tic tac toe lack intelligence? There is a finite and very small number of choices a player can take. It’s a patterned outcome.

              • Lumisal@lemmy.world

                Ok, then let’s assume there is a sufficient number of choices to be deemed chaotic. You have 1000 condiments for the sandwich at your disposal; it’s chaotic. However, none of the options are evil.

                That’s not varied complexity, that’s still just a lot of one thing - condiments.

                Significant varied complexity would be more like 5 condiment choices, 2 bread choices, 3 ham choices (but 1 might be expired even though it’s your favorite), 3 vegetable choices, peanut butter, and 3 jam choices.

                And then, between all that, other things are going on too. You might suddenly decide you don’t want a sandwich. A roach is wondering whether it should scurry across the bread you laid down or near your feet, possibly causing you to injure yourself with the knife. A painter who was painting something dark red may accidentally knock on your door, leading to a misunderstanding. And more.

                None of these choices are evil, but they can lead to suffering or the potential to make a bad choice. And then there’s still defining “evil”. Would eating ham be evil? What about the jam? It could involve minor deforestation for monoculture - is that evil? Is spraying crops with pesticides evil? What about GMOs? These are things that, depending on who you ask, range from evil to bad to neutral to good.

                So do humans who play tic tac toe lack intelligence? There is a finite and very small number of choices a player can take. It’s a patterned outcome.

                False equivalence. The thing is, you can play tic-tac-toe without intelligent decision-making. You could win a game through sheer randomness by just flipping a coin (heads = x, tails = o) and randomly picking a square. Want to take it further? You could draw the # on the ground in autumn, and leaves could just fall into place (red vs yellow) and form what looks like a game of tic tac toe. You don’t need intelligence to play tic tac toe, even though an intelligent being is capable of playing it. You do need intelligence to invent tic tac toe out of unrelated nothingness, however.
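
                (As a rough illustration of that coin-flip idea, here’s a minimal Python sketch - purely my own, with made-up names - that fills a board by flipping a coin for the mark and picking a random empty square, then checks whether a three-in-a-row showed up with no decision-making involved:)

                ```python
                import random

                # All names here are illustrative, nothing more.
                LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                         (0, 4, 8), (2, 4, 6)]              # diagonals

                def random_game():
                    board = [" "] * 9
                    while " " in board:
                        mark = "x" if random.random() < 0.5 else "o"  # coin flip: heads = x, tails = o
                        square = random.choice([i for i, c in enumerate(board) if c == " "])
                        board[square] = mark                          # random square, no reasoning
                        for a, b, c in LINES:
                            if board[a] == board[b] == board[c] != " ":
                                return board[a]                       # a "win" out of pure chance
                    return "draw"

                results = [random_game() for _ in range(10_000)]
                print({r: results.count(r) for r in set(results)})
                ```

                (Run it and three-in-a-rows show up in most of the simulated boards, which is the point: the outcome alone doesn’t tell you whether any intelligence was behind it.)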

                • Olgratin_Magmatoe@lemmy.world

                  Significant varied complexity would be more like 5 condiment choices, 2 bread choices, 3 ham choices (but 1 might be expired even though it’s your favorite), 3 vegetable choices, peanut butter, and 3 jam choices.

                  This doesn’t fundamentally change what I’m getting at. Of all the choices, none of them are evil. Yet they are still choices.

                  None of these choices are evil, but they can lead to suffering or the potential to make a bad choice.

                  Call it evil/suffering/sin/etc, the label is irrelevant to my point.

                  False equivalence. The thing is, you can play tic-tac-toe without intelligent decision-making. You could win a game through sheer randomness by just flipping a coin (heads = x, tails = o) and randomly picking a square. Want to take it further? You could draw the # on the ground in autumn, and leaves could just fall into place (red vs yellow) and form what looks like a game of tic tac toe.

                  I don’t think you quite understood what I was getting at, so let me rephrase. An intelligent actor with free will and an unintelligent actor without it will both have patterned outcomes to games of tic tac toe.

                  So patterned outcome cannot be a deciding factor for what is and what is not free will.