• Tolstoshev@lemmy.world · 8 months ago

    It was ever thus:

    A lie gets halfway around the world before the truth has a chance to get its pants on.

    Winston Churchill

  • Buffalox@lemmy.world · 8 months ago

    And trust me, these generated images are getting scarily good.

    I have to agree, I would not be able to spot a single one of them as fake. They look really convincingly authentic IMO.

    • Flying Squid@lemmy.worldOP · 8 months ago

      Stalin famously ordered people he had killed erased from photos.

      Imagine what current and future autocratic regimes will be able to achieve when they want to rewrite their histories.

        • Carrolade@lemmy.world · 8 months ago

          Probably just because some people really like Stalin, and have become convinced his accounts are the truthful ones and everyone else lies about him.

          • Buffalox@lemmy.world · 8 months ago

            That’s a scary thought!! But all kinds of crazy exist, and I mean people have to be literally crazy to want to live under a regime like the one Stalin made.

        • StarkWolf@kbin.social · 8 months ago

          With AI video also getting increasingly impressive and believable, I worry that we will soon live in a world where you could have actual video evidence of a murder, and see that evidence dismissed or cast into doubt because of how easy, or supposedly how easy, it would be to fake.

          • Buffalox@lemmy.world · 8 months ago

            Absolutely, only video from trusted sources can be used. But isn’t that already the case?

            • StarkWolf@kbin.social · 8 months ago

              I think they are both equally scary. I’m imagining cases where photo and video evidence have played major roles in proving police abuses of power, for example.

              We will certainly have an onslaught of people faking evidence of all sorts of things to push a political narrative, but equally, any politically inconvenient photos or videos of real things that really happened might be swept under the rug as “someone probably just faked that for political gain.” Sure, you could have an investigation into the authenticity of the evidence, or look at other forensic evidence, but probably only if you can afford one, or if enough public attention gets drawn to it.

              I fear we are reaching a scary time where, in a sense, reality will be whatever people want it to be, and we will increasingly be unable to trust anything we see as real with absolute certainty. We have been headed down this road for a very long time, but this will just make it much worse.

        • fuckwit_mcbumcrumble@lemmy.world · 8 months ago

          “Photoshopping” something bad has existed for a long time at this point. AI-generated images don’t really change anything other than the entire photo being fake instead of just a small section.

          • TheFriar@lemm.ee · 8 months ago

            I’d disagree. It now takes zero know-how to convincingly create a false image. And it takes zero work. So where one photo would take one person a decent amount of time to convincingly pull off, now one person can create 100 images or more in that time, each one a potential time bomb that will go off when it starts getting passed around as evidence of something. And there are uncountable numbers of bad actors on the internet trying to cause a ruckus. This just increased their chances of succeeding at least 100-fold, and opened access to many, many others who might just do it accidentally, for a joke, or who always wanted to make waves but didn’t have the Photoshop skills necessary.

          • Flying Squid@lemmy.worldOP · 8 months ago

            It changes a lot. Good Photoshopping skills would not create the images as shown in the article.

            • Aniki 🌱🌿@lemm.ee · 8 months ago

              Yeah, some of these would be like 100-layer creations if someone was doing it themselves in Photoshop – it would take a professional or near-professional level of skill.

          • uienia@lemmy.world · 8 months ago

            The ease and speed with which AI photos can be created, at a quality most photoshoppers could only dream of, very much changes everything.

      • magic_lobster_party@kbin.run · 8 months ago (edited)

        Digital image editing has been really good for this kind of stuff for quite a while. Now it’s even easier with content aware fill.

        Unless you’re the PR manager for the British Royal family. Then you somehow lack the basic skills to make convincing edits.

      • Cosmic Cleric@lemmy.world · 8 months ago (edited)

        Honestly, it looks like the picture on the left is fake, like the guy was inserted into it. Just look at his outline, compared with the rest of the background.

        (I’m no Stalin fan, just commenting on the picture itself.)

      • smileyhead@discuss.tchncs.de · 8 months ago

        I can imagine such regimes nowadays developing some sort of cryptographic photo attestation, so any photo not signed by them would be shown as untrusted, regardless of whether it’s fake or not. And all the code from the processor to the camera app would need to be approved by their servers in order to get a signature.

        Oh wait! Our great friends at Adobe, Intel, Google and Microsoft are already working on just that: https://c2pa.org/
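        The mechanics of that kind of attestation are simple enough to sketch. Below is a toy model in Python – emphatically not the actual C2PA protocol, which uses signed manifests and public-key certificate chains; here a shared-secret HMAC stands in for the device signature, and every name and byte string is made up for illustration:

```python
import hashlib
import hmac

# Toy model of photo attestation (NOT the real C2PA scheme): the
# "camera" holds a signing key and attaches a MAC over the image
# bytes; a verifier recomputes it and treats anything unsigned or
# altered as untrusted. A real system would use public-key
# signatures so verifiers never need the secret key.

CAMERA_KEY = b"hypothetical-device-secret"

def attest(image_bytes: bytes) -> bytes:
    """Signature the camera would embed alongside the captured image."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()

def verify(image_bytes: bytes, signature: bytes) -> bool:
    """True only if the image bytes match the embedded signature."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG raw sensor data"
sig = attest(photo)

print(verify(photo, sig))               # untouched photo: True
print(verify(photo + b" edited", sig))  # altered photo: False
```

        Which is exactly the worry in the comment above: the scary part is governance, not cryptography. Whoever controls the signing key (or the certificate chain in a real deployment) gets to decide what counts as “trusted.”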

  • anticurrent@sh.itjust.works · 8 months ago (edited)

    The cat is out of the bag. Every nation and company is racing to invent the most advanced AI ever, and we are entering times when the negative impact of AI outweighs its positive uses.

    I am really feeling uneasy about the uncertain times ahead of us.

    • VinnyDaCat@lemmy.world · 8 months ago

      I used to be excited about it, especially the image generation AI.

      I believe that the internet has already lost a lot of authenticity in general. The amount of misinformation boomers and gen X lap up on their socials is unreal.

      Having advanced image/video AI that would force people to call everything into question, to double check and to fact check sounded good. Except, people aren’t fact checking.

  • FaceDeer@kbin.social · 8 months ago

    The article opens:

    When I first started colorizing photos back in 2015, some of the reactions I got were, well, pretty intense. I remember people sending me these long, passionate emails, accusing me of falsifying and manipulating history.

    So this is hardly an AI-specific issue. It’s always been something to be on guard for. As others in this thread have pointed out, Stalin was airbrushing political rivals out of photos back in the 30s. Heck, damnatio memoriae goes back as far as history itself. Ancient Pharaohs would have the names of their predecessors chiseled off of monuments so they could “claim” them as their own work.

    • TurtleJoe@lemmy.world · 8 months ago

      I mean, the ability to churn out massive amounts of these fake photos with no effort on the part of the user, causing them to pollute real Internet searches (also now “augmented” by ML themselves), is definitely AI-specific.

      Also, colorizing photos is not the same thing as making fake ones.

      • Couldbealeotard@lemmy.world · 8 months ago

        The internet has never been a reliable source of information. The only thing that changes is how safe you feel about it. When the internet first began it was mysterious and scary, then at some point people felt safe, now we go back to scary.

        People should not feel safe on the internet. It is inherently unsafe.

    • yamanii@lemmy.world · 8 months ago

      Is there a non-zero chance Nero was slandered by political opponents? I remember reading that in one of those old “secret history” type books.

      • kromem@lemmy.world · 8 months ago

        Yes. In general most of what we think we know about the emperors in terms of anecdotes are suspect relative to positive or negative biases in sources.

        It’d be kind of like history fans in 4024 talking about George Washington and cherry trees.

      • sunbytes@lemmy.world · 8 months ago (edited)

        And for a reasonable price, the AI corporations will sell you the chance to survive in the world they created for you.

        • Neato@ttrpg.network · 8 months ago

          All we really need is some black hat AI developers or power users to make enough compromising and hard-to-detect deepfakes of Congresspeople in the US, and all of this generative AI will be banned so fast. I’m surprised it hasn’t happened yet.

          • Transporter Room 3@startrek.website · 8 months ago (edited)

            Sir, this is the internet.

            You can find Ai porn of a great many members of congress.

            It just isn’t popular to post on major websites like certain celebrities have been recently.

          • uienia@lemmy.world · 8 months ago

            It doesn’t really matter if they ban it or not; it is way too late for that anyway. It will be out there and used by people who want to use it. It is like nuclear technology: once it is out of the bag, it cannot be put back in. Only this is way easier to use than nuclear technology – a single gradeschooler can use it.

      • hansl@lemmy.world · 8 months ago

        Check out Adobe’s Content Authentication Initiative. It won’t prevent those images but it will allow you to verify their source, which in this case should not authenticate.

  • hamid@lemmy.world · 8 months ago

    The past we know is a carefully crafted and curated story, not at all accurate as it is. It is valuable to learn and understand, but also to be skeptical. I don’t really think widespread forgery changes that. Historiography is a very important field.

    Any serious historical research will have to verify the physical copies exist or existed in a documented way to be admitted as evidence. This is called chain of custody and is already required.

    • pup_atlas@pawb.social · 8 months ago

      That’s for historians and professional researchers. It may not sway the field at large, but it’s still a huge risk to public opinion. I shudder to think of the propaganda implications for rewriting history in a near indistinguishable way.

      • hamid@lemmy.world · 8 months ago

        Public opinion can be swayed with a TV advertisement by a game show host and real estate developer. We are all so insanely propagandized now it can’t be more so.

        • pup_atlas@pawb.social · 8 months ago

          Sure, but a TV ad takes (at the least) an editor, or (at the most) a cast and crew. They take both money and time to create, and loop average working people into the process. Of course there will be people in any profession who will make whatever they’re paid to, but by and large, most of the acting/editing industry has some form of ethics.

          People debunking false claims takes time too, but since creating them takes time as well, things have a chance to balance out (obviously that’s happening less, but there’s still a chance for it to happen). But if an AI model can pump out fake history autonomously, almost instantly, and without any chance for a human with ethics to intervene in the process, debunking/fighting misinformation becomes WAY harder. Because you’re not fighting a person with limited time and resources anymore, you’re fighting a firehose of false content that will bury you without even breaking a sweat.

        • Manmoth@lemmy.ml · 8 months ago

          Agreed. “AI” will add some interesting augmentations but when it comes down to it they can just tell a bald-faced lie and most people will believe it because it’s on the news.

  • Cosmic Cleric@lemmy.world · 8 months ago

    From the article…

    The real danger lies in those images that are crafted with the explicit intention of deceiving people — the ones that are so convincingly realistic that they could easily pass for authentic historical photographs.

    Fundamentally, at a meta level, the issue is this: are people allowed to deceive other people by using AI to do so?

    Should all realistic AI-generated things be labeled as such?

    • Drewelite@lemmynsfw.com · 8 months ago

      There’s no realistic way to enforce that. The answer is to go the other way. We used to have systems in place for accountability of information. We need to bring back institutions for journalism and historians to be trustworthy sources that cite their work and show their research.

      • Cosmic Cleric@lemmy.world · 8 months ago (edited)

        There’s no realistic way to enforce that.

        You can still mandate through laws that any AI generated product would have to have a label on it, identifying itself as such. We do the same thing today with other products that are manufactured and sold (recycling icons, etc).

        As far as enforcement goes, the public themselves would ultimately (or in addition to) be the enforcers, as the recent British royal family photos scandal suggests.

        But ultimately Humanity has to start considering laws that affect the whole species, ones that don’t just stop at an individual country’s border.

        • Drewelite@lemmynsfw.com · 8 months ago

          Don’t get me started on the sham that is recycling icons 😂

          I’m all for making regulation that would require media companies to disclose that something is fake if it could be reasonably taken as truth. But that doesn’t solve the problem of anyone with a computer pumping fake images on to the web. What you’re suggesting would require a world government that has chip level access to anything with a CPU.

          As for the public enforcing the truth; that’s what I’m suggesting. Assume anything you see online could be fake. Only trust trustworthy institutions that back up their media with verifiable facts.

          • Cosmic Cleric@lemmy.world · 8 months ago

            What you’re suggesting would require a world government that has chip level access to anything with a CPU.

            Well, not something that harsh, but I think we’re looking at losing some of the faux anonymity that we have (no more sock puppet accounts, etc.).

            Most people haven’t thought far enough ahead on what this means, all of the ramifications, if we let AI run rampant on the human ‘public square’.

            Instead of duplicating my other comment on this subject, I’ll just link to it here.

        • Antagnostic@lemmy.world · 8 months ago

          Physical products are not the same as digital products. Your suggestions are very unrealistic.

        • T156@lemmy.world · 8 months ago

          Problem with that is that for data, it’s much easier to lie and get away with it. If a bot throws up an unlabelled AI generated image, law enforcement agencies would have a much harder time tracking down who made it.

          There could be hundreds, or even thousands, and the moment they pin one down, more will appear.

          By comparison, physical products can only be made and enter the country so quickly. There are physical factories where they can be tracked down, and it’s prohibitively expensive to spin up a new product line every time the other one is shut down.

          • Cosmic Cleric@lemmy.world · 8 months ago (edited)

            Hot take incoming…

            If a bot throws up an unlabelled AI generated image, law enforcement agencies would have a much harder time tracking down who made it.

            Well they would just start with the person who has the user account, or the site that the user account is associated with (we might end the days of being able to have sock puppet accounts). Or they get that information from the NSA (the government knows every one of your porn fetishes).

            Honestly, I realize what I’m stating is not as easy to do as I’m saying it is, and making it actually work would be kind of ugly and not completely fair to all parties, but it is something that is actually doable, and needed.

            We shouldn’t just throw up our hands on day one and say “fuck it, nothing can be done about it”, and then we all suffer in the pollution of the human conversational-sphere to the point that no one can converse with each other anymore because of all the garbage.

            When we stop talking to each other, because we think everything is just AI generated, that’s a formula for destruction for the human race. We have to be able to talk to each other, and be confident that we’re actually talking to each other, and not a robot.

            /getsoffsoapbox

            • Drewelite@lemmynsfw.com · 8 months ago

              Well for the majority of human existence we got by on talking to each other in person. So I think the collapse of humanity is a bit dramatic.

              Now, as we’ve seen with torrenting, if any country doesn’t comply or enforce laws against how their citizens should interact with the internet you can just VPN through that country to do what you want.

              Ok so

              1. Create the infrastructure for an entire world government.
              2. Force every country to join and fully enforce laws tying every person to their online accounts.
              3. Of course this will create a dangerous police-state like China’s government for many countries where speaking out against your government is dealt with harshly. So either abolish free speech or fix all corruption in all the countries in the world.
              4. Of course this level of control over the world will attract a lot of corruption itself, so build an unassailable global set of checks and balances for how this government should be run that literally everyone on earth can agree on.

              Or

              Proper journalism.

              • Cosmic Cleric@lemmy.world · 8 months ago (edited)

                Well for the majority of human existence we got by on talking to each other in person. So I think the collapse of humanity is a bit dramatic.

                We have never had the ability to con each other so completely, and in such large numbers, as we do today with the Internet and specialized networks.

                And more importantly, you always knew you were talking to another person, and not a conflict bot or an astroturfing bot, or a political party bot, etc. Now, you don’t, which is my point. We can’t solve problems if we don’t know we’re talking to a person versus a not person.

                I wouldn’t be so quick to dismiss what I’m saying.

              • Cosmic Cleric@lemmy.world · 8 months ago

                Proper journalism.

                If the last couple of years proves anything, that’s not going to save us, not that alone.

                You’re making an assumption that 100% of people are aware enough to consume the proper journalism and make the proper decisions.

                Right now large swaths of people are being convinced the things that are not true through improper journalism.

  • NeoNachtwaechter@lemmy.world · 8 months ago

    AI is creating fake everything, and that means problems, problems, problems everywhere…

    During the last decades, IT guys and scientists have always dreamed about using AI for good things. But now AI has become so much better at creating fake things than good things :-(

      • lemmyvore@feddit.nl · 8 months ago

        Yes but it took a lot of work and the person doing it had spent a long time training. AI has made it very fast and very accessible.

        • Laticauda · 8 months ago

              You also needed an original to make the fake with Photoshop; with AI you don’t need that, so there are no receipts, so to speak, to pull to prove that it’s fake.
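              A small illustration of those “receipts”, with made-up bytes standing in for real files: when a fake is derived from a known original, even a plain content hash of the archived copy exposes the edit, but a fully generated image has no original on record to compare against.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Content hash acting as a 'receipt' for an archived photo."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical archived original, and a doctored copy derived from it.
archived_original = b"\x89PNG bytes of the archived press photo"
doctored_copy = archived_original.replace(b"press", b"fake!")

# The receipt catches the edit: the doctored copy no longer matches.
print(fingerprint(doctored_copy) == fingerprint(archived_original))  # False

# An AI image generated from scratch has no archived original, so
# there is no receipt like this to check it against.
```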

        • QuaternionsRock@lemmy.world · 8 months ago

          It kind of takes the wind out of the sails, though. Everyone freaked out when Photoshop became a thing because it made doctoring images easier than doing it by hand. If Photoshop didn’t break the world, I have a hard time believing that “easier Photoshop” will either.

    • Carrolade@lemmy.world · 8 months ago
      8 months ago

      It’s not really a new problem, people were doing it with their imaginations and stories long before AI came around. The tools of the digital age simply amplified the effect. Healthy skepticism is still the solution, that hasn’t changed.

      It’ll never actually go away, though. Of all the possible ways of looking at any given situation, the vast majority will always be inaccurate. Fiction simply outnumbers nonfiction. Wrong answers outnumber correct answers.

      So, the adjustment has to be inside of us, and again, it’s always been necessary. This isn’t fundamentally new.

      • uienia@lemmy.world · 8 months ago (edited)

        The new thing is the scope in which fake content is being created. In a very near future most internet content will be fake, including history. That is not something that has happened before in history.

        The current AI situation is completely unprecedented in history.

        • Carrolade@lemmy.world · 8 months ago
          8 months ago

          I would disagree. I think if we go back even a few centuries, we find that virtually nobody had a firm grasp on historical fact, due to the printing press not being invented yet, alongside archeological techniques not existing.

          • Laticauda · 8 months ago

            As someone with an academic background in history, historical record keeping both written and oral existed long before the printing press.

            • Carrolade@lemmy.world · 8 months ago
              8 months ago

              Certainly, but before widespread literacy, did a large portion of the populace have interest in and access to them? Particularly an accurate understanding of how their own culture fit into the broader scope of human history?

              • Laticauda · 8 months ago (edited)

                That depends on the culture and the method of distribution, many cultures that practice oral history did have widespread interest and access to it and an understanding of how their culture fit into the broader scope of the world to some degree, though the way they understood or related to it might differ from culture to culture (some cultures tie their history to places, or names, or events, or people or seasons, etc). As another example, the Romans are well known for their prolific historiography and many of their surviving texts are still referenced to this day. Look up Pliny the Elder and Pliny the Younger, who were just as well known and respected as historians at the time as they are now. While written works such as the Natural History (written by Pliny the Elder and believed to be the first encyclopedia) would often be released to the public to be copied and spread, they would also often recite written works orally, so illiteracy wasn’t as much of a barrier as you’d think. Oral history is a lot more important in providing a record of a culture’s history, as well as making that history accessible to others, than a lot of people think. It was important in ancient Greece as well, and is a huge part of many other cultures around the world, including many indigenous ones. It’s also not as inaccurate or unreliable as some people might think, as there were many methods these cultures used and still use to preserve the accuracy of their oral history as it was passed down from generation to generation.

                Now in terms of awareness, obviously there was propaganda and rewritten history going on back then just as there is now, but it’s not as if none of the citizens would have been aware of that. One of the papers I wrote for a class about the importance of comparing primary sources featured 3 different accounts of what Athens was like and the views people there held at a certain point in history from 3 different people of varying social and financial status, and there was absolutely awareness of that sort of dissonance between what their government claimed and what the reality was even among the more common folk. So I would say they did certainly have a significant understanding of how their culture fit into the broader scope of human history.

                • Carrolade@lemmy.world · 8 months ago

                  Which is why they claimed their city was founded by a couple of brothers of divine origin, right? And calling Pliny’s Naturalis Historia respected by modern historiography is laughable, I’m sorry. Naturally it wasn’t his fault, he was mainly compiling other primary sources of his time, but it is in no way something that should simply be taken at face value.

                  Regardless, my broader point was never to try to say that history began with the printing press or something. Clearly, if it were not for older records in everything from the knotwork language of ancient Peru to newly readable scrolls recovered from the Vesuvius eruption, we wouldn’t have any clue what happened previous to the 15th century, now would we? Which, clearly we do.

                  Instead, I was making a point about the nature of information accuracy, and the importance of skepticism in approaching information. In the same way I wouldn’t want to read Pliny and assume its contents were 100% accurate, I also wouldn’t want to just believe everything I see online. It’s not new to have reason to doubt our information space, and thus the effects of AI misinformation are overblown imo. Appropriate skepticism and critical thinking skills are still a viable solution.

                  Lastly, please explain how this:

                  So I would say they did certainly have a significant understanding of how their culture fit into the broader scope of human history.

                  follows from this:

                  One of the papers I wrote for a class about the importance of comparing primary sources featured 3 different accounts of what Athens was like and the views people there held at a certain point in history from 3 different people of varying social and financial status, and there was absolutely awareness of that sort of dissonance between what their government claimed and what the reality was even among the more common folk.

                  I fail to see how three people disagreeing about Athenian history means they understood how Athenian history fit into global history.

        • djnattyp@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          8 months ago

          I mean, maybe it has happened before in history, but someone changed it via AI and we just don’t know…

  • RIPandTERROR@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    34
    arrow-down
    19
    ·
    8 months ago

    “statement headline” + “and here’s how you should think” = fuck right the unholy toe fungal hell off.

  • LockheedTheDragon@lemmy.world
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    7
    ·
    8 months ago

    When I read the title I sarcastically thought "Oh no, why is AI deciding to create fake historical photos? Is this the first stage of the robot apocalypse?" I find the title mildly annoying because it puts the blame on the tool and ignores that people are using it to do bad things. A lot of discussions about AI do this. It is like people want to avoid admitting that how people use and train the tool is the issue.

    • Laticauda
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      8 months ago

      At this point that’s the equivalent of complaining about people calling gun violence a problem because “guns don’t kill people, people kill people”. If you hand the public easy access to a dangerous tool then of course they’re going to use it to do dangerous things. It’s important to recognize the inherent danger of said tool.

      • LockheedTheDragon@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        8 months ago

        AI is more like torrents, password cracking software, TOR, etc. than guns. Just because they can be used for bad or illegal things doesn't mean those software programs are bad. When companies in the past tried to get certain software banned, they ran into the issue that if it could be used for legal purposes, that was enough for it to exist legally.

        Now, AI does have issues with how it is trained, so the AI itself can be problematic.

        I didn't say we shouldn't talk about the problems with AI. My issue is with people making the AI the complete issue, ignoring the people who use it. It reminds me of how automakers tried to make the people driving cars the reason for deaths in car crashes. Thankfully that didn't work, and automakers were forced to make cars safer, making the roads safer. It didn't stop car crashes from happening, since the human element is still there, and there are things in place that partly address that (such as driver's license tests, taking away some people's licenses, and ads reminding people of the rules of the road). I'm annoyed that articles are doing the opposite of what happened with automakers. Humans are using the AI to do bad stuff; mention that too! How can we change that? Yeah, it will probably be best to do something to the AI program, but we can't ignore the human element, since humans are the ones creating the AI, using the AI, and consuming the AI products.

        People use guns to kill people, so we need to look at both to make it happen less.

    • Flying Squid@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      9
      ·
      8 months ago

      Isn’t the tool part of the issue? If you sell bomb-making parts to someone who then blows up a preschool with them, aren’t you in some way culpable for giving them the tool to do it? Even if you only intended it to be used in limestone quarries?

      • WhatAmLemmy@lemmy.world
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        1
        ·
        edit-2
        8 months ago

        That really depends on whether the bomb making part is specific to bombs, and if their purchase of that item could be considered legitimately suspicious. Many over the counter products have the potential to be turned into bombs with enough time or effort.

        If a murderer uses a hammer, do you think the hardware store they purchased the hammer from should be liable?

        You can make crude chemical weapons by mixing bleach with other household items. Should the supermarket be liable for people who use their products in ways they never intended?

        • kromem@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          8 months ago

          Exactly this, many times over.

          Most tools with legitimate uses also have unethical uses.

      • Grimy@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        2
        ·
        8 months ago

        Everything needed to make a bomb can be found at your local Walmart. Nobody blames the gas companies when something gets molotoved.

      • 4am@lemm.ee
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        8 months ago

        Maybe if the tool’s singular purpose was for killing. I think guns might be a better metaphor there. Explosives have legitimate uses and if you took the proper precautions to vet your customers then it’d be hard to blame you if someone convincingly forged credentials, for example.

      • Grangle1@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        ·
        8 months ago

        I would say the supplier is culpable if the tool supplied is made for the purpose of the harm intended or if the supplier is giving the tool to the person who does the harm with the explicit intent for that person to use it for that harm. For example, giving someone an AK-47 to shoot someone or a handgun/rifle with the intent that the user shoot someone with it. If the supplier gives someone a tool to use for one legit purpose but the user uses it for a harmful purpose instead, I don’t think you can blame the supplier for that. For example, giving someone a knife to cut food with, and then the user goes and stabs someone with it instead. That’s entirely on the user and nobody else.

          • Grangle1@lemm.ee
            link
            fedilink
            English
            arrow-up
            7
            ·
            8 months ago

            To clarify, instead of intent a better word may be knowledge. If the supplier knows that the user is going to use the tool for harm but gives the tool to the user anyway, then the supplier shares culpability. If the supplier does not (reasonably) know, either through invincible ignorance (the supplier could not reasonably know) or the user’s deception (lying to the supplier), then the supplier is not culpable.

    • Cosmic Cleric@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      2
      ·
      8 months ago

      For the worse.

      Not necessarily.

      But we’re going to have to deal with the basic issue of deceiving someone with AI, and if any AI generated thing should be labeled or not as such.

      Basically, a legislative fix, and not just a free market free for all.

      • Couldbealeotard@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        8 months ago

        How do you enforce labelling when there will never be a way to reliably test if something was ai generated?

        Basic is not a word that fits the situation.

        • Cosmic Cleric@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          8 months ago

          How do you enforce labelling when there will never be a way to reliably test if something was ai generated?

          If the icon is not there, and it's later determined that the image is AI generated (as it was with that British royal family picture the other day), then it gets caught. Crowd-sourced.

          • Couldbealeotard@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            8 months ago

            I am not understanding you. Or perhaps you’re not understanding me.

            Firstly, the British royal family photo was not ai generated.

            If you can’t find a way to test if something is ai generated, who decides what is or isn’t ai generated?

    • plz1@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      8 months ago

      I just listened to the Criminal podcast on that, recently. Fascinating cultural moment.

  • OmegaMouse@pawb.social
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    3
    ·
    8 months ago

    Interesting article, and a worrying trend. Stamping a bit of text like ‘Generated by Midjourney’ is ridiculously weak protection though. I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.

    Just found the wikipedia page for steganography. Have any AI companies tried using this technique I wonder? 🤔
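    The basic idea is simple enough to sketch. Here's a toy least-significant-bit (LSB) example in Python (function names and the flat pixel list are just illustrative; real invisible watermarks use far more robust techniques that survive re-encoding and cropping):

```python
# Toy LSB steganography: hide a short text tag in the lowest bit of each
# pixel value. Changing only the LSB shifts each pixel by at most 1/255,
# which is invisible to a human viewer.

def embed_tag(pixels, tag):
    """Hide `tag` (bytes) in the LSBs of `pixels` (ints 0-255)."""
    bits = []
    for byte in tag:
        bits.extend((byte >> i) & 1 for i in range(8))
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_tag(pixels, length):
    """Read `length` bytes back out of the pixel LSBs."""
    data = bytearray()
    for byte_idx in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[byte_idx * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# A flat list of fake grayscale pixel values standing in for an image
image = [200, 13, 77, 255, 0, 42] * 20
stamped = embed_tag(image, b"AI-GEN")
assert extract_tag(stamped, 6) == b"AI-GEN"
```

    Of course, this naive version is destroyed by something as simple as re-saving the image as a JPEG, which is part of why it's such weak protection on its own.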

    • Flying Squid@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      22
      ·
      8 months ago

      The problem is that even if Midjourney did that, there will be other creators who have no such moral or ethical issues with people using their software to make these fake photos, without any sort of hidden or obvious data to show that they are fakes. And then there will be the ones with money from a state behind them, and possibly a very large library of surveillance photos for the AI to learn from.

    • Olgratin_Magmatoe@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      8 months ago

      I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.

      Said protection would also be hilariously weak. It would be easy for malicious actors to strip/alter the metadata of the image. And embedding the flag in the image itself is something that can be circumvented by using a model that doesn’t apply any flag.

      We’re about to live in a world where nobody can tell truth from fiction.

      • Carrolade@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        8 months ago

        We’re about to live in a world where nobody can tell truth from fiction.

        I would argue that our long history of devising myths indicates we have always lived so.

        • Olgratin_Magmatoe@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          8 months ago

          That’s a fair assessment, but I think it’s going to get a whole lot worse.

          Before, to the degree that nobody could figure out the truth, it was largely due to lack of information/evidence. The future will instead have evidence manufactured for whatever opinion you like.

    • conciselyverbose@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      6
      ·
      8 months ago

      Specific programs can. You can probably train specific models and alter datasets to include them as well.

      But we’re past the point where photo and video is sufficient on its own. Especially when there’s a possibility of state level actors benefiting.

    • hansl@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      8 months ago

      There is the Content Authenticity Initiative which keeps track of the source of an image (it was taken by this camera, etc). It's technically impossible to fake as it's validated, registered and traceable, but who knows. It's more a database of known images.

    • NeoNachtwaechter@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      8 months ago

      Have any AI companies tried using this technique I wonder?

      Yes, I have read that they want to do something like that. Stamp all images that their AI has created.

      But of course it won’t be hard to remove the stamp, if you want to.

    • jacksilver@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      8 months ago

      Yeah, the only real way to do it is have people digitally sign their images, but it still comes down to a trust element. You need to trust the person who created/signed the original content. It also means getting content from 3rd parties is going to be a lot harder in the scientific/historical communities of the world.
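The signing idea, in the smallest possible sketch: hash the image bytes and sign the hash, so any later edit invalidates the signature. This toy version uses HMAC from Python's standard library as a stand-in for a real asymmetric signature (real provenance schemes use public-key signatures so anyone can verify without holding the secret key; the key name here is made up):

```python
# Toy content-signing sketch: hash the image, then authenticate the hash.
# HMAC stands in for a proper public-key signature purely for illustration.
import hashlib
import hmac

SECRET = b"camera-private-key"  # hypothetical signing key held by the creator

def sign_image(image_bytes):
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes, signature):
    return hmac.compare_digest(sign_image(image_bytes), signature)

photo = b"\x89PNG...raw image bytes..."
sig = sign_image(photo)
assert verify_image(photo, sig)             # untouched image verifies
assert not verify_image(photo + b"x", sig)  # any edit breaks the signature
```

Which illustrates the trust problem above: the math only proves the bytes weren't changed since signing; it says nothing about whether the signer was honest in the first place.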