China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.

  • some_dude@lemm.ee · +33 / -3 · 5 hours ago

    This is a smart and ethical way to integrate AI into everyday use, though I hope the watermarks are not easily removed.

    • umami_wasabi@lemmy.ml · +20 / -1 · edited · 4 hours ago

      Think a layer deeper about how this can be misused to control narratives.

      You read some wild allegation with no AI marks (the marks are required to be visible), so it must have been written by a real person, right? Now what if someone, even the government, jumps out and claims it was generated with an illegal, unlabeled AI? The question suddenly shifts from verifying whether the alleged events happened to whether the post itself is real. Public sentiment will likely be overwhelmed by “Is this fake news?” or “Is the allegation even real?” Compound that with trusted entities weighing in, and discrediting anything becomes easier.

      Here’s a real example. Before Covid spread globally there was a Chinese whistleblower, a hospital worker who got infected. He posted a video online about how bad it was, and the government quickly took it down. What if that happened today with this regulation in full force? The government could claim the video is AI-generated, that the whistleblower doesn’t exist, and that the content isn’t real. Three days later they arrest a guy, claiming he spread fake news using AI. They already have a very efficient way to control narratives, and this piece of garbage just gives them an express lane.

      You think that’s only a China thing? No, every entity, including governments, is watching, especially the self-proclaimed friend of Putin and Xi who claims to love absolute free speech. Don’t assume it’s too far away to reach you yet.

      • LadyAutumn@lemmy.blahaj.zone · +6 · 3 hours ago

        It’s still a good thing. The alternative is people posting AI content as though it is real content, which is a worldwide problem destroying entire industries. All AI content should by law have to be clearly labeled.

        • umami_wasabi@lemmy.ml · +2 · edited · 2 hours ago

          Then what does unlabeled AI slop become to the plain eye? The label just encourages mental laziness, acting as an “easy filter.” Slop without a label gets elevated to seeming somewhat real, precisely because the label exists and exploits that laziness.

          Before you say some AI slop is clearly identifiable, you can’t assume everyone can spot it, or that every piece is that identifiable. And for images that look a little unrealistic, just decrease the resolution until it’s grainy and those telltale details disappear. That works nine times out of ten. You can’t rule out that the 0.1% of content that passes the sanity check will do 99.9% of the damage.
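          As a rough illustration of how trivial that downscaling trick is (a hypothetical sketch; Pillow and the file names are my assumptions, not anything from the article):

          ```python
          # Hypothetical sketch: shrink an image so fine AI artifacts
          # (hands, text, textures) blur away, then re-enlarge it and
          # save with heavy JPEG compression to add grain.
          # Requires Pillow: pip install pillow
          from PIL import Image

          img = Image.open("ai_generated.png")
          small = img.resize((img.width // 4, img.height // 4))  # throw away fine detail
          grainy = small.resize(img.size)                        # blow back up, now soft and grainy
          grainy.convert("RGB").save("grainy.jpg", quality=25)   # low quality adds compression noise
          ```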

          After all, humans are emotional creatures, and sensationalism is real. The urge to share something emotional is why misinformation and disinformation are so common these days. People will overlook details when the urge hits.

          Sometimes, labeling can do more harm than good. It just gives a false sense of security.

          • LadyAutumn@lemmy.blahaj.zone · +1 / -1 · 2 hours ago

            Just because something is theoretically circumventable doesn’t mean we shouldn’t make it as hard as possible to circumvent it.

            The reason why misinformation is so common these days is a concerted effort by fascists to obtain control over media companies. Once they are in power and have significant influence within those companies, they can poison them, turning them into massive misinformation engines churning out content faster than we ever believed possible. This problem has existed since the rise of mass media, especially in the 19th century. But social media presents far faster and more direct throughlines for spreading misinformation to the masses.

            And those masses do not care whether something is labeled as AI or not. They will believe it either way. That still doesn’t change that it is necessary to directly label AI-generated content as such. What is and isn’t made by a human is extremely important. We cannot equate algorithms with people, and it’s necessary to make that distinction as clear as possible.

    • jonne@infosec.pub · +6 · 4 hours ago

      It will be relatively easy to strip that stuff off. It might help a little bit with internet searches or whatever, but anyone spreading deepfakes will probably not be stopped by that. Still better than nothing, I guess.
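
      For what it’s worth, if the label lives in the file’s metadata (my assumption here; the guidelines also mandate visible marks), a plain re-encode already drops it. A minimal sketch using Pillow, with made-up file names:

      ```python
      # Hypothetical sketch: Pillow writes a fresh file and does not
      # copy EXIF/XMP metadata unless you pass it explicitly, so any
      # metadata-based AI label is silently dropped on re-encode.
      # Requires Pillow: pip install pillow
      from PIL import Image

      with Image.open("labeled.jpg") as img:
          img.convert("RGB").save("unlabeled.jpg")  # no exif=/xmp= argument, so metadata is gone
      ```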