most of the time you’ll be talking to a bot there without even realizing it. they’re gonna feed you products and ads interwoven into conversations, and the AI can be steered so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns: based on their input, the AI can generate thousands of comments and posts, all in support of their corporate agenda.

for example you can set it to hate a public figure and force negative commentary into conversations all over the site. or you can set it to praise and recommend your latest product. like when a pharma company has a new pill out, they’ll be able to target self-help subs and flood them with fake anecdotes and user testimonials claiming the new pill solves all your problems and you should check it out.

the only real humans you’ll find there are the shills that run the place, and the poor suckers that fall for the scam.

it’s gonna be a shithole.

  • EnglishMobster@kbin.social · 2 years ago

    This is already happening.

    Bots are being used to astroturf the protests on Reddit. You can see at the bottom how this so-called “user” responds “as an AI language program…”

    • Arotrios@kbin.social · 2 years ago

      Holy fucking shit I’m dying. That’s fucking hilarious.

      I now want to make a bot that detects bots, grades their responses as 0%–100% bot, posts the bottage score, and, if it determines bottage, engages the other bot in endless conversation until it melts down from confusion.

      We can live stream the battles. We’ll call the show Babblebots.

      Any devs interested?
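A minimal sketch of the scoring idea, in Python. Everything here is hypothetical: the phrase list, the weights, and the function name are toy stand-ins for a real classifier, just to show the 0%–100% "bottage" shape.

```python
# Toy "bottage" scorer: combines a couple of cheap heuristics into a 0-100 score.
# Phrase list and weights are made up for illustration, not a trained model.

TELLTALE_PHRASES = [
    "as an ai language",
    "i'm sorry, but as an ai",
    "i cannot fulfill that request",
]

def bottage_score(comment: str) -> int:
    """Return a 0-100 guess at how bot-like a comment is."""
    text = comment.lower()
    score = 0
    # Dead giveaway: the model apologizing for being a model.
    if any(phrase in text for phrase in TELLTALE_PHRASES):
        score += 80
    # Bots often post a bare emoji or very short filler.
    if len(text.strip()) <= 3:
        score += 30
    return min(score, 100)

print(bottage_score("As an AI language program, I can't take sides."))  # 80
print(bottage_score("💯"))  # 30
print(bottage_score("This thread is wild."))  # 0
```

A real version would obviously need more than keyword matching, but even this would catch the "as an AI language program…" slip-up from the screenshot above.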

        • Empyreal@kbin.social · 2 years ago (edited)

          Or it’s another form of human-monitored bot account. Those have existed for years.

          Or it’s just another bot response. I’ve had arguments with bots that I have banned from my subreddit before. Some of their response mechanisms are quite creative.

    • livus@kbin.social · 2 years ago

      Let’s face it, they already had it on some of the big default subs as well.

      I went through a phase of bot hunting, and it was not unusual to find comment chains of 3 bots replying to each other near the top of big threads, sometimes with a hapless human or two in the mix.

      They use snippets of comments from downthread (and usually downvote their “donor” comments to lower visibility) so it seems kind of organic. Sometimes they use a thesaurus or something and reword it somewhat.

      What was really sad was when you’d see a human writing screeds of long arguments in reply to them.

      • HotDogFingies@kbin.social · 2 years ago

        Excuse my ignorance, but how were you able to recognize the bots?

        The repost bots were fairly easy to spot, but I sadly never found a situation like the one you’re describing. I don’t use reddit anymore, but the information may be useful elsewhere.

        • YouveCatToBeKittenMe@kbin.social · 2 years ago

          To add to what other people said: As a casual user who didn’t go deliberately looking for bots, I mostly caught them when they posted a comment that was a complete non sequitur to the comment they replied to, like they were posted in the wrong thread. Which, well, is because they were–they were copied from elsewhere in the comment section and randomly posted as a reply to a more prominent thread. Ctrl+F came in very handy there. (They do sometimes reword things, but generally only a couple of words, so searching for bits and pieces of their comment still usually turns up results.)

          Also, the bot comments I caught were usually just a line or two, not entire paragraphs, even if they were copied from a longer comment.
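The Ctrl+F trick above can be automated with fuzzy matching, since the bots only swap a couple of words. A sketch using Python’s stdlib difflib; the 0.8 threshold and the sample comments are arbitrary illustrations, not tuned values.

```python
# Sketch: flag replies that are near-copies of another comment in the thread,
# even after light rewording. Uses stdlib difflib; 0.8 is an arbitrary cutoff.
from difflib import SequenceMatcher

def near_copies(new_comment: str, thread_comments: list[str], threshold: float = 0.8):
    """Return (similarity, original) pairs for thread comments the new one mirrors."""
    matches = []
    for existing in thread_comments:
        ratio = SequenceMatcher(None, new_comment.lower(), existing.lower()).ratio()
        if ratio >= threshold:
            matches.append((ratio, existing))
    return matches

thread = [
    "Honestly the best pasta recipe I have ever tried, thanks for sharing!",
    "Can anyone recommend a good beginner camera?",
]
# A reposted comment with a couple of words swapped still scores high.
print(near_copies("Honestly the best pasta recipe I have ever made, thanks for posting!", thread))
```

Searching for "bits and pieces" of a comment, as described above, works for the same reason: the reworded copies keep most of the original characters intact.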

        • Aesthesiaphilia@kbin.social · 2 years ago

          The past year or so, they’ve been in every single thread with more than 50 comments. If you expand the comments and do a little ctrl+f searching, you’ll see how they copy comments from users and then repost and have their fellow bots upvote them for visibility. Look at the timestamps on the posts.

        • livus@kbin.social · 2 years ago

          It’s a bit like finding a single thread and unravelling it.
          I used to get dozens of these things banned a day; there were a lot of us bot hunters reporting them.

          They sometimes sound “off”, stop mid-sentence, reply to people as if they think it’s the OP, reply as if they are OP, or post 💯 by itself. Or they have a username that fits a recent bot pattern (e.g. appending “rp” to existing usernames).

          If you see one slip up once, looking at its other comments will often lead you to new bots, simply because they are all attracted to the same positions (prominent but a few comments deep).

          Certain subs like AITA and r/memes are more prone to them, so I would go there for easy leads.

          Also, if you check its actual submissions, a karma-laden bot will often repost hobby content, then have a second bot come and claim to have bought a t-shirt or mug with that content and post a malicious link. Then a third bot will pose as another redditor, replying to the second bot to say thanks, I just ordered one. Following those bots leads you to even more bots, etc.
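The username pattern mentioned above (an existing name with “rp” bolted on) is easy to check programmatically. A sketch, assuming you already have a set of known account names to compare against; the names and the set itself are made up for illustration.

```python
# Sketch: flag usernames that look like an existing username plus a bot suffix.
# The "rp" suffix is the bot pattern described above; KNOWN_USERS is hypothetical.
KNOWN_USERS = {"pasta_dad", "camera_gal", "livus"}

def suspicious_usernames(candidates, known=KNOWN_USERS, suffix="rp"):
    """Return candidate names that are a known username plus the bot suffix."""
    return [
        name for name in candidates
        if name.endswith(suffix) and name[: -len(suffix)] in known
    ]

print(suspicious_usernames(["pasta_dadrp", "camera_gal", "brandnewuser"]))
# → ['pasta_dadrp']
```

A flagged name isn’t proof on its own, but combined with the other tells (mid-sentence stops, lifted comments) it narrows the hunt considerably.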

          @XiELEd copying you in here.