It was all fun and games two years ago when most AI videos were obvious (6 fingers, 7 fingers, etc.).

But things are getting out of hand. I’m at the point where I’m questioning whether Lemmy, Reddit, YouTube comments, etc. are even real. I wouldn’t even be surprised if I was playing Overwatch 5v5 with 9 AIs, three of them programmed to act like kids, four to be non-toxic, etc…

This whole place could just be an illusion.

I can’t prove it. It’s really less fun now.

The upside is I go to the gym more frequently and just hang out with people I know are 100% real. Nothing worse than having a conversation with an AI person. The person was just an average 7/10, and I’m an average 5/10, so I thought it could be a real thing, but it turned out I was chatting with an AI. A 7/10 AI. The creator made the persona look less perfect to make it more realistic.

Nice. What is the point of the internet when everything is fake and can’t even be identified as fake, or only with deep research?

I’m 32 and I know many young people who also hate it. To be fair, I only know people who hate on AI nowadays. This has to end.

  • Snot Flickerman@lemmy.blahaj.zone · 1 day ago

    This (Lemmy) is one of the least bot-populated places I’ve been on the internet in the last ten years.

    Look, critical thinking is tough, and part of the reason things like this are done is explicitly to make you question reality.

    It’s literally a symptom of why the Trump nuts are so unhinged. Like us, they can tell something is wrong; they know they can’t fully trust traditional media, for example. But the problem is they stop believing it entirely, and then they don’t know what to believe, so they start believing almost anything.

    Please be careful to not fall down that hole of thinking. Use critical thinking and consider where you’re at, what the sources are, and whether it’s even worth your time to care about. Don’t throw the baby out with the bathwater and stop believing in anything.

    “We’ll know our disinformation program is complete when everything the American public believes is false.” - William J. Casey, CIA Director (1981)

    It takes effort, and it’s not nice. But it’s necessary. Just put on your skepticism hat while on the internet and try not to let it get to you.

    Final point: Technically, Lemmy isn’t really experiencing growth. We’re not big enough to be on the radar of the people pushing this AI bullshit. It’s kind of like how private torrent trackers stay under the radar by keeping their user numbers low: it takes a critical mass of piracy for anti-piracy measures to be taken, and private trackers just aren’t big enough these days for authorities to bother with. (Pirate streaming sites are huge, on the other hand, and that’s where enforcement has been cracking down lately.)

    It’s similar with the groups pushing AI. AI isn’t free; it’s costly and requires a lot of compute power. They aren’t wasting it on no-name sites like Lemmy with a small but stable userbase. It’s too costly and easier to just ignore us. That doesn’t mean they aren’t here at all (looking right tf at you realbitcoin.cash), there are definitely bots and astroturfers, but they’re genuinely in the minority compared to real users.

    https://lemmy.fediverse.observer/stats

    • Whats_your_reasoning@lemmy.world · 9 hours ago

      critical thinking is tough

      To preface, I don’t know a whole lot about AI bots. But we already see posts about the limitations of what AI can do or will allow, like bots refusing to repeat a given phrase. But what about actual critical thinking? If most bots are trained off human behavior, and most people don’t run on logical arguments, doesn’t that create a gap?

      Not that it’s impossible to program such a bot, and again, my knowledge on this is limited, but it doesn’t seem like the aim of current LLMs is to apply critical thought to arguments. They can repeat what others have said, or mix words around to recreate something similar to what others have said, but are there any bots actively questioning anything?

      If there are bots that question societal narratives, they risk being unpopular amongst both the ruling class and the masses that interact with them. As long as those that design and push for AI do so with an aim of gaining popular traction, they will probably act like most humans do and “not rock the boat.”

      If the AI we interact with were instead to push critical thinking, without applying the biases that constrain people from applying it perfectly, that’d be awesome. I’d love to see logic bots that take part in arguments on the side of reason - it’s something a bot could do all day, but a human can only do for so long.

      Which is why, when I see a comment that argues a cogent point against a popular narrative, I’m more likely to believe it was written by a human. For now.