• aname@lemmy.one
    9 months ago

    Yes, corrected.

    But my point stands: claiming there is no intelligence in AI models without even knowing what “real” intelligence is, is wrong.

    • Aceticon@lemmy.world
      9 months ago

      I think the point is more that the word “intelligence” as used in common speech is very vague.

      I suppose a lot of people (certainly I do it, and I expect many others do too) will use the word “intelligence” in a general non-science setting in place of “rationalization” or “reasoning”, which would be clearer terms but are less well understood.

      LLMs easily produce output which is not logical, and a rational being can spot it as not following rationality (even if we don’t understand why we can do logic, we can recognize logic or the absence of it).

      That said, so do lots of people, which makes an interesting point about lots of people not being rational, and which nearly dovetails with your point about intelligence.

      I would say the problem is trying to define “intelligence” as something that includes all humans in all settings, when clearly humans are perfectly capable of producing irrational shit whilst thinking of themselves as being highly intelligent whilst doing so.

      I’m not sure if that’s quite the point you were bringing up, but it’s a pretty interesting one.