• fartsparkles@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    0
    ·
    3 months ago

    It’s also a bunch of brainfarting drivel that could be summarized:

    Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

    Or

    Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.

    • Architeuthis@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      3 months ago

      Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

      You make his position sound way more measured and responsible than it is.

      His ‘effective safety measures’ are something like A) solve ethics B) hardcode the result into every AI, I.e. garbage philosophy meets garbage sci-fi.

      • barsquid@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        3 months ago

        This guy is going to be very upset when he realizes that there is no absolute morality.

        • AcausalRobotGod@awful.systems
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 months ago

          A good chunk of philosophers do believe there are moral facts, but this is less useful for these purposes than one would think

          • froztbyte@awful.systems
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 months ago

            yeah it’s been absolutely hilarious to watch this play out in LLM space. so many prompt configurations and model deployments with so very many string-based rule inputs, meant to be configuring inviolable behaviour, that still get egregiously broken

            and afaict none of the dipshits have really seemed to internalise that just maybe their approach isn’t working