• stuner@lemmy.world
    14 hours ago

    IMHO the OSI is right, the designation “open source” should be reserved for those models that are actually open source (including training data). And apparently there are a few models that actually meet this criterion: “Though none are confirmed, the handful of models that Bdeir told MIT Technology Review are expected to land on the list are relatively small names, including Pythia by Eleuther, OLMo by Ai2, and models by the open-source collective LLM360.” (https://www.technologyreview.com/2024/08/22/1097224/we-finally-have-a-definition-for-open-source-ai/)

Perhaps it would also be useful to have a name for models that release their weights under an OSI license, maybe “open weight”? However, this model would not even meet that criterion (and the same goes for Llama).

    • hendrik@palaver.p3x.de
      13 hours ago

      Perhaps it would also be useful to have a name for models that release their weights […]

      open-weight?

I think the companies mostly stopped releasing their training data after a lot of them got sued for copyright infringement. I believe Meta’s first LLaMA still came with a complete list of the datasets that went into it. And I forget the name of the project, but the community actually recreated that dataset, because the official model’s license at the time only allowed research use. But things have changed since then. Meta has opened up a lot. Training has become more extensive and is still prohibitively expensive (maybe even more so). And the landscape got riddled with legal issues, compared to the very early days, when it was mostly research and drew less attention from everyone.