A hacker group that says it believes “AI-generated artwork is detrimental to the creative industry and should be discouraged” is attacking people who use a popular interface for the AI image generation software Stable Diffusion, via a malicious extension for that interface shared on GitHub.

ComfyUI is an extremely popular graphical user interface for Stable Diffusion, shared freely on GitHub, that makes it easier for users to generate images and modify their image generation models. ComfyUI_LLMVISION, the extension that was compromised to hack users, is a ComfyUI extension that allowed users to integrate the large language models GPT-4 and Claude 3 into the same interface.

The ComfyUI_LLMVISION GitHub page is currently down, but a Wayback Machine archive of it from June 9 states that it was “COMPROMISED BY NULLBULGE GROUP.”

  • awesomesauce309@midwest.social
    2 months ago

    I really don’t understand this. All these search engine companies give millions of users a single button to create the most soulless art you’ve ever seen, but instead of caring about that they attack the tool that most enables the user to have control over their generation. You can argue that unlimited competition is bad for commission artists, but this attack is not “Pro Art”.

Using Creative Cloud isn’t a sin, but helping maintain Adobe’s industry stranglehold should be.

    • fishos@lemmy.world
      2 months ago

      Honestly, I feel like being a Luddite and, every time someone shows art from now on, critiquing the ever-loving hell out of their process.

      “Did you make the brushes yourself from sheep you raised? Did you grind the pigments from plants you grew yourself?”

      Art is amazing, but artists are some of the most delicate people. Their entire career is, in a way, a showcase of themselves, and if you take any part of that away from them or judge it, they become incredibly hostile and take it deeply personally. But literally the same kinds of criticism they’re making now are taught in art history about previous advancements. It’s just the same fragile egos afraid that they’re not as special anymore.

  • A_Very_Big_Fan@lemmy.world
    2 months ago

    Honestly, I still don’t understand the “stealing” argument. Does the stealing occur during training? From everything I’ve learned about the technology, the training, in terms of the data given and the end result, isn’t any different from me scrolling through Google Images to get a concept of how to draw something. It’s not like they have a copy of the whole Internet on their servers to make it work.

    Does it occur during the image generation? Because try as I might, I’ve never been able to get it to output copyrighted material. I know overfitting used to be an issue, but we figured out how to solve that a long time ago. “But the signatures!!” Yeah, it’s never output a recognizable/legible signature; it just associates signatures with art.

    Shouldn’t art theft be judged like any other copyright matter? It doesn’t matter how it was created, it matters if it violates fair use. I really don’t think training crosses that line, and I’ve yet to see these models output a copy of another image outside of image-to-image models.

    • retrospectology@lemmy.world
      2 months ago

      It’s theft of labor without any compensation, aimed at cheapening the very value of that labor.

      A human artist can, and often does, train simply by looking at the real world. The art they then produce is a result of that knowledge being interpreted and stylized by their own brain and perception. The decision making on how to represent a given subject, what details to add and leave out to achieve an effect, is done by the artist themselves. It’s a product of their internal mental laboring.

      By contrast, if you trained an AI on photos alone it would never, ever produce anything that looks like a drawing or a piece of art, it would never create a stylized piece of art or make a creative decision of its own.

      In order to produce art, the AI must be fueled with human-created art that humans labored to produce. The human artists are not being compensated for the use of that labor, and even worse, the AI is leveraging it to make the human labor worth less. What’s more, the AI’s ability will stagnate without further theft of newer, more novel art and concepts.

      Without that keystone of human labor the AI simply can’t function.

      Ripping off so many people at once and so chaotically that you can’t distinguish exactly how any given individual is being exploited doesn’t mean those people aren’t still being ripped off. The machine that the tech bros created could not exist without the stolen labor of the artists.