Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many 'esoteric' right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged 'culture critics' who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
New piece from Baldur Bjarnason: AI and Esoteric Fascism, which focuses heavily on our very good friends and their link to AI as a whole. The ending quote's pretty solid, so I'm dropping it here:
Anyways, here's my personal sidenote:
As I've mentioned a bajillion times before, I've predicted this AI bubble would kill AI as a concept, as its myriad harms and failures indelibly associate AI with glue pizzas, artists getting screwed, and other such awful things. After reading through this, it's clear I've failed to take into account the political elements of this bubble, and how they'd affect things.
My main prediction hasn't changed - I still expect AI as a concept to die once this bubble bursts - but I now also suspect that AI will be treated as an inherently fascist concept, and any attempts to revive it will face active ridicule, if not outright hostility.
Well, how do you feel about robotics?
On one hand, I fully agree with you. AI is a rebranding of cybernetics, and both fields are fundamentally inseparable from robotics. The goal of robotics is to create artificial slaves who will labor without wages or solidarity. We're all ethically obliged to question the way that robots affect our lives.
On the other hand, machine learning (ML) isn't going anywhere. In my oversimplification of history, ML was originally developed by Markov and Shannon to make chatbots and predict the weather; we still want to predict the weather, so even a complete death of the chatbot industry won't kill ML. Similarly, some robotics and cybernetics research is still useful even when not applied to replacing humans; robotics is where we learned to apply kinematics, and cybernetics gave us the concept of a massive system that we only partially see and interact with, leading to systems theory.
Here's the kicker: at the end of the day, most people will straight-up refuse to grok that robotics is about slavery. They'll usually refuse to even examine the etymology, let alone the history of dozens of sci-fi authors exploring how robots are slaves or the reality today of robots serving humans in a variety of scenarios. They fundamentally don't see that humans are aggressively chauvinist and exceptionalist in their conception of work and labor. It's a painful and slow conversation just to get them to see the word robota.

Good food for thought, but a lot of that rubs me the wrong way. Slaves are people, machines are not. Slaves are capable of suffering, machines are not. Slaves are robbed of agency they would have if not enslaved; machines would not have agency either way. In a science fiction world with humanlike artificial intelligence the distinction would be more muddled, but back in this reality, equivocating between robotics and slavery while ignoring these very important distinctions is just sophistry. Call it chauvinism and exceptionalism all you want, but I think the rights of a farmhand are more important than the rights of a tractor.
It's not that robotics is morally uncomplicated. The Luddites had a point. Many people choose to work even in dangerous, painful, degrading or otherwise harmful jobs, because the alternative is poverty. To mechanize such work would reduce the immediate harm from the nature of the work itself, but cause indirect harm if the workers are left without income. Overconsumption goes hand in hand with overproduction, and automation can increase the production of things that are ultimately harmful. Mechanization has frequently led to the centralization of wealth by giving one party an insurmountable competitive advantage over its competitors.
One could take the position that the desire to have work performed for the lowest cost possible is in itself immoral, but that would need some elaboration as well. It's true that automation benefits capital by removing workers' needs from the equation, but it's bad reductionism to call that its only purpose. Is the goal of PPE just to make workers complain less about injuries? I bought a dishwasher recently. Did I do it in order to not pay myself wages or have solidarity for myself when washing dishes by hand?
The etymology part is not convincing either. Would it really make a material difference if more people called them 'automata' or something? Čapek chose to name the artificial humanoid workers in his play after an archaic Czech word for serfdom and it caught on. It's interesting trivia, but it's not particularly telling, precisely because most people don't know the etymology of the term. The point would be a lot stronger if we called it 'slavetronics' or 'indenture engineering' instead of robotics. You say cybernetics is inseparable from robotics, but I don't see how steering a ship is related to the feudalist mode of agricultural production.
I think the central challenge of robotics from an ethical perspective is similar to AI, in that the mundane reality is less actively wrong than the idealistic fantasy. Robotics, even more than most forms of automation, is explicitly about replacing human labor with a machine, and the advantages that machine has over people are largely due to it not having moral weight. Like, you could pay a human worker the same amount of money that electricity to run a robot would cost; it would just be evil to do that. You could work your human workforce as close to 24/7 as possible outside of designated breaks for maintenance, but it would be evil to treat a person that way.

At the same time, the fantasy of 'hard AI' is explicitly about creating a machine that, within relevant parameters, is indistinguishable from a human being, and as the relevant parameters expand, the question of whether that machine ought to be treated as a person, with the same ethical weight as a human being, becomes harder. If we create Data from TNG he should probably have rights, but the main reason why anyone would be willing to invest in building Data is to have someone with all the capabilities of a person but without the moral (or legal) weight.

This creates a paradox of the heap: clearly there is some point at which a reproduction of human cognition deserves moral consideration, and it hasn't (to my knowledge) been conclusively proven impossible to reach. But the current state of the field obviously doesn't have enough of an internal sense of self to merit that consideration, and I don't know exactly where that line should be drawn. If the AGI crowd took their ideas seriously this would be a point of great concern, but of course they're a derivative neofascist collection of dunces, so the moral weight of a human being is basically null to begin with, neatly sidestepping this problem.
But I also think you're right that this problem is largely a result of applying ever-improved automation technologies to a dysfunctional and unjust economic system, where any improvement in efficiency effectively creates a massive surplus in the labor market. This drives down the price of labor (i.e. how well workers are treated) and contributes to the immiseration of the larger part of humanity, rather than liberating them from the demands for time and energy placed on us by the need to eat food and stuff. If we can deal with the constructed system of economic and political power that surrounds this labor, it could and should be liberatory.