- cross-posted to:
- china@lemmy.ml
As the article mentions, the AI assistants being rolled out are required to be supervised by human employees, meaning they are more like pocket calculators than AGI robotic workers. There doesn’t really seem to be much of an issue here tbh.
Exactly, as long as the human bears the responsibility for the work, I don’t see any problem with this either.
I think AI is stupid, no matter if it’s China or America that is doing it. 🤷‍♂️
I think that some of the criticism levied at AI, certainly in the way that it is used under neoliberal capitalism, is absolutely valid. And i have my own worries about how it may affect human development going forward when you can essentially “cheat” your way to answers to a broad array of problems without ever having put in the work to really learn and understand the subject you are dealing with.
But we have to acknowledge that, good or bad, this is still a powerful tool. The question is, how should socialist societies approach this new tool? And unfortunately (or fortunately, depending on your viewpoint), i think we’re already past the point where we can afford not to use it. Pandora’s box has already been opened and there’s no turning back the clock.
In a way this is a bit like the atom bomb. Yes, it may be dangerous and perhaps humanity would be better off if it had never been invented. But the one thing we can’t do is allow only the enemies of socialism to possess this weapon.
I don’t see how one can reconcile being anti-technology with being a Marxist. You’d be better served in an anarchist community.
It’s very silly to say that because I don’t like LLMs, that I’m anti-technology.
You are, you’re opposed to automation technology, that’s literally Luddism, which is a form of anti-communism.
I don’t think it’s fair to accuse someone of Luddism, let alone anti-communism, just because they have reservations or skepticism about a new technology, especially one that is already being misused by capitalist interests to harm workers. There also seems to be some disagreement about the terminology, in the sense that some of the things you call “AI” someone else might not see as such. So first everyone needs to agree on what “AI” even is.
Of course in a general sense automation has immense potential to benefit us as a species. The question is whether certain aspects of what is now called “AI” really do constitute useful automation, particularly when it comes to generating large amounts of what is essentially garbage content. I think we should be careful making pronouncements this early.
My view is that we need to wait and see how this technology will develop and what impact it will really have on society in the long term. What i am sure about though is that this technology is here to stay whether we like it or not.
I don’t think it’s fair to accuse someone of Luddism, let alone anti-communism, just because they have reservations or skepticism about a new technology,
I appreciate you saying this. Very strange to see someone immediately attack me just because I don’t share their enthusiasm.
AI is largely used interchangeably with an ANN. Sometimes companies might use it even more broadly than ANN for marketing purposes, but if you actually go take a class in AI at university you will be learning about ANNs.
We used ANNs for research back in my uni days, long before the “AI” hype began. If that is all that is meant by “AI”, then that is a category so broad as to make any discussion of whether it’s good or bad virtually pointless, because there are so many different shapes that an ANN can take and so many functions they can fulfil that nobody knows what, concretely, is being debated.
That’s… the point. That’s like, literally the entire point I am making. It makes no sense to be “anti-AI” because AI is such an incredibly broad spectrum of technology. It’s fine to be critical of specific applications of AI (indeed, there are many examples of AI making things worse or even being used for evil) but being “anti-AI” in an absolute sense is an incredibly dogmatic and entirely unreasonable position and I am utterly appalled so many people here are unironically trying to defend it.
Exactly. A lot of the points that pcalau12i makes muddy that distinction in favor of LLMs and give credit to LLM development when a different field is in fact responsible for those advances.
Sure, but no one uses the term ANN. It is basically useless and not even that precise. I agree that calling LLMs AI is pretty misleading, but it is better to call them what they are. If you want an umbrella term for LLMs, computer vision, reinforcement learning etc., I would go with machine learning instead of ANN. Even in universities, you won’t learn much about ANNs (as in the mathematical model) beyond the first lecture or so.
There are many approaches in machine learning and some, not all of them, use ANN.
No one uses the term ANN because most people don’t know what it means, so it’s not good for marketing, and AI is used in its place, but it refers to the same kind of technology. Machine learning isn’t a good replacement precisely for the reason you say: it is broad and includes things that aren’t ANNs and would not fit under what is generally understood to be AI. If a person bought a piece of tech that said it is powered by AI and it used something like a k-means clustering algorithm, they would probably feel a bit ripped off. They would expect an actual AI model that does intelligent processing, something that could take advantage of an AI accelerator, which is the consumer-facing name for hardware that does AI inferencing, and that is specific to ANNs!
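To make the k-means point concrete, here is a minimal sketch in pure Python (illustrative only, 1-D data and function names are my own): classical ML methods like this are just iterative distance arithmetic, with no network and no learned weights in the ANN sense.

```python
import random

# Minimal 1-D k-means, illustrating that classical ML methods like this
# involve no neural network at all -- just distances and averages.
def kmeans_1d(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k distinct points as initial centers
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster
        # (keep the old center if a cluster ends up empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
print(kmeans_1d(data, k=2))  # two centers, one near each group of points
```

Nothing here would benefit from an “AI accelerator”; there is no model to run inference on, which is exactly the mismatch with consumer expectations described above.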
It is just undeniably true that when “AI” is used in the overwhelming majority of articles, papers, etc. these days, people very specifically have ANNs in mind. If you deny this you are just denying factual reality, you are denying that 2+2=4, and at that point you are being too unreasonable to carry on the discussion with. I am going to tap out of this discussion as none of y’all are being reasonable in the slightest, stretching to the moon to look for “gotchas” to justify a reactionary anti-technology stance and refusing to listen to someone with a background in this field.
The AI Derangement Syndrome mind virus seems to be impervious to reason and people will come up with any excuse to justify it. I refuse to engage with this further. Stop replying to me, I do not care to engage further. I do not want to argue with 4 people at once trying to pull out excuses for why it’s somehow evil for China to invest in technology because muh AI scawy. If you are willing to be educated to understand why this technology is important, educated by someone who has a computer science degree and works in this field, then I can teach you, but none of you want to learn and just want to play word games to justify your anti-AI hysteria and I have no interest in engaging with this.
AI is largely used interchangeably with an ANN.
I won’t claim authority on the subject, but you are the first person I have ever read making this claim. I do not think this is a commonly accepted viewpoint. At least until a couple of years ago, it seemed to me that there was an attempt to avoid calling neural networks artificial intelligence because of the previous AI hype cycles and winters.
I think it’s far more telling how you conflate automation with Large Language Models (colloquially being called AI even though it’s not).
As for the technologies that you cite as examples and call AI (OCR, computer vision), I don’t understand why you do that. Those technologies existed long before LLMs.
I find the protein folding example especially perplexing since protein folding simulation existed far, far before LLMs and machine learning, and it is ahistorical to claim those as being AI innovations.
I don’t agree with your AI boosterism, but I think what is more perplexing is how misinformed it is.
They are all artificial neural networks, which is what “AI” typically means… bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.
bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.
You are toxic, as well as incredibly arrogant. A true example of the Dunning-Kruger effect. If you want to have a tantrum then by all means do so, but don’t pretend that you are on some sort of high ground when you make your pronouncements.
In every conversation you have had with me, you project opinions that I do not have (Marxism vs anarchism, calling me a Luddite, etc.) and construct strawman arguments that I did not make.
Do some self crit
Robots are dubbed “pearls on the crown of the manufacturing industry.” A country’s achievement in robotics research, development, manufacturing and application is an important yardstick with which to measure its level of scientific and technological innovation and high-end manufacturing…China will be the largest robot market in the world
— Xi Jinping
My food-obsessed brain reading the opening sentence: “There’s a city called Hotpot? I bet it has good hotpot”
Ngl this stuff is kind of terrifying. Not from a “China bad” perspective, but from just how much this technology is going to change things, and how fast it’s happening.
We might be living through an equivalent of the industrial revolution here.
I don’t think so. Until we get actual intelligence (now called AGI) and fusion power, we won’t have a second industrial revolution. Once the power issue is solved, we only have a resource problem. We’ll see an explosion of industry if humanity can achieve fusion.
I know the meme is that fusion is always 20 years away, but with recent developments, and China’s private companies achieving two of the three critical conditions for sustained fusion, it really does feel like we’re less than two decades out from commercial fusion. It will be world-changing, for better or for worse, and I’m excited that we’ll be around to see what it does.
AGI isn’t real, it’s largely a buzzword without a rigorous definition. We will continue to gradually improve the quality of artificially intelligent systems as we improve the hardware and make more progress in understanding intelligence, but there will not be some turning point where there is a sudden explosion in progress from AI when we cross some non-existent AGI threshold. It will just continue to gradually improve over time.
I’m not talking about it coming from current LLM slop. I mean an actual system that is completely new.
A lot of automation can be done without AGI already. We can see automated factories, ports, buses, etc. There are general purpose robots being put to use as seen here. The article discusses how many processes within the government are becoming automated. All of this was human labor before. Just as automation created explosive technological growth in the 19th century, we could see similar kind of thing happen today.
This seems magnitudes worse, at least with the industrial revolution, you could argue that labor wasn’t being fully eliminated, but re-distributed and re-oriented to mass production and factory work. AI is the total ELIMINATION of human labor altogether. Even with other big tech advancements like the internet, it still created work in terms of all the infrastructure that had to be built, the expertise required to maintain and improve it, as well as generally creating many jobs that could not exist without the internet.
AI is the only situation I see where it can completely remove humans from the system; even the maintenance and upkeep, it could do on its own. The infrastructure? It already exists. What do we as workers get from this? What’s left to look forward to?
The capitalists can’t automate away labor. That’s the whole fundamental limitation of the capitalist mode of production. The higher your “organic composition of capital”, the lower your profit rates (for the industry as a whole). The organic composition of capital is the ratio of constant capital (buildings, machinery, robots, energy) to variable capital (human wages).
The more the capitalists try to escape having to pay wages through automation (or escape competition through monopoly), the more they dig the graves of their whole class.
In a practical sense as well, China leads the world in robotics because you need a vast government system to produce highly skilled engineers, reliable/cheap utilities and an industrial policy to generate demand for automation.
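The falling-rate tendency described above can be sketched with a toy calculation. This assumes the classical definitions (rate of profit r = s/(c+v), organic composition c/v); the function name and numbers are illustrative, not from the article:

```python
# Toy illustration of the tendency described above, using the classical
# definitions:
#   c = constant capital (buildings, machinery, robots, energy)
#   v = variable capital (wages)
#   s = surplus value, here assumed proportional to v (rate of exploitation s/v)
def profit_rate(c: float, v: float, exploitation_rate: float = 1.0) -> float:
    s = exploitation_rate * v  # surplus value comes only from living labor
    return s / (c + v)         # rate of profit r = s / (c + v)

# As automation raises the organic composition c/v while wages stay fixed,
# the profit rate falls even though total output may grow:
for c, v in [(100, 100), (300, 100), (900, 100)]:
    print(f"c/v = {c / v:.0f}  ->  r = {profit_rate(c, v):.2f}")
```

The point of the sketch: since surplus value is modeled as coming only from the wage component, every unit of automation that replaces wages shrinks the very base the profit rate is drawn from.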
You can never fully eliminate labor, that goes against the labor theory of value. Also robots cannot grease themselves & computer servers need maintenance. Just as the internet replaced a lot of hard print publishing, helper robots will free up people to work in less automated areas like building infrastructure.
I see the strength of LLMs as something that is for regular people to interact with. Not so much for automation of paperwork in a work setting although that is one application.
E.g. Sometimes older people don’t interact with technology well. They only see buttons and menus with very brief labels on them, which can be daunting. They’re afraid of hitting the wrong thing. Often they don’t submit forms online because they don’t want to make a mistake. With many companies/organisations using online websites as a big part of their customer facing presence, older people get alienated.
An AI that converses to guide them and answer any questions would make technology more accessible.
I see the strength of LLMs as something that is for regular people to interact with. […] E.g. Sometimes older people don’t interact with technology well.
I think this argument is a bit flawed. If the main benefit of LLMs is facilitating the use of technology for an older generation that did not grow up as immersed in technology as we are today, and is therefore not as well versed in its use, then does this benefit not disappear when that generation dies out and the new “older generation” is those of us who have grown up with technology and are proficient in it? Why then do we still need this facilitator at that point?
In fact, i would take this line of thinking one step further: What happens when the new younger generation grows up with LLMs constantly facilitating their interfacing with technology? Will they perhaps become dependent on LLMs, having had no necessity to learn how to interact with technology without the LLM interface? Does this not just mean that LLMs will be self-perpetuating the need for their own existence? Is there not a risk that one day the skill to use technology without the crutch of LLMs will be lost altogether?
This is a risk vs reward question: Does the reward of convenience outweigh the risk of atrophy of certain skills? Of course this is basically a rhetorical question because i know what the historical answer of our societies to this question has always been, for any such new technology. It has always ended up being yes. And inevitably, we are going to end up embracing this new technology too, in some form or another, just like we did all the others in the past. That is just the way these things go.
I don’t think we’ll see the elimination of all human labour in the near future; what’s much more likely is that human labour is going to be augmented by AIs. Ultimately though, to me the goal of a communist society is to free people from necessary labour as much as possible, and to allow people to pursue their interests and self-development. If all our necessities are met by automation, then we can focus on doing whatever we find interesting, individually or collectively.
I understand your fears and concerns, but I think you are slightly overreacting. Even with the invention of better A.I. and robotics, there will still be more jobs created eventually: supervision and improvement of A.I. and robotics, and new industries that previously may not have been possible.
The Chinese government has been very clear that, at least for now, robots/A.I. won’t completely replace human labor or thinking, just supplement it.
I don’t know what to think of this.
I don’t know what to think of AI, dammit.
Just get off the AI derangement syndrome forums that are convincing you to hate tech and realize technology is just a tool which can be good or bad depending upon its application and you do not need to have a generalized opinion on it as a whole. It’s like saying “I don’t know what to think of knives.” It’s just a weird statement. Knives are just knives, you can use them for bad things like stabbing people or good things like cutting up some peppers to go in hot pot. No need to have an opinion on knives in general. Same with AI.
I always thought AI was really cool, like inherently and on the face of it. Generative AI makes you feel like you’re out of Star Trek sometimes. The distaste so many people have for AI comes down to the fact that it violates copyright in a nebulous way (lib shit) and that it’s a genuine threat to the livelihood of artists (real shit). It will be easier to feel optimistic about AI when we can be sure we’re living in an economy that prioritizes lives over profit, because only in that society can generative AI and artists truly live together peacefully.
What about the energy consumption?
My experience with AI is pretty limited to what I can fuck around with on my own system. So I’m not seeing its energy use as any bigger of a problem than how capitalists will be using it to further trim down production and steal even more wealth. I don’t think anything about the current state of AI is great, but that has more to do with the current state of the world than AI itself.
In my perfect future, even if peak generative AI was an energy-draining unviable mess of a technology, the novelty would still be enough that I’d want to see it in a specialty museum, or the service available to the public in some way.
I think it’s a net positive as long as there’s a human in the loop. The key bit is this in my opinion:
While AI is playing a growing role in government work, officials say it is intended to assist, not replace, human workers — despite referring to such systems as “employees.” Futian’s regulatory framework requires each AI system to be monitored by a designated human supervisor to prevent errors and ensure compliance with ethical standards.
“The guardian of the AI-powered employee is responsible for overseeing its operation, and if any issues arise, the guardian is held accountable,” said Gao.
The AI isn’t the decision maker, it’s an automation tool that allows a human worker to do their job more efficiently, but responsibility still lies with the human.
I’ll star this to keep the quote in mind.
👍
That’s very cool.
Oh my fucking God. Like, mere weeks after releasing AI and the Chinese are already using it to make society better. What have the US and Europe done with it? Deep fake porn, spam, and ruining everything.
Well, to be fair, the USA does use it positively in many ways as well; USPS is largely run on AI.
It’s pretty incredible to watch how differently this tech is applied in China and the West. It’s such a great illustration of how different sets of social and economic rules impact the development of society as a whole.