- cross-posted to:
- fuck_ai@lemmy.world
No thanks. I'm perfectly capable of coming up with incorrect answers on my own.
you're right tho
Even non-tech people I talk to know AI is bad because the companies are pushing it so hard. They intuit that if the product was good, they wouldn't be giving it away, much less begging you to use it.
You're right - and even if the user is not conscious of this observation, many are subconsciously behaving in accordance with it. Having AI shoved into everything is off-putting.
Speaking of off-putting, that friggin Copilot logo floating around on my Word document is so annoying. And the menu that pops up when I paste text - wtf does "paste with Copilot" even mean?
They are trying to saturate the user base with the word Copilot. At least Microsoft isn't very sneaky about anything.
customers don't want AI, but only the corporation heads seem obsessed with it.
It's partly that and partly a mad dash for market share in case they get it to work usefully. Although this is kind of pointless because AI isn't very sticky. There's not much to keep you from using another company's AI service. And only the early adopter nerds are figuring out how to run it on their own hardware.
One of the mistakes they made with AI was introducing it before it was ready (I'm making a generous assumption by suggesting that "ready" is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.
This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn't ready for prime time. Now they'll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
iOS still doesn't do what was sold in the ads, almost a full year later.
Edit: also things like email summary don't work, the email categories are awful, notification summaries are straight up unhinged, and I don't think anyone asked for Image Playground.
Insert "Full Self Driving" here.
Also, Outlook's auto alt text function told me that a conveyor belt was a picture of someone's screen today.
Calling it "Full Self Driving" is such blatant false advertising.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
Add Amazon's Alexa+ to that list. It's nearly a year overdue and still nowhere in sight.
capitalism working against itself
More like: capitalism reaching its own logical conclusion
(I'm making a generous assumption by suggesting that "ready" is even possible)
It was ready for some specific purposes, but it is being jammed into everything. The problem is they are marketing it as AGI when it is still at the "random fun but not expected to be accurate" phase.
The current marketing for AI won't apply to anything that meets the marketing in the foreseeable future. The desired complexity isn't going to exist in silicon at a reasonable scale.
I'm making a generous assumption by suggesting that "ready" is even possible
To be honest it feels more and more like this is simply not possible, especially regarding the chatbots. Under those are LLMs, which are built by training neural networks, and for the pudding to stick there absolutely needs to be this emergent magic going on where sense spontaneously generates. Because any entity lining up words into sentences will charm unsuspecting folks horribly efficiently, it's easy to be fooled into believing it's happened. But whenever in a moment of despair I try to get Copilot to do any sort of task, it becomes abundantly clear it's unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I'm rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.
There was research showing that every linear jump in capabilities needed exponentially more data fed into the models, so it seems likely it isn't going to be possible to get where they want to go.
OpenAI admitted that with o1! They included graphs directly showing gains taking exponential effort.
do you have any articles on this? i have heard this claim quite a few times, but i'm wondering how they put numbers on the capabilities of those models.
Sorry, nope, didn't keep a link.
Yeah but first to market is sooooo good for stock price. Then you can sell at the top and gtfo before people find out it's trash
The battle is easy. Buy out and collude with the competition so the customer has no choice but to purchase an AI device.
Ah, like with the TPM blackbox?
This would only work for a service that customers want or need
If they didn't overpromise, they wouldn't have had mountain loads of money to burn, so they wouldn't have advanced the technology as much.
Tech giants can't wait decades until the technology is ready; they want their VC money now.
Sure, but if the tech in the end doesn't deliver, it's all that money burnt.
If it does deliver, it's still oligarchs deciding what tech we get.
Yes. The ones that have power are the ones that decide. And oligarchs by definition have a lot of power.
I think people care.
They care so much they actively avoid them.
Oh we care alright. We care about keeping it OUT of our FUCKING LIVES.
AI is going to be this era's Betamax, HD-DVD, or 3D TV glasses. It doesn't do what was promised and nobody gives a shit.
Betamax had better image and sound, but was limited by running time and then VHS doubled down with even lower quality to increase how many hours would fit on a tape. VHS was simply more convenient without being that much lower quality for normal tape length.
HD-DVD was comparable to Blu-ray and just happened to lose out because the industry won't allow two similar technologies to exist at the same time.
Neither failed to do what they promised. They were both perfectly fine technologies that lost in a competition that only allows a single winner.
Blu-ray was slightly better if I recall correctly. With the rise in higher definition televisions, people wanted to max out the quality possible, even if most people (still) can't tell the difference
Blu-ray also had the advantage of PS3 supporting the format without the need for an external disc drive.
@philycheeze @xkbx yes, I think Microslop's fumble of selling the HD DVD drive only as an external add-on really hindered the format
@philycheeze @xkbx I bought one anyway. 10 years later, mind you :p
They're not necessarily bad, it's just an extra barrier to entry.
Blu-ray also had the advantage of not having multiple D's in its name.
That's not why it won, though. It won because the industry wanted zone restrictions, which only Blu-ray supported. They suck for the user, but allow the industry to stagger releases in different markets. In reality it just means that I can't get discs of most foreign films, because they won't work in my player.
I'm sure that was a factor, but Blu-ray won because the most popular Blu-ray player practically sold itself
It's hard to say what was the final nail in the coffin, but it is true that Blu-ray went from underdog to outselling HD-DVD around the time the PlayStation 3 came out. I'm not sure how much those early sales numbers matter, though, because I'm sure both were still minuscule compared to DVD.
When 20th Century Fox dropped support for HD-DVD, they cited superior copy protection as the reason. Lionsgate gave similar sentiment.
When Warner later announced they were dropping HD-DVD, they did cite customer adoption as the reason for their choice, but they also did it right before CES, so I'm pretty sure there were some backroom deals at play as well.
I think the biggest impact of the PlayStation 3 was accelerating adoption of Blu-ray over DVD. Back when DVD came out, VHS remained a major player for years, until the year there was a DVD player so dirt cheap that everyone who didn't already have a player got one for Christmas.
Nah, Blu-ray was significantly better: 50 GB capacity vs 30 GB
The big plus for HD DVD was it was far cheaper to produce, it didnāt need massive retooling for manufacturing.
Not just that, space. Blu-rays have way more space than DVDs. Remember how many 360 games came with multiple discs? Not a single PS3 game did, unless it was a bonus behind-the-scenes type thing.
Xbox 360 used DVDs for game discs and could play video DVDs. They "supported" HD-DVDs - you needed an add-on which had a separate optical drive in it. Unsurprisingly this didn't sell well.
Afaik betamax did not have any porn content, which might have contributed to the sale of VHS systems.
Dude don't throw Betamax in there, that was a better product than the VHS. AI is just ass.
I was just about to mention porn and how each new format of the past came down to that very same factor.
If AI computers were incredible at making AI porn I bet you they'd be selling a lot better haha
Betamax actually found use in television broadcast until the switch to HDTV occurred in 2009
the later digital variants of beta weren't retired by sony until ~2016.
I had no clue that they did digital betamax…
That would make sense though…
There was at one point an HD VHS as well; it was essentially a 1080p MPEG stream on a VHS tape
No, I'm sorry. It is very useful and isn't going away. This thread is either full of Luddites or disingenuous people.
nobody asked you to post in this thread. you came and posted this shit in here because the thread is very popular, because lots and lots of people correctly fucking hate generative AI
so I guess please enjoy being the only "non-disingenuous" bootlicker you know outside of work, where everyone's required (under implicit threat to their livelihood) to love this shitty fucking technology
but most of all: don't fucking come back, none of us Luddites need your mid ass
@blarth @TheThrillOfTime huh. You totally could name at least one use case then, huh
You only didn't because it's so blindingly obvious (It's BS)
Also, learn about Luddites, man
I have friends who are computer engineers and they say that it does a pretty good job of generating code, but that's not a general population use case. For most people, AI is a nearly useless product. It makes Google searches worse. It makes your phone voice assistant worse. It's not as good as human artists. And it's mostly used to create dumbass posts on Reddit to farm engagement. In my life, AI has not made anything better.
Maybe I'm just getting old, but I honestly can't think of any practical use case for AI in my day-to-day routine.
ML algorithms are just fancy statistics machines, and to that end, I can see plenty of research and industry applications where large datasets need to be assessed (weather, medicine, …) with human oversight.
But for me in my day to day?
I don't need a statistics bot making decisions for me at work, because if it was that easy I wouldn't be getting paid to do it.
I don't need a giant calculator telling me when to eat or sleep or what game to play.
I don't need a Roomba with a graphics card automatically replying to my text messages.
Handing over my entire life's data just so an ML algorithm might be able to tell me what that one website I visited 3 years ago that sold kangaroo testicles was isn't a filing system. There's nothing I care about losing enough to go to the effort of setting up Copilot, but not enough to just, you know, bookmark it, or save it with a clear enough file name.
Long rant, but really, what does copilot actually do for me?
Our boss all but ordered us to have IT set this shit up on our PCs. So far I've been stalling, but I don't know how long I can keep doing it.
Tell your boss you talked to legal and they caution that all copilot data is potentially discoverable.
Set it up. People have to find out by themselves.
same here, i mostly don't even use it on the phone. my bro is into it though, thinking AI generated pictures are good.
It's a fun party trick for like a second, but at no point today did I need a picture of a goat in a sweater smoking three cigarettes while playing tic-tac-toe with a llama dressed as the Dalai Lama.
It's great if you want to do a kids party invitation or something like that
That wasn't that hard to do in the first place, and certainly isn't worth the drinking water to cool whatever computer made that calculation for you.
The only feature that actually seems useful for on-device AI is voice to text that doesnāt need an Internet connection.
As someone who hates orally dictating my thoughts, that's a no from me dawg, but I can kinda understand the appeal (though I'll note offline TTS has been around for like a decade pre-AI)
longer: dragon dictate and similar go back to the mid 90s (and I bet the research goes back slightly earlier, not gonna check now)
similar for TTS
deleted by creator
Before ChatGPT was invented, everyone kind of liked how you could type in ābirdā into Google Photos, and it would show you some of your photos that had birds.
I use it to speed up my work.
For example, I can give it a database schema and describe what I need to achieve, and most of the time it will throw out a pretty good approximation or even get it right on the first go, depending on complexity and how well I phrase the request. I could write these myself, of course, but not in 2 seconds.
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
Then there's just convenience things. At what date and time will something end if it starts in two weeks and takes 400h to do? There's tools for that, or I could figure it out myself, but I mean the AI is just there and does it in a sec…
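For what it's worth, both of those conveniences are a few lines of deterministic code - a rough Python sketch using only the standard library (the dates and strings here are made-up examples):

```python
from datetime import datetime, timedelta

# When does a task end if it starts in two weeks and takes 400 hours?
start = datetime(2025, 1, 6, 9, 0) + timedelta(weeks=2)
end = start + timedelta(hours=400)
print(end)  # -> 2025-02-06 01:00:00

# Bracketing and capitalization changes on a string are built in
s = "some long identifier string"
print(f"[{s.upper()}]")  # -> [SOME LONG IDENTIFIER STRING]
```

Unlike an LLM, the arithmetic here is exact every time, and it runs in microseconds on any machine.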
it's really embarrassing when the promptfans come here to brag about how they're using the technology that's burning the earth and it's just basic editor shit they never learned. and then you watch these fuckers "work" and it's miserably slow cause they're prompting the piece of shit model in English, waiting for the cloud service to burn enough methane to generate a response, correcting the output and re-prompting, all to do the same task that's just a fucking key combo.
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
how in fuck do you work with strings and have this shit not be muscle memory or an editor macro? oh yeah, by giving the fuck up.
(100% natural rant)
I can change a whole fucking sentence to FUCKING UPPERCASE by just pressing
vf.gU
in fucking vim with a fraction of the amount of the energy that's enough to run a fucking marathon, which in turn only needs a fraction of the energy the fucking AI cloud cluster uses to spit out the same shit. The comparison is like a ping pong ball to the Earth, then to the fucking sun!
Alright, bros, listen up. All these great tasks you claim AI does faster and better - I can write up a script or something to do them even faster and better. Fucking A! This surge of a high when you use AI comes from you not knowing how to do it, or if it's even possible. You!
You prompt bros are blasting shit tons of energy just to achieve the same quality of work, if not worse, in a much fucking longer time.
And somehow these executives claim AI improves fucking productivity‽
exactly. in Doom Emacs (and an appropriately configured vim), you can surround the word under the cursor with brackets with
ysiw]
where the last character is the bracket you want. it's incredibly fast (especially combined with motion commands, you can do these faster than you can think) and very easy to learn, if you know vim.
and I think that last bit is where the educational branch of our industry massively fucked up. a good editor that works exactly how you like (and I like the vim command language for realtime control and lisp for configuration) is like an electrician's screwdriver or another semi-specialized tool. there's a million things you can do with it, but we don't teach any of them to programmers. there's no vim or emacs class, and I've seen the quality of your average bootcamp's vscode material. your average programmer bounces between fad editors depending on what's being marketed at the time, and right now LLMs are it. learning to use your tools is considered a snobby elitist thing, but it really shouldn't be - I'd gladly trade all of my freshman CS classes for a couple semesters learning how to make vim and emacs sing and dance.
and now we're trapped in this industry where our professionals never learned to use a screwdriver properly, so instead they bring their nephew to test for live voltage by licking the wires. and when you tell them to stop electrocuting their nephew and get the fuck out of your house, they get this faraway look in their eyes and start mumbling about how you're just jealous that their nephew is going to become god first, because of course it's also a weirdo cult underneath it all, that's what happens when you vilify the concept of knowing fuck all about anything.
The only things I've seen it do better than I could manage with a script or in Vim are things that require natural language comprehension. Like, "here's an email forwarded to an app, find anything that sounds like a deadline" or "given this job description, come up with a reasonable title summary for the page it shows up on"… But even then those are small things that could be entirely omitted from the functionality of an app without any trouble for the user. And there's also the hallucinations and being super wrong sometimes.
The whole thing is a mess
presumably everyone who has to work with you spits in your coffee/tea, too?
adding brackets and changing upper/lower capitalization
I have used a system wide service in macOS for that for decades by now.
changing upper/lower capitalization
That's literally a built-in VSCode command my dude, it does it in milliseconds and doesn't require switching a window or even a conscious thought from you
Gotta be real, LLMs for queries make me uneasy. We're already in a place where data modeling isn't as common and people don't put indexes or relationships between tables (and some tools didn't really support those either). They might be alright at describing tables (Databricks has it baked in, for better or worse, for example; it's usually pretty good at a quick summary of what a table is for), but throwing an LLM on that doesn't really inspire confidence.
If your data model is highly normalised, with FKs everywhere, good naming and well documented, yeah, totally, I could see that helping, but if that's the case you already have good governance practices (which all ML tools benefit from AFAIK). Without that, I'm totally dreading the queries; people are already totally capable of generating stuff that gives DBAs a headache. Simple cases, yeah maybe, but complex queries, idk, I'm not sold.
Data understanding is part of the job anyhow; that's largely conceptual, which maybe LLMs could work as an extension for, but I really wouldn't trust it to generate full-on queries in most of the environments I've seen. Data is overwhelmingly super messy and orgs don't love putting effort towards governance.
Iāve done some work on natural language to SQL, both with older (like Bert) and current LLMs. It can do alright if there is a good schema and reasonable column names, but otherwise it can break down pretty quickly.
That's before you get into the fact that SQL dialects are a really big issue for LLMs to begin with. They all look so similar that I've found it common for them to switch between them without warning.
Yeah, I can totally understand that. Genie is Databricks' one, and apparently it's surprisingly decent at that, but it has access to a governance platform that traces column lineage on top of whatever descriptions and other metadata you give it. I was pretty surprised with the accuracy of some of its auto-generated descriptions though.
Yeah, the more data you have around the database the better, but that's always been the issue with data governance - you need to stay on top of that or things start to degrade quickly.
When the governance is good, the LLM may be able to keep up, but will you know when things start to slip?
what in the utter fuck is this post
The first two examples I really like since you're able to verify them easily before using them, but for the math one, how do you know it gave you the right answer?
they don't verify any of it
I use it to parse log files, compare logs from successful and failed requests and that sort of stuff.
and now we're up to inaccurate, stochastic diff. fucking marvelous. Stay tuned for inaccurate, stochastic ls.
How about real-time subtitles on movies in any language you want that are always synced?
VLC is working on that with the use of LLMs
I tried feeding Japanese audio to an LLM to generate English subs and it started translating silence and music as requests to donate to anime fansubbers.
No, really. Fansubbed anime would put their donation message over the intro music or when there wasn't any speech to sub, and the LLM learned that.
All according to k-AI-kaku!
We've had speech to text since the 90s. Current iterations have improved, like most technology has improved since the 90s. But, no, I wouldn't buy a new computer with glaring privacy concerns for real-time subtitles in movies.
You're thinking too small. AI could automatically dub the entire movie while mimicking the actors' voices while simultaneously moving their lips and mouths to form the words correctly.
It would just take your daily home power usage to do a single 2hr movie.
They're great for document management. You can let it build indices, locally on your machine with no internet connection. Then when you want to find things you can ask it in human terms. I've got a few gb of documents and finding things is a bitch - I'm actually waiting on the miniforums a1 pro whatever the fuck to be released with an option to buy it without Windows (because fuck M$) to do exactly this for our home documents.
a local search engine but shitty, stochastic, and needs way too much compute for "a few gb of documents", got it, thanks for chiming in
Offline indexing has been working just fine for me for years. I don't think I've ever needed to search for something esoteric like "the report with the blue header and the photo of 3 goats having an orgy"; if I really can't remember the file name, or what it's associated with in my filing system, I can still remember some key words from the text.
Better indexing / automatic tagging of my photos could be nice, but that's a rare occurrence, not a "I NEED a button for this POS on my keyboard and also want it always listening to everything I do" kind of situation
I wish that offline indexing and archiving were normalized and more accessible, because it's a fucking amazing thing to have
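The offline keyword search described above really is cheap to build. A minimal, hypothetical inverted index in Python (real desktop search tools do far more, but this is the core idea):

```python
import os
import re
from collections import defaultdict

def build_index(root):
    """Walk a directory tree and map each lowercase word to the files containing it."""
    index = defaultdict(set)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # skip unreadable files
            for word in set(re.findall(r"\w+", text.lower())):
                index[word].add(path)
    return index

def search(index, query):
    """Return the files that contain every keyword in the query."""
    hits = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*hits) if hits else set()
```

Something like `search(build_index(os.path.expanduser("~/Documents")), "report blue header")` then returns candidate files instantly, with no GPU and no cloud round-trip.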
Apparently it's useful for extraction of information out of a text into a format you specify. A friend is using it to extract transactions out of 500-year-old texts. However, to get rid of hallucinations the temperature needs to be 0. So the only way is to self-host.
Setting the temperature to 0 doesn't get rid of hallucinations.
It might slightly increase accuracy, but it's still going to go wrong.
Well, LLMs are capable (but prone to hallucination) and cost an absolute fuckton of energy. There have been purpose-trained efficient ML models that we've used for years. Document Understanding and Computer Vision are great, just don't use an LLM for them.
Reducing computer performance:
Turbo button 🤝 AI button
now that you mention it, kinda surprised I haven't ever seen a spate of custom 3D-printed turbo buttons from overclocker circles
it could turn on the RGB! though that would imply that the RGB could be turned off in the first place, which is optimistic on my part
it's the button for more RGB
saw a microphone with RGB and i'm like wtf is this thing supposed to do, flash disco lights when you're on stream shouting slurs at your esteemed fellow gamers
shouting slurs at your esteemed fellow gamers
They're called "heated gaming moments" /j
nah, just call a fuckwit a fuckwit. even jokingly giving them breathing room is something they know how to abuse.
reheated gaming moments
Fresh(?) off the PUBG Bridge
Same issue from when we had turbo buttons: why have a button for something you don't turn off?
your comment demonstrates a remarkable lack of imagination
Better option: An array of flip switches for throttling to different speeds.
Best option: Mount these flip switches above you on an overhead control panel.
And a clear lack of understanding of what the turbo button actually did
I thought it makes the game tick faster or slower, such that you have to have it set correctly or itās unplayable.
Some early PC software, mostly games, was written expecting the computer to run at a fixed speed - the speed of the original IBM PC, which used an Intel 8088 that ran at 4.77 MHz. If the IBM PC was more like computers such as the Commodore 64, which changed little during its production run, that would have been fine. But eventually faster PCs were released that ran on 286, 386, 486, etc. CPUs that were considerably faster, and hence software that expected the original IBM PC hardware ran way too fast.
The turbo button was a bit of a misnomer since you would normally have it on and leave it on, only turning it off as sort of a compatibility mode to run older software. How effective it was varied quite a bit - on some computers turning it off would get you pretty close to the original IBM PC in terms of speed, but on others it would just slow the computer down, though not nearly enough, making it mostly useless for what it was intended for.
Kind of, though it's about the CPU's clock speed rather than the details of the game.
So, pedantically? no.
Experientially? yes.
I had one on my PC in the late 90s, early 2000s.
That's not fair! I care! A lot!
Just had to buy a new laptop for a new place of employment. It took real time, effort, and care, but I've finally found a recent laptop matching my hardware requirements and sense of aesthetics at a reasonable price, without that hideous Copilot button :)
quite annoyed that the Snapdragon laptops are bootlocked cos they'd make great Linux boxes
How are they bootlocked? Just need the right ISO. I have done it: I didn't know they came with Linux for this particular client, and they put Windows on it, so I had to get a specific ISO to reinstall when they borked it.
oh really? I thought MS had demanded boot locking for the ARM laptops.
I'm not 100% sure, I just know I did it once. Let me see if I can get the ISO I used for Linux.
yeah, looks like I'm thankfully wrong!
Which laptop did you buy, if you don't mind sharing?
Decided on this:
Still had some issues under Linux / NixOS a couple of weeks ago (hardware-wise everything worked, but specific programs, esp. Librewolf, will randomly start eating CPU and battery out of nowhere, with what look like no-ops). Haven't investigated further, yet.
sweet, glad to know it generally works with linux. this is available in my part of the world. been shopping around for a personal for-work laptop since my company is stingy. And I plan to move on anyways.
It generally works, yes, but I'd hold off for another month or two in the hopes of the issues being resolved in the kernel
I really wanted to like that laptop but the screen is so incredibly glossy that unless you're in a totally dark room it becomes a mirror.
I think it's a matter of preference. Haven't noticed the screen being a mirror yet, but then again I feel like any even mildly matte screen looks like it's being viewed through a veil…
I am a bit worried/curious about how the OLED will deal with my very static waybars though, lol
wtf is going on with that touchpad - is it a tap calculator input?
Numpad/PIN input. Utterly useless in my opinion. Also it apparently activates itself pretty regularly by accident from palms resting when typing. YouTube comments are full of people desperate for a Windows/driver update which lets you deactivate this thing.
Oh, btw, I did not go through the trouble of enabling support under Linux (you can, but it's optional, because, well… Linux)
Imagine that: a new fledgling technology ham-fistedly inserted into every part of the user experience, while offering meager functionality in exchange for the most aggressive data privacy invasion ever attempted on this scale, and no one likes it.
Y'all remember when 3D TVs were going to be revolutionary?
3D TVs I can see happening, if there's some breakthrough that fixes the current tech's shortcomings.
But NFTs, and blockchain in general? Hahahahhah.
Too many separate components need to improve:
- Hardware. Right now, 3D TVs require special glasses, or they only support a single viewer in a very narrow viewing range.
- Content. Movies made for 3D with depth effects are better than old shows remastered to have "pop out" effects. I saw Pacific Rim in IMAX 3D and it was amazing. I also saw Nightmare Before Christmas remastered for 3D and it was fucking terrible.
- Infrastructure. Cable/service providers need to provide services capable of streaming 3D movies consistently with solid performance.
- User acceptance. To me, the market is still prioritizing picture, sound, and frame rate over 3D effects. People just don't care for it right now.
Turned out all we needed was a higher frame rate.
A friend of mine is a streamer. On his discord, the topic of the Switch 2 came up, and one of his fans stated their desire for it to support 3D TV. Rather than saying my gut reaction ("are you crazy?"), I simply asked why. I consider it a great moment of personal self-control.
I mean, the thought of big-screen 3DS emulation would be pretty fun, but yeah, that technology died a decade ago. That's like asking why the Switch 2 doesn't have a slot for SNES carts!
Removed by mod
WTF is an AI computer? Is that some marketing bullshit?
afaict they're computers with a GPU that has some hardware dedicated to the kind of matrix multiplication common in inference in current neural networks. pure marketing BS because most GPUs come with that these days, and some will still not be powerful enough to be useful
This comment is the most important one in this thread. Laptops already had GPUs. Does the Copilot button actually result in you conversing with an LLM locally, or is inference done in the cloud? If the latter, it's even more useless.
deleted by creator
@Matriks404 @dgerard got it in one! It's MS's marketing campaign for PCs with a certain amount of "AI" FLOPS
IDK if the double pun was intended, but FLOPS is a measurement of how many (floating point) operations a computer can make per second
"Y2K ready" vibes.
Yes.
It's not care. It's want. We don't want AI.
FR, I think more people actively dislike it, which is a form of care.
Depends on the implementation.
Just about everyone I know loves how iPhones can take a picture and readily identify a plant or animal. That's actually delightful. Some AI tech is great.
Now put an LLM chatbox where people expect a search box, and see what happens… yeah, that shit sucks.
loves how iPhones can take a picture and readily identify a plant or animal.
As a biologist, I'd like to correct your sentence: "iPhones can take a picture and pretend to sometimes manage to get close to identifying a plant"
It sucks at identification too
Hotdog or not hotdog
Whenever I ask random people who are not on IT, they either donāt know about it or they love it.
Thatās a boring perspective fuck you for sharing.
Thats an interesting perspective thanks for sharing.
I work in IT and have recently been having a lot of fun leveraging AI in my home lab to program things as well as doing audio\video generation (which is a blast honestly.) So⦠I mean, I think it really depends on how itās integrated and used.
āI work in ITā says the rando, rapaciously switching between support tickets in their web browser and their shadow-IT personal browser
āIāve been having a lot of funā continues the rando, in a picture-perfect replica of every other fucking promptfan posting the same selfish egoist bullshit
"So… I mean, I think it really depends on how it's integrated and used" says the fuckwit, who can't think two words beyond their own fucking nose
deleted by creator
mmmm, yes, being told what I donāt know⦠oh yes, thatās the stuff. best kind of thing to read after the kind of week Iāve had!
Removed by mod
Removed by mod
look, Iāll do you the disfavour of giving you an actually detailed reply
you know exactly fucking nothing about me, about what I do, and about my competencies. if you did just a liiiiiiittle bit of work you might get an inkling, but: I know you didnāt, and I know you donāt.
thatās not a judgement, thatās just fact.
trying to flippantly rage at my derision of your shitty post⦠I mean, points for effort? but⦠be more interestingā¦? youāre factory-line-identical outrage, and itās boring
para (2): sure, I made some inferred guesses. still donāt think Iām wrong (and your little tagline ragefest there isnāt helping, either)
paras (1) and (3): I lul. once again, if you knew anything about meā¦
but sure, go off queen. Iām sure your emanated bilge will be received with vim and verve.
Speak for yourself.
Google, Apple, Microsoft, Nvidia and everyone else is hyping up AI. Consumers are not really seeing much benefit from making everything AI-ified. Executives are raving over it, but maybe they don't realize that people outside of the C-suite aren't that excited? Having it shoved in our faces constantly, or crammed into places where companies hope they can save money, is not helping either.
Itās FOMO amplified by capitalistic competition. No company wants to be the one left behind. I guarantee Google, Meta and even OpenAI know the limitations of their products. They donāt care, they just want to be at least as good as their competitors, because they assume at some point one of them will reach āgood enough.ā And at that moment, if theyāre not in position to grab market share, theyāll lose a once-in-a-generation chance for billions or trillions of dollars in value.
Weāre the casualties, because the people in the middle - companies with no AI but whose C-suite buys into the hype - demand we use unworkable products because theyāre too willfully ignorant to know theyāre not panaceas to whatever is bothering those C-suite execs at the moment.
Quarterly Driven Development
My problem is that itās not that fucking useful. I got the Pixel 9 specifically because of its advertised AI chip for the assistant and I swear itās just gotten worse since the Pixel 7. I used to be able to ask Google anything through the assistant, and now 90% of my questions are answered with ācanāt find the information.ā
They also advertised (or at least heavily alluded to) the use of the AI chip when you are in low-network areas, but it works just as well outside of 4G+ coverage as it ever did without the stupid chip.
What's the point of adding AI-branded nonsense if there's no practical use for it? And that doesn't even start to cover the issues with AI's reliability as a source of information. Garbage in = garbage out.
I didn't get a Pixel for that reason after my Pixel 5a died. The Exynos-based chip is significantly weaker than the chips in other flagship phones, and they've sacrificed battery capacity/efficiency since the 5a (which was a very defective phone) just to prop up AI.
We know Google was saving money by not using Qualcomm/Snapdragon chips, which most others are using. AI is just their excuse so they can put less effort into making a quality product.
When Gemini can find the information, it adds flowery "social" bullshit before, in the middle of, and after the information I asked for, wasting my time
I was looking at new phones and basically every one was advertising their AI assistant. Are any of them better than the digital assistants from 2016?