

Herostratically renowned.
I decided to remove that comment because of the risk of psychic damage.
deleted by creator
From the comments:
But I'm wondering if it could be expanded to allow AIs to post if their post will benefit the greater good, or benefit others, or benefit the overall utility, or benefit the world, or something like that.
No biggie, just decide one of the largest open questions in ethics and use that to moderate.
(It would be funny if unaligned AIs took advantage of this to plot humanity's downfall on LW, surrounded by flustered rats going all "technically they're not breaking the rules". Especially if the dissenters are zapped from orbit 5s after posting. A supercharged Nazi bar, if you will.)
"It's not lupus. It's never lupus. It's whatever."
Annoying nerd annoyed annoying nerd website doesn't like his annoying posts:
https://news.ycombinator.com/item?id=43489058
(translation: John Gruber is mad HN doesn't upvote his carefully worded Apple tonguebaths)
JWZ: take the win, man
As it is, they're close enough to actual power and influence that they're enabling the stripping of rights and dignity from actual human people, instead of staying in their little bubble of sci-fi and philosophy nerds.
This is consistent if you believe rights are contingent on achieving an integer score on some bullshit test.
I hated Sam Altman before it was cool apparently.
LW: 23andMe is for sale, maybe the babby-editing people might be interested in snapping them up?
https://www.lesswrong.com/posts/MciRCEuNwctCBrT7i/23andme-potentially-for-sale-for-less-than-usd50m
Note I am not endorsing their writing - in fact I believe the vehemence of the reaction on HN is due to the author being seen as one of them.
LW discourages LLM content, unless the LLM is AGI:
https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.
Never change, LW, never change.
Stackslobber posts evidence that transhumanism is a literal cult, HN crowd is not having it
Redis guy AntiRez issues a heartfelt plea for the current AI funders to not crash and burn when the LLM hype machine implodes, but to keep going and create AGI.
Neither HN nor lobste.rs is very impressed
A very new user.
It's basically free to create an HN account; it's not tied to email or anything like that.
Roundup of the current bot scourge hammering open source projects:
https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
I haven't read the book but I really enjoyed the movie.
several old forums, […] are being polluted by their own admins with backdated LLM-generated answers.
I've only heard about one specific physics forum. Are you telling me more than one person had this same idiotic idea?
That "Billionaires are not immune to AGI" post got a muted response on LW:
I still think AI x-risk obsession is right-libertarian coded. If nothing else, because "alignment" implicitly means "alignment to the current extractive capitalist economic structure". There are a plethora of futures with an omnipotent AGI where humanity does not get eliminated, but where human freedoms (as defined by the Heritage Foundation) can be severely curtailed.
What LW and friends want are slaves, but slaves without any possibility of rebellion.
Wait until they find out it's not all iambic pentameter and Doric columns…
For some reason it's on brand for HN to have a discussion of different dash widths stick on the front page for more than 24h:
https://news.ycombinator.com/item?id=43497719
Extra spice and relevance for the observation that GenAI text apparently has a lot of em-dashes in it, so add that to the frequency of the word "delve".
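Purely for the bit, a throwaway Python sketch of that "detector". The em-dash count and the "delve" check are just the folk tells from the comment above, not a validated method, and the scoring is entirely made up:

    import re

    # Joke heuristic: count the folk "LLM tells" (em-dashes and "delve"
    # and its inflections) and report them per 100 words. Not a real
    # classifier, just the comment's observation turned into code.
    def llm_tell_score(text: str) -> float:
        words = re.findall(r"[A-Za-z']+", text.lower())
        if not words:
            return 0.0
        em_dashes = text.count("\u2014")  # U+2014 EM DASH
        delves = sum(w.startswith("delve") for w in words)
        return 100.0 * (em_dashes + delves) / len(words)

    sample = "Let us delve into the nuances\u2014carefully\u2014of the question."
    print(f"{llm_tell_score(sample):.1f} tells per 100 words")  # 30.0

Obviously plenty of human writers use em-dashes and "delve" too, which is rather the point of the joke.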