• 1 Post
  • 161 Comments
Joined 11 months ago
Cake day: September 24th, 2023



  • And you’re making the assumption that it could be. Why am I the only one who needs to show anything?

    “Could be” is the null hypothesis.

    any person

    Hmm I’m guessing you don’t have children.

    What do you mean, “certain of the answer?” It’s math

    Oh dear. I dunno where to start here… but basically, while a mathematical statement itself is either true or false, our certainty about it definitely is not. Even for the cleverest mathematicians there are now proofs too complicated for any human to check by hand (the Four Colour Theorem is the classic example). They have to be checked by machines… and then how much do you trust that the machine checker is bug-free? Formal verification tools themselves often have bugs.

    Just because something “is math” doesn’t mean we’re certain of it.

    Can I ask where’s your proof?

    I don’t have proof. That’s my point. Your position is no stronger than the opposite position. You just asserted it as fact.
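
    To make the “checked by machines” point above concrete: this is the kind of thing a proof assistant verifies mechanically. A trivial Lean 4 sketch, where the kernel, not a human reader, certifies the proof term:

```lean
-- Trivial example: the Lean kernel checks this proof term mechanically.
-- Trusting the result means trusting the (small) kernel, not a human reviewer.
theorem twelve_times_two : 12 * 2 = 24 := rfl
```

    Real machine-checked proofs (like the Four Colour Theorem) are the same idea scaled up to the point where no human could review the whole thing.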


  • I think understanding requires knowledge, and LLMs work with statistics around text, not knowledge.

    You’re already making the assumption that “statistics around text” isn’t knowledge. That’s a very big assumption that you need to show. And before you even do that you need a precise definition of “knowledge”.

    Ask me a thousand times the solution of a math problem, and a thousand times I’ll give you the same solution.

    Sure but only if you are certain of the answer. As soon as you have a little uncertainty that breaks down. Ask an LLM what Obama’s first name is a thousand times and it will give you the same answer.

    Does my daughter not have any knowledge because she can’t do 12*2 reliably 1000 times in a row? Obviously not.

    it’ll just make things up

    Yes, that is a big problem, but it’s not related to this discussion. Humans can make things up too; the only difference is they don’t do it all the time like LLMs do. Well, most humans don’t. I worked with a guy who was very like an LLM: always an answer, but complete bullshit half the time.

    they contain information about—the statistical relations between tokens. That’s not the same as understanding what those tokens actually mean

    Prove it. I assert that it is the same.
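
    The earlier point, that a confident answer comes back the same every time, can be sketched with a toy next-token sampler. Everything here (the tokens, the scores) is made up for illustration; real LLMs work the same way at decoding time, just with a much bigger vocabulary:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities; lower temperature sharpens the peak."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(tokens, logits, temperature):
    """Greedy (argmax) at temperature 0, otherwise sample the distribution."""
    if temperature == 0:
        return tokens[logits.index(max(logits))]  # deterministic
    return random.choices(tokens, weights=softmax(logits, temperature))[0]

# Hypothetical scores a model might assign after "Obama's first name is":
tokens = ["Barack", "Michelle", "George"]
logits = [9.0, 2.0, 1.0]  # high confidence in the first option

# Asked a thousand times with greedy decoding, the answer never varies.
answers = {next_token(tokens, logits, temperature=0) for _ in range(1000)}
print(answers)  # {'Barack'}
```

    With a higher temperature, or a model that isn’t confident, the distribution flattens and the answers start to vary — which is exactly the “certain of the answer” distinction made above.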


  • This is a timely reminder, I guess, for people who want to seem smug, or who haven’t really thought about it: the word “understand” is definitely not defined precisely or restrictively enough to exclude LLMs.

    By the normal meaning of “understand” they definitely show some level of understanding. I mean, have you used them? I think current state-of-the-art LLMs could actually pass the Turing test against unsophisticated interviewers. How can you say they don’t understand anything?

    Understanding is not a property of the mechanism of intelligence. You can’t just say “it’s only pattern matching” or “it’s only matrix multiplication” therefore it doesn’t understand.


  • Ada is not strictly safer. It’s not memory safe for example, unless you never free. The advantage it has is mature support for formal verification. But there’s literally no way you’re going to be able to automatically convert C to Ada + formal properties.

    In any case Rust has about a gazillion in-progress attempts at adding various kinds of formal verification support: Kani, Prusti, Creusot, Verus, etc. It probably won’t be long before it’s better than Ada.

    Also if your code is Ada then you only have access to the tiny Ada ecosystem, which is probably fine in some domains (e.g. embedded) but not in general.




  • Would be cool to have more people on Linux finding and fixing these little details.

    Unlikely to happen. This is very complicated low level stuff that’s often completely undocumented. Often the hardware is buggy but it works with Windows/Mac because that’s what it’s been tested with, so you’re not even implementing a spec, you’re implementing Windows’ implementation.

    Also the few people that have the knowledge to do this a) don’t want to spend a ton of money buying every model of monitor or whatever for testing, and b) don’t want to spend all their time doing boring difficult debugging.

    I actually speak from experience here. I wrote a semi-popular FOSS program for a type of peripheral. Actually it only supports devices from a single company, but… I have one now. It cost about £200. The other models are more expensive and I’m not going to spend like £3k buying all the other models so I can test it properly. The protocol is reverse engineered too so… yeah I’ll probably break it for other people, sorry.

    This sort of thing really only works commercially IMO. It’s too expensive, boring and time consuming for the scratch-an-itch developers.


  • Yeah, there’s no way I trust that their methodology has stayed stable over 15 years. Hell, just looking at the last year: supposedly 3% of global users jumped from Mac to Windows in a single month (Nov 2023).

    There are also loads of new Linux device classes that may have Linux in their user agent but aren’t really “the year of the Linux desktop” that you’re thinking of. It seems they try to count ChromeOS (though badly - seems like “Unknown” contains a lot of ChromeOS depending on the month), and obviously Android, but what about Steam Deck? Smart devices with web browsers built in? Is your Tesla desktop Linux?

    I’d buy that it’s gone up, but not to 4%. I would be moderately surprised if 4% of web users had even heard of Linux.




  • I agree; too little regard for backwards compatibility. They also removed distutils, which meant I had to fix a load of code that used it. It was bad code that shouldn’t have used distutils even when it was written, but still… it seems like they didn’t learn their lesson from Python 2.

    It’s not like it would be difficult to avoid these issues either. Everyone else just makes you declare your “target version” and then the runtime keeps things compatible with that version - Android via SDK target version, Rust with its editions, hell even CMake got this right. CMake!!
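
    For what it’s worth, the distutils fixes mentioned above are usually mechanical: most uses of distutils.sysconfig map directly onto the standard-library sysconfig module. A minimal sketch of the kind of change involved:

```python
# Python 3.12 removed distutils (PEP 632). Code that used distutils.sysconfig
# can usually switch to the standard-library sysconfig module instead.
import sysconfig

# Old: from distutils.sysconfig import get_python_lib; get_python_lib()
# New: ask sysconfig for the site-packages ("purelib") path.
site_packages = sysconfig.get_path("purelib")
print(site_packages)

# Old: distutils.sysconfig.get_config_var("EXT_SUFFIX")
# New: the same configuration variable via sysconfig.
print(sysconfig.get_config_var("EXT_SUFFIX"))
```

    The annoying part isn’t the new API; it’s that nothing like a declared target version kept the old one working in the meantime.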