The revived No JS Club celebrates websites that don’t use JavaScript, the powerful but sometimes overused code that’s been bloating the web and crashing tabs since 1995. The No CSS Club goes a step further and forbids even a scrap of styling beyond the browser defaults. And there is even the No HTML Club, where you’re not allowed to use HTML at all. Plain text websites!

The modern web is the pure incarnation of evil. When Satan has a 1v1 with his manager, he confers with the modern web. If Satan is Sauron, then the modern web is Melkor [1]. Every horror that you can imagine is because of the modern web. The modern web is not an existential risk (X-risk), but an astronomical suffering risk (S-risk) [2]. It is the duty of each and every man, woman, and child to revolt against it. If you’re not working on returning civilization to ooga-booga, you’re a bad person.

A compromise with the clubs is called for: a hypertext brutalism that uses the raw materials of the web to functional, honest ends while allowing web technologies to support clarity, legibility and accessibility. Compare this notion to the web brutalism of recent times, which started off in a similar vein but soon became a self-subverting aesthetic: sites using 2.4MB frameworks to add text-shadow: 40px 40px 0px hotpink to 400KB Helvetica webfonts that were already on your computer.

I also like the idea of implementing “hypotext” as an inversion of hypertext. This would somehow avoid the failure modes of extending the structure of text by failing in other ways that are more fun. But I’m in two minds about whether that would be just a toy (e.g. references banished to metadata, i.e. footnotes are the hypertext) or something more conceptual that uses references to collapse the structure of text rather than extend it (e.g. links are includes and going near them spaghettifies your brain). The term is already in use in a structuralist sense, which is to say there are 2 million words of French I have to read first if I want to get away with any of this.

Republished Under Creative Commons Terms. Boing Boing Original Article.

  • AlteredEgo@lemmy.ml · 13 hours ago

    That is just stupid. How about a slightly more complex markdown?

    What I really want is a P2P archive of all the relevant news articles of the last decades, in markdown like Firefox’s “reader view”. And some super advanced LLM-powered text compression so you can easily store a copy of 20% of them on your PC to share P2P.

    Much of the information on the internet could vanish within months if we face some global economic crisis.

    • rottingleaf@lemmy.world · 7 hours ago

      And some super advanced LLM-powered text compression so you can easily store a copy of 20% of them on your PC to share P2P.

      Nothing can be that advanced, and zstd is good enough.

      The idea is cool. Pure P2P exchange would be the fallback, with something like BitTorrent’s trackers as the main center for yielding nodes per space (supposing there’s more than one such archive you’d want to replicate) and per partition (if an archive is too big to replicate whole, partitioning might make sense, though then some of what I write further down should be reconsidered).
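
      A toy sketch of that tracker role, under the assumption that archives are named and partitions are identified by a key prefix: the tracker is just a map from (archive, partition) to the nodes currently announcing it, the way a BitTorrent tracker maps an infohash to peers. All names and addresses here are made up.

      ```python
      from collections import defaultdict

      # (archive name, partition prefix) -> set of node addresses announcing it.
      tracker: dict[tuple[str, str], set[str]] = defaultdict(set)

      def announce(archive: str, partition: str, node_addr: str) -> None:
          """A node tells the tracker which partition of which archive it serves."""
          tracker[(archive, partition)].add(node_addr)

      def peers(archive: str, partition: str) -> set[str]:
          """Clients ask the tracker first; pure P2P gossip is the fallback."""
          return tracker.get((archive, partition), set())

      announce("news-archive", "a3", "203.0.113.7:4001")  # hypothetical node
      print(peers("news-archive", "a3"))
      ```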

      The problem with torrents and similar systems is that people only store what’s interesting to them.

      If you have to store one humongous archive, search it efficiently, and avoid losing pieces, then I think you need a partitioned, roughly equal distribution of it over the nodes.

      The space of keys (suppose keys are hashes of blocks of the whole) is partitioned by prefix, so that a node stores an equal number of blocks for every prefix. And within that space, a node should first of all store the values closest to its own identifier (a bit like in Kademlia). OK, I’m thinking the first sentence of this paragraph might even be unneeded.
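
      A minimal sketch of that placement rule, assuming SHA-256 block hashes and made-up node IDs; “closest” is plain XOR distance, as in Kademlia:

      ```python
      import hashlib

      def block_id(data: bytes) -> int:
          """Content-address a block (or name a node) by its SHA-256 hash, as an integer."""
          return int.from_bytes(hashlib.sha256(data).digest(), "big")

      def xor_distance(a: int, b: int) -> int:
          """Kademlia-style closeness metric between two 256-bit IDs."""
          return a ^ b

      def responsible_nodes(block: bytes, node_ids: list[int], replicas: int = 3) -> list[int]:
          """Pick the `replicas` nodes whose IDs are XOR-closest to the block's hash.

          With hashes and node IDs spread uniformly, blocks end up distributed
          roughly evenly, regardless of what any one node finds interesting.
          """
          bid = block_id(block)
          return sorted(node_ids, key=lambda nid: xor_distance(nid, bid))[:replicas]

      # Hypothetical example: 8 nodes, one block.
      nodes = [block_id(f"node-{i}".encode()) for i in range(8)]
      print(responsible_nodes(b"some article text", nodes))
      ```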

      The data itself should probably be in some supercool format where you don’t need all of it to decompress just the small part you need: only the beginning with the dictionary, plus the interval in question.
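
      One way to get that property, sketched with the standard library’s zlib preset dictionaries (a zstd dictionary would serve the same role): each block is compressed independently against a shared dictionary, so a node holding just the dictionary and one compressed block can decompress that block on its own. The dictionary contents below are an arbitrary stand-in for one trained on the corpus.

      ```python
      import zlib

      # Stand-in for a dictionary trained on the corpus (assumed, not real training).
      DICTIONARY = b"the a of to and in that is for on with as news article said "

      def compress_block(block: bytes) -> bytes:
          """Compress one block independently against the shared dictionary."""
          c = zlib.compressobj(level=9, zdict=DICTIONARY)
          return c.compress(block) + c.flush()

      def decompress_block(data: bytes) -> bytes:
          """Decompress a single block; only the dictionary is needed, not the whole archive."""
          d = zlib.decompressobj(zdict=DICTIONARY)
          return d.decompress(data) + d.flush()

      original = b"the article said that the news is on the web"
      assert decompress_block(compress_block(original)) == original
      ```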

      There should also be, as a separate piece of functionality, keyword search inside intervals, so that a search yields the intervals where a given keyword occurs. Nodes would index the contiguous intervals they can decompress, and respond to search requests for those keywords. Ideally a single block should be decompressible given just the dictionary. I suppose I should do my reading on compression algorithms and formats.
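
      A toy version of that index, assuming intervals are numbered and “search” means exact keyword lookup; tokenization, stemming and snippet extraction are left out:

      ```python
      from collections import defaultdict

      # Hypothetical corpus: interval id -> decompressed text this node holds.
      intervals = {
          0: "no js club celebrates plain websites",
          1: "markdown archive shared over p2p",
          2: "zstd compression with a shared dictionary",
      }

      # Inverted index: keyword -> set of interval ids where it occurs.
      index: dict[str, set[int]] = defaultdict(set)
      for interval_id, text in intervals.items():
          for word in text.split():
              index[word].add(interval_id)

      def search(keyword: str) -> set[int]:
          """Return the intervals this node would advertise for a keyword."""
          return index.get(keyword, set())

      print(search("markdown"))    # {1}
      print(search("dictionary"))  # {2}
      ```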

      The search function could probably also return Google-like context snippets, depending on the space needed.

      Would also need some way to reward contribution, that is, to pay a node owner for storing and serving blocks.