The following applies to minimal websites that focus primarily on text. It does not
apply to websites that have a lot of non-textual content. It also does not apply to
websites that focus more on generating revenue or pleasing investors than being good websites.
This is a “living document” that I add to as I receive feedback. See the changelog.
I realize not everybody’s going to ditch the Web and switch to Gemini or Gopher today
(that’ll take, like, a month at the longest). Until that happens, here’s a
non-exhaustive, highly-opinionated list of best practices for websites that focus
primarily on text:
- Final page weight under 50kb without images, and under 200kb with images.
- Works in Lynx, w3m, links (both graphics and text mode), Netsurf, and Dillo
- Works with popular article-extractors (e.g. Readability) and HTML-to-Markdown
converters. This is a good way to verify that your site uses simple HTML and works
with most non-browser article readers (e.g. ebook converters, PDF exports).
- No scripts or interactivity (preferably enforced at the CSP level; see the sample server config after this list)
- No cookies
- No animations
- No fonts, local or remote, besides sans-serif and monospace. More on this below.
- No referrers
- No requests after the page finishes loading
- No 3rd-party resources (preferably enforced at the CSP level)
- No lazy loading (more on this below)
- No custom colors, OR explicitly set both the foreground and background colors. More on this below.
- A maximum line length for readability
- Server configured to support compression (gzip, optionally zstd as well) and
HTTP/2. It’s a free speed boost.
- Supports dark mode via a CSS media feature and/or works with most “dark mode”
browser addons. More on this below.
- A good score on Mozilla’s HTTP Observatory
- Optimized images.
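To make the CSP, referrer, and compression items concrete, here’s a minimal sketch of a server config, assuming nginx; the domain is a placeholder, and the exact policy should be adapted to what your pages actually load:

    server {
        listen 443 ssl http2;
        server_name example.com;  # placeholder domain

        # Free speed boost: compress textual responses.
        gzip on;
        gzip_types text/css text/plain application/xml image/svg+xml;

        # Forbid scripts and third-party resources at the CSP level.
        add_header Content-Security-Policy "default-src 'none'; img-src 'self'; style-src 'self'; base-uri 'none'; form-action 'none'" always;

        # No referrers.
        add_header Referrer-Policy "no-referrer" always;
    }

Equivalent headers can be set on any other server; the point is that the policy holds even if a stray script tag sneaks into the HTML.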
I’d like to reiterate once more that this only applies to websites that
primarily focus on text. If graphics, interactivity, etc. are an important part of
your website, less (possibly none) of this article applies.
Early rough drafts of this post generated some feedback I thought I should address
below. Special thanks to the eight IRC users who provided feedback!
If you really want, you could use serif instead of sans-serif, but serif fonts tend to look worse on low-res monitors. Not every screen’s DPI has three digits.
To ship custom fonts is to assert that branding is more important than user choice.
That might very well be a reasonable thing to do; branding isn’t evil! It isn’t
usually the case for textual websites, though. Beyond basic layout and optionally
supporting dark mode, authors generally shouldn’t dictate the presentation of their
websites; that is the job of the user agent. Most websites are not important enough
to look completely different from the rest of the user’s system.
A personal example: I set my preferred fonts in my computer’s fontconfig settings. Now every website that uses sans-serif will have my preferred font. Sites that use sans-serif blend into users’ systems instead of sticking out.
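For reference, the override looks something like this in ~/.config/fontconfig/fonts.conf; the font name is just a stand-in for whatever you prefer:

    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
      <!-- Whenever a page asks for generic sans-serif,
           substitute my preferred font. -->
      <match target="pattern">
        <test name="family"><string>sans-serif</string></test>
        <edit name="family" mode="prepend" binding="strong">
          <string>DejaVu Sans</string> <!-- stand-in; use your favorite -->
        </edit>
      </match>
    </fontconfig>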
But most users don’t change their fonts…
The “users don’t know better and need us to make decisions for them” mindset isn’t
without merits; however, in my opinion, it’s overused. Using system fonts doesn’t
make your website harder to use, but it does make it smaller and stick out less to
the subset of users who care enough about fonts to change them. This argument isn’t
about making software easier for non-technical users; it’s about branding by
asserting a personal preference.
Can’t users globally override stylesheets instead?
It’s not a good idea to expect users to globally override website stylesheets. Doing so would break websites that use icon fonts such as Font Awesome to display vector icons. We shouldn’t make these users constantly battle with websites the same way
that many adblocking/script-blocking users (myself included) already do when there’s
a better option.
That being said, many users do actually override stylesheets. We shouldn’t
require them to do so, but we should keep our pages from breaking in case they do.
Pages following this article’s advice will probably work perfectly well in these
cases without any extra effort.
But wouldn’t that allow a website to fingerprint with fonts?
I don’t know much about fingerprinting, except that you can’t do font enumeration without JavaScript. Since sites following these practices send no requests after the page loads and have no scripts, fingerprinting via font enumeration is a non-issue on those sites.
Sites that do run fingerprinting scripts don’t need to stop at seeing what sans-serif maps to; they can see all the available fonts on a user’s system, the user’s canvas fingerprint, window dimensions, etc. Some of these can be mitigated with Firefox’s privacy.resistFingerprinting setting, but that setting also understandably overrides user font preferences.
Ultimately, surveillance self-defense on the web is an arms race full of trade-offs.
If you want both privacy and customizability, the web is not the place to look; try
Gemini or Gopher instead.
About lazy loading
For users on slow connections, lazy loading is often frustrating. I think I can speak
for some of these users: mobile data near my home has a number of “dead zones” with
abysmal download speeds, and my home’s Wi-Fi repeater setup occasionally results in
packet loss rates above 60% (!!).
Users on poor connections have better things to do than idly wait for pages to load.
They might open multiple links in background tabs to wait for them all to load at
once, or switch to another window/app and come back when loading finishes. They might
also open links while on a good connection before switching to a poor connection; I know that I often open 10-20 links on Wi-Fi before going out for a walk in a dead zone.
Unfortunately, pages with lazy loading don’t finish loading off-screen images in the
background. To load this content ahead of time, users need to switch to the loading
page and slowly scroll to the bottom to ensure that all the important content appears
on-screen and starts loading. Website owners shouldn’t expect users to have to jump
through these ridiculous hoops.
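For reference, native lazy loading is a single HTML attribute; the fix for text-focused pages is simply to omit it (the filename and alt text below are placeholders):

    <!-- Lazy: only fetched once it nears the viewport, so it never
         loads in a background tab. -->
    <img src="figure.png" alt="A descriptive caption" loading="lazy">

    <!-- Default (eager): fetched with the rest of the page, so the
         page is complete once loading finishes. -->
    <img src="figure.png" alt="A descriptive caption">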
Wouldn’t this be solved by combining lazy loading with pre-loading/pre-fetching?
A large number of users with poor connections also have capped data, and would prefer
that pages not predictively load content ahead of time for them. Some go
so far as to disable this behavior to avoid data overages. Savvy privacy-conscious
users also generally disable pre-loading because they don’t have reason to trust that
linked content doesn’t practice dark patterns like tracking without consent.
Users who click a link choose to load a full page. Loading pages that a user hasn’t
clicked on is making a choice for that user.
Can’t users on poor connections disable images?
I have two responses:
- If an image isn’t essential, you shouldn’t include it inline.
- Yes, users could disable images. That’s their choice. If your page uses lazy
loading, you’ve effectively (and probably unintentionally) made that choice for a
large number of users.
About custom colors
Some users’ browsers set default page colors that aren’t black-on-white. For
instance, Linux users who enable GTK style overrides might default to having white
text on a dark background. Websites that explicitly set foreground colors but leave
the default background color (or vice-versa) end up being difficult to read.
If you do explicitly set colors, please also include a dark theme using a media query: @media (prefers-color-scheme: dark). For more info, read the relevant docs on MDN.
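A minimal sketch of what that looks like, with placeholder colors; this also sets a maximum line length, per the checklist above:

    body {
      /* If you set one color, set them all. */
      color: #000;
      background: #fff;
      /* Readable maximum line length (placeholder value). */
      max-width: 38em;
      margin: 0 auto;
      padding: 0 1em;
    }

    @media (prefers-color-scheme: dark) {
      body {
        color: #ddd;
        background: #111;
      }
    }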
I put together a quick script to optimize images with the image-optimization tools I use; it’s in my dotfile repo.
You also might want to use HTML’s <picture> element, using jpg/png as a fallback for more efficient formats such as WebP or AVIF. More info in the MDN docs.
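A sketch of that fallback pattern, with placeholder filenames; browsers use the first source they support and fall back to the plain img element:

    <picture>
      <source srcset="photo.avif" type="image/avif">
      <source srcset="photo.webp" type="image/webp">
      <!-- Explicit dimensions prevent layout shifts while loading. -->
      <img src="photo.jpg" alt="A descriptive caption"
           width="640" height="480">
    </picture>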
Other places to check out
The 250kb club gathers websites at or under 250kb, and also
rewards websites that have a high ratio of content size to total size.