For a few years now I've been toying with the idea of redoing my website.
And I'm moving home at the moment and I needed something to stay sane (at least somewhat).
So a new website it is. 🙂
Anyway, it's been ~7 years since the last time I touched this website (apart from writing content).
And over the years I collected a small list of ideas I wanted to play around with.
But before starting with new toys I did a small retrospective of what worked well for those 7 years:
- The old page chugged along for 7 years without any manual maintenance. Well, Debian auto-updates, but that's it.
This was thanks to only using bare PHP. No framework, no extra package manager, no packages for developers.
Just Debian's own PHP package for Apache2. This approach worked exceptionally well, so I'm keeping that.
- Storing posts and comments as simple files and using the file browser (via SSH) as an admin interface.
Basically to write new posts and to moderate comments.
This also worked remarkably well, but had some small teething issues. I just open my server in my local file browser
and edit files like local files. But creating and saving files over SSH like that can change
file and directory permissions, and the webserver might not be happy about that. But that only became a problem when I
wanted to publish a post (I did that by dragging it into another directory). So in the future that will be done via a console command
that can update permissions accordingly.
But apart from that it's a hell of a complexity-to-usefulness ratio. 🙂 No database, no dedicated admin interface and
user management (and hence a lot less attack surface), stupidly easy backup & restore, …. So yeah, sticking with that.
- Newsfeed for notifications
This website is a low traffic page (obviously). I also don't post that often (obviously), and comments also don't
happen that often (obviously). An (Atom) newsfeed was all I needed to stay up-to-date. It's simple, easy and something
that's also useful to visitors. Keeping that.
- Simple HTML & CSS based design
It didn't break in 15 years. 7 years ago I rewrote the "backend", but the design I reused from the previous version.
So yeah, it's mostly 15 years old (I already said it's time for a change 🙂). Browsers take backwards compatibility very
seriously. At least for the core stuff. I can't emphasize enough what it means in modern web development that a
website looked basically the same for 15 years without any glitches.
On the other hand browsers have grown into very complex things. And I have the impression that this stability is
bought with the sanity (and increasing insanity) of browser developers. I feel a bit guilty about that but at least
want to say "thank you" for that sacrifice.
Also, I don't need to maintain a complex development environment. Just open the developer tools (especially Firefox's
"Styles" tab) and I'm good to go. I work in a lot of different environments and toolchain maintenance is a factor. But
not here. So I'll stick to that approach.
- Style switcher
The old website (4th version) had two different designs. An even older one (2nd version) had 3 designs. And you
could switch between them via a style switcher. At first the design switched automatically based on the time of day
(green for the morning, blue for the day, red for sunset and the night). But that confused people and annoyed me, so
I disabled it.
I also stumbled upon a Chrome bug
when I implemented the style switcher in JavaScript in 2018 (alternate styles are actually a pretty old browser
feature). It took a few years to get fixed, but then it was a low-priority bug with a known workaround.
Ok, strike that, I just opened the old test page in Chromium 138 and it's still buggy (link in the linked post).
So much for that.
Anyway, no point in having multiple designs anymore. Meaning no point in having a style switcher. Not keeping that.
- List of personal projects
The old website had a list of my projects. At least of those that were somewhat interesting.
Anyway, I didn't spend any time keeping that list up-to-date. And honestly, creating it was mostly a nice trip down
memory lane for myself. Not keeping that, it just needs maintenance work I'm not going to invest.
Ok, writing about that got a lot longer than I expected. Sorry about that.
Now on to the fun part. New toys and ideas to play around with:
- The design
- Static page generation
- What to do with tags
- Markdown and syntax highlighting
A new design. It's about time. ¶
The two main inspirations for the design.
For the overall structure I wanted something like Bartosz Ciechanowski. Something simple
that directs the reader's attention towards the content, not the design itself. A kind of minimalist aesthetic.
The old design and page structure were also meant for many but small blog posts, e.g. showing multiple posts on a single
page. But I've gravitated more towards fewer but longer posts (like the one about subpixel text rendering)
and the structure of Bartosz Ciechanowski's site just seems like a good fit for that.
Bartosz Ciechanowski has a pretty bright color scheme (except for the parts where he uses dark backgrounds, like in the
one about the moon), but I wanted to do a dark-mode like color scheme this time. While looking around for inspiration I
found Chirpy, which looks very impressive. For a short time I
even considered playing around with Jekyll. But a look at the dependencies ended that train of thought pretty fast.
Anyway, I liked the colors and shades.
Did I already mention that Chirpy looks slick? Well, after sketching around for
a bit I noticed that Chirpy has a lot of bells and whistles. They look nice, but also draw a lot of attention away from
the content towards the design (like the animations in the table of contents). I get why it's like that, but not
something I want for my page.
You should remember the content, not the design.
Of course there were a lot of other inspirations, like Universe Today, other blog posts and
color schemes like base16-edge-dark
from highlight.js. But the two above were
the main ones.
Here's what I came up with in the end. And to make it fun (and somewhat embarrassing) I dragged screenshots of all
previous versions out of my personal archive:
All designs over the years. In my defense, I knew even less about design back then than I do now (which still isn't much).
Left to right: v5 from 2025-07, v4 from 2018-05, v3 from 2010-07, v2 from 2006-07 and v1 from 2005-10. v2 is a blend of all 3 color schemes.
Scoped styles and CSS nesting ¶
Now on to more technical aspects of the design. While implementing it and migrating old content I stumbled upon <style scoped>.
It would have allowed you to put all article-specific styles inside the <article> element
and the styles would only affect that article. Neat, simple and would have been useful to me.
But alas, it's not yet there and it seems to have morphed into @scope
which can do the same, albeit with a bit more boilerplate.
But it's not ready yet. Maybe the next time I redo my website I can use something like that.
I've also had some funny situations with CSS nesting and specificity.
This was the first time I could use CSS nesting while redoing a complete website.
And usually I start with the general rules that make up the design and then add the special cases. And in a reasonably complex design there are a lot of special cases.
Human perception of color and spacing is complex and sometimes you have to apply different spacing to make it look consistent (even if it isn't on a technical level).
Line heights of fonts are not consistent and you have to nudge that for some font combinations. The list goes on.
CSS selectors fit this pattern pretty well: General rules have simple selectors, hence a low specificity.
The special cases usually have more complex selectors, hence a high specificity, and overwrite the general rules.
Actually, I never had to think about specificity. It just worked for me. Specific rules overwrite general ones. How else should it be?
But with CSS nesting this no longer worked for me.
I used nesting to document the relevant HTML structure while writing the general rules.
This makes the interplay of multiple elements more obvious (e.g. to configure layout models like grid, flexbox, positioning, β¦).
But all those nested selectors compounded their specificity and started to overwrite the rules for the special cases.
All in all I think I just have to adjust my mental model and how I write stylesheets. But it simply surprised me.
Combining the rules-based nature of CSS with the block-based structure of nesting led to unexpected complexity for me.
Maybe I have to look into CSS layers, but that would only make it more complex.
Another concept to juggle around and a more complex mental model.
Well, something to experiment with in the next few projects.
Static page generation ¶
This was something I wanted to play around with for a long time. I saw it in my apprenticeship back in 2003 and used it myself back in ~2006 (in Ruby with ERB).
But back then this wasn't trendy and I was stupid, so I stopped doing it. What should I say, this was my "frameworks are awesome" phase.
Anyway, the plan this time was to generate static pages for the entire website.
If I write a new post or someone adds a comment, just generate a new static page and reload.
Simple concept and if something breaks you can always use the static pages as a read-only version of the website to fall back on.
The only difference to a "pure" static page generator is that there's still some code on the server that regenerates pages on demand.
I could have gone with pure static pages and done the comments via JavaScript and some extra thing that takes care of them.
But I wanted to keep it simple (and static) and wanted to keep the comments on my own server. No point in spewing user data around unnecessarily.
At first I thought about writing it in Ruby and using ERB again. But the packages and dependencies required to parse Markdown and do syntax highlighting discouraged me from that (more on syntax highlighting in a bit).
Well, PHP has a builtin template syntax and using that to build a template system is about 5 lines of code.
Something to render static pages is about 25 lines.
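Just to illustrate the idea, here's a minimal sketch of how such a template system can look (the file names and variables are made up for this post, it's not the actual code):
<?php
// Render a PHP template file into a string: the template just prints HTML,
// output buffering captures it, extract() provides the variables.
function render_template(string $template_file, array $variables): string
{
    extract($variables);
    ob_start();
    include $template_file;
    return ob_get_clean();
}

// Generating a static page is then just writing the result into the web root.
$html = render_template('templates/post.php', [
    'title'   => 'Hello static world',
    'content' => '<p>…</p>',
]);
file_put_contents('public/hello-static-world.html', $html);
The template itself is just a PHP file that prints HTML and uses the short echo syntax for the placeholders. And the on-demand part boils down to calling the same render function again whenever a post changes or a comment comes in.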
Also there's Parsedown, a single-file library with no dependencies. Ok, two files if you count ParsedownExtra.
Still, a lot less complexity to get acquainted with and check.
So PHP it is this time.
Strange language and library selection ¶
That approach to language and library selection might seem strange to you. After all, libraries are there to make development easy.
But to put it bluntly: If I use a library, I execute code from someone else.
Does this code contain a crypto miner? I don't know, I have to check.
Does it exfiltrate data my users entrusted to me? I don't know, I have to check.
Will it join a botnet and make my server into a zombie? I don't know, I have to check.
In a perfect world I could just call a library function in its own little sandbox.
Then it could only access the data I gave to it and give me back the result.
Just like you could do on the old Cambridge CAP computer from 1970.
But alas, we don't live in that world and there seems little interest to get there.
Which annoys the hell out of me btw., because it would be cheap by today's hardware standards.
So I have to find another way to handle the trust users implicitly put into software while still working with 3rd party code (because I don't want to reinvent the wheel).
Some projects are under enough public scrutiny to trust them. Stuff like the Linux kernel, the PHP or Ruby interpreters, compilers, etc.
But most libraries are not.
The only way I found for my own projects is to use simple libraries and check manually.
I know, very high-tech. I hope we'll arrive in the 1970s before I retire, but honestly I've lost hope about that. I did say this situation annoys the hell out of me for a reason. 🙂
But that is one of the reasons why I avoid complexity like the plague.
Because if a library needs 50 classes and who knows how many methods to solve a simple problem, checking that code quickly becomes impractical.
But simple libraries, those I can check.
At least doing all that comes with fringe benefits.
Knowing details about a library makes it easier to work with and extend.
And when you find bugs in the libraries you have an easier time fixing them.
Funnily enough this was the main reason why I started to read library code years and years ago.
Back then I found library bugs in pretty much every project I did. Usually more than one.
Or the documentation was incomplete and I had to read the code to figure out how to use the API.
This got better the more I focused on simple libraries, but sometimes you have to use complex libraries like ffmpeg or x265.
Then software supply chain attacks became a thing and that put a much more serious spin on things.
And here we are. 🙂
Sorry about that ad-hoc rant. I put a heading over it to make it look like I planned to write about it, but I didn't.
Anyway, moving on.
What to do with tags ¶
Actually I wanted to remove them.
I don't know about you, but I never found it very useful to see a list of posts that have a given tag.
That's what the old page and most blogs do when you click on a tag.
Posts similar to the brute-force substring search post.
I get the impression they're meant to be used like categories.
There you put each post into just one category and have a nice list of mutually exclusive categories.
Unfortunately the real world is complex and posts usually touch multiple topics, meaning they don't fit nicely into just one category.
Instead what I usually want is: Are there similar posts to this one? And maybe explore a little.
But then I had a strange thought: What if I just build what I want?
Yeah, sometimes my mind works in strange ways.
Just show posts ranked by how many tags they have in common with a given post.
Then throw in the old tag cloud to easily explore tag combinations.
That was what first came to mind and it worked surprisingly well (see the screenshot or tags page).
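In case you're curious, the ranking really is as simple as it sounds. A small sketch of the idea (the data layout here is made up for illustration, it's not the actual code):
<?php
// Rank all posts by how many tags they share with the current post.
// $all_posts maps a post path to its list of tags.
function similar_posts(array $current_tags, array $all_posts): array
{
    $scored = [];
    foreach ($all_posts as $path => $tags) {
        $shared = count(array_intersect($current_tags, $tags));
        if ($shared > 0)
            $scored[$path] = $shared;
    }
    arsort($scored);  // posts with the most shared tags first
    return $scored;
}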
I guess it depends a lot on how you use tags. So this won't work for everyone.
But I had all those tagged posts lying around and it seemed like a waste to throw that away.
Markdown and syntax highlighting ¶
This was an odd side quest.
Before implementing a project in earnest I usually do little isolated experiments to check out all critical parts.
Markdown processing is one of those parts for this project.
Parsedown is a reasonably simple library that's easy to extend. But it doesn't do syntax highlighting, and this time around I wanted to do that during static page generation.
I found a few libraries that combine Parsedown with various syntax highlighting libraries, but while looking them over they all seemed way too complex for what I needed (mostly meaning the syntax highlighters).
With syntax highlighting there are two ends of the spectrum:
- Just color some parts of the source code to make it pretty and to provide recognizable visual patterns.
- Properly parse the code with the programming language's grammar.
If you're building an IDE and developers use your syntax highlighting for direct feedback you probably want to be closer to number 2.
But this can get quite complicated and requires a lot of code.
For a blog where we just want to make source code look pretty? Pretty much number 1. And with regular expressions we really don't need much complexity / code to do that.
To my surprise most syntax highlighting libraries I looked at were leaning towards complex parsing. And hence had large and complex code bases. Not what I need for this project.
Other libraries were mostly concerned with smashing the logic into pieces and squirreling them away into quite a few classes. Not what I want for this project.
Wrapping parts of a string into <span> elements and coloring them isn't that complex of a problem.
So after a day or two of searching, reading code and experimenting, I gave up.
Just to make this clear: All libraries I looked at worked. I just wasn't happy with how much complexity / code they pulled in to solve my relatively simple problem.
You can probably guess the rest of the story. I wrote a small syntax highlighting function myself.
A regex to match interesting parts of a given programming language, each part as a named pattern.
Then a PHP function that applies that regex to the given source code and wraps each found named pattern into a <span>.
With the name of the pattern becoming the class name, more or less.
Code:
[section name]
name=value
Regex:
(
^ (?<name> \w+ ) = (?<value> .+ )
| ^ (?<section> \[ [^]]* \] )
)xm
Result:
<span class=section>[section name]</span>
<span class=name>name</span>=<span class=value>value</span>
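And the PHP side, as a rough sketch of the approach (my own reconstruction for this post, not the code from the repo; it assumes PHP 7.4+ for the flags parameter of preg_replace_callback):
<?php
// Escape the source once, then let one language regex with named groups wrap
// every captured group in a <span> whose class is the group name.
function highlight(string $code, string $language_pattern): string
{
    $escaped = htmlspecialchars($code, ENT_QUOTES);
    $count = 0;  // unused, only needed to reach the flags parameter
    return preg_replace_callback($language_pattern, function (array $match): string {
        [$full_text, $full_offset] = $match[0];
        $result = '';
        $pos = $full_offset;
        foreach ($match as $name => [$text, $offset]) {
            // Skip numeric groups and named groups that didn't capture anything.
            if (!is_string($name) || $text === '' || $offset < 0)
                continue;
            // Copy the unhighlighted text before this group, then wrap the group.
            $result .= substr($full_text, $pos - $full_offset, $offset - $pos);
            $result .= "<span class=\"$name\">$text</span>";
            $pos = $offset + strlen($text);
        }
        return $result . substr($full_text, $pos - $full_offset);
    }, $escaped, -1, $count, PREG_OFFSET_CAPTURE);
}
In this sketch the escaping happens once up front, so the language regexes have to match &lt; and &amp; instead of < and & — a quirk that keeps the wrapping logic trivial. Called with the INI regex from above, it produces the result shown (modulo quotes around the class names).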
I tried a few different variations and (of course) profiled a lot, but in the end it boiled down to ~40 lines of PHP code.
And a 20 - 30 line regex for each supported programming language (sometimes split into two).
I'll spare you the details; if you're interested, take a look at the GitHub repo.
Anyway, I spent about a day on the PHP side of things. Then one or two days (don't remember) writing the language definitions.
That was actually something I quite enjoyed.
I learned some nifty little details of some languages and I'm still amazed at how many useful little things Ruby has (just look at those percent literals!)
The whole thing is only meant for code on my website, but that's still GLSL, Ruby, Bash, C, HTML, CSS, JS, PHP, SQL and Java.
But it only needs to be pretty, not correct, and this makes writing something like that a lot faster (and enjoyable, especially in regex101).
Doing it myself also came in handy for some not-so-popular cases I used in some articles, for example Java's text blocks.
A lot of the highlighters I tried didn't process them, but I could just implement them and be done with it.
I also played around with some rather unusual highlighting ideas, e.g. highlighting GLSL vector swizzle patterns like blend_weights.xxyy.
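That pattern is tiny, by the way. My guess at how it could look (not the actual definition from the repo), matching 1 to 4 swizzle components right after a dot:
Regex:
(
\. (?<swizzle> [xyzw]{1,4} | [rgba]{1,4} | [stpq]{1,4} ) \b
)x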
That was a fun little (unexpected) detour.
Other stuff ¶
I added reactions to posts and comments. Mostly because I wanted to give readers a quicker form of feedback.
It was surprisingly difficult to find a good set of emojis with filled and outline variants.
In the end I went with a small set in FontAwesome. Not perfect, but gets the job done.
Comments also got extended into a tree, mostly because I found that useful.
Reply chains without branches are flattened when displayed. This avoids those annoying reply cascades for simple conversations.
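The display logic for that is pretty small. A sketch of the idea with an assumed data layout (each comment being an array with a 'text' and a 'replies' key; not the actual code):
<?php
// Render comments as an indented tree, but keep linear reply chains flat:
// a comment with exactly one reply shows that reply at the same depth.
function render_comments(array $comments, int $depth = 0): void
{
    foreach ($comments as $comment) {
        echo str_repeat('  ', $depth), $comment['text'], "\n";
        // Follow the chain as long as there's no branching.
        while (count($comment['replies']) === 1) {
            $comment = $comment['replies'][0];
            echo str_repeat('  ', $depth), $comment['text'], "\n";
        }
        // Two or more replies are a real branch, so indent one level deeper.
        render_comments($comment['replies'], $depth + 1);
    }
}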
Both are a bit of overkill for this blog, but I wanted to play around with them. So I did. 🙂
Anyway, I better stop here or I'll never get this post done. If anyone is interested in more details, feel free to ask.
In the end I was quite happy that Parsedown itself was by far the most complex part of the entire website.
If you want some very rough numbers:
Common PHP code for the website is ~320 LoC,
pages and templates that render to HTML or newsfeeds are ~570 LoC and then there are ~100 LoC that rerender pages on-demand e.g. when someone posts a comment.
Syntax highlighting is ~300 LoC (including all language definitions).
Parsedown and ParsedownExtra come to about ~2100 LoC.
Measured highly professionally by scrolling through the code and eyeballing how much of it is comments and how much is real code. So don't take them too seriously. 🙂
We'll see which of those ideas will survive the next rewrite. If you're still with me, thanks for dropping by and reading all the way to the end. 🙂