This is a gamedev post.
Don't do this when converting analog thumbstick input to discrete up/down/left/right (d-pad/arrow-key) values:

Do this instead:
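(A purely hypothetical sketch of the usual shape of this advice — not necessarily the exact code the post showed: if you threshold each axis independently you get lopsided zones; instead, apply a radial deadzone and snap to whichever cardinal the stick points closest to.)

```rust
// Hypothetical sketch: convert an analog stick position to a d-pad
// direction. Assumes x/y are in [-1, 1] with y pointing up.
#[derive(Debug, PartialEq)]
enum Dir {
    None,
    Up,
    Down,
    Left,
    Right,
}

fn stick_to_dpad(x: f32, y: f32) -> Dir {
    // Radial deadzone: ignore small deflections in ANY direction,
    // instead of thresholding x and y separately.
    if (x * x + y * y).sqrt() < 0.5 {
        return Dir::None;
    }
    // Snap to the nearest cardinal: 45-degree sectors, so every
    // direction gets an equally sized slice of the circle.
    if x.abs() > y.abs() {
        if x > 0.0 { Dir::Right } else { Dir::Left }
    } else if y > 0.0 {
        Dir::Up
    } else {
        Dir::Down
    }
}
```

(A real implementation would also want some hysteresis so the output doesn't flicker right at the sector boundaries, but that's the gist.)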

So if you have access to an Apache2 server that allows .htaccess overrides and has mod_actions turned on, you can make a single CGI script take over the whole URL hierarchy for an entire site. (Or just for a subtree of it, although the app would need to be aware and ready for that.)
In short, you make a new directory called __internal (or something) at the top of your site, and put your CGI executable in there with a filename of my-app.cgi (or something). Then you make TWO .htaccess files.
The root-level .htaccess disables special handling for bare directories, then tells the server to unconditionally use your CGI script to handle every URL pointing into your site, without consideration for whether a path would otherwise aim at a file on disk.
# Root-level .htaccess file
Options -Indexes
DirectoryIndex disabled
Action my-app "/__internal/my-app.cgi" virtual
SetHandler my-app
# "on" is the default, but still
AcceptPathInfo on
That CGI path in the Action directive needs to be a URL path pointing at somewhere reachable on your site, rather than a path on disk. That's kind of odd, and it hung me up for a while when I was trying to get this working! But the upshot is, we now need a second .htaccess in that __internal directory that undoes everything we did in the root-level .htaccess, so that the server can actually resolve that script. (Otherwise you end up in a recursive loop and the site doesn't work.)
# .htaccess file in /__internal
Options +ExecCGI -Indexes
SetHandler None
AddHandler cgi-script .cgi
Ta-daaaa! Now your program can handle all the top-level routing for your site, using CGI vars like REQUEST_URI to reconstruct the original request and do your routing. (And don't worry about needing to keep __internal private or anything, it just needed some kind of weird name to avoid trampling on any of your app's real URL paths.)
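To make that concrete, here's a minimal sketch of what a my-app.cgi built this way could do. The routing table is purely hypothetical, and it speaks plain CGI-over-stdout rather than whatever framework a real app would use:

```rust
use std::env;

// Hypothetical routing table; a real app would dispatch into its framework.
fn route(path: &str) -> String {
    match path {
        "/" => "home page".to_string(),
        p if p.starts_with("/mark/") => format!("bookmark handler for {p}"),
        p => format!("404 for {p}"),
    }
}

fn main() {
    // Apache hands the script the ORIGINAL request URL via REQUEST_URI,
    // query string and all, even though the Action directive routed
    // everything through /__internal/my-app.cgi.
    let uri = env::var("REQUEST_URI").unwrap_or_else(|_| "/".to_string());
    let path = uri.split('?').next().unwrap_or("/");

    // A CGI response is headers, then a blank line, then the body.
    println!("Content-Type: text/plain");
    println!();
    println!("{}", route(path));
}
```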
As mentioned previously, this year I switched to hosting eardogger.com in what's either a highly unconventional environment or an unusually conventional environment, depending on your perspective. This has mostly gone completely fine! However, I did have one incident several weeks ago, and it was a funny one.
I was out reading webcomics on my phone, and got creepy 500 errors on Eardogger; when I got home, the logs showed a Resource temporarily unavailable error when trying to access the database.
⁉️ (Metal Gear Solid guard alert noise)
All right, first off: That database isn't a remote server; it's a file on the local disk. If THAT's "unavailable," something's very wrong. A quick web search indicated that error comes from the operating system itself, not anything in my tech stack (like sqlite maybe). At some point, I visited a page on the site, then tried to run a command in my SSH session:
$ ls
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable
Hahahaha holy shit.
Ok anyway, long story short: I was hitting my user's process limit and being prevented from spawning new processes or threads. The 500s were happening when concurrent DB reads would have made the reader pool spawn a new thread, and it hit the wall instead.
A given user may only run a certain number of processes at once on this server. Eardogger is at maximum a single process instance, so I thought I was fine. But then my web host upgraded my server's OS, which changed the process limit accounting to also include sub-process threads. And Eardogger IS multi-threaded.
How multi-threaded, exactly? Well, I was using the Tokio multi-threaded runtime with default configuration. And it turns out the default behavior is to immediately spawn one worker thread per logical CPU core...
...on what turns out to be a 128-core web server. The process limit (not advertised, but support will divulge if you ask) is 25.
I wouldn't want that even if it WAS allowed!! This app has like three goddamn users! I made my thread pool configurable and set it to single digits, and that immediately banished the errors and the shell lockups. 🌈 As a bonus, it also cut the app's cold startup time from "barely perceptible" to "legit gone" — apparently spawning more than a hundred threads on startup takes a noticeable amount of time, but Rust is so fast in general that it covered most of that sin and I wasn't immediately suspicious.
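The fix amounts to building the runtime by hand instead of taking the default. A sketch of the Tokio side, assuming the standard runtime Builder API; the worker count here is illustrative, not the app's real setting:

```rust
// Sketch: capping Tokio's worker-thread count instead of accepting the
// default of one worker per logical CPU core (128 on that server!).
fn main() {
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4) // single digits: plenty for a three-user app
        .enable_all() // timer + I/O drivers
        .build()
        .expect("couldn't build Tokio runtime");

    runtime.block_on(async {
        // ...start the web server here...
    });
}
```

(If you're using the attribute macro instead, `#[tokio::main(worker_threads = 4)]` expresses the same thing.)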
Lessons learned:
top and ps on Linux don't list threads by default; you have to pass an extra flag (ps -L, or the H key in top) to see them.

EDIT: But actually, in most modern deployment scenarios, your server's CPU and memory resources really are much closer to the scene on your laptop, since the standard practice is to slice up computer resources into tiny single-purpose shards via containers or VMs. Like, Ruth was showing me something from her work that we would both consider "fairly extreme" in terms of resource allocation, and I was like "oh yeah, that's a whole lot of laptop... but it ain't a web server, you know?" So yes, this whole problem did in fact stem directly from my runtime environment being weirdly atavistic, and most people are probably fine with Tokio's default behavior.
All right, I finally dotted the last i (robust data import from v1) and crossed the last t (automated database backups), so last night I finally cut over Eardogger.com to the rewritten version, which is cheerfully chugging away in shared hosting.
You basically will not notice a difference!! Which was kind of the goal. All your stuff is right where you left it. But for the record, here's what's actually changed:
...I think that's all of it, in terms of anything externally visible. Most of the effort was in learning multiple new toolkits, making all behaviors match my existing test cases (thanks for the tests, Past Nick!), and doing from-scratch implementations of a couple things I was previously using off-the-shelf libraries for (login and session management).
What's the payoff? Well, ask me again in four years. But also, the toolkit I used for all this is pretty rad, and I think it's gonna empower me to do some interesting little projects over the next couple years. For example, I found myself wanting a dead-man's switch of some kind for monitoring my automatic backups, and it occurred to me I could actually build something pretty simple that reports weekly summaries to my RSS reader, using a bunch of the same ingredients.
One thing I'd really like to know more about is how widespread fcgi support actually is, among old-school web hosting providers. I reached for it because I knew it was enabled by default on my host, so it's perfect for my own shit, but I'd want to know the lay of the land better before trying to ship reusable software that exploits it. Like, I had an idea to try and make a lightweight webcomic CMS, but that's only interesting if people can actually RUN it, you know? Anyway, this sounds like an utterly frustrating research project that I'm likely to put off as long as possible.
I run Eardogger.com, the web's favorite unpopular bookmarking tool for binge-reading webcomics. It's about four years old, and it's currently built on Typescript, Node.js, the Express web framework (with some add-ons), and PostgreSQL.
I've been rewriting the whole thing in a new Pile of Stuff — Rust, the Axum web framework (and the Tokio + Tower + Hyper ecosystem it's built on), Sqlite, and my experimental FastCGI/HTTP reverse-compatibility layer. (Yup: this was the secret endgame for that whole project.)
All features are complete. I've got the rewrite deployed in realistic hosting to do some soak testing, and it's working amazingly well. Most of the remaining to-dos are either rewriting my integration tests or ancillary stuff like backup jobs, release scripting, and data import scripting.
( yeah dude )
Haha yeah, it totally was.
Axum is really nice to work with! It combines some of my favorite parts of Express (Eardogger's old framework) and Bevy (the game engine I've been working with on another project). And the way it's able to give strong type guarantees to handlers (despite being flexible and comfy to work with) means the code can inherently represent a bunch of assumptions that otherwise I would have to remember somehow; that should be really nice for future maintenance after I haven't touched it in two years.
I'm doing the database stuff in sqlx, which is mostly pretty great. I'm using the compile-time-checked query macros; it's true, they add some significant extra build weirdness, but once again, it's all about the maintenance — those automatic checks let me skip a huge amount of otherwise-manual testing. During development, it caught a ton of bugs and schema problems before they could even take root.
For templates I'm using minijinja, and it's all right. I've never met a template language I love yet, so it's par for the course. I investigated a lot of alternatives, and I think the design constraints of this one seemed easiest to live with.
And using busriders makes it feel like I'm somehow beating the system. 😛 I love it.
Here's the code for the rewrite, if anyone's interested.
Okay, so I know this is going to shock you, but I've been working on something arcane and impractical.
I made a wrapper for normal HTTP-speaking Rust web apps so their traffic can take an extra round trip through a totally different protocol, before being translated back into HTTP for the outside world. Specifically, I plan to serve Axum-based apps via FastCGI, a protocol that went out of fashion in the mid '00s.
This probably sounds dubiously useful, but, man, listen,
( Or don't! Contents: historical background and some technical exegesis. )
Here's a little 3m demo I recorded when I got my initial proof-of-concept working. If you know anything about deploying a self-hosted app in the 2020s, it will shock and scandalize you.
And, here's the code itself, including a demo project:
I found a FastCGI server library for Rust (I'm SO curious about why the author made this, but yeah it's very precisely what I needed) and put together a server loop that translates between the normal HTTP that an inner app understands and the FastCGI protocol that Apache is willing to accept. As long as the binary you build knows how to start up in the weird environment that classic FastCGI provides, you can just install it, drop in an .htaccess file, and wander off to go do something else.
At the moment, it's Axum-specific and has to be built into your app as an alternate server mode. In theory it ought to be possible to make a fully generalized wrapper that can spawn any program as a child process and proxy real-actual HTTP to it, but that's more work than I want to do on this; at the moment, this should work fine for me.
Here's another interesting point about apps that run in this mode: anyone else can install them on their shared hosting just as easily, if I give them a build and a README.
In the last few years, there's been a medium amount of big talk about how we need to re-wild the interwebs; bring back some spirit of curiosity and generosity and chaos that we thought we perceived in the '90s and the '00s.
In a recent thread that rolled across my Mastodon feed (wish I could remember and link it, but it took a while to percolate before I took it to heart), someone pointed out the short version of what I described above — that hosting has gotten better for pros at the expense of amateurs — and then said: if we think there's a connection between self-hosting and re-wilding the web, then we're going to have to reverse that, because getting out of a tech-dominated world of walled gardens is going to require empowering the type of normal users who could kinda-sorta keep a Wordpress installation afloat back in the day but who have no hope of, say, sysadmining a Mastodon instance.
I've been thinking about that in the background, a bit.
Julia Evans’ StrangeLoop 2023 keynote was about digging into the different reasons a tool can be hard to learn, and it was a real good talk! At the end, as a tossed-off addendum to a conclusion about continuing to learn things, she said “I still don’t know why Git is hard.”
I happen to have thoughts about that one!
I have had to teach a fair number of (generally clever and persistent) people how to get around in Git or how to use its more advanced features. While doing so, I have often failed to get the basics to stick, which is incredibly aggravating to someone who prides themselves on explaining things. Sometimes this devolves into me talking about wave/particle duality as a crucial metaphor for getting through a rebase intact, and everyone in the room looking at me like my second head just tried to convert them to Gnosticism.
So, I’ve spent some time thinking about this before this weekend.
I might not have actually talked about this on my journal at all, but, I've spent a bunch of free time in the last couple years learning some video game development skills, working up to building some fun stuff I can share with friends and strangers.
I will tell you true, learning this stuff is a SLOW-MOVING PROJECT. I know how to program, but the types of problems are so different from what I'm used to, there's so much context and knowledge I'm scrambling to catch up with, and the array of adjacent skills needed to be a truly middlin' solo gamedev feels positively infinite. Fuck!! Still, you need some kind of creative outlet, and this is the one that my brain is most willing to saddle up onto at this point in time, so boldly we sally forth. I've been learning a lot, at least.
Anyway, this post isn't a real summary of what I've been up to, it's just an extended tangent about some computer nerd shit.
( Long post oops )
I just finished doing a major update to Eardogger.com, my simple little bookmarks-for-webcomics app! 🙌🏼
What’s changed? Uhhhhh almost exactly nothing.
Yep: pretty much all the work was internal updates with very little visible effect. And boy, there were a lot of them. But hey: Eardogger basically exists in the first place for me to tinker with and learn things. I made it because I wanted it to exist, but I made it the way I did because I wanted to pick up some new skills.
So here’s what I got to monkey with this time:
tl;dr: If you interactively rebase to clean up a PR, GitHub displays all your commits in the wrong order and wastes everybody's goddamn time. Fix it like this:
git rebase -i <PARENT SHA> -x 'sleep 1 && git commit --amend --no-edit --date=now'
( Explanation: )
Guess what, I launched a web app last week! Almost two weeks to the day after I realized I was probably capable of doing it, which, huh, wow. Anyway, I sorta know node.js and SQL now.
SO, introducing Eardogger.com. It's a bookmarking service for gradually reading through serial archives, like long-running webcomics or other online stories. This isn't a wholly original idea, but Eardogger has a singular advantage over prior art: it's incredibly fucking stupid. (I know that sounds like self-deprecation but it's actually an outrageous brag.) It gives you a context-sensitive pause/resume button that works across all your devices, and then it basically stays the hell out of your way.
Free to use, and sign-ups are open; take it for a spin and start catching up on some stories you've been meaning to get to.
Well, and if I'm gonna shill a thing for reading webcomics, I should probably also tell people what's good. Here's eleven things I think are rad which have at least medium-hogwild backlogs.
It probably goes without saying, but this is some classic ADHD technology. There's a handful of webcomics I've been meaning to catch up on for years, and I never got around to it because manually keeping track of my place was too hard.
Like, it literally isn't!! Normal people manage just fine. (Well, either that or they don't read webcomics in the first place, but what kinda life is that.) But if I'm aware that something is pointless busywork, trying to make myself do it is basically the fucking apocalypse, such that learning two new programming environments over a couple weeks honestly seems easier. IDEK.
Regardless, it's been a fun project. I'm planning to eventually add a legit browser extension and maybe an iOS share sheet extension, but the existing bookmarklet interface is working well enough that tbh I'm more interested in reading some comics right now.
Never mind, looks like I got sign-ups working, so I guess go ahead and open the floodgates.
I think I’ve finally wrapped my head around promise-based async logic in JavaScript. It took a while!
To celebrate, here’s the analogy I wish someone had given me ages ago:
A promise is a black hole. Once a value crosses the event horizon into a promise, it can never come back out again to interact with synchronous code. It’s in another universe now! You can send values from synchronous code in there to interact with the promise universe, but they’ll also have to stay there once they cross the event horizon. And it turns out that this is all fine and doesn’t limit you very much at all; it’s just that when you’re in the promise universe, any logic has to be of the form “do this once this condition is met,” never just plain “do this.” Different physics inside the black hole.
Anyway, once I started thinking of it like that, it all seemed perfectly reasonable. It also clarified some things that seemed arbitrary before, like why you can’t call await unless you’re in an async function. (It’s because await is just a clever balancing of terms so you can think synchronously while still obeying async physics.) I gather there ARE other languages that have promises where you can be like “no really, block the main thread and wait for this to resolve,” but since JS’s whole design philosophy treats blocking as such an apocalyptic event, it makes sense that you can never come back from async.
Well, I've been working a bit more on Eardogger (my app for movable bookmarks, see previous), and it's coming along nicely. I think! Hard to say tbh, lol.
I'm working in Node.js with Express, I'm doing all the database stuff fairly "raw," and I'm doing the frontend in "vanilla" JS (but freely using any ES2017 shit I feel like). There's a couple reasons for all those choices, but the big one is that these are all areas where I recently ran up against some kind of wall elsewhere in my professional or personal projects due to my ignorance of the raw basics of How Shit Works.
Anyway, where I'm at right now is:
refresh script to kick the deploy and the web editor.)

fetch() is a much nicer replacement for XMLHttpRequest. Anyway, this app is definitely not supporting IE11.*

* To be fair, I have this problem with like 70% of the Node and general JS ecosystems. I'll spare you my theories about why it's all like that.
OK, so, bookmarks.
Web browsers have had bookmarks for more than 25 years, and we're basically accustomed to how they work. Nowadays they sync across your devices, and there are some oddball extended implementations out there (like Pinboard, or Zotero), but they still mostly just act like good ol' bookmarks.
Except, hold on — do they act like bookmarks? Where are the holes in that metaphor? What is a bookmark?
Out in wood-space,* there are multiple kinds of bookmarks. Here are the ones I can think of:
Web bookmarks are absolute bookmarks, and IMO they're even better at that task than their namesake. (Well, except for the fact that the web is shifting and impermanent and links eventually rot. But never mind that for now.) And hypertext bookmarks are just a crap workaround for a lack of hyperlinks (ugh, endnotes), so the web was always one-up on that.
But if someone refers to a bookmark in a wood-space context, 90% of the time they're talking about a cursor, and browsers are crap at cursors. The only real native equivalent is when you leave a tab open for months. You can sort of re-implement cursors with absolute bookmarks, but what you're really doing is taking out a new post-it strip to mark your current spot and then, as a separate operation, yanking the last one and throwing it away, which just feels like way too much effort when you know in your heart that you just want to move your cursor. It's all enough of a pain in the ass that it's deterred me from catching up on a bunch of webcomics that I legitimately want to read.
I only know of a few efforts to address this over the years. A few serials have rolled their own cookie-based "mark my spot" features, but those have generally been local-only, which maybe made sense when you had exactly one computer and is now mostly useless. And then there was ComicRocket/Serialist: that was a respectable attempt at a general solution, but it relied on a cached database of post order for each supported site, and now that whole edifice has mostly rotted out and it's not really usable anymore. IMO their goal of an integrated back/forward nav was over-ambitious, and sabotaged the really crucial cursor part of the project.
After thinking about this for a few days, I'm convinced that it's possible to solve about 70% of the problem (as I see it) with an honestly very stupid webservice and a bookmarklet. So in my downtime, I'm dinking around on glitch.com to see if I can get a prototype up and running.
* It's not meat-space because we mostly stopped making books out of meat. (Vellum may have its good points, but it's so expensive and heavy.)
I recently did a bit of coding on Dreamwidth (the results of which should hopefully be going live... soonish). A bunch of what I was doing involved a lot of CSS, and there was a bit of a feedback loop between that and some CSS stuff I was doing at work, and long story short I guess I’m somehow a CSS witch now. Or at least I look like one.
The weird thing about CSS is that most highly technical people just fucking recoil from it! It’s bizarre to watch! Almost to a one, the programmers I know (who greatly surpass my skill) will immediately say “Ugh, I don’t understand CSS at all, too complicated.” So my new hobby is trying to understand what THAT’S all about.
My current theory goes something like:
So basically if you know anything about real programming and you do anything other than digest the entirety of the fundamentals first, CSS seems designed to cruelly troll your ass. It actively punishes the “jump in, change existing code, observe effects, find parallels to similar systems” loop that most experienced hackers use as a shortcut into unfamiliar languages or frameworks, simply because the “similar systems” don’t really exist.
(And for people who don’t have that aversion, like momijizukamori — when did you start in on CSS? Was it before you internalized the under-structures of any other programming languages?)
Idk, should I do a “CSS Fundamentals for Otherwise Competent Coders” zine or something? I s2g it’s not as hard as all that once you ✨free ur mind✨ or whatever.
The wild compounding blowout one! That’s the one that looks like this:

You can still see that live today, but do it fast because it dies on the next code push, woot.
I have a sort of hit-list for what I think are DW’s five worst pieces of jank on mobile:
This PR fixed #2 and one of the two worst instances of #3.
I have a PR in for #4 that should hopefully be uncontroversial. (The dangerous part of that actually got merged already bc it fixed an existing bug, ha. I think at this point it needs someone to test it on Browsers That Suck to make sure it’s not too bad a regression.) I also have another PR in for #1 (and the other bad bit of #3) — potentially more controversial, but it re-uses an existing design that people are hopefully comfortable with by now. #5 needs more research; there’s two avenues to fix it and the good one might be impractical for now, idk yet.
Beyond the hit-list, things get fuzzier. The site-scheme entry pages are probably the next worst, but they aren’t next in order because they get easier to deal with if some other stuff happens first. The default journal styles are mostly kind of crap on mobile in some very low-hanging fruit ways. I have some thoughts about comment nesting. We’ll see.