but of course, books

Aug. 6th, 2025 02:19 pm
jazzfish: Owly, reading (Owly)
[personal profile] jazzfish
Oh hey, I meant to write this all up last week. Well. It's more interesting this week.

What are you reading now?

The Count of Monte Cristo, translated by Robin Buss. Someone, presumably on Mastodon, recommended this translation specifically a few years ago, and I made a note of that but not of why. An internet search reveals that it's the only translation of the complete book; all the others work from a bowdlerized abridgement from 1846.

It's great, of course. The Three Musketeers is Dumas's most famous novel, but I would bet money that there have been more adaptations and retellings of Monte Cristo. It's a universal story. Heck, The Crow is a Monte Cristo retelling.

I read it once in the late nineties and enjoyed it. Sometime in childhood I read the chapter detailing Edmond's escape from the Chateau d'If, where he disguises himself as the dead abbé to get the jailers to carry him outside. I froze in delicious terror at the absolutely chilling line "The sea is the graveyard of the Chateau d'If." Unclear why I didn't seek out the rest of the book at the time, when that one chapter was so great.

What did you just finish reading?

Emily Tesh's latest, The Incandescent, about a teacher at a contemporary Magic School. It's spectacular. It's not quite as vehement as Naomi Novik's Scholomance trilogy but it still gets in some solid criticism of The System, and I think the worldbuilding hangs together a bit better than Scholomance's. It shares with Scholomance a feeling that the latter third is suddenly very different, but in Incandescent that's more obvious and with a very very good reason. Highly recommended. I suspect I shall reread soonish so I can figure out whether I think it all hangs together metaphorically as well as ... whatever the opposite of metaphorically is, in-the-world-of-the-book.

(I have a theory, which is by no means an original theory, that if a writer does not consciously direct her themes and metaphors they will tend to reinforce the prevailing social order of the time she is writing in, which may or may not be a desired result.)

Before that, Elizabeth Bear's Lotus Kingdoms trilogy. These are ... fine? The characters are great (I don't entirely believe Chaeri's heel-turn, but that might just be me), and while the first book spends a lot of time moving everyone into position, once they're there the trilogy doesn't drag. I think this just caught me at a moment when I am spectacularly uninterested in powerful people complaining about how stressful it is to be powerful, and there is a lot of that. But: if you're looking for some colourful secondary-world fantasy, these are absolutely that, and excellent examples of it.

What do you think you'll read next?

I'm nine chapters into the 117 of Monte Cristo. "Next" seems like a very long way away. Having said that, I'm carrying around a paperback of Morgan Locke (Laura Jo Mixon)'s 2011 shoulda-been-award-winning SF novel Up Against It in case my devices fail me, so hopefully not that but maybe.

Forgot a new thing!

Aug. 5th, 2025 02:00 pm
terriko: (Default)
[personal profile] terriko
This is crossposted from Curiousity.ca, my personal maker blog. If you want to link to this post, please use the original link since the formatting there is usually better.


A book nook built from a kit. It's a diorama showing a train going over a river with shops and cherry blossoms, and it has been lit up with LEDs placed inside the diorama.




In my previous post about fiber goals I’d claimed not to have done anything new in July, but I forgot I made a book nook! It’s not apparent from the photo but it’s sized to fit on a bookshelf.





This was a kit I bought online a year or two ago. I did decide that some of the pieces needed extra glue because the friction fits were not sufficient. But other than that, it was pretty simple and relaxing to put together over a couple of days.





Not going to be a new hobby since this is the only kit I bought for myself, but it was nice to do something different!





[personal profile] mjg59
There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.
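In C terms this is nothing more exotic than writing through a pointer - the address determines whether the store ends up in RAM or in a chip register. A conceptual sketch (the function name is mine; 0xDFF000 and the COLOR00 offset are the standard Amiga custom chip addresses, and on the real setup the store has to be routed over the pistorm bridge rather than being a plain local dereference):

```c
#include <stdint.h>

/* Amiga custom chip registers live at 0xDFF000 in the 68000 address map.
 * COLOR00 (the background colour register) is at offset 0x180. */
#define CUSTOM_BASE 0xDFF000u
#define COLOR00     0x180u

static inline void chip_write16(uint32_t offset, uint16_t value)
{
    volatile uint16_t *reg = (volatile uint16_t *)(uintptr_t)(CUSTOM_BASE + offset);
    *reg = value;   /* same kind of bus cycle as a write to RAM, different decode */
}

int main(void)
{
    chip_write16(COLOR00, 0x0F00);  /* 12-bit RGB: full red background */
    return 0;
}
```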

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.
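For comparison, this is what made that VGA mode so friendly for Doom: one byte per pixel in a single linear region, so plotting a pixel is one store (a sketch using the standard 320x200 geometry; the helper name is mine):

```c
#include <stdint.h>

/* VGA mode 13h: 320x200, one byte per pixel, each byte an index into a
 * 256-entry palette, all in one linear framebuffer. */
#define VGA_WIDTH  320
#define VGA_HEIGHT 200

static void put_pixel_chunky(volatile uint8_t *framebuffer, int x, int y,
                             uint8_t colour_index)
{
    framebuffer[y * VGA_WIDTH + x] = colour_index;  /* one store, done */
}
```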

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of associating each pixel with a specific byte of memory, the Amiga used bitplanes. A bitplane is an area of memory that represents the screen, but only holds one bit of the colour depth. If you have a black and white display, you only need one bitplane. If you want to display four colours, you need two. More colours, more bitplanes. And each bitplane is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitplane, update one bit, and write it back, and that's a lot of additional memory accesses. Doom on the Amiga was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted, and then a lot of pushing that data over a fairly slow memory bus to have it displayed.
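To make the cost concrete, here's roughly what setting a single pixel looks like in planar form - one read-modify-write in every bitplane (a sketch with names of my own choosing, assuming bitplanes are plain byte arrays):

```c
#include <stdint.h>

/* Set pixel (x, y) to colour_index on a planar screen: one bit has to be
 * read, modified and written back in every bitplane, so a single pixel
 * costs `depth` memory accesses instead of one. */
static void put_pixel_planar(uint8_t **bitplanes, int depth, int width,
                             int x, int y, uint8_t colour_index)
{
    int byte_offset = y * (width / 8) + (x / 8);
    uint8_t bit = 0x80 >> (x & 7);            /* leftmost pixel is the MSB */

    for (int plane = 0; plane < depth; plane++) {
        if (colour_index & (1 << plane))
            bitplanes[plane][byte_offset] |= bit;
        else
            bitplanes[plane][byte_offset] &= ~bit;
    }
}
```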

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitplanes, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast RAM" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
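A copper list is just pairs of 16-bit words sitting in chip RAM: a MOVE is (register offset, value), a WAIT encodes a beam position, and the conventional end-of-list is a wait for an unreachable position, 0xFFFF,0xFFFE. Here's a sketch of a list that reloads the six bitplane pointers and then parks until the next frame (the helper names are mine; BPL1PTH at offset 0x0E0 is the standard register, and actually uploading the list to chip RAM and pointing COP1LC at it is left out):

```c
#include <stdint.h>

#define BPL1PTH 0x0E0   /* bitplane 1 pointer, high word; pairs continue upward */

/* Append a "MOVE value into register" instruction to a copper list. */
static uint16_t *copper_move(uint16_t *list, uint16_t reg_offset, uint16_t value)
{
    *list++ = reg_offset;   /* even register offset => MOVE */
    *list++ = value;
    return list;
}

/* Build a copper list that reprograms the six bitplane pointers and then
 * waits forever; the copper restarts the list at the next vertical blank. */
static uint16_t *build_copper_list(uint16_t *list, uint32_t bitplane_addr[6])
{
    for (int plane = 0; plane < 6; plane++) {
        uint16_t ptr_high = BPL1PTH + plane * 4;      /* BPLxPTH */
        uint16_t ptr_low  = ptr_high + 2;             /* BPLxPTL */
        list = copper_move(list, ptr_high, bitplane_addr[plane] >> 16);
        list = copper_move(list, ptr_low,  bitplane_addr[plane] & 0xFFFF);
    }
    *list++ = 0xFFFF;       /* WAIT for a beam position that never arrives... */
    *list++ = 0xFFFE;       /* ...so the copper stops here until the frame restarts */
    return list;
}
```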

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga Doom port I could easily find source for. This looks at the 256-colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad-core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, so the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
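The nested loops are about as naive as they sound. Something along these lines, assuming Doom has rendered into a 320x200 chunky buffer whose values have already been remapped to their closest EHB indices (names mine):

```c
#include <stdint.h>
#include <string.h>

#define WIDTH  320
#define HEIGHT 200
#define DEPTH  6

/* Convert a chunky (one byte per pixel) frame into six Amiga bitplanes.
 * This runs on the Pi against Linux-side buffers, so speed barely matters;
 * the result gets copied into chip RAM once the frame is complete. */
static void chunky_to_planar(const uint8_t *chunky, uint8_t *planes[DEPTH])
{
    for (int plane = 0; plane < DEPTH; plane++)
        memset(planes[plane], 0, WIDTH / 8 * HEIGHT);

    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            uint8_t index = chunky[y * WIDTH + x];
            int byte_offset = y * (WIDTH / 8) + (x / 8);
            uint8_t bit = 0x80 >> (x & 7);

            for (int plane = 0; plane < DEPTH; plane++)
                if (index & (1 << plane))
                    planes[plane][byte_offset] |= bit;
        }
    }
}
```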

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there are still glitches when the palette is being updated (because there's only one set of colour registers, and updating the palette is something Doom does rather a lot), so I'm going to need to implement proper synchronisation.
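The double-buffering logic itself is tiny: draw into the back set of bitplanes, then hand the copper the list describing that set and flip which set counts as "back". A sketch (COP1LCH/COP1LCL at 0x080/0x082 are the standard copper list pointer registers; everything else here is my own naming):

```c
#include <stdint.h>

#define CUSTOM_BASE 0xDFF000u
#define COP1LCH     0x080u   /* copper list 1 pointer, high word */
#define COP1LCL     0x082u   /* copper list 1 pointer, low word  */

/* Conceptual register write, as in the earlier sketch. */
static inline void chip_write16(uint32_t offset, uint16_t value)
{
    *(volatile uint16_t *)(uintptr_t)(CUSTOM_BASE + offset) = value;
}

static uint32_t copper_list_addr[2];   /* chip RAM addresses of the two lists */
static int back_buffer;                /* which bitplane set we're drawing into */

/* Call once per Doom frame, after the back buffer's bitplanes are written:
 * hand the copper the list describing the freshly written set, then flip. */
static void flip_buffers(void)
{
    chip_write16(COP1LCH, copper_list_addr[back_buffer] >> 16);
    chip_write16(COP1LCL, copper_list_addr[back_buffer] & 0xFFFF);
    back_buffer ^= 1;                  /* next frame draws into the other set */
}
```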

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlaid" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does; I'm only in this position because I've made poor life choices. But ok, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM

doing things, mostly foodish

Aug. 4th, 2025 05:29 pm
jazzfish: Jazz Fish: beret, sunglasses, saxophone (Default)
[personal profile] jazzfish
When I hit up the dollar store for wax paper for my Ogre gluing, so I wouldn't drip glue on everything, I also picked up a long roll of aluminum foil. For reasons that are unclear to me the grocery store will only sell foil in rolls that are slightly shorter than the short side of a (half-pan) baking sheet.

Normally when I make bacon I do it in the oven, on a baking sheet covered in foil. Normally I have to fold up the edges of the foil manually. Normally some bacon grease leaks out anyway and I have to carefully clean the baking sheet.

This morning I used the long roll of foil, and it covered the entire sheet with overlap on all sides. Near as I can tell no grease leaked through.

It's kind of astounding how having the right tools can improve one's life.



Ogres remaining: one that requires surgery, five more that require colour choice and thought, and three that require both. I'm honestly a little startled that it's almost done. This has been an enjoyable project: it's not so fiddly that I get frustrated at my inability to do fine motor work, and it's producing tangible objects.



This afternoon I decanted the vanilla extract I put up last summer. I'm less optimistic about this. The cinnamon extract I did in the fall was cinnamony enough but also pretty harsh, due I assume to using cheap vodka. Half the vanilla is likewise cheap vodka (though a different kind), so maybe that will turn out alright; the other half is spiced rum, and I have no idea how well that will do. At least it's only a dozen small bottles, instead of the twenty-odd of cinnamon that I need to do something with.

French toast tomorrow morning should give me some indication of quality, at least.

I also spent an hour or so scraping/squeezing "caviar" out of the beans to make vanilla sugar. This was an extremely annoying process that I do not recommend to anyone: removing sticky goop from slick wet beans is not a good time. But I am now prepared to make an awful lot of vanilla sugar. Just need to figure out where I'm storing it. Probably in one of my tall plastic bins: making one smell faintly of vanilla is unlikely to be a downside.

Next steps there are to let the scraped caviar sit until tomorrow so it dries out (possibly with an assist from the oven on low heat), blend it all into a small amount of sugar, and then mix that into the full amount. The recipe I have calls for "one cup of sugar per vanilla bean". Online varies between one and two cups per bean, so that's a good starting point. Thing is, I undercounted woefully last time; I used eighty vanilla beans in the extract. These are small beans, so, sure, cut that in half. I used forty full beans to make the extract; that's twenty cups of sugar, and at 200g a cup that's four kilos of vanilla sugar. That ... should tide me over for a while. Get some pint or half-pint jars, that's much of xmas sorted.

Then I have the mostly-empty bean pods that I should do something with. I'm currently letting them air dry as well. I guess I could snip them up small and mix them into some (more) sugar.

Onward.

Fiber Goals 2025 mid-year check-in

Aug. 2nd, 2025 02:00 pm
terriko: (Default)
[personal profile] terriko
This is crossposted from Curiousity.ca, my personal maker blog. If you want to link to this post, please use the original link since the formatting there is usually better.


This year’s goals were as follows:






  1. Revisit Old Goals
  2. Try Something New
  3. Something Stash Something
  4. Game Design





We’re a bit more than halfway through the year so let’s see where we’re at!





Revisit Old Goals





Started strong in January by finishing up a rainbow shawl that had been on the needles for quite some time:





A rainbow bias knit shawl/wrap of my own design.




I’d intended to release the pattern since I had an old goal about writing patterns but… honestly, I haven’t felt like it, and I focused my time on other stuff that was bringing me joy. But I have a bunch of pattern notes and a bit more time right now so I may publish what I have without bothering to polish it.





February I worked on an old Beanie Bag kit from Jimmy Beans Wool that spanned 3 months. It was… honestly kind of boring and the pattern had a bunch of mistakes/confusing bits, but I finished one month’s worth and will likely do the other two at some point.





The first part of the Textures of Nevada Shawl that was part of a Jimmy Beans Wool kit subscription some years back.




March-April-May I finally got around to knitting Wingspan, which was on my “something famous” goal plan but which I'd never gotten to. It was a pleasant knit once I got into the swing of things, but by the time I finished it was too warm to wear it here so I haven't really gotten pictures! Here's one from before it was blocked, though:





Wingspan shawl knit in a gradient yarn that goes from burgundy to red to orange. It has not been blocked, so it looks a bit lumpy and smaller than the final product.




June I took a break from old goals (and focused on writing).





July I pulled out some gradient balls and made socks for my mom’s birthday (a bit early because the timing worked out), plus I did tour de fleece stuff.





Blue/green/yellow-green gradient socks using the Affixed pattern from Shoreland Socks by Hunter Hammersen.




Overall, A+ on revisiting old goals. I have a couple more “use kits from stash” ideas but I may otherwise declare this particular goal complete and focus on some other stuff.





Try Something New





January started strong with me working on a hexagon blanket, which I’m still working on between other projects.





February I tried assigned pooling and made the “Shard” shawl by Romi Hill. It was fun and I’ll likely do other assigned pooling patterns!





Me modelling my Shard shawl (pattern by Romi Hill) knit in Chemknits yarn from Valentine's Day 2024. It's a red shawl with purple “shards” from assigned pooling.




March-April-May I worked on Wingspan for the old goals and didn’t bother doing new stuff.





June again was a break from all knitting goals. (I was writing instead.)





July was mostly finishing up work/travel and I didn’t feel like learning something new.





There’s probably some more to be done here but… honestly, I’m not sure this goal is playing well with my burnout? I’ve got some tentative plans for learning some bookbinding in August if my kid is amenable so that might be up next. But I think I may just focus on finishing up the hex blanket rather than pushing myself to come up with new things to do if I’m not feeling it. So this goal may be as complete as it’s getting unless something fun occurs to me.





Stash Something Stash / Write more





I’d planned to run some kind of stash-focused event about appreciating what you have (as opposed to feeling guilty about what you have, a common vibe in a lot of “use your stash” events) and I got as far as coming up with a nice list of prompts and ideas. But then I realized that… I didn’t actually want to run it. I was burned out on social media and wanted to spend less time on my phone. So I’ve declared this goal as complete as it’s going to be. The prompts will keep if I decide I want to run things later.





That said, I replaced this goal, which no longer fit, with a goal of “Write more”, since writing was what was bringing me joy and it deserved some focus and time.





I’ve done a bit more writing for this blog but the biggest part of my writing this year has been fanfic since I’m having fun. I joined a discord to hang out with other writers in my current fandom of choice and I took part in a prompt challenge (which is why I didn’t knit as much in June-July so I could write). I’m now over the 40k “that’s a novel’s worth” of words since January and I’m pretty delighted with myself.





There’s something deeply satisfying in the current economic environment about making something that is basically non-monetizable put on a website run by a nonprofit (that I donated to!) and my output only serves to make strangers/new friends happy. And I definitely made a bunch of people happy! (Including my kid, who helped with some ideas in one of my stories.) Also I’m amused that my existing community of open source people and my new community of fan writers are somewhat similar and overlapping nerds. Not a surprise that people who share their creative outputs for free have some similarities but it’s still a delight.





I expect I’ll keep writing through the end of the year (and beyond but this post is about 2025 goals). I’ll probably join another challenge or two but even if I don’t do more than finish my current story in progress, I feel like this replacement goal has been met *and* it’s brought me a lot more joy than the original goal. And these goals have always been about finding time for things that bring me joy!





Game Design





It took waaaaay too long to get approval from work saying that my silly games weren't going to conflict with my job, by which point I was so frustrated with my boss for other reasons that I was intentionally trying to get put in the layoff pool (and I succeeded). But the end result is that I haven't actually *done* any games stuff beyond a bit of helping my kid learn Scratch programming for his robot. I'm not replacing this goal because I still want to make games, but I haven't figured out an actual plan yet, so that's on my list for part 2 of the year. So far I've got my personal laptop set up a bit better for game work (attached it to the KVM with my big screen and mouse) and I think I might aim to play around with some existing frameworks and make silly things with my kid as a goal for August.





More Thoughts





It turns out this year it hasn’t been *fiber* that was really keeping me happy. I mean, I still knit/spin/whatever but it’s writing and video games that have helped me cope with the burnout and grief (particularly from losing a friend earlier this year, but there’s grief tied up in climate and politics right now too). The fact that fiber wasn’t the perfect solution for this type of burnout makes sense because I needed something that engaged more of my brain and took me away from worrying about geopolitics/work/my deceased friend. I knit to focus my brain but when my brain is spiralling that’s not the right thing to do. I do knit-and-write-in-my-head a lot so it’s compatible with what works to distract me, at least, but fiber hasn’t been as much of a focus for a few months and I’m not sure if that’s going to change. I am wondering if I should stop calling these “fiber goals” next year so I can encompass some other hobbies, though.





With work as a stressor out of the way for now but more “international move” and “find a new job” stress coming, I'm intending to just roll with what works for these goals in the second half of the year. I *am* really enjoying using my fiber and stationery stashes now that I'm trying not to spend so much money — past me bought some lovely stuff and now I have time to use it. I think doing some game stuff is going to be fun when I sit down and start playing. And I'm really enjoying writing fanfic in a way that I haven't in a long time, so I'm happy to keep leaning into that too. Last time I was involved in a fandom I presented as an artist; writing is a different experience, and I'm loving it so much.

[personal profile] mjg59
LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.

First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.

That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.

What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.

This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.
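Incidentally, if you want to see what your own machine trusts, db is readable from Linux via efivarfs and is just a sequence of EFI_SIGNATURE_LIST structures, prefixed by four bytes of variable attributes. Here's a rough sketch that walks the lists and prints their sizes and entry counts - it uses the standard image security database GUID in the path and doesn't attempt to parse the certificates themselves:

```c
/* Walk the UEFI "db" variable via efivarfs and print, for each
 * EFI_SIGNATURE_LIST it contains, the size and number of entries. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DB_PATH "/sys/firmware/efi/efivars/db-d719b2cb-3d3a-4596-a3bc-dad00e67656f"

struct efi_signature_list {
    uint8_t  signature_type[16];     /* GUID identifying the entry type */
    uint32_t signature_list_size;
    uint32_t signature_header_size;
    uint32_t signature_size;         /* size of each entry that follows */
} __attribute__((packed));

int main(void)
{
    FILE *f = fopen(DB_PATH, "rb");
    if (!f) { perror("open db"); return 1; }

    /* efivarfs prefixes the variable data with 4 bytes of attributes */
    static uint8_t buf[1 << 16];
    size_t len = fread(buf, 1, sizeof(buf), f);
    fclose(f);
    if (len < 4) { fprintf(stderr, "db too short\n"); return 1; }

    size_t off = 4;
    while (off + sizeof(struct efi_signature_list) <= len) {
        struct efi_signature_list sl;
        memcpy(&sl, buf + off, sizeof(sl));
        if (sl.signature_list_size < sizeof(sl) + sl.signature_header_size ||
            off + sl.signature_list_size > len)
            break;
        uint32_t data = sl.signature_list_size - sizeof(sl) - sl.signature_header_size;
        printf("signature list: %u bytes, %u entries\n",
               sl.signature_list_size,
               sl.signature_size ? data / sl.signature_size : 0);
        off += sl.signature_list_size;
    }
    return 0;
}
```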

The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.

First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual; intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting Shims signed with certificates that have expired, and yet things keep working. Why?
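You can check this yourself: pull the signing certificates out of a Shim binary (tools like sbverify or osslsigncode will list or extract the Authenticode signature) and look at the validity window. Here's a small OpenSSL sketch that prints the notBefore/notAfter dates of a certificate once you've got it in PEM form (the program is mine, not part of either of those tools):

```c
/* Print the validity window of an X.509 certificate in PEM form, e.g. an
 * intermediate extracted from a Shim binary. Build with: -lcrypto */
#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s certificate.pem\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "r");
    if (!f) { perror("open"); return 1; }

    X509 *cert = PEM_read_X509(f, NULL, NULL, NULL);
    fclose(f);
    if (!cert) { fprintf(stderr, "not a PEM certificate\n"); return 1; }

    BIO *out = BIO_new_fp(stdout, BIO_NOCLOSE);
    printf("notBefore: ");
    ASN1_TIME_print(out, X509_get0_notBefore(cert));
    printf("\nnotAfter:  ");
    ASN1_TIME_print(out, X509_get0_notAfter(cert));
    printf("\n");

    BIO_free(out);
    X509_free(cert);
    return 0;
}
```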

Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.

The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.
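I can't reproduce the firmware reference code here, but the idea is easy to express in OpenSSL terms: when you set up chain verification, you pass the flag that tells it not to compare validity periods against the clock. This is a conceptual illustration only, not what the UEFI code literally does (that code simply skips the comparison):

```c
/* Conceptual illustration: verify a certificate chain while ignoring
 * expiry dates, the way UEFI implementations effectively do.
 * Build with -lcrypto. */
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

int verify_ignoring_expiry(X509 *leaf, X509_STORE *trusted_roots,
                           STACK_OF(X509) *intermediates)
{
    X509_STORE_CTX *ctx = X509_STORE_CTX_new();
    if (!ctx || !X509_STORE_CTX_init(ctx, trusted_roots, leaf, intermediates))
        return -1;

    /* The important bit: don't compare notBefore/notAfter against the clock. */
    X509_STORE_CTX_set_flags(ctx, X509_V_FLAG_NO_CHECK_TIME);

    int ok = X509_verify_cert(ctx);   /* 1 on success */
    X509_STORE_CTX_free(ctx);
    return ok == 1 ? 0 : -1;
}
```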

So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?

Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.

How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.

If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.

Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.

[1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)

[2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays

[3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in the 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.

[4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device

[5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then

[6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim

[7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs
