Links
Interesting links I've found.
- Printervention: Resurrecting Old Printers via the Browser
When manufacturers drop driver support for perfectly good hardware, the usual fix is setting up a dedicated Linux box as a print server. Printervention skips all of that and puts the entire print server inside a Chrome tab.
You plug in an unsupported printer (like the Canon SELPHY photo printers), open the site, and print. No installs. Here’s what’s going on under the hood:
The browser connects to the printer over WebUSB, then boots v86, an x86 emulator written in JavaScript that compiles machine code to WebAssembly at runtime. Inside that emulator, it runs Alpine Linux with CUPS and Gutenprint drivers. The correct driver gets matched to your printer model using trigram search.
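The trigram-matching step is easy to illustrate. Below is a minimal, hypothetical sketch (my own, not Printervention's actual code): it scores each driver name against the printer's reported model string by trigram overlap and picks the best match.

```python
def trigrams(s: str) -> set[str]:
    """Split a normalized string into overlapping 3-character chunks."""
    s = " " + s.lower() + " "  # pad so short words still yield trigrams
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over trigram sets, in [0, 1]."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def best_driver(model: str, drivers: list[str]) -> str:
    """Return the driver name whose trigrams best overlap the model string."""
    return max(drivers, key=lambda d: similarity(model, d))

drivers = ["Canon SELPHY CP1300", "Canon SELPHY CP910", "HP DeskJet 2700"]
print(best_driver("SELPHY CP1300", drivers))  # → Canon SELPHY CP1300
```

The appeal of trigram matching here is fuzziness: the USB device string rarely matches a Gutenprint driver name exactly, but shared 3-character chunks still rank the right driver first.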
The really clever part is how the emulated Linux and the browser actually talk to the printer. USB/IP on the Linux side packages USB data into TCP packets. On the JavaScript side, tcpip.js (lwIP compiled to WebAssembly) turns the raw Ethernet frames from v86’s emulated network card back into TCP/IP. This makes the bridge bidirectional, so CUPS can report paper jams, ink levels, and print progress back to the browser.
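USB/IP is a documented protocol (it ships with the Linux kernel), so the framing on the wire is concrete. Here is a rough sketch of packing a USBIP_CMD_SUBMIT header in Python; the field layout follows the kernel's usbip_protocol documentation, but treat it as illustrative rather than a drop-in implementation.

```python
import struct

USBIP_CMD_SUBMIT = 0x00000001  # host -> device URB submission

def pack_cmd_submit(seqnum: int, devid: int, direction: int, ep: int,
                    buffer_length: int, setup: bytes = b"\x00" * 8) -> bytes:
    """Pack a 48-byte USBIP_CMD_SUBMIT header (all fields big-endian)."""
    assert len(setup) == 8
    return struct.pack(
        ">IIIII" "iiiii" "8s",
        USBIP_CMD_SUBMIT,
        seqnum,          # matched by the eventual RET_SUBMIT reply
        devid,           # bus number << 16 | device number
        direction,       # 0 = host-to-device (OUT), 1 = device-to-host (IN)
        ep,              # endpoint number
        0,               # transfer_flags
        buffer_length,   # transfer_buffer_length
        0,               # start_frame (0 for non-ISO transfers)
        0,               # number_of_packets (non-ISO)
        0,               # interval
        setup,           # 8-byte SETUP packet (control transfers only)
    )

header = pack_cmd_submit(seqnum=1, devid=(1 << 16) | 2, direction=0, ep=1,
                         buffer_length=512)
print(len(header))  # 48
```

These headers ride over TCP, which is exactly why the lwIP bridge works: once v86's Ethernet frames are reassembled into a TCP stream, the browser side is just shuttling USB/IP packets between the emulated CUPS and the WebUSB device.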
The project was built by George MacKerron using Claude Code. He’s also working on a companion scanning app using SANE at yes-we-scan.app.
- AWS S3 Files: Finally Mounting S3 Directly as a File System
AWS just launched S3 Files. You can now mount any S3 bucket or prefix as a filesystem on EC2, containers, or Lambda. From your app’s perspective, it’s a mounted directory. Under the covers, it’s actual EFS (Elastic File System) integrated into S3. The S3 and EFS teams built this together, and Andy Warfield’s blog post on the design process is worth reading in full.
The sync model borrows from git: they call it “stage and commit.” Changes accumulate in the EFS layer and get committed back to S3 roughly every 60 seconds as a single PUT. Sync runs both directions, so if something else modifies the S3 objects, the filesystem view updates automatically. When there’s a conflict (both sides modified the same file), S3 wins and the filesystem version gets moved to lost+found, with a CloudWatch metric to flag it.
When you first access a directory, only metadata is imported from S3. Files under 128 KB get their data pulled immediately; larger files are fetched on demand when you actually read them. So you can mount a bucket with millions of objects and start working right away. Data not accessed in 30 days gets evicted from the filesystem view but stays in S3, keeping storage costs tied to your active working set.
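The conflict rule is simple enough to sketch. This toy model is purely illustrative (S3 Files is not exposed as a Python API like this): each commit cycle pushes staged writes, and when S3 changed underneath a staged file, S3 wins and the local version lands in lost+found.

```python
def commit(staged: dict, s3: dict, base: dict) -> dict:
    """Toy stage-and-commit: push staged changes to s3. On conflict, S3 wins
    and the local version is shunted into lost+found (the signal you'd alert
    on via the CloudWatch metric)."""
    lost_found = {}
    for path, local_value in staged.items():
        s3_changed = s3.get(path) != base.get(path)  # S3 moved since last sync
        if s3_changed and s3.get(path) != local_value:
            lost_found[path] = local_value  # conflict: keep S3's version
        else:
            s3[path] = local_value          # clean commit
    return lost_found

base = {"report.csv": "v1"}
s3 = {"report.csv": "v2"}            # someone else updated the object
staged = {"report.csv": "v1-local"}  # we edited the stale v1
print(commit(staged, s3, base))      # {'report.csv': 'v1-local'}
print(s3)                            # {'report.csv': 'v2'}  (S3 wins)
```

The asymmetry is the design decision: the object store stays authoritative, and the filesystem layer never silently overwrites an object that moved under it.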
For sequential reads, there’s a “read bypass” mode that reroutes the data path to perform parallel GET requests directly to S3, hitting up to 3 GB/s per client and scaling to terabits/s across multiple clients.
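The “parallel GETs” idea is the same trick S3 transfer managers use: split the byte range into chunks and fetch them concurrently. A hedged sketch, where `ranged_get` stands in for a real S3 GET with a Range header (names and chunk size are mine):

```python
from concurrent.futures import ThreadPoolExecutor

OBJECT = bytes(range(256)) * 4  # stand-in for a 1 KiB S3 object

def ranged_get(start: int, end: int) -> bytes:
    """Stand-in for an S3 GET with a `Range: bytes=start-end` header."""
    return OBJECT[start:end + 1]

def parallel_read(size: int, chunk: int = 256, workers: int = 4) -> bytes:
    """Fetch [0, size) as chunk-sized ranges in parallel, reassembled in order."""
    ranges = [(off, min(off + chunk, size) - 1) for off in range(0, size, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: ranged_get(*r), ranges)
    return b"".join(parts)

assert parallel_read(len(OBJECT)) == OBJECT
```

Since each range is an independent GET, throughput scales with concurrency until you hit the client's network limit, which is how the per-client numbers get so high.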
Some limitations worth knowing: renames are expensive because S3 has no native rename operation (it’s copy + delete for every object under a prefix). They warn you at 50 million objects per mount. Some S3 object keys can’t be represented as valid POSIX filenames, so they won’t appear in the filesystem view. And the 60-second commit window won’t work for every workload.
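It's worth internalizing what a “rename” costs here. Against a real bucket this would be one CopyObject plus one DeleteObject per key under the prefix; the sketch below emulates that against a plain dict (not actual AWS client code):

```python
def rename_prefix(bucket: dict, old: str, new: str) -> int:
    """Emulate a directory rename on an object store: one copy + one delete
    per key under the prefix. Returns how many object operations it took."""
    keys = [k for k in bucket if k.startswith(old)]
    ops = 0
    for key in keys:
        bucket[new + key[len(old):]] = bucket[key]  # CopyObject
        del bucket[key]                             # DeleteObject
        ops += 2
    return ops

bucket = {"logs/2024/a.txt": b"x", "logs/2024/b.txt": b"y", "data/c.txt": b"z"}
print(rename_prefix(bucket, "logs/", "archive/"))  # 4 operations for 2 objects
print(sorted(bucket))  # ['archive/2024/a.txt', 'archive/2024/b.txt', 'data/c.txt']
```

Renaming a “directory” with a million objects in it means two million API calls, which is why the filesystem illusion leaks exactly here.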
What I like about the design is that they stopped trying to pretend files and objects are the same thing. The explicit boundary between the two is the part that actually makes it work.
- Comprehension Debt: the hidden cost of AI-generated code
Addy Osmani puts a name to something I’ve been feeling for a while: Comprehension Debt.
When we lean on AI tools (Copilot, Claude Code, Cursor) to generate code, velocity metrics look great. The PRs are clean, the tests pass. But we quietly stop understanding why our systems work the way they do.
> AI generates code far faster than humans can evaluate it. What used to be a quality gate is now a throughput problem… Surface correctness is not systemic correctness.
PR review used to be a bottleneck, but a useful one. It forced you to understand design decisions and architecture. Now a junior engineer can generate 1,000 lines of syntactically perfect code faster than a senior can audit it. An Anthropic study found that engineers who passively delegated to AI scored 17% lower on comprehension quizzes than engineers who worked without AI. The interesting bit: engineers who used AI to ask questions and explore tradeoffs kept their understanding intact.
Tests and specs don’t save you here either. Tests only cover behaviors you thought to specify. And when AI changes implementation behavior and updates hundreds of test cases to match, your safety net is no longer trustworthy. Only actual understanding catches that.
What makes this worse than regular technical debt is that nothing in your metrics captures it. Velocity is up, DORA looks fine, coverage is green, and comprehension is hollowing out underneath.
Osmani’s point is clear: as generating code gets cheaper, the engineer who actually understands the system — the load-bearing behaviors, the architectural history, the context — becomes the scarce resource everything depends on.
I don’t think we should stop using AI to write code. But we do need to stop pretending that passing tests means we understand what shipped.
- Willingness to look stupid is a genuine moat in creative work
Sharif Shameem wrote about how the fear of looking stupid kills creative output, and I felt called out. I have drafts on this blog that will never see the light of day because I keep telling myself they’re not good enough.
The funny thing is, I already learned this lesson. During my 28 posts in 28 days challenge last month, the breakthrough was lowering my standards. Not every post needs to be a deep dive. Some of my best writing that month came from just saying what I was thinking without overthinking it. But here I am, a few weeks later, already back to filtering everything.
Sharif’s jellyfish bit is what stuck with me. Evolution produced jellyfish, these weird brainless sacs of jelly that have been around for 500 million years. But it only got there by churning out endless bad mutations without any shame. If evolution could feel embarrassed, life wouldn’t exist.
I keep relearning the same thing: publish more, filter less.
- Agent Psychosis
The thesis: we're letting AI run wild, generating massive amounts of unverified “slop” code that overwhelms maintainers.
The problem isn't the tech, it's the laziness: generating a PR takes a minute; reviewing it takes an hour. The author points to Steve Yegge's “Gas Town” as a cautionary tale of this loop gone wrong.
We need to stop blindly trusting the machine and start acting like senior engineers again. AI is a power tool, not a replacement for thinking.
- Python Meets JavaScript, Wasm With the Magic of PythonMonkey
- Postgres is eating the database world
- Table of Contents · Crafting Interpreters