Setting up SSL is a pain. Even free certificate authorities like Let’s Encrypt can be difficult to get working. For some time now, I’ve been looking for a cheap and easy way to set up SSL for static sites.

AWS Certificate Manager

I recently discovered AWS Certificate Manager (ACM). Among other things, ACM allows you to easily issue free certificates for any static site hosted on CloudFront with a custom domain. This is really cool! Below are some things I like about ACM:

- You get to use certificates issued by Amazon’s own certificate authority (Amazon Trust Services) on your domain
- Unlike Let’s Encrypt, ACM certificates are automatically renewed for you
- You can easily manage your certificates in ACM’s web interface
- Certificates provisioned by ACM are free! No charges whatsoever

Setting up free SSL using ACM and CloudFront is pretty easy, but there are a few things to watch out for. This guide assumes you already have a static site running on CloudFront (you can find many other...
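As a taste of the workflow, here is a minimal sketch of requesting a certificate with the AWS CLI; the domain name below is a placeholder, and note that certificates used with CloudFront must be requested in the us-east-1 region:

# Request a free certificate (www.example.com is a placeholder domain).
# CloudFront requires ACM certificates to live in us-east-1.
$ aws acm request-certificate \
    --domain-name www.example.com \
    --validation-method DNS \
    --region us-east-1

# After adding the DNS validation record, watch for the status to become ISSUED:
$ aws acm list-certificates --region us-east-1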
over a year ago 40 votes

More from Alex Meub

The Magic of Solving Problems with 3D Printing

3D Printing has allowed me to be creative in ways I never thought possible. It has allowed me to create products that provide real value, products that didn’t exist before I designed them. On top of that, it’s satisfied my desire to ship products, even if the end-user is just me. Another great thing is how quickly 3D printing provides value. If I see a problem, I can design and print a solution that works in just a few hours. Even if I’m the only one who benefits, that’s enough. But sharing these creations takes the experience even further. When I see others use or improve on something I’ve made, it makes the process feel so much more worthwhile. It gives me the same feeling of fulfillment I get when I ship software products at work.

Before mass-market 3D printing, creators had to navigate the complexity and high costs of mass-production methods (like injection molding) even to get a limited run of a niche product produced. With 3D printing, they can transfer the cost of production to others. Millions of people have access to good 3D printers now (at home, work, school, libraries, maker spaces), which means almost anyone can replicate a design.

A universal format for sharing 3D designs dramatically lowers the effort that goes into sharing them. Creators can share their design as an STL file, which describes the surface geometry of their 3D object as thousands of little triangles. This “standard currency” of the 3D printing world is often all that is required to precisely replicate a design.

The widespread availability of 3D printers and the universal format for sharing 3D designs have allowed 3D-printed products to not only exist but thrive in maker communities. This is the magic of 3D printing: it empowers individuals to solve their own problems by designing solutions while enabling others to reproduce those designs at minimal cost and effort.
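For the curious, here is what that triangle representation looks like in practice: a minimal, hand-written ASCII STL fragment describing a single facet (real printable designs contain thousands of these). This is an illustrative sample, not from the original post:

solid example
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid example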

8 months ago 97 votes
Building the DataToaster 3000

Last summer, I was inspired by a computer built inside a toaster that I saw at a local computer recycling store. The idea of a computer with the design of a home appliance was really appealing, and so was the absurdity of it. It occurred to me that this would be a fun and creative way to integrate technology into my life. After thinking about it, I realized there’s also something visually appealing about how simple and utilitarian toasters are. I have major nostalgia for the famous After Dark flying-toasters screensaver, and I think this is why. I now knew that I wanted to make my own attempt at a toaster/computer hybrid. I did just that when I created the DataToaster 3000: a toaster NAS with two 3.5-inch hard disk docking stations built inside it. The hard disks can be easily swapped out (while powered off) without taking anything apart. It uses a Zimaboard x86-64 single-board computer and even has a functional knob that controls the color of the power LED. I designed a fairly complex set of 3D-printed parts that attach to the base of the toaster and hold everything neatly in place. This allows it to be easily disassembled if I ever want to make modifications, and it also, hopefully, makes the project easier for others to build. It’s a ridiculous thing, but I really do love it. You can find the build guide on Instructables and the 3D models on Printables.

10 months ago 85 votes
Building a Removable Bike Basket for the Yepp Rack

I wanted to add more hauling capacity to my bike and was looking for something compatible with my Yepp rear rack. I also use my rack with a child seat (the Yepp Maxi), which has a mechanism that allows it to attach and detach easily without sacrificing safety. I thought it would be great to build a Yepp-compatible rear basket that I could attach and detach from my rack just as quickly. I designed a removable Yepp-rack-compatible rear basket that consists of a milk crate, some plywood for stability, and a 3D-printed bracket threaded for M6 bolts, which hold it all together. It can be attached and removed in seconds and is very secure.

3D Printed Mounting Bracket

I modeled my mounting bracket after the one on the Yepp Maxi child seat. After a few iterations, I was able to make it fit perfectly. I printed it in PETG filament so it would be UV-resistant, then installed threaded inserts for M6 bolts to attach it to the milk crate and my rear rack.

3D Print and Build Instructions

You can find the 3D print on Printables and a full build guide on Instructables.

11 months ago 80 votes
The Yoto Mini is Perfect

The Yoto Mini is one of my favorite products. The team behind it deeply understands its users and put just the right set of features into a brilliantly designed package. I have no affiliation with Yoto; I’m just a happy customer with kids who love it.

If you aren’t aware, Yoto is an audio platform for kids with what they call “screen-free” audio players (even though they have little pixel LED screens on them). The players are Wi-Fi enabled and support playing audio from credit-card-sized NFC tags called Yoto cards. Yoto sells audio players and also licenses audio content and offers it on its platform. The cards themselves do not contain any audio data, just a unique ID of the audio content, which is pulled from the cloud. After content is pulled on the first play, it is saved and played locally from the player after that. Yoto also supports playing podcasts and music stations without using cards. Their marketing puts a lot of emphasis on the platform being “ad-free,” which is mostly true, as there are never ads on Yoto cards or official Yoto podcasts. However, some of the other podcasts do advertise their content.

So, what’s so great about the Yoto Mini? The concept isn’t new; there have been many examples of audio players for kids over the years. What sets it apart is how every detail of the hardware, mobile app, and exclusive content is meticulously designed and well executed.

Yoto Mini Hardware

The main input methods of the Yoto Mini are two orange knobs: turning the left knob controls volume, and the right knob navigates chapters or tracks. Pressing the right knob instantly plays the Yoto Daily podcast, and pressing it twice plays Yoto Radio (a kid-friendly music station). Both actions are configurable in the mobile app. The NFC reader slot accepts Yoto cards and instantly starts playing where you left off after you insert one. It has a high-quality speaker that can be surprisingly loud, an on/off button, a USB-C charging port, an audio output jack, and a small pixel display that shows images related to the audio content. The Yoto Mini is also surprisingly durable. My kids have dropped it many times on hard surfaces, and it still looks basically as good as new.

Yoto understands that the physical audio player itself is primarily used by younger kids, and the design reflects this. My 3-year-old daughter was able to figure out how to turn it on and off, start listening to books using cards, and play the Yoto Daily podcast each morning, which was empowering for her. This was the first technology product she was fully capable of using without help from an adult. I can’t think of many other products that do this better.

Yoto Mobile App

The Yoto team understands that parents are users of this product too, mostly for managing the device and its content. Yoto has built a very good mobile experience that is tightly integrated with the hardware and provides all the features you’d want as a parent. From the app, you can start playing any of the content from cards you own on the player or your phone (nice if your kids lose a card), set volume limits for both night and day time, set alarms, and configure the shortcut buttons. You can record audio onto a blank Yoto card (which comes with the player) if your kid wants to create their own story, or link it to their favorite podcast or music. The app even lets you give each track custom pixel art that is displayed on the screen.

Audio Content

By far the most underrated feature is a daily podcast called Yoto Daily. This ad-free podcast is run by a charming British host, and it is funny, entertaining, and educational. My kids (now 4 and 7) look forward to it every morning, and the fact that it’s daily free content integrated directly into the Yoto hardware is amazing. To me, this is the killer feature, as my kids get to enjoy it every day and it’s always fresh and interesting.

Yoto licenses content from children’s book authors, popular kids’ shows, movies, and music (recently the Beatles), which is made available in their store. I also discovered that Yoto does not seem to lock down its content with DRM. My son traded some Yoto cards with a friend, and I assumed there would be some kind of transfer or de-registration process, but to my surprise, they just worked without issue.

Conclusion

The Yoto Mini is a delightful product. The team behind it thought through every detail and made it an absolute joy to use, both as a child and as a parent. I’m impressed at how well the Yoto team understands their users and prioritizes simplicity and ease of use above all else.

a year ago 75 votes
How To Quiet Down Your 3D Printer

When I first got my 3D printer, I built an enclosure to protect it from dust, maintain a consistent temperature, and minimize noise. I was surprised to find that the enclosure didn’t reduce noise all that significantly. I then placed a patio paver under my printer, which made it noticeably quieter, but it was still audible from other rooms in my house. Recently, I found the most effective noise-reduction solution: squash balls. These balls are designed with varying bounce levels, indicated by colored dots. The “double-yellow dot” balls have very low bounce, making them ideal for damping vibration, which is the primary cause of printer noise. I found an existing design for squash-ball feet, printed a set, and hot-glued them evenly under my patio paver. My current setup includes the enclosure, the patio paver, and the squash balls under the paver. Now the printer is so quiet that I actually can’t tell if it’s running, even when I’m in the same room. Occasionally I’ll hear the stepper motors, but that’s rare. Most of the time I need to open the enclosure to make sure it’s still printing.

a year ago 41 votes

More in programming

The future of large files in Git is Git

If Git had a nemesis, it’d be large files. Large files bloat Git’s storage, slow down git clone, and wreak havoc on Git forges. In 2015, GitHub released Git LFS, a Git extension that hacked around problems with large files. But Git LFS added new complications and storage costs. Meanwhile, the Git project has been quietly working on large files. And while LFS ain’t dead yet, the latest Git release shows the path towards a future where LFS is, finally, obsolete.

What you can do today: replace Git LFS with Git partial clone

Git LFS works by storing large files outside your repo. When you clone a project via LFS, you get the repo’s history and small files, but skip large files. Instead, Git LFS downloads only the large files you need for your working copy. In 2017, the Git project introduced partial clones that provide the same benefits as Git LFS:

“Partial clone allows us to avoid downloading [large binary assets] in advance during clone and fetch operations and thereby reduce download times and disk usage.” – Partial Clone Design Notes, git-scm.com

Git’s partial clone and LFS both make for:

- Small checkouts – On clone, you get the latest copy of big files instead of every copy.
- Fast clones – Because you avoid downloading large files, each clone is fast.
- Quick setup – Unlike shallow clones, you get the entire history of the project, so you can get to work right away.

What is a partial clone?

A Git partial clone is a clone with a --filter. For example, to avoid downloading files bigger than 100KB, you’d use:

git clone --filter=blob:limit=100k <repo>

Later, Git will lazily download any files over 100KB you need for your checkout. By default, if I git clone a repo with many revisions of a noisome 25MB PNG file, then cloning is slow and the checkout is obnoxiously large:

$ time git clone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (153/153), 1.19 GiB
real 3m49.052s

Almost four minutes to check out a single 25MB file!

$ du --max-depth=0 --human-readable noise-over-git/.
1.3G noise-over-git/.
$ ^ 🤬

And 50 revisions of that single 25MB file eat 1.3GB of space. But a partial clone side-steps these problems:

$ git config --global alias.pclone 'clone --filter=blob:limit=100k'
$ time git pclone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (1/1), 24.03 MiB
real 0m6.132s
$ du --max-depth=0 --human-readable noise-over-git/.
49M noise-over-git/
$ ^ 😻 (the same size as a git lfs checkout)

My filter made cloning 97% faster (3m 49s → 6s), and it reduced my checkout size by 96% (1.3GB → 49MB)!

There are still some caveats here. If you run a command that needs data you filtered out, Git will need to make a trip to the server to get it. So commands like git diff, git blame, and git checkout may require a trip to your Git host to run. But, for large files, this is the same behavior as Git LFS. Plus, I can’t remember the last time I ran git blame on a PNG 🙃.
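A small follow-on sketch (mine, not the article’s): assuming a repo cloned with the blob:limit filter above, stock Git commands can show what the clone skipped and confirm the filter was recorded for future fetches.

# Filtered-out objects are "missing" locally; rev-list prints them with a "?" prefix.
$ git rev-list --objects --missing=print HEAD | grep '^?'

# Partial clones record the promisor remote and filter in the repo config:
$ git config remote.origin.promisor
true
$ git config remote.origin.partialclonefilter
blob:limit=100k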
Why go to the trouble? What’s wrong with Git LFS?

Git LFS foists Git’s problems with large files onto users. And the problems are significant:

🖕 High vendor lock-in – When GitHub wrote Git LFS, the other large-file systems (Git Fat, Git Annex, and Git Media) were agnostic about the server side. But GitHub locked users to its proprietary server implementation and charged folks to use it.[1]

💸 Costly – GitHub won because it let users host repositories for free. But Git LFS started as a paid product. Nowadays there’s a free tier, but you’re dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage. In contrast, storing 50GB on Amazon’s S3 standard storage is $13/year.

😰 Hard to undo – Once you’ve moved to Git LFS, it’s impossible to undo the move without rewriting history.

🌀 Ongoing set-up costs – All your collaborators need to install Git LFS. Without Git LFS installed, your collaborators will get confusing, metadata-filled text files instead of the large files they expect.

The future: Git large object promisors

Large files create problems for Git forges, too. GitHub and GitLab put limits on file size[2] because big files cost more money to host. Git LFS keeps server-side costs low by offloading large files to CDNs. But the Git project has a new solution. Earlier this year, Git merged a new feature: large object promisors. Large object promisors aim to provide the same server-side benefits as LFS, minus the hassle to users.

“This effort aims to especially improve things on the server side, and especially for large blobs that are already compressed in a binary format. This effort aims to provide an alternative to Git LFS.” – Large Object Promisors, git-scm.com

What is a large object promisor?

Large object promisors are special Git remotes that only house large files. In the bright, shiny future, large object promisors will work like this:

1. You push a large file to your Git host.
2. In the background, your Git host offloads that large file to a large object promisor.
3. When you clone, the Git host tells your Git client about the promisor.
4. Your client will clone from the Git host, and automagically nab large files from the promisor remote.

But we’re still a ways off from that bright, shiny future. Git large object promisors are still a work in progress. Pieces of large object promisors merged into Git in March of 2025. But there’s more to do and open questions yet to answer. And so, for today, you’re stuck with Git LFS for giant files. But once large object promisors see broad adoption, maybe GitHub will let you push files bigger than 100MB.

The future of large files in Git is Git. The Git project is thinking hard about large files, so you don’t have to. Today, we’re stuck with Git LFS. But soon, the only obstacle for large files in Git will be your half-remembered, ominous hunch that it’s a bad idea to stow your MP3 library in Git.

Edited by Refactoring English

[1] Later, other Git forges made their own LFS servers. Today, you can push to multiple Git forges or use an LFS transfer agent, but all this makes setup harder for contributors. You’re pretty much locked in unless you put in extra effort to get unlocked.
[2] File size limits: 100MB for GitHub, 100MB for GitLab.com.

23 hours ago 7 votes
Just a Little More Context Bro, I Promise, and It’ll Fix Everything

Conrad Irwin has an article on the Zed blog, “Why LLMs Can’t Really Build Software”. He says it boils down to this: the distinguishing factor of effective engineers is their ability to build and maintain clear mental models. We do this by:

1. Building a mental model of what you want to do
2. Building a mental model of what the code does
3. Reducing the difference between the two

It’s an interesting observation about how we (as humans) problem-solve vs. how we use LLMs to problem-solve. With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution. With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself. One adds information — complexity, you might even say — to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the “solution” comes through installing more and more things — adding more and more context, i.e. more and more packages.

LLM: Problem? Add more context.
NPM: Problem? There’s a package for that.

Contrast that with a solution that comes through simplification. You don’t add more context. You simplify your mental model so you need less to solve a problem — if you solve it at all; perhaps you eliminate the problem entirely! Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem you had in the first place, and with it the need to solve any problem at all or to add any additional context or complexity (or dependency).

As I’m typing this, I’m thinking of that image of the evolution of the Raptor engine, where each revision got simpler. That stands in contrast to my work with LLMs, which often want more and more context from me to get to a generated solution.

I know, I know. There’s probably a false equivalence here. This entire post started as a note and I just kept going. This post itself needs further thought and simplification. But that’ll have to come in a subsequent post, otherwise this never gets published lol.

yesterday 3 votes
How to Leverage the CPU’s Micro-Op Cache for Faster Loops

Measuring, analyzing, and optimizing loops using Linux perf, Top-Down Microarchitectural Analysis, and the CPU’s micro-op cache

yesterday 5 votes
Omarchy micro-forks Chromium

You can just change things! That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome? Well, yes! We've made a micro fork of Chromium for Omarchy (our new 37signals Linux distribution). Just to add one feature needed for live theming. And now it's released as a package anyone can install on any flavor of Arch using the AUR (Arch User Repository). We got it all done in just four days. From idea, to solicitation, to successful patch, to release, to incorporation. And now it'll be part of the next release of Omarchy. There are no speed limits in open source. Nobody to ask for permission. You have the code, so you can make the change. All you need is skill and will (and maybe, if you need someone else to do it for you, a $5,000 incentive 😄).
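For anyone who hasn’t installed from the AUR before, here is a sketch of the standard flow (my illustration, not from the post; the package name below is a hypothetical placeholder, so check the AUR for the actual Omarchy Chromium package):

# With an AUR helper such as yay (package name hypothetical):
$ yay -S omarchy-chromium

# Or manually, the way any AUR package installs:
$ git clone https://aur.archlinux.org/omarchy-chromium.git
$ cd omarchy-chromium
$ makepkg -si   # builds the package and installs it via pacman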

2 days ago 4 votes
Choosing Tools To Make Websites

Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what’s happening “under the hood”:

“A framework’s marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can’t achieve the beautiful range of results you see in the framework’s sample site gallery.”

This is a great callout. Tools will say, “You don’t have to worry about the details.” But the reality is, you end up worrying about the details — at least to some degree. Why? Because what you want to build is full of personalization. That’s how you differentiate yourself, which means you’re going to need a tool that’s expressive enough to help you. So the question becomes: how hard is it to understand the details that are being intentionally hidden away?

A lot of the time those details are not exposed directly. Instead they’re exposed through configuration. But configuration doesn’t really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

“Configuration can lead to as many problems as it solves”

Nailed it. He continues:

“Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework’s transformation than to write a program that directly implements that transformation itself.”

I’m not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website’s architecture simple and decoupled from your hosting provider (something I’ve long been a proponent of):

“These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it’s forever. Ten years from now you’ll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears.”

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it’s a good read from Jan and lays out his vision for why he’s building Web Origami: a tool that encourages you to understand (and customize) how you transform content into a website. He just launched version 0.4.0, which has some exciting stuff I want to try out (I’ll have to write about all that later).

3 days ago 5 votes