The other day I was working on something where I needed to use CSS to apply multiple background images to an element, e.g.

<div>
  My content with background images.
</div>
<style>
  div {
    background-image: url(image-one.jpg), url(image-two.jpg);
    background-position: top right, bottom left;
    /* etc. */
  }
</style>

As I was tweaking the appearance of these images, I found myself wanting to control the opacity of each one. A voice in my head from circa 2012 chimed in: “Um, remember Jim, there is no background-opacity rule. Can’t be done.” Then that voice started rattling off the alternatives:

- You’ll have to use opacity, but that will apply to the entire element, which you have text in, so that won’t work. You’ll have to create a new empty element, apply the background images there, then use opacity.
- Or: you can use pseudo elements (:before & :after), apply the background images to those, then use opacity.

Then modern me interrupted this old guy....
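For context, one long-standing workaround (not necessarily the solution the post goes on to describe) exploits the fact that a CSS gradient counts as a background image: a uniform semi-transparent gradient layered above an image acts as a veil, approximating reduced opacity for every layer beneath it. A minimal sketch, with illustrative image URLs:

<style>
  div {
    background-image:
      url(image-two.jpg),      /* top layer, unaffected by the veil */
      linear-gradient(         /* uniform 50% white veil */
        rgb(255 255 255 / 0.5),
        rgb(255 255 255 / 0.5)
      ),
      url(image-one.jpg);      /* reads as roughly 50% opacity on a white page */
    background-position: bottom left, center, top right;
  }
</style>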
a week ago


More from Jim Nielsen’s Blog

You’re Only As Strong As Your Weakest Point

In April 1945, as US soldiers overtook Merkers, Germany, stories began to surface to Army officials of stolen Nazi riches stored in the local salt mine. Eventually, the Americans found the mine and began exploring it, ending up at a vaulted door. Here’s the story, as told by Greg Bradsher:

the Americans found the main vault. It was blocked by a brick wall three feet thick… In the center of the wall was a large bank-type steel safe door, complete with combination lock and timing mechanism, with a heavy steel door set in the middle of it. Attempts to open the steel vault door were unsuccessful.

Word went up the chain of command about the find and suspected gold hoard behind the vaulted steel door. The order came back down to open it up. But what to do about this vault door that, up until now, nobody could open? One engineer looked at the problem and said: forget the door, blow the wall!

One of the engineers who inspected the brick wall surrounding the vault door thought it could be blasted through with little effort. Therefore the engineers, using a half-stick of dynamite, blasted an entrance through the masonry wall.

To me, this is a fascinating commentary on security specifically [insert meme of gate with no fence], but also a commentary on problem-solving generally. When you have a seemingly intractable problem — there’s an impenetrable door we can’t open — rather than focus on the door itself, you take a step back and realize the door may be impenetrable, but the wall enclosing it is not. A little dynamite and problem solved.

Lessons:

- You’re only as strong as your weakest point.
- Don’t miss the forest for the trees.
- A little dynamite goes a long way.

Footnote to this story, in case you’re wondering what they found inside:

[a partial] inventory indicated that there were 8,198 bars of gold bullion; 55 boxes of crated gold bullion; hundreds of bags of gold items; over 1,300 bags of gold Reichsmarks, British gold pounds, and French gold francs; 711 bags of American twenty-dollar gold pieces; hundreds of bags of gold and silver coins; hundreds of bags of foreign currency; 9 bags of valuable coins; 2,380 bags and 1,300 boxes of Reichsmarks (2.76 billion Reichsmarks); 20 silver bars; 40 bags containing silver bars; 63 boxes and 55 bags of silver plate; 1 bag containing six platinum bars; and 110 bags from various countries

20 hours ago 2 votes
Be Mindful of What You Make Easy

Carson Gross has a post about vendoring which brought back memories of how I used to build websites in ye olden days, back in the dark times before npm.

“Vendoring” is where you copy dependency source files directly into your project (usually in a folder called /vendor) and then link to them — all of this being a manual process. For example:

1. Find jquery.js or reset.css somewhere on the web (usually from the project’s respective website; in my case I always pulled jQuery from the big download button on jQuery.com and my CSS reset from Eric Meyer’s website).
2. Copy that file into /vendor, e.g. /vendor/jquery@1.2.3.js
3. Pull it in where you need it, e.g. <script src="/vendor/jquery@1.2.3.js"> (a minimal sketch of the resulting page appears at the end of this post).

And don’t get me started on copying your transitive dependencies (the dependencies of your dependencies). That gets complicated when you’re vendoring by hand!

Nowadays package managers and bundlers automate all of this away: npm i what you want, import x from 'pkg', and you’re on your way! It’s so easy (easy to get all that complexity).

But, as the HTMX article points out, a strength can also be a weakness. It’s not all net gain (emphasis mine):

Because dealing with large numbers of dependencies is difficult, vendoring encourages a culture of independence. You get more of what you make easy, and if you make dependencies easy, you get more of them.

I like that — you get more of what you make easy. Therefore: be mindful of what you make easy!

As Carson points out, dependency management tools foster a culture of dependence — imagine that! I know I keep lamenting Deno’s move away from HTTP imports by default, but I think this puts a finger on why I’m sad: it perpetuates the status quo, whereas a stance on aligning imports with how the browser works would not perpetuate this dependence on dependency resolution tooling. There’s no package manager or dependency resolution algorithm for the browser.

I was thinking about all of this the other day when I came across this thread of thoughts from Dave Rupert on Mastodon. Dave says:

I prefer to use and make simpler, less complex solutions that intentionally do less. But everyone just wants the thing they have now but faster and crammed with more features (which are juxtaposed)

He continues with this lesson from his startup Luro:

One of my biggest takeaways from Luro is that it’s hard-to-impossible to sell a process change. People will bolt stuff onto their existing workflow (ecosystem) all day long, but it takes a religious conversion to change process.

Which really helped me put words to my feelings regarding HTTP imports in Deno:

i'm less sad about the technical nature of the feature, and more about what it represented as a potential “religious revival” in the area of dependency management in JS. package.json & dep management has become such an ecosystem unto itself that it seems only a Great Reawakening™️ will change it.

I don’t have a punchy point to end this article. It’s just me working through my feelings.
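To make the old workflow concrete, here is a minimal sketch of what a hand-vendored page might have looked like (file names and versions are illustrative):

<!doctype html>
<html>
  <head>
    <!-- CSS reset copied by hand into /vendor -->
    <link rel="stylesheet" href="/vendor/reset@2.0.css">
  </head>
  <body>
    <!-- library downloaded from jquery.com and copied into /vendor -->
    <script src="/vendor/jquery@1.2.3.js"></script>
    <!-- your own code, which can now assume $ is defined -->
    <script src="/app.js"></script>
  </body>
</html>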

4 days ago 6 votes
Some Love For Interoperable Apps

I like to try different apps. What makes trying different apps incredible is a layer of interoperability: standardized protocols, data formats, etc. When I can bring my data from one app to another, that’s cool. Cool apps are interoperable. They work with my data, rather than own it.

For example, the other day I was itching to try a new RSS reader. I’ve used Reeder (Classic) for ages. But every once in a while I like to try something different. This is super easy because lots of clients support syncing to Feedbin.

It’s worth pointing out: Feedbin has their own app. But they don’t force you to use it. You’re free to use any RSS client you want that supports their service.

So all I have to do is download a new RSS client, log in to Feedbin, and boom! An experience of my data in a totally different app from a totally different developer. That’s amazing! And you know how long it took? Seconds. No data export. No account migration.

Doing stuff with my blog is similar. If I want to try a different authoring experience, all my posts are just plain-text markdown files on disk. Any app that can operate on plain-text files is a potential new app to try.

No shade on them, but this is why I personally don’t use apps like Bear. Don’t get me wrong, I love so much about Bear. But it wants to keep your data in its own proprietary, note-keeping safe. You can’t just open your Bear notes in another app. Importing is required. And there’s a big difference between apps that import (i.e. copy) your existing data and ones that interoperably work with it.

Email can also be this way. I use Gmail, which supports IMAP, so I can open my mail in lots of different clients — and believe me, I've tried a lot of email clients over the years:

- Sparrow
- Mailbox
- Spark
- Outlook
- Gmail (desktop web, mobile app)
- Apple Mail
- Airmail

This is why I don’t use un-standardized email features: I know I can’t take them with me. It’s also why I haven’t tried email providers like HEY! Because they don't support open protocols, I can’t swap clients when I want. My email is a dataset, and I want to be able to access it with any existing or future client. I don't want to be stuck with the same application for interfacing with my data forever (and have it tied to a company).

I love this way of digital life, where you can easily explore different experiences of your data. I wish it were relevant to other areas of my digital life. I wish I could:

- Download a different app to view/experience my photos
- Download a different app to view/experience my music
- Download a different app to view/read my digital books

In a world like this, applications would compete on an experience of my data, rather than on owning it. The world’s a big place. The entire world doesn’t need one singular photo experience to Rule Them All. Let’s have experiences that are as unique and varied as us.

6 days ago 10 votes
Ductility in Software

I learned a new word: ductile. Do you know it? I’m particularly interested in its usage in a physics/engineering setting when talking about materials. Here’s an answer on Quora to “What is ductile?”:

Ductility is the ability of a material to be permanently deformed without cracking. In engineering we talk about elastic deformation as deformation which is reversed once the load is removed, for example a spring; conversely, plastic deformation isn’t reversed. Ductility is the amount (usually expressed as a ratio) of plastic deformation that a material can undergo before it cracks or tears.

I read that and started thinking about the “ductility” of languages like HTML, CSS, and JS. Specifically: how much deformation can they undergo before breaking?

HTML, for example, is famously forgiving. It can be stretched, drawn out, or deformed in a variety of ways without breaking. Take this short snippet of HTML:

<!doctype html>
<title>My site</title>
<p>Hello world!
<p>Nice to meet you

That is valid HTML. But it can also be “drawn out” for readability without losing any of its meaning. It’ll still render the same in the browser:

<!doctype html>
<html>
  <head>
    <title>My site</title>
  </head>
  <body>
    <p>Hello world!</p>
    <p>Nice to meet you.</p>
  </body>
</html>

This capacity for the language to undergo a change in form without breaking is its “ductility”. HTML has some pull before it breaks.

JS, on the other hand, doesn’t have the same kind of ductility. Forget a quotation mark and boom! Stretch it a little and it breaks:

console.log('works!');  // -> works!
console.log('works!);   // Uncaught SyntaxError: Invalid or unexpected token

I suppose some would say “this isn’t ductility, this is merely forgiving error-parsing”. Ok, sure. Nevertheless, I’m writing here because I learned this new word that has a very practical meaning in another discipline, where it describes the ability of materials to be stretched and deformed without breaking.

I think we need more of that in software. More resiliency. More malleability. More ductility — prioritized in our materials (tools, languages, paradigms) so we can talk more about avoiding sudden failure.
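CSS, which the post mentions alongside HTML and JS, leans toward the ductile end as well: when the parser hits a declaration it doesn’t understand, it drops just that declaration and keeps going, so one typo doesn’t break the whole rule. A small sketch (the typo is deliberate):

<style>
  p {
    colr: red;         /* unknown property: silently dropped */
    font-size: 1.5em;  /* still applies */
  }
</style>

That error-recovery behavior is specified, which is part of why older browsers can parse stylesheets that use newer features they don’t support.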

a week ago 89 votes

More in programming

SSEBITDA -- A steady-state profit metric for SaaS companies

How can a business that is "spending to grow" determine whether it's truly profitable underneath all that "revenue acceleration"? Here's a way.

15 hours ago 1 vote
An unplanned upgrade to Linux Mint 22.1 Cinnamon

I spoke too soon when I said I was enjoying the stability of Linux. I have been using Linux Mint Cinnamon on a System76 Merkaat PC with no major issues since July of 2024. But a few days ago a routine system update of Mint 22 dumped me to the text console. A fresh install of Mint 22.1, the latest release, brought the system back online. I had backups, and the mishap luckily turned out to be just an annoyance that consumed several hours of unplanned maintenance.

It all started when the Mint Update Manager listed several packages for update, including the System76 driver and tools. Oddly, the Update Manager also marked several packages for removal, including core ones such as Xorg, Celluloid, and more. The smooth running of Mint made my paranoid side fall asleep and I applied the recommended changes. At the next reboot the graphics session didn't start and landed me at the text console with no clue what happened.

I don't use Timeshift for system snapshots as I prefer a fresh install and a restore of data backups if the system breaks. Therefore, to fix an issue apparently related to Mint 22, the obvious route was to install Mint 22.1. Besides, this was the right occasion to try the new release.

On my Raspberry Pi 400 I ran dd to flash a bootable USB stick with Mint 22.1. I had no alternatives as GNOME Disks didn't work. The Merkaat failed to boot off the stick, possibly because I messed up the arguments of dd. I still had a USB stick with Mint 22 around, and I used it to freshly install that release on the Merkaat. Then I immediately ran the upgrade to Mint 22.1, which completed successfully, unlike a prior upgrade attempt.

Next, I tried to install the System76 driver with sudo apt install system76-driver but got a package-not-found error. At that point I had already added the System76 package repository to the APT sources, and refreshing the Mint Update Manager yielded this error:

Could not refresh the list of updates
Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages

Aside from the errors, the system was up and running on the Merkaat, so with Nemo I reflashed the Mint 22.1 stick. This time the PC did boot off the stick and let me successfully install Mint 22.1. Restoring the data completed the system recovery. I left out the System76 driver as it's the primary suspect, possibly due to package conflicts. Mint detects and supports all the hardware of the Merkaat anyway, so it's only prudent to skip the package for the time being.

Besides improvements under the hood, Mint 22.1 features a redesigned default Cinnamon theme. No major changes; I feel at home.

The main takeaway of this adventure is that it's better to have a bootable USB stick ready with the latest Mint release, even if I don't plan to upgrade immediately. Another takeaway is that the Pi 400 makes for a viable backup computer that can support my major tasks, should it take longer to recover the Merkaat. However, using the device for making bootable media is problematic, as little flashing software is available and some of it is unreliable. Finally, over decades of Linux experience I have honed my emergency installation skills so much that I can now confidently address most broken-system situations.

#linux #pi400

yesterday 2 votes
On Writing, Social Media, and Finding the Line of Embarrassment

Brace yourself, because I’m about to utter a sequence of words I never thought I would hear myself say: I really miss posting on Twitter. I really, really miss it. It’s funny, because Twitter was never not a trash fire. There was never a time when it felt like we were living through some kind […]

yesterday 5 votes
Why did Stripe build Sorbet? (~2017).

Many hypergrowth companies of the 2010s battled increasing complexity in their codebases by decomposing their monoliths. Stripe was somewhat of an exception, largely delaying decomposition until it had grown beyond three thousand engineers and had accumulated a decade of development in its core Ruby monolith. Even now, significant portions of their product are maintained in the monolithic repository, and it’s safe to say this was only possible because of Sorbet’s impact.

Sorbet is a custom static type checker for Ruby that was initially designed and implemented by Stripe engineers on their Product Infrastructure team. Stripe’s Product Infrastructure had similar goals to other companies’ Developer Experience or Developer Productivity teams, but it focused on improving productivity through changes in the internal architecture of the codebase itself, rather than relying solely on external tooling or processes. This strategy explains why Stripe chose to delay decomposition for so long, and how the Product Infrastructure team invested in developer productivity to deal with the challenges of a large Ruby codebase managed by a large software engineering team with low average tenure caused by rapid hiring.

Before wrapping this introduction, I want to explicitly acknowledge that this strategy was spearheaded by Stripe’s Product Infrastructure team, not by me. Although I ultimately became responsible for that team, I can’t take credit for this strategy’s thinking. Rather, I was initially skeptical, preferring an incremental migration to an existing strongly-typed programming language: either Java for library coverage or Golang for Stripe’s existing familiarity. Despite my initial doubts, the Sorbet project eventually won me over with its indisputable results.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

The Product Infrastructure team is investing in Stripe’s developer experience by:

- Every six months, Product Infrastructure will select its three highest-priority areas to focus on, and invest a significant majority of its energy into those. We will provide minimal support for other areas.
- We commit to refreshing our priorities every half after running the developer productivity survey. We will further share our results, and priorities, in each Quarterly Business Review.

Our three highest-priority areas for this half are:

1. Add static typing to the highest-value portions of our Ruby codebase, such that we can run the type checker locally and on the test machines to identify errors more quickly.
2. Support selective test execution such that engineers can quickly determine and run the most appropriate tests on their machine rather than delaying until tests run on the build server.
3. Instrument test failures such that we have better data to prioritize future efforts.

Static typing is not a typical solution to developer productivity, so it requires some explanation when we say this is our highest-priority area for investment. Doubly so when we acknowledge that it will take us 12 to 24 months of much of the team’s time to get our type checker to an effective place.

Our type checker, which we plan to name Sorbet, will allow us to continue developing within our existing Ruby codebase. It will further allow our product engineers to remain focused on developing new functionality rather than migrating existing functionality to new services or programming languages. Instead, our Product Infrastructure team will centrally absorb both the development of the type checker and the initial rollout to our codebase.

It’s possible for Product Infrastructure to take on both, despite its fixed size. We’ll rely on a hybrid approach of deep dives to add typing to particularly complex areas, and scripts that rewrite our code’s Abstract Syntax Trees (ASTs) for less complex portions. In the relatively unlikely event that this approach fails, the cost to Stripe is of a small, known size: approximately six months of half the Product Infrastructure team, which is what we anticipate requiring to determine whether this approach is viable.

Based on our knowledge of Facebook’s Hack project, we believe we can build a static type checker that runs locally and significantly faster than our test suite. It’s hard to make a precise guess now, but we think less than 30 seconds to type-check our entire codebase, despite it being quite large. This will allow for a highly productive local development experience, even if we are not able to speed up local testing. Even if we do speed up local testing, typing would help us eliminate one of the categories of errors that testing has been unable to eliminate: passing unexpected types across code paths that have been tested for expected scenarios but not for entirely unexpected ones.

Once the type checker has been validated, we can incrementally prioritize adding typing to the highest-value places across the codebase. We do not need to wholly type our codebase before we can start getting meaningful value.

In support of these static typing efforts, we will advocate for product engineers at Stripe to begin development using the Command Query Responsibility Segregation (CQRS) design pattern, which we believe will provide high-leverage interfaces for incrementally introducing static typing into our codebase.

Selective test execution will allow developers to quickly run appropriate tests locally. This will allow engineers to stay in a tight local development loop, speeding up development of high-quality code. Given that our codebase is not currently statically typed, inferring which tests to run is rather challenging. With our very high test coverage, and the fact that all tests will still be run before deployment to the production environment, we believe that we can rely on statistically inferring which tests are likely to fail when a given file is modified.

Instrumenting test failures is our third, and lowest-priority, project for this half. Our focus this half is purely on annotating errors for which we have high conviction about their source, whether infrastructure or test issues.

For escalations and issues, reach out in the #product-infra channel.

Diagnose

In 2017, Stripe is a company of about 1,000 people, including 400 software engineers. We aim to grow our organization by about 70% year-over-year to meet increasing demand for a broader product portfolio and to scale our existing products and infrastructure to accommodate user growth. As our production stability has improved over the past several years, we have now turned our focus towards improving developer productivity. Our current diagnosis of our developer productivity is:

- We primarily fund developer productivity for our Ruby-authoring software engineers via our Product Infrastructure team. The Ruby-focused portion of that team has about ten engineers on it today, and is unlikely to grow significantly in the future. (If we do expand, we are likely to staff non-Ruby ecosystems like Scala or Golang.)
- We have two primary mechanisms for understanding our engineers’ developer experience. The first is standard productivity metrics around deploy time, deploy stability, test coverage, test time, test flakiness, and so on. The second is a twice-annual developer productivity survey.
- Looking at our productivity metrics, our test coverage remains extremely high, with coverage above 99% of lines, but tests are quite slow to run locally. They run quickly in our infrastructure because they are multiplexed across a large fleet of test runners.
- Tests have become slow enough to run locally that an increasing number of developers run an overly narrow subset of tests, or skip running tests entirely until after pushing their changes. They instead rely on our test servers to run against their pull request’s branch, which works well enough, but significantly slows down developer iteration time because the merge, build, and test cycle takes twenty to thirty minutes to complete. By the time their build-test cycle completes, they’ve lost their focus and may take several hours to return to addressing the results.
- There is significant disagreement about whether tests are becoming flakier due to test infrastructure issues, or due to quality issues in the tests themselves. At this point, there is no trustworthy dataset that allows us to attribute between those two causes.
- Feedback from the twice-annual developer productivity survey supports the above diagnosis, and adds some additional nuance. Most concerning, although long-tenured Stripe engineers find themselves highly productive in our codebase, we increasingly hear in the survey that newly hired engineers with long tenures at other companies find themselves unproductive in our codebase. Specifically, they find it very difficult to determine how to safely make changes in our codebase.
- Our product codebase is entirely implemented in a single Ruby monolith. There is one narrow exception, a Golang service handling payment tokenization, which we consider out of scope for two reasons. First, it is kept intentionally narrow in order to absorb our SOC1 compliance obligations. Second, developers in that environment have not raised concerns about their productivity.
- Our data infrastructure is implemented in Scala. While these developers have concerns (primarily slow build times), they manage their build and deployment infrastructure independently, and the group remains relatively small.
- Ruby is not a highly performant programming language, but we’ve found it sufficiently efficient for our needs. Similarly, other languages are more cost-efficient from a compute-resources perspective, but a significant majority of our spend is on real-time storage and batch computation. For these reasons alone, we would not consider replacing Ruby as our core programming language.
- Our Product Infrastructure team is about ten engineers, supporting about 250 product engineers. We anticipate this group growing modestly over time, but certainly sublinearly to the overall growth of product engineers.
- Developers working in Golang and Scala routinely ask for more centralized support, but it’s challenging to prioritize those requests as we’re forced to consider the return on improving the experience for 240 product engineers working in Ruby vs. 10 in Golang or 40 data engineers in Scala. If we introduced more programming languages, this prioritization problem would become increasingly difficult, and we are already failing to support additional languages.
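For readers who haven’t seen Sorbet, here is the flavor of annotation it layers onto plain Ruby. This is an illustrative example based on Sorbet’s documented sig syntax, not code from Stripe’s codebase:

# typed: true
require 'sorbet-runtime'

class Charge
  extend T::Sig

  # sig declares parameter and return types; `srb tc` checks them
  # statically, and sorbet-runtime verifies them when the code runs.
  sig { params(amount_cents: Integer, currency: String).returns(String) }
  def description(amount_cents, currency)
    format('%s %.2f', currency, amount_cents / 100.0)
  end
end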

3 days ago 8 votes