As the final part of our move out of the cloud, we are working on moving 10 petabytes of data out of AWS Simple Storage Service (S3). After exploring different alternatives, we decided to go with the Pure Storage FlashBlade solution. We store different kinds of information on S3, from the attachments customers upload to Basecamp to the Prometheus long-term metrics. On top of that, Pure’s system also provides filesystem-based capabilities, enabling other relevant uses, such as database backup storage. This makes the system a top priority for observability. Although the system has great reliability, out-of-the-box internal alerting, and autonomous ticket creation, it would also be good to have our own metrics and alerts to facilitate problem-solving and ensure any disruptions are prioritized and handled. For more context on our current Prometheus setup, see how we use Prometheus at 37signals.

Pure OpenMetrics exporter

Pure maintains two OpenMetrics exporters, pure-fb-openmetrics-exporter and...
a month ago


More from 37signals Dev

Announcing Hotwire Spark: live reloading for Rails applications

Today, we are releasing Hotwire Spark, a live-reloading system for Rails applications. Reloading the browser automatically on source changes is a problem that has been well-solved for a long time. Here, we wanted to put an accent on smoothness. If the reload operation is very noticeable, the feedback loop is similar to just reloading the page yourself. But if it’s smooth enough, so that you only perceive the intended change, the feedback loop becomes terrific.

To use it, just install the gem in development:

```ruby
group :development do
  gem "hotwire-spark"
end
```

It will update the current page on three types of change: HTML content, CSS, and Stimulus controllers. How do we achieve that desired smoothness with each?

- For HTML content, it morphs the <body> of the page into the new <body>. It also disconnects and reconnects all the Stimulus controllers on the page.
- For CSS, it reloads the changed stylesheet.
- For Stimulus controllers, it fetches the changed controller, replaces its module in Stimulus, and reconnects all the controllers.

We designed Hotwire Spark to shine with the #nobuild approach we use and recommend. Serving CSS and JS assets as standalone files is ideal when you want to fetch and update only what has changed. There is no need to use bundling or any tooling. Hot Module Replacement for Stimulus controllers without any frontend build tool is pretty cool!

2024 has been a very special year for Rails. We’re thrilled to share Hotwire Spark before the year wraps up. Wishing you all a joyful holiday season and a fantastic start to 2025.

a month ago 58 votes
A vanilla Rails stack is plenty

If you have the luxury of starting a new Rails app today, here’s our recommendation: go vanilla. Fight hard before adding Ruby dependencies. Keep the Gemfile that Rails generates as close to the original one as possible. Fight even harder before adding JavaScript dependencies. You don’t need React or any other front-end frameworks, nor a JSON API to feed those. Hotwire is a fantastic, pragmatic, and ridiculously productive technology for the front end. Use it.

The same goes for mobile apps: use Hotwire Native. With a hybrid approach you can combine the very same web app you have built with a wonderful native experience right where you want it. The productivity compared to a purely native approach is night and day.

Embrace and celebrate rendering things on the server. It has become cool again. ERB templates and view helpers will take you as far as you need, and they are a fantastic common ground for designers to collaborate hands-on with the code.

#nobuild is the simplest way to go; don’t close this door with your choices. Instead of bundling JavaScript, use import maps. Don’t bundle CSS, just use modern standard CSS goodies and serve them all with Propshaft. If you have 100 JavaScript files and 100 stylesheets, serve 200 standalone requests multiplexed over HTTP/2. You will be delighted.

Don’t add Redis to the mix. Use solid_cache for caching, solid_queue for jobs, and solid_cable for Action Cable. They will all work on your beloved relational database and are battle-tested.

Test your apps with Minitest. Use fixtures and build a realistic set of those as you cook your app. Make your app a PWA, which is fully supported by Rails 8. This may be more than enough before caring about mobile apps at all. Deploy your app with Kamal.

If you want heuristics, your importmap.rb should import Turbo, Stimulus, your app controllers, and little else. Your Gemfile should be almost identical to the one that Rails generates.

I know it sounds radical, but going vanilla is a radical stance in this convoluted world of endless choices. This is the Rails 8 stack we have chosen for our new apps at 37signals. We are a tiny crew, so we care a lot about productivity. And we sell products, not stacks, so we care a lot about delighting our users. This is our Omakase stack because it offers the optimal balance for achieving both.

Vanilla means your app stays nimble. Fewer dependencies mean fewer future headaches. You get a tight integration out of the box, so you can focus on building things. It also maximizes the odds of having smoother future upgrades. Vanilla requires determination, though, because new dependencies always look shiny and shinier. It’s always clear what you get when you add them, but never what you lose in the long term.

It is certainly up to you. Rails is a wonderful big tent. These are our opinions. If it resonates, choose vanilla! Guess what our advice is for architecting your app internals?
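As a rough illustration of that importmap.rb heuristic, here is a minimal sketch of what a vanilla pin set tends to look like, assuming the importmap-rails defaults; the exact pins and paths vary by version and app:

```ruby
# config/importmap.rb -- a vanilla set of pins: Turbo, Stimulus, and your own controllers
pin "application", preload: true
pin "@hotwired/turbo-rails", to: "turbo.min.js", preload: true
pin "@hotwired/stimulus", to: "stimulus.min.js", preload: true
pin "@hotwired/stimulus-loading", to: "stimulus-loading.js", preload: true

# Pick up every Stimulus controller in app/javascript/controllers, no bundler involved
pin_all_from "app/javascript/controllers", under: "controllers"
```

Each pin maps to a standalone file served as-is, which is what makes the "no bundling, 200 multiplexed requests" approach above work.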

a month ago 31 votes
Mission Control — Jobs 1.0 released

We’ve just released Mission Control — Jobs v1.0.0, the dashboard and set of extensions to operate background jobs that we introduced earlier this year. This new version is the result of 92 pull requests, 67 issues, and the help of 35 different contributors. It includes many bugfixes and improvements, such as:

- Support for Solid Queue’s recurring tasks, including running them on-demand.
- Support for API-only apps.
- Allowing immediate dispatching of scheduled and blocked jobs.
- Backtrace cleaning for failed jobs’ backtraces.
- A safer default for authentication, with Basic HTTP authentication enabled and initially closed unless configured or explicitly disabled.

Recurring tasks in Mission Control — Jobs, with a subset of the tasks we run in production

We use Mission Control — Jobs daily to manage jobs in HEY and Basecamp 4, with both Solid Queue and Resque, and it’s the dashboard we recommend if you’re using Solid Queue for your jobs. Our plan is to upstream some of the extensions we’ve made to Active Job and continue improving it until it’s ready to be included by default in Rails together with Solid Queue. If you want to help us with that, are interested in learning more, or have any issues or questions, head over to the repo on GitHub. We hope you like it!
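If you want to take it for a spin, setup is roughly the following. This is a minimal sketch based on the project’s documented install; the /jobs mount path is only an example:

```ruby
# Gemfile
gem "mission_control-jobs"

# config/routes.rb
Rails.application.routes.draw do
  # Expose the dashboard engine; pick whatever path suits your app
  mount MissionControl::Jobs::Engine, at: "/jobs"
end
```

With the v1.0 defaults described above, you will also need to configure Basic HTTP authentication credentials (or explicitly disable it) before the dashboard is reachable; the repo’s README covers the exact settings.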

2 months ago 26 votes
All about QA

Quality Assurance (QA) is a team of two at 37signals: Michael, who created the department 12 years ago, and Gabriel, who joined the team in 2022. Together, we have a hand in projects across all of our products, from kickoff to release. Our goal is to help designers and programmers ship their best work. Our process revolves around manual testing and has been tuned to match the rhythm of Shape Up. Here, we’ll share the ins and outs of our methods and touch on a few of the tools we use along the way.

Kicking things off

At 37signals we run projects in six-week cycles informed by Shape Up. At the beginning of each cycle, Brian, our Head of Product, posts a kick-off message detailing what we plan to ship. This usually consists of new features and improvements for Basecamp, HEY, or a ONCE product. Each gets its own Basecamp project, and each project includes a pitch. The pitch lays out the problem or need, a proposed solution, and the “appetite” or time budget. The kick-off is also QA’s cue to dive in! We offer early feedback, ask questions or illuminate things that aren’t covered, and give extra consideration to flows and interactions that may require extra work on the accessibility front. We then step back and let the teams focus, design, and build things for a while.

The right time to test

We wait until the feature or product reaches a usable state to start testing in earnest. This helps us keep a fresh perspective, unencumbered by the knowledge of compromises made along the way. We use a Card Table within our QA Team project to track what’s ready for testing or in progress. Teams add a card to the Ready for QA (Triage) section when the time is right. The table is kept simple with just two columns, In Progress and Pending Input, for when we’ve completed our test run and the team is addressing the feedback. Depending on the breadth and complexity of the work being tested, this flow can take anywhere from a few hours to a few days.

A holistic approach to QA

Once we take on a request, we explore and scrutinize the feature much like an (extremely zealous!) customer would. We want to help teams ship the most polished features they can. We look out for bugs of all kinds: performance issues, visual glitches, unexpected changes, and so on, but perhaps most importantly, we offer feedback on the usability of the feature. We guide our feedback with questions like: Is this feature easy to discover and access? Is it in the right spot? Does it interact in an unexpected way with another part of the app? How does the change play with our mobile apps? Does this solve the problem in a way that customers will find obvious?

Critically, what we raise with this type of QA testing are suggestions, not must-haves. The designer and programmer working on the feature make the call on what to address and what to shelve. We document this feedback in a dedicated Card Table within the feature’s Basecamp project. The designer and programmer will then review the cards we’ve added to Triage and direct them to the In Progress and Not Now columns as appropriate. From In Progress, cards are moved to a column called QA to confirm fixed, then finally to Done.

More focus, less bloat

Our overall approach to testing is guided exploration. We don’t maintain an exhaustive collection of test cases to dogmatically review each time we test a feature.
We’ve tried using dedicated test plan tools and comprehensive spreadsheets of test cases upon test cases; the time spent certifying every little thing was considerable, yet it didn’t translate into finding more issues. Worse, it left us with less time to spend sitting with the feature in a more subjective way. We’ve landed on a more pragmatic approach.

We’ve boiled down the test plan to a concise list of considerations that live in Basecamp as to-do list templates, one for each product. Instead of a multitude of test cases, each template contains around 100 items. These act as pointers, touching on overall concepts (like commenting, dark mode, email notifications), specific areas of the app, and platform-specific considerations. We reflect on the work presented and how it ties into these areas. Some examples from recent projects have been:

- Did we update exporting to consider this new addition of time tracking entries?
- Are email notifications properly reflecting the new Steps feature we added to Card Table?
- How about print styles, do they look good?

QA Considerations for Basecamp 4

We create a to-do list via the template directly in the project we are working on, and use that as our reference for reviewing the work. We also ask the feature team if there are areas that deserve extra attention. Being flexible and discerning about how much time and coverage we use in our testing allows us to cover anywhere from 4 to 12+ projects in a very short span of time.

We love working as a team of two and being able to riff on how to approach testing a feature. Sometimes, we divide and conquer; other times, both of us review the work. Fresh eyes provide a good chance of catching something new. Gabriel has a better knack for Android conventions and Michael for iOS, but we actively avoid over-specializing. Keeping up with multiple platforms requires extra effort, but it’s worth it when considering the consistency of the experience across all of them.

Accessibility

As part of our review, we test the accessibility of the changes. We use a combination of keyboard navigation and at least one screen reader on each platform to vet how well the feature will work for someone who relies on accessible technology. We also use browser extensions like axe and Accessibility Insights for Web to validate the semantics of the code, and Headings Map to make sure heading levels are sequential. At times, we bring in customers who use a screen reader full-time to help us validate whether everything makes sense and learn where things can improve. Our new colleague, Bruno, is a full-time user of the NVDA screen reader and can offer this sort of direct feedback on how a feature or flow works for him.

Explorations in tooling

A recent addition to our toolkit is a visual regression tool built on BackstopJS with the help of our colleague Lewis. Whenever we review work, we can run the suite of tests, mostly a list of URLs for various pages around the app, first pointed at production and then against a beta environment where the new feature is staged. Any visual differences are flagged in a report, which we review and write up as bug report cards for the team if needed.

Walking the walk

Part of what enables us to keep our process minimal is that we use our products daily, both on the job and in our everyday lives. This affords us an intimate understanding of how they work and how they can be improved. We’re passionate about what we do. We find ourselves fortunate to work with each other and with so many talented colleagues.
We hope this post has given you some helpful insight into the way we do things! If you have questions or if there are topics you’d like us to cover in future posts, drop us an email at qa@37signals.com.

3 months ago 20 votes

More in programming

Making inventory spreadsheets for my LEGO sets

One of my recent home organisation projects has been sorting out my LEGO collection. I have a bunch of sets which are mixed together in one messy box, and I’m trying to separate bricks back into distinct sets. My collection is nowhere near large enough to be worth sorting by individual parts, and I hope that breaking down by set will make it all easier to manage and store.

I’ve been creating spreadsheets to track the parts in each set, and count them out as I find them. I briefly hinted at this in my post about looking at images in spreadsheets, where I included a screenshot of one of my inventory spreadsheets. These spreadsheets have been invaluable – I can see exactly what pieces I need, and what pieces I’m missing. Without them, I wouldn’t even attempt this. I’m about to pause this cleanup and work on some other things, but first I wanted to write some notes on how I’m creating these spreadsheets – I’ll probably want them again in the future.

Getting a list of parts in a set

There are various ways to get a list of parts in a LEGO set:

- Newer LEGO sets include a list of parts at the back of the printed instructions
- You can get a list from LEGO-owned websites like LEGO.com or BrickLink
- There are community-maintained databases on sites like Rebrickable

I decided to use the community-maintained lists from Rebrickable – they seem very accurate in my experience, and you can download daily snapshots of their entire catalog database. The latter is very powerful, because now I can load the database into my tools of choice, and slice and dice the data in fun and interesting ways. Downloading their entire database is less than 15MB – which is to say, two-thirds the size of just opening the LEGO.com homepage. Bargain!

Putting Rebrickable data in a SQLite database

My tool of choice is SQLite. I slept on this for years, but I’ve come to realise just how powerful and useful it can be. A big part of what made me realise the power of SQLite is seeing Simon Willison’s work with datasette, and some of the cool things he’s built on top of SQLite. Simon also publishes a command-line tool sqlite-utils for manipulating SQLite databases, and that’s what I’ve been using to create my spreadsheets.

Here’s my process:

Create a Python virtual environment, and install sqlite-utils:

```
python3 -m venv .venv
source .venv/bin/activate
pip install sqlite-utils
```

At time of writing, the latest version of sqlite-utils is 3.38.

Download the Rebrickable database tables I care about, uncompress them, and load them into a SQLite database:

```
curl -O 'https://cdn.rebrickable.com/media/downloads/colors.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/parts.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/inventories.csv.gz'
curl -O 'https://cdn.rebrickable.com/media/downloads/inventory_parts.csv.gz'

gunzip colors.csv.gz
gunzip parts.csv.gz
gunzip inventories.csv.gz
gunzip inventory_parts.csv.gz

sqlite-utils insert lego_parts.db colors colors.csv --csv
sqlite-utils insert lego_parts.db parts parts.csv --csv
sqlite-utils insert lego_parts.db inventories inventories.csv --csv
sqlite-utils insert lego_parts.db inventory_parts inventory_parts.csv --csv
```

The inventory_parts table describes how many of each part there are in a set. “Set S contains 10 of part P in colour C.” The parts and colors tables contain detailed information about each part and color. The inventories table matches the official LEGO set numbers to the inventory IDs in Rebrickable’s database. “The set sold by LEGO as 6616-1 has ID 4159 in the inventory table.”

Run a SQLite query that gets information from the different tables to tell me about all the parts in a particular set:

```sql
SELECT ip.img_url, ip.quantity, ip.is_spare, c.name as color, p.name, ip.part_num
FROM inventory_parts ip
JOIN inventories i ON ip.inventory_id = i.id
JOIN parts p ON ip.part_num = p.part_num
JOIN colors c ON ip.color_id = c.id
WHERE i.set_num = '6616-1';
```

Or use sqlite-utils to export the query results as a spreadsheet:

```
sqlite-utils lego_parts.db "
  SELECT ip.img_url, ip.quantity, ip.is_spare, c.name as color, p.name, ip.part_num
  FROM inventory_parts ip
  JOIN inventories i ON ip.inventory_id = i.id
  JOIN parts p ON ip.part_num = p.part_num
  JOIN colors c ON ip.color_id = c.id
  WHERE i.set_num = '6616-1';" --csv > 6616-1.csv
```

Here are the first few lines of that CSV:

```
img_url,quantity,is_spare,color,name,part_num
https://cdn.rebrickable.com/media/parts/photos/9999/23064-9999-e6da02af-9e23-44cd-a475-16f30db9c527.jpg,1,False,[No Color/Any Color],Sticker Sheet for Set 6616-1,23064
https://cdn.rebrickable.com/media/parts/elements/4523412.jpg,2,False,White,Flag 2 x 2 Square [Thin Clips] with Chequered Print,2335pr0019
https://cdn.rebrickable.com/media/parts/photos/15/2335px13-15-33ae3ea3-9921-45fc-b7f0-0cd40203f749.jpg,2,False,White,Flag 2 x 2 Square [Thin Clips] with Octan Logo Print,2335pr0024
https://cdn.rebrickable.com/media/parts/elements/4141999.jpg,4,False,Green,Tile Special 1 x 2 Grille with Bottom Groove,2412b
https://cdn.rebrickable.com/media/parts/elements/4125254.jpg,4,False,Orange,Tile Special 1 x 2 Grille with Bottom Groove,2412b
```

Import that spreadsheet into Google Sheets, then add a couple of columns. I add a column image where every cell has the formula =IMAGE(…) that references the image URL. This gives me an inline image, so I know what that brick looks like. I add a new column quantity I have where every cell starts at 0, which is where I’ll count bricks as I find them. I add a new column remaining to find, which counts the difference between quantity and quantity I have. Then I can highlight or filter for rows where this is non-zero, so I can see the bricks I still need to find.

If you’re interested, here’s an example spreadsheet that has a clean inventory. It took me a while to refine the SQL query, but now I have it, I can create a new spreadsheet in less than a minute.

One of the things I’ve realised over the last year or so is how powerful “get the data into SQLite” can be – it opens the door to all sorts of interesting queries and questions, with a relatively small amount of code required. I’m sure I could write a custom script just for this task, but it wouldn’t be as concise or flexible.

[If the formatting of this post looks odd in your feed reader, visit the original article]

20 hours ago 3 votes
Giving Junior Engineers Control Of A Six Trillion Dollar System Is Nuts

For some purpose, the DOGE people are burrowing their way into all US Federal Systems. Their complete control over the Treasury Department is entirely insane. Unless you intend to destroy everything, making arbitrary changes to complex computer systems will result in destruction, even if that was not your intention. No

5 hours ago 3 votes
Stanislav Petrov

A lieutenant colonel in the Soviet Air Defense Forces prevented the end of human civilization on September 26th, 1983. His name was Stanislav Petrov.

Protocol dictated that the Soviet Union would retaliate against any nuclear strikes sent by the United States. This was a policy of mutually assured destruction, a doctrine that compels a horrifying logical conclusion. The second- and third-stage effects of this type of exchange would be even more catastrophic. Allies for each side would likely be pulled into the conflict. The resulting nuclear winter was projected to lead to 2 billion deaths due to starvation. This is to say nothing about those who would have been unfortunate enough to have survived.

Petrov’s job was to monitor Oko, the computerized warning system built to centralize Soviet satellite communications. Around midnight, he received a report that one of the satellites had detected the infrared signature of a single launch of a United States ICBM. While Petrov was deciding what to do about this report, the system detected four more incoming missile launches. He had minutes to make a choice about what to do. It is impossible to imagine the amount of pressure placed on him at this moment.

Source: Stanislav Petrov, Soviet officer credited with averting nuclear war, dies at 77 by Schwartzreport.

Petrov lived in a world of deterministic systems. The technologies that powered these warning systems have outputs that are guaranteed, provided the proper inputs are supplied. However, deterministic does not mean infallible. The only reason you are alive and reading this is because Petrov understood that the systems he observed were capable of error. He was suspicious of what he was seeing reported, and chose not to escalate a retaliatory strike. There were two factors guiding his decision:

- A surprise attack would most likely have used hundreds of missiles, and not just five.
- The allegedly foolproof Oko system was new and prone to errors.

An error in a deterministic system can still lead to expected outputs being generated. For the Oko system, infrared reflections of the sun shining off of the tops of clouds created a false positive that was interpreted as detection of a nuclear launch event.

Source: US-K History by Kosmonavtika.

The concept of erroneous truth is a deep thing to internalize, as computerized systems are presented as omniscient, indefective, and absolute. Petrov’s rewards for this action were reprimands, reassignment, and denial of promotion. This was likely for embarrassing his superiors by the politically inconvenient shedding of light on issues with the Oko system. A coerced early retirement caused a nervous breakdown, likely from having to grapple with the weight of his decision. It was only in the 1990s, after the fall of the Soviet Union, that his actions were discovered internationally and celebrated.

Stanislav Petrov was given the recognition that he deserved, including being honored by the United Nations, awarded the Dresden Peace Prize, featured in a documentary, and being able to visit a Minuteman Missile silo in the United States.

On January 31st, 2025, OpenAI struck a deal with the United States government to use its AI product for nuclear weapon security. It is unclear how this technology will be used, where, and to what extent. It is also unclear how OpenAI’s systems function, as they are black box technologies. What is known is that LLM-generated responses, the product OpenAI sells, are non-deterministic.
Non-deterministic systems don’t have guaranteed outputs from their inputs. In addition, LLM-based technology hallucinates: it invents content with no self-knowledge that it is a falsehood. Non-deterministic systems that are computerized are also perceived as authoritative, the same as their deterministic peers. It is not a question of how the output is generated; it is one of the output being perceived to come from a machine.

These are terrifying things to know. Consider not only the systems this technology is being applied to, but also the thoughtless speed of their integration. Then consider how we’ve historically been conditioned and rewarded to interpret the output of these systems, and then how we perceive and treat skeptics. We don’t live in a purely deterministic world of technology anymore.

Stanislav Petrov died on September 18th, 2017, before this change occurred. I would be incredibly curious to know his thoughts about our current reality, as well as the increasing abdication of human monitoring of automated systems in favor of notably biased, supposed “AI solutions.” In acknowledging Petrov’s skepticism in a time of mania and political instability, we acknowledge a quote from former U.S. Secretary of Defense William J. Perry’s memoir about the incident:

[Oko’s false positives] illustrates the immense danger of placing our fate in the hands of automated systems that are susceptible to failure and human beings who are fallible.

yesterday 7 votes
01 · A spreadsheet for exploring scenarios

In our *Ambsheets* project, we are exploring a small extension to the familiar spreadsheet: **what if a single spreadsheet cell could hold multiple values at once**?

yesterday 2 votes
Recently

I am not going to repeat the news. But man, things are really, really bad and getting worse in America. It’s all so unendingly stupid and evil. The tech industry is being horrible, too. Wishing strength to the people who are much more exposed to the chaos than I am.

Reading

A Confederacy of Dunces was such a perfect novel. It was pure escapism, over-the-top comedy, and such an unusual artifact, one that was sadly only appreciated posthumously.

Very earnestly I believe that despite greater access to power and resources, the box labeled “socially acceptable ways to be a man” is much smaller than the box labeled “socially acceptable ways to be a woman.” This article on the distinction between patriarchy and men was an interesting read. With the whole… politics out there, it’s easy to go off the rails with any discussion about men and women and whether either have it easy or hard. The same author wrote this good article about declining male enrollment in college. I think both are worth a read. Whenever I read this kind of article, I’m reminded of how limited and mostly fortunate my own experience is. There’s a big difference, I think, in how vigorously you have to perform your gender in some red state where everyone owns a pickup truck, versus a major city where the roles are a little more fluid. Plus, I’ve been extremely fortunate to have a lot of friends and genuine open conversations about feelings with other men. I wish that was the norm!

On Having a Maximum Wealth was right up my alley. I’m reading another one of the new-French-economist books right now, and am still fascinated by the prospect of wealth taxes. My friend David has started a local newsletter for Richmond, Virginia, and written a good piece about public surveillance. Construction Physics is consistently great, and their investigation of why skyscrapers are all glass boxes is no exception.

Watching

David Lynch was so great. We watched his film Lost Highway a few days after he passed, and it was even better than I had remembered it. Norm Macdonald’s extremely long jokes on late-night talk shows have been getting me through the days.

Listening

This song by The Hard Quartet – a supergroup of Emmett Kelly, Stephen Malkmus (Pavement), Matt Sweeney and Jim White. It’s such a loving, tender bit of nonsense, very golden-age Pavement. They also have this nice chill song. I came across this SML album via Hearing Things, which has been highlighting a lot of good music.

Small Medium Large by SML

It’s a pretty good time for these independent high-quality art websites. Colossal has done the same for the art world and highlights good new art. I really want to make it out to see the Nick Cave (not the musician) art show while it’s in New York.

yesterday 1 vote