A friend of mine is very into keyboards and, after seeing his keyboards at work and admiring his Ergodox many times, I took the plunge and built my own. 152 solder joints later, I have this beauty. It took a few days to get used to it and in the process, I found a bug in layer switching, which I contributed a fix for. While fixing it, I came across some very cool debugging features - the keyboard has a console which gives debug info and is very easy to connect to:

    screen /dev/tty.usbmodem1A12144

This console gives a lot of debugging information, and it turns out that it can show every key press! Neat, until you realize that any user of the system can also see every single key press. A non-privileged test user on my Mac was able to read every key press I made while typing as my normal user. This is a huge breach of security. I routinely create accounts on my desktop for other people (my fiancée, my friends who are learning to code), so this is simply an unacceptable risk. This is...
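If you want to check whether your own keyboard's console is exposed this way, here is a minimal sketch (not from the original post) that inspects the device node's permission bits. The device path is the one quoted above and will differ on your machine; treat the whole thing as illustrative rather than a definitive test.

    use std::fs;
    use std::os::unix::fs::PermissionsExt;

    fn main() -> std::io::Result<()> {
        // Device path from the post; yours will differ.
        let path = "/dev/tty.usbmodem1A12144";
        let mode = fs::metadata(path)?.permissions().mode();

        // The low three bits are the "other" (non-owner, non-group) permissions.
        // If the read bit is set, any local account can open the console and
        // watch the debug output, including key presses.
        if mode & 0o004 != 0 {
            println!("{path} is world-readable (mode {:o})", mode & 0o777);
        } else {
            println!("{path} is not readable by other users (mode {:o})", mode & 0o777);
        }
        Ok(())
    }

If the "other" read bit is set, any local account can attach to the console with screen just as described above.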


More from ntietz.com blog

Licensing can be joyful (and legally dubious)

Software licenses are a reflection of our values. How you choose to license a piece of software says a lot about what you want to achieve with it. Do you want to reach the maximum number of users? Do you want to ensure future versions remain free and open source? Do you want to preserve your opportunity to make a profit? They can also be used to reflect other values. For example, there is the infamous JSON license written by Doug Crockford. It's essentially the MIT license with this additional clause: The Software shall be used for Good, not Evil. This has caused quite some consternation. It is a legally dubious addition, because "Good" and "Evil" are not defined here. Many people disagree on what these are. This is really not enforceable, and it's going to make many corporate lawyers wary of using software under this license [1]. I don't think that enforcing this clause was the point. The point is more about signaling values and just having fun with it. I don't think anyone seriously believes that this license will be enforceable, or that it will truly curb the amount of evil in the world. But will it start conversations?

* * *

There are a lot of other small, playful licenses. None of these are going to change the world, but they inject a little joy and play into an area of software that is usually serious and somber. When I had to pick a license for my exceptional language (Hurl), I went down that serious spiral at first. What license will give the project the best adoption, or help it achieve its goals? What are its goals? Well, one of its goals was definitely to be funny. Another was to make sure that people can use the software for educational purposes. If I make a language as a joke, I do want people to be able to learn from it and do their own related projects! This is where we enter one of the sheerly joyous parts of licensing: the ability to apply multiple licenses to software so that the user can decide which one to use the software under. You see a lot of Rust projects dual-licensed under Apache and MIT licenses, because the core language is dual-licensed for very good reasons. We can apply a similar rationale to Hurl's license, and we end up with triple-licensing. It's currently available under three licenses, each for a separate purpose. Licensing it under the AGPL enables users to create derivative works for their own purposes (probably to learn) as long as it remains licensed the same way. And then we have a commercial license option, which is there so that if you want to commercialize it, I can get a cut of that [2]. The final option is to license it under the Gay Agenda License, which was created originally for this project. This option basically requires you to not be a bigot, and then you can use the software under the MIT license terms. It seems fair to me. When I got through that license slide at SIGBOVIK 2024, I knew that the mission was accomplished: bigotry was defeated (well, the audience laughed).

* * *

The Gay Agenda License is a modified MIT license which requires you to do a few things: you must provide attribution (in the typical MIT manner), you have to stand up for LGBTQ rights, and you have to say "be gay, do crime" during use of the software. Oh, and if you support restricting LGBTQ rights, then you lose that license right away. No bigots allowed here. This is all, of course, written in more complete sentences in the license itself. The best thing is that you can use this license today! There is a website for the Gay Agenda License, the very fitting gal.gay [3]. The website has all the features you'd expect, like showing the license text, using appropriate flags, and copying the text to the clipboard for ease of putting this in your project.

Frequently Anticipated Questions

Inspired by Hannah's brilliant post's FAQ, here are answers to the questions you must have by now. Is this enforceable? We don't really know until it's tested in court, but if that happens, everyone has already lost. So, who knows, I hope we don't find out! Isn't it somewhat ambiguous? What defines what is standing up for LGBTQ rights? Ah, yes, good catch. This is a big problem for this totally serious license. Definitely a problem. Can I use it in my project? Yeah! Let me know if you do so I can add it to a showcase on the website. But keep in mind, this is a joke (sorry, totally serious) license, so only use it on silly things (I mean, highly serious commercial projects)! How do I get a commercial license of Hurl? This is supposed to be about the Gay Agenda License, not Hurl. But since you asked, contact me for pricing. When exactly do I have to say "be gay, do crime"? To be safe, it's probably best that you mutter it continuously while using all software. You never know when it's going to be licensed under the Gay Agenda License, so repeat the mantra to ensure compliance.

Thank you to Anya for the feedback on a draft of this post. Thank you to Chris for building the first version of gal.gay for me.

[1] Not for nothing, because most of those corporations would probably be using the software for evil. So, mission accomplished, I guess?

[2] For some reason, no one has contacted me for this option yet. I suspect widespread theft of my software, since surely people want to use Hurl. They're not using the third option, since we still see rampant transphobia.

[3] This is my most expensive domain yet at $130 for the first year. I'm hoping that the price doesn't rise dramatically over time, but I'm not optimistic, since it's a three-letter domain. That said, anything short of extortion will likely be worth keeping for the wonderful email addresses I get out of this, being a gay gal myself. It's easier to spell on the phone than this domain is, anyway.

Asheville

Asheville is in crisis right now. They're without drinking water, faucets run dry, and it's difficult to flush toilets. As of yesterday, the hospital has water (via tanker trucks), but 80% of the public water system is still without running water. Things are really bad. Lots of infrastructure has been washed away. Even when water is back, there has been tremendous damage done that will take a long time to recover from and rebuild. * * * Here's the only national news story my friend from Asheville had seen which covered the water situation specifically. It's hard for me to understand why this is not covered more broadly. And my heart aches for those in and around the Asheville area. As I'm far away, I can't do a lot to help. But I can donate money, which my friend said is the only donation that would help right now if you aren't in the area. She specifically pointed me to these two ways to donate: Beloved Asheville: a respected community organization in Asheville, this is a great place to send money to help. (If you're closer to that area, it does look like they have specific things they're asking for as well, but this feels like an "if you can help this way, you'd already know" situation.) Mutual Aid Disaster Relief: there's a local Asheville chapter which is doing work to help. Also an organization to support for broad disaster recovery in general. I've donated money. I hope you will, too, for this and for the many other crises that affect us. Let's help each other.

Teleportation

teleportation does exist from OR to recovery room I left something behind not quite a part of myself —unwelcome guests poisoning me from the inside no longer welcome

Rust needs a web framework for lazy developers

I like to make silly things, and I also like to put in minimal effort for those silly things. I also like to make things in Rust, mostly for the web, and this is where we run into a problem. See, if I want to make something for the web, I could use Django but I don't want that. I mean, Django is for building serious businesses, not for building silly non-commercial things! But using Rust, we have to do a lot more work than if we build it with Django or friends. See, so far, there's no equivalent, and the Rust community leans heavily into the "wire it up yourself" approach. As Are We Web Yet? says, "[...] you generally have to wire everything up yourself. Expect to put in a little bit of extra set up work to get started." This undersells it, though. It's more than a little bit of extra work to get started! I know because I made a list of things to do to get started. Rust needs something that does bundle this up for you, so that we can serve all web developers. Having it would make it a lot easier to make the case to use Rust. The benefits are there: you get a wonderful type system, wonderful performance, and build times that give you back those coffee breaks you used to get while your code compiled. What do we need? There is a big pile of stuff that nearly every web app needs, no matter if it's big or small. Here's a rough list of what seems pretty necessary to me: Routing/handlers: this is pretty obvious, but we have to be able to get an incoming request to some handler for it. Additionally, this routing needs to handle path parameters, ideally with type information, and we'll give bonus points for query parameters, forms, etc. Templates: we'll need to generate, you know, HTML (and sometimes other content, like JSON or, if you're in the bad times still, XML). Usually I want these to have basic logic, like conditionals, match/switch, and loops. Static file serving: we'll need to serve some assets, like CSS files. This can be done separately, but having it as part of the same web server is extremely handy for both local development and for small-time deployments that won't handle much traffic. Logins: You almost always need some way to log in, since apps are usually multi-user or deployed on a public network. This is just annoying to wire up every time! It should be customizable and something you can opt out of, but it should be trivial to have logins from the start. Permissions: You also need this for systems that have multiple users, since people will have different data they're allowed to access or different roles in the system. Permissions can be complicated, but you can make something relatively simple that follows the check(user, object, action) pattern and get really far with it (there's a small sketch of this pattern below). Database interface: You're probably going to have to store data for your app, so you want a way to do that. Something that's ORM-like is often nice, but something light is fine. Whatever you do here isn't the only way to interact with the database, but it'll be used for things like logins, permissions, and admin tools, so it's going to be a fundamental piece. Admin tooling: This is arguably a quality-of-life issue, not a necessity, except that every time you set up your application in a local environment or in production you're going to have to bootstrap it with at least one user or some data. And you'll have to do admin actions sometimes! So I think having this built-in for at least some of the common actions is a necessity for a seamless experience. WebSockets: I use WebSockets in a lot of my projects.
They just let you do really fun things with pushing data out to connected users in a more real-time fashion! Hot reloading: This is a huge one for developer experience, because you want to have the ability to see changes really quickly. When code or a template change, you need to see that reflected as soon as humanly possible (or as soon as the Rust compiler allows). Then we have a pile of things that are quality-of-life improvements, and I think are necessary for long-term projects but might not be as necessary upfront, so users are less annoyed at implementing it themselves because the cost is spread out. Background tasks: There needs to be a story for these! You're going to have features that have to happen on a schedule, and having a consistent way to do that is a big benefit and makes development easier. Monitoring/observability: Only the smallest, least-critical systems should skip this. It's really important to have and it will make your life so much easier when you have it in that moment that you desperately need it. Caching: There are a lot of ways to do this, and all of them make things more complicated and maybe faster? So this is nice to have a story for, but users can also handle it themselves. Emails and other notifications: It's neat to be able to have password resets and things built-in, and this is probably a necessity if you're going to have logins, so you can have password resets. But other than that feature, it feels like it won't get used that much and isn't a big deal to add in when you need it. Deployment tooling: Some consistent way to deploy somewhere is really nice, even if it's just an autogenerated Dockerfile that you can use with a host of choice. CSS/JS bundling: In the time it is, we use JS and CSS everywhere, so you probably want a web tool to be aware of them so they can be included seamlessly. But does it really have to be integrated in? Probably not... So those are the things I'd target in a framework if I were building one! I might be doing that... The existing ecosystem There's quite a bit out there already for building web things in Rust. None of them quite hit what I want, which is intentional on their part: none of them aspire to be what I'm looking for here. I love what exists, and I think we're sorely missing what I want here (I don't think I'm alone). Web frameworks There are really two main groups of web frameworks/libraries right now: the minimalist ones, and the single-page app ones. The minimalist ones are reminiscent of Flask, Sinatra, and other small web frameworks. These include the excellent actix-web and axum, as well as myriad others. There are so many of these, and they all bring a nice flavor to web development by leveraging Rust's type system! But they don't give you much besides handlers; none of the extra functionality we want in a full for-lazy-developers framework. Then there are the single-page app frameworks. These fill a niche where you can build things with Rust on the backend and frontend, using WebAssembly for the frontend rendering. These tend to be less mature, but good examples include Dioxus, Leptos, and Yew. I used Yew to build a digital vigil last year, and it was enjoyable but I'm not sure I'd want to do it in a "real" production setting. Each of these is excellent for what it is—but what it is requires a lot of wiring up still. Most of my projects would work well with the minimalist frameworks, but those require so much wiring up! So it ends up being a chore to set that up each time that I want to do something. 
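To make the check(user, object, action) idea from the must-have list above concrete, here is a minimal, dependency-free sketch of that pattern. The types, roles, and rules are invented for the example; they are not from newt or from any existing crate.

    // A minimal sketch of the check(user, object, action) permission pattern.
    // All names here are illustrative, not a real framework's API.
    #[derive(PartialEq)]
    enum Role { Admin, Member, Guest }

    struct User { id: u64, role: Role }
    struct Note { owner_id: u64 }

    enum Action { Read, Edit, Delete }

    // The whole policy lives in one small, predictable function.
    fn check(user: &User, object: &Note, action: &Action) -> bool {
        match action {
            Action::Read => true,                       // anyone may read
            Action::Edit => user.id == object.owner_id  // owners may edit
                || user.role == Role::Admin,
            Action::Delete => user.role == Role::Admin, // only admins delete
        }
    }

    fn main() {
        let alice = User { id: 1, role: Role::Member };
        let note = Note { owner_id: 1 };
        assert!(check(&alice, &note, &Action::Edit));
        assert!(!check(&alice, &note, &Action::Delete));
        println!("permission checks pass");
    }

The appeal of this shape is that the entire policy sits behind one function, so a framework could call it consistently from request handlers, admin tooling, and background tasks alike.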
Piles of libraries! The rest of the ecosystem is piles of libraries. There are lots of template libraries! There are some libraries for logins, and for permissions. There are WebSocket libraries! Often you'll find some projects and examples which integrate a couple of the things you're using, but you won't find something that integrates all the pieces you're using. I've run into some of the examples being out of date, which is to be expected in a fast-moving ecosystem. The pile of libraries leaves a lot of friction, though. It makes getting started, the "just wiring it up" part, very difficult and often an exercise in researching how things work, to understand them in depth enough to do the integration. What I've done before The way I've handled this before is basically to pick a base framework (typically actix-web or axum) and then search out all the pieces I want on top of it. Then I'd wire them up, either all at the beginning or as I need them. There are starter templates that could help me avoid some of this pain. They can definitely help you skip some of the initial pain, but you still get all the maintenance burden. You have to make sure your libraries stay up to date, even when there are breaking changes. And you will drift from the template, so it's not really feasible to merge changes to it into your project. For the projects I'm working on, this means that instead of keeping one framework up to date, I have to keep n bespoke frameworks up to date across all my projects! Eep! I'd much rather have a single web framework that handles it all, with clean upgrade instructions between versions. There will be breaking changes sometimes, but this way they can be documented instead of coming about due to changes in the interactions between two components which don't even know they're going to be integrated together. Imagining the future I want In an ideal world, there would be a framework for Rust that gives me all the features I listed above. And it would also come with excellent documentation, changelogs, thoughtful versioning and handling of breaking changes, and maybe even a great community. All the things I love about Django, could we have those for a Rust web framework so that we can reap the benefits of Rust without having to go needlessly slowly? This doesn't exist right now, and I'm not sure if anyone else is working on it. All paths seem to lead me toward "whoops I guess I'm building a web framework." I hope someone else builds one, too, so we can have multiple options. To be honest, "web framework" sounds way too grandiose for what I'm doing, which is simply wiring things together in an opinionated way, using (mostly) existing building blocks1. Instead of calling it a framework, I'm thinking of it as a web toolkit: a bundle of tools tastefully chosen and arranged to make the artisan highly effective. My toolkit is called nicole's web toolkit, or newt. It's available in a public repository, but it's really not usable (the latest changes aren't even pushed yet). It's not even usable for me yet—this isn't a launch post, more shipping my design doc (and hoping someone will do my work for me so I don't have to finish newt :D). The goal for newt is to be able to create a new small web app and start on the actual project in minutes instead of days, bypassing the entire process of wiring things up. I think the list of must-haves and quality-of-life features above will be a start, but by no means everything we need. 
I'm not ready to accept contributions, but I hope to be there at some point. I think that Rust really needs this, and the whole ecosystem will benefit from it. A healthy ecosystem will have multiple such toolkits, and I hope to see others develop as well. * * * If you want to follow along with mine, though, feel free to subscribe to my RSS feed or newsletter, or follow me on Mastodon. I'll try to let people know in all those places when the toolkit is ready for people to try out. Or I'll do a post-mortem on it, if it ends up that I don't get far with it! Either way, this will be fun. 1 I do plan to build a few pieces from scratch for this, as the need arises. Some things will be easier that way, or fit more cohesively. Can't I have a little greenfield, as a treat?

What I tell people new to on-call

The first time I went on call as a software engineer, it was exciting—and ultimately traumatic. Since then, I've had on-call experiences at multiple other jobs and have grown to really appreciate it as part of the role. As I've progressed through my career, I've gotten to help establish on-call processes and run some related trainings. Here is some of what I wish I'd known when I started my first on-call shift, and what I try to tell each engineer before theirs. Heroism isn't your job, triage is. It's natural to feel a lot of pressure with on-call responsibilities. You have a production application that real people need to use! When that pager goes off, you want to go in and fix the problem yourself. That's the job, right? But it's not. It's not your job to fix every issue by yourself. It is your job to see that issues get addressed. The difference can be subtle, but important. When you get that page, your job is to assess what's going on. A few questions I like to ask are: What systems are affected? How badly are they impacted? Does this affect users? With answers to those questions, you can figure out what a good course of action is. For simple things, you might just fix it yourself! If it's a big outage, you're putting on your incident commander hat and paging other engineers to help out. And if it's a false alarm, then you're putting in a fix for the noisy alert! (You're going to fix it, not just ignore that, right?) Just remember not to be a hero. You don't need to fix it alone, you just need to figure out what's going on and get a plan. Call for backup. Related to the previous one, you aren't going it alone. Your main job in holding the pager is to assess and make sure things get addressed. Sometimes you can do that alone, but often you can't! Don't be afraid to call for backup. People want to be helpful to their teammates, and they want that support available to them, too. And it's better to wake me up a little too much than to let me sleep through times when I was truly needed. If people are getting woken up a lot, the issue isn't calling for backup, it's that you're having too many true emergencies. It's best to figure out that you need backup early, like 10 minutes in, to limit the damage of the incident. The faster you figure out other people are needed, the faster you can get the situation under control. Communicate a lot. In any incident, adrenaline runs and people are stressed out. The key to good incident response is communication in spite of the adrenaline. Communicating under pressure is a skill, and it's one you can learn. Here are a few of the times and ways of communicating that I think are critical: When you get on and respond to an alert, say that you're there and that you're assessing the situation. Once you've assessed it, post an update; if the assessment is taking a while, post updates every 15 minutes while you do so (and call for backup). After the situation is being handled, update key stakeholders at least every 30 minutes for the first few hours, and then after that slow down to hourly. You are also going to have to communicate within the response team! There might be a dedicated incident channel or one for each incident. Either way, try to over-communicate about what you're working on and what you've learned. Keep detailed notes, with timestamps. When you're debugging weird production stuff at 3am, that's the time you really need to externalize your memory and thought processes into a notes document.
This helps you keep track of what you're doing, so you know which experiments you've run and which things you've ruled out as possibilities or determined as contributing factors. It also helps when someone else comes up to speed! That person will be able to use your notes to figure out what has happened, instead of you having to repeat it every time someone gets on. Plus, the notes doc won't forget things, but you will. You will also need these notes later to do a post-mortem. What was tried, what was found, and how it was fixed are all crucial for the discussion. Timestamps are critical also for understanding the timeline of the incident and the response! This document should be in a shared place, since people will use it when they join the response. It doesn't need to be shared outside of the engineering organization, though, and likely should not be. It may contain details that lead to more questions than they answer; sometimes, normal engineering things can seem really scary to external stakeholders! You will learn a lot! When you're on call, you get to see things break in weird and unexpected ways. And you get to see how other people handle those things! Both of these are great ways to learn a lot. You'll also just get exposure to things you're not used to seeing. Some of this will be areas that you don't usually work in, like ops if you're a developer, or application code if you're on the ops side. Some more of it will be business side things for the impact of incidents. And some will be about the psychology of humans, as you see the logs of a user clicking a button fifteen hundred times (get that person an esports sponsorship, geez). My time on call has led to a lot of my professional growth as a software engineer. It has dramatically changed how I worked on systems. I don't want to wake up at 3am to fix my bad code, and I don't want it to wake you up, either. Having to respond to pages and fix things will teach you all the ways they can break, so you'll write more resilient software that doesn't break. And it will teach you a lot about the structure of your engineering team, good or bad, in how it's structured and who's responding to which things. Learn by shadowing No one is born skilled at handling production alerts. You gain these skills by doing, so get out there and do it—but first, watch someone else do it. No matter how much experience you have writing code (or responding to incidents), you'll learn a lot by watching a skilled coworker handle incoming issues. Before you're the primary for an on-call shift, you should shadow someone for theirs. This will let you see how they handle things and what the general vibe is. This isn't easy to do! It means that they'll have to make sure to loop you in even when blood is pumping, so you may have to remind them periodically. You'll probably miss out on some things, but you'll see a lot, too. Some things can (and should) wait for Monday morning When we get paged, it usually feels like a crisis. If not to us, it sure does to the person who's clicking that button in frustration, generating a ton of errors, and somehow causing my pager to go off. But not all alerts are created equal. If you assess something and figure out that it's only affecting one or two customers in something that's not time sensitive, and it's currently 4am on a Saturday? Let people know your assessment (and how to reach you if you're wrong, which you could be) and go back to bed. 
Real critical incidents have to be fixed right away, but some things really need to wait. You want to let them go until later for two reasons. First is just the quality of the fix. You're going to fix things more completely if you're rested when you're doing so! Second, and more important, is your health. It's wrong to sacrifice your health (by being up at 4am fixing things) for something non-critical. Don't sacrifice your health Many of us have had bad on-call experiences. I sure have. One regret is that I didn't quit that on-call experience sooner. I don't even necessarily mean quitting the job, but pushing back on it. If I'd stood up for myself and said "hey, we have five engineers, it should be more than just me on call," and held firm, maybe I'd have gotten that! Or maybe I'd have gotten a new job. What I wouldn't have gotten is the knowledge that you can develop a rash from being too stressed. If you're in a bad on-call situation, please try to get out of it! And if you can't get out of it, try to be kind to yourself and protect yourself however you can (you deserve better). Be methodical and reproduce before you fix Along with taking great notes, you should make sure that you test hypotheses. What could be causing this issue? And before that, what even is the problem? And how do we make it happen? Write down your answers to these! Then go ahead and try to reproduce the issue. After reproducing it, you can try to go through your hypotheses and test them out to see what's actually contributing to the issue. This way, you can bisect problem spaces instead of just eliminating one thing at a time. And since you know how to reproduce the issue now, you can be confident that you do have a fix at the end of it all! Have fun Above all, the thing I want people new to on-call to do? Just have fun. I know this might sound odd, because being on call is a big job responsibility! But I really do think it can be fun. There's a certain kind of joy in going through the on-call response together. And there's a fun exhilaration to it all. And the joy of fixing things and really being the competent engineer who handled it with grace under pressure. Try to make some jokes (at an appropriate moment!) and remember that whatever happens, it's going to be okay. Probably.


More in programming

Serving the country

In 1940, President Roosevelt tapped William S. Knudsen to run the government's production of military equipment. Knudsen had spent a pivotal decade at Ford during the mass-production revolution, and was president of General Motors when he was drafted as a civilian into service as a three-star general. Not bad for a Dane, born just ten minutes by bike from where I'm writing this in Copenhagen! Knudsen's leadership raised the productive capacity of the US war machine by 100x in areas like plane production, where it went from producing 3,000 planes in 1939 to over 300,000 by 1945. He was quoted on his achievement: "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible". Knudsen wasn't an elected politician. He wasn't even a military man. But Roosevelt saw that this remarkable Dane had the skills needed to reform a puny war effort into one capable of winning the Second World War. Do you see where I'm going with this? Elon Musk is a modern-day William S. Knudsen. Only even more accomplished in efficiency management, factory optimization, and first-order systems thinking. No, America isn't in a hot war with the Axis powers, but for the sake of the West, it damn well better be prepared for one in the future. Or better still, be so formidable that no other country or alliance would even think to start one. And this requires a strong, confident, and sound state with its affairs in order. If you look at the government budget alone, this is direly not so. The US was knocking on a two-trillion-dollar budget deficit in 2024! Adding to a towering debt that's now north of 36 trillion. A burden that's already consuming $881 billion in yearly interest payments. More than what's spent on the military or Medicare. Second only to Social Security on the list of line items. Clearly, this is not sustainable. This is the context of DOGE. The program, led by Musk, that's been deputized by Trump to turn the ship around. History doesn't repeat, but it rhymes, and Musk is dropping beats that Knudsen would have surely been tapping his foot to. And just like Knudsen in his time, it's hard to think of any other American entrepreneur more qualified to tackle exactly this two-trillion-dollar problem. It is through The Musk Algorithm that SpaceX lowered the cost of sending a kilo of goods into lower orbit from the US by well over an order of magnitude. And now America's share of worldwide space transit has risen from less than 30% in 2010 to about 85%. Thanks to reusable rockets and chopstick-catching landing towers. Thanks to Musk. Or to take a more earthly example with Twitter. Before Musk took over, Twitter had revenues of $5 billion and earned $682 million. After the takeover, X has managed to earn $1.25 billion on $2.7 billion in revenue. Mostly thanks to the fact that Musk cut 80% of the staff out of the operation, and savaged the cloud costs of running the service. This is not what people expected at the time of the takeover! Not only did many commentators believe that Twitter was going to collapse from the drastic cuts in staff, they also thought that the financing for the deal would implode. Chiefly as a result of advertisers withdrawing from the platform under intense media pressure. But that just didn't happen. Today, the debt used to take over Twitter and turn it into X is trading at 97 cents on the dollar. The business is twice as profitable as it was before, and arguably as influential as ever.
All with just a fifth of the staff required to run it. Whatever you think of Musk and his personal tweets, it's impossible to deny what an insane achievement of efficiency this has been! These are just two examples of Musk's incredible ability to defy the odds and deliver the most unbelievable efficiency gains known to modern business records. And we haven't even talked about taking Tesla from producing 35,000 cars in 2014 to making 1.7 million in 2024. Or turning xAI into a major force in AI by assembling a 100,000 H100 cluster at "superhuman" pace.  Who wouldn't want such a capacity involved in finding the waste, sloth, and squander in the US budget? Well, his political enemies, of course! And I get it. Musk's magic is balanced with mania and even a dash of madness. This is usually the case with truly extraordinary humans. The taller they stand, the longer the shadow. Expecting Musk to do what he does and then also be a "normal, chill dude" is delusional. But even so, I think it's completely fair to be put off by his tendency to fire tweets from the hip, opine on world affairs during all hours of the day, and offer his support to fringe characters in politics, business, and technology. I'd be surprised if even the most ardent Musk super fans don't wince a little every now and then at some of the antics. And yet, I don't have any trouble weighing those antics against the contributions he's made to mankind, and finding an easy and overwhelming balance in favor of his positive achievements. Musk is exactly the kind of formidable player you want on your team when you're down two trillion to nothing, needing a Hail Mary pass for the destiny of America, and eager to see the West win the future. He's a modern-day Knudsen on steroids (or Ketamine?). Let him cook.

The Exodus Curve

The concept of Product-Market Fit (PMF) collapse has gained renewed attention with the rise of large language models (LLMs), as highlighted in a recent Reforge article. The article argues we’re witnessing unprecedented market disruption; in this post, I propose we’re experiencing an acceleration of a familiar pattern rather than a fundamentally new phenomenon. Adoption Curves […]

Unexpected errors in the BagIt area

Last week, James Truitt (@linguistory@code4lib.social) asked a question on Mastodon: “#digipres folks happen to have a handy repo of small invalid bags for testing purposes? I'm trying to automate our ingest process, and want to make sure I'm accounting for as many broken expectations as possible.” The “bags” he’s referring to are BagIt bags. BagIt is an open format developed by the Library of Congress for packaging digital files. Bags include manifests and checksums that describe their contents, and they’re often used by libraries and archives to organise files before transferring them to permanent storage. Although I don’t use BagIt any more, I spent a lot of time working with it when I was a software developer at Wellcome Collection. We used BagIt as the packaging format for files saved to our cloud storage service, and we built a microservice very similar to what James is describing. The “bag verifier” would look for broken bags, and reject them before they were copied to long-term storage. I wrote a lot of bag verifier test cases to confirm that it would spot invalid or broken bags, and that it would give a useful error message when it did. All of the code for Wellcome’s storage service is shared on GitHub under an MIT license, including the bag verifier tests. They’re wrapped in a Scala test framework that might not be the easiest thing to read, so I’m going to describe the test cases in a more human-friendly way. Before diving into specific examples, it’s worth remembering: context is king. BagIt is described by RFC 8493, and you could create invalid bags by doing a line-by-line reading and deliberately ignoring every “MUST” or “SHOULD”, but I wouldn’t recommend this approach. You’d get a long list of test cases, but you’d be overwhelmed by examples, and you might miss specific requirements for your system. The BagIt RFC is written for the most general case, but if you’re actually building a storage service, you’ll have more concrete requirements and context. It’s helpful to look at that context, and how it affects the data you want to store. Who’s creating the bags? How will they name files? Where are you going to store bags? How do bags fit into your wider systems? And so on. Understanding your context will allow you to skip verification steps that you don’t need, and to add verification steps that are important to you. I doubt any two systems implement the exact same set of checks, because every system has different context. Here are examples of potential validation issues drawn from the BagIt specification and my real-world experience. You won’t need to check for everything on this list, and this list isn’t exhaustive – but it should help you think about bag validation in your own context. The Bag Declaration bagit.txt This file declares that this is a BagIt bag, and the version of BagIt you’re using (RFC 8493 §2.1.1). It looks the same in almost every bag, for example: BagIt-Version: 1.0 Tag-File-Character-Encoding: UTF-8 This tightly prescribed format means it can only be invalid in a few ways: What if the bag doesn’t have a bag declaration? It’s a required element of every BagIt bag; it has to be there. What if the bag declaration is the wrong format? It should contain exactly two lines: a version number and a character encoding, in that order. What if the bag declaration has an unexpected version number? If you see a BagIt version that you’ve not seen before, the bag might have a different structure than what you expect.
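As a rough sketch of how those three bag declaration checks might look in code (Rust here purely for illustration; Wellcome’s real verifier is the Scala code linked above), with the list of accepted versions being an assumption of the example:

    use std::fs;
    use std::path::Path;

    // Checks the three conditions above: the declaration exists, it has the
    // expected two lines, and the version is one we know how to handle.
    fn check_bag_declaration(bag_root: &Path) -> Result<(), String> {
        let path = bag_root.join("bagit.txt");
        let text = fs::read_to_string(&path)
            .map_err(|_| "bag has no bagit.txt declaration".to_string())?;

        let lines: Vec<&str> = text.lines().collect();
        if lines.len() != 2 {
            return Err(format!("bagit.txt should have 2 lines, found {}", lines.len()));
        }

        let version = lines[0]
            .strip_prefix("BagIt-Version: ")
            .ok_or("first line is not a BagIt-Version declaration")?;
        if !["0.97", "1.0"].contains(&version) {
            return Err(format!("unexpected BagIt version: {version}"));
        }

        lines[1]
            .strip_prefix("Tag-File-Character-Encoding: ")
            .ok_or("second line is not a Tag-File-Character-Encoding declaration")?;

        Ok(())
    }

    fn main() {
        match check_bag_declaration(Path::new("example-bag")) {
            Ok(()) => println!("bag declaration looks fine"),
            Err(e) => println!("invalid bag: {e}"),
        }
    }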
The Payload Files and Payload Manifest The payload files are the actual content you want to save and preserve. They get saved in the payload directory data/ (RFC 8493 §2.1.2), and there’s a payload manifest manifest-algorithm.txt that lists them, along with their checksums (RFC 8493 §2.1.3). Here’s an example of a payload manifest with MD5 checksums: 37d0b74d5300cf839f706f70590194c3 data/waterfall.jpg This tells us that the bag contains a single file data/waterfall.jpg, and it has the MD5 checksum 37d0…. These checksums can be used to verify that the files have transferred correctly, and haven’t been corrupted in the process. There are lots of ways a payload manifest could be invalid: What if the bag doesn’t have a payload manifest? Every BagIt bag must have at least one Payload Manifest file. What if the payload manifest is the wrong format? These files have a prescribed format – one file per line, with a checksum and file path. What if the payload manifest refers to a file that isn’t in the bag? Either one of the files in the bag has been deleted, or the manifest has an erroneous entry. What if the bag has a file that isn’t listed in the payload manifest? The manifest should be a complete listing of all the payload files in the bag. If the bag has a file which isn’t in the payload manifest, either that file isn’t meant to be there, or the manifest is missing an entry. Checking for unlisted files is how I spotted unwanted .DS_Store and Thumbs.db files. What if the checksum in the payload manifest doesn’t match the checksum of the file? Either the file has been corrupted, or the checksum is incorrect. What if there are payload files outside the data/ directory? All the payload files should be stored in data/. Anything outside that is an error. What if there are duplicate entries in the payload manifest? Every payload file must be listed exactly once in the manifest. This avoids ambiguity – suppose a file is listed twice, with two different checksums. Is the bag valid if one of those checksums is correct? Requiring unique entries avoids this sort of issue. What if the payload directory is empty? This is perfectly acceptable in the BagIt RFC, but it may not be what you want. If you know that you will always be sending bags that contain files, you should flag empty payload directories as an error. What if the payload manifest contains paths outside data/, or relative paths that try to escape the bag? (e.g. ../file.txt) Now we’re into “malicious bag” territory – a bag uploaded by somebody who’s trying to compromise your ingest pipeline. Any such bags should be treated with suspicion and rejected. If you’re concerned about malicious bags, you need a more thorough test suite to catch other shenanigans. We never went this far at Wellcome Collection, because we didn’t ingest bags from arbitrary sources. The bags only came from internal systems, and our verification was mainly about spotting bugs in those systems, not defending against malicious actors. A bag can contain multiple payload manifests – for example, it might contain both MD5 and SHA1 checksums. Every payload manifest must be valid for the overall bag to be valid. Payload filenames There are lots of gotchas around filenames and paths. It’s a complicated problem, and I definitely don’t understand all of it. It’s worth understanding the filename rules of any filesystem where you will be storing bags. For example, Azure Blob Storage has a number of rules around how you can name files, and Amazon S3 has different rules. 
We stored files in both at Wellcome Collection, and so the storage service had to enforce the superset of these rules. I’ve listed some edge cases of filenames you might want to consider, but it’s not a complete list. There are lots of ways that unexpected filenames could cause you issues, but whether you care depends on the source of your bags. If you control the bags and you know you’re not going to include any weird filenames, you can probably skip most of these. We only checked for one of these conditions at Wellcome Collection, because we had a pre-ingest step that normalised filenames. It converted filenames to ASCII, and saved a mapping between original and normalised filename in the bag. However, the normalisation was only designed for one filesystem, and produced filenames with trailing dots that were still disallowed in Azure Blob. What if a filename is too long? Some systems have a maximum path length, and an excessively deep directory structure or long filename could cause issues. What if a filename contains special characters? Spaces, emoji, or special characters (\, :, *, etc.) can cause problems for some tools. You should also think about characters that need to be URL-encoded. What if a filename has trailing spaces or dots? Some filesystems can’t support filenames ending in a dot or a space. What happens if your bag contains such a file, and you try to save it to the filesystem? This caused us issues at Wellcome Collection. We initially stored bags just in Amazon S3, which is happy to take filenames with a trailing dot – then we added backups to Azure Blob, which doesn’t. One of the bags we’d stored in Amazon S3 had a trailing dot in the filename, and caused us headaches when we tried to copy it to Azure. What if a filename contains a mix of path separators? The payload manifest uses a forward slash (/) as a path separator. If you have a filename with an alternative path separator, it might behave differently on different systems. For example, consider the payload file a\b\c. This would be a single file on macOS or Linux, but it would be nested inside two folders on Windows. What if the filenames are a mix of uppercase and lowercase characters? Some filesystems are case-sensitive, others aren’t. This can cause issues when you move bags between systems. For example, suppose a bag contains two different files Macrodata.txt and macrodata.txt. When you save that bag on a case-insensitive filesystem, only one file will be saved. What if the same filename appears twice with different Unicode normalisations? This is similar to filenames which only differ in upper/lowercase. They might be treated as two files on one filesystem, but collapsed into one file on another. The classic example is the word “café”: this can be encoded as caf\xc3\xa9 (UTF-8 encoded é) or cafe\xcc\x81 (e + combining acute accent). What if a filename contains a directory reference? A directory reference is /./ (current directory) or /../ (parent directory). It’s used on both Unix and Windows-like systems, and it’s another case of two filenames that look different but can resolve to the same path. For example: a/b, a/./b and a/subdir/../b all resolve to the same path under these rules. This can cause particular issues if you’re moving between local filesystems and cloud storage. Local filesystems treat filenames as hierarchical paths, where cloud storage like Amazon S3 often treats them as opaque strings.
This can cause issues if you try to copy files from cloud storage to a local system – if you’re not careful, you could lose files in the process. The Tag Manifest tagmanifest-algorithm.txt Similar to the payload manifest, the tag manifest lists the tag files and their checksums. A “tag file” is the BagIt term for any metadata file that isn’t part of the payload (RFC 8493 §2.2.1). Unlike the payload manifest, the tag manifest is optional. A bag without a tag manifest can still be a valid bag. If the tag manifest is present, then many of the ways that a payload manifest can invalidate a bag – malformed contents, unreferenced files, or incorrect checksums – can also apply to tag manifests. There are some additional things to consider: What if a tag manifest lists payload files? The tag manifest lists tag files; the payload manifest lists payload files in the data/ directory. A tag manifest that lists files in the data/ directory is incorrect. What if the bag has a file that isn’t listed in either manifest? Every file in a bag (except the tag manifests) should be listed in either a payload or a tag manifest. A file that appears in neither could mean an unexpected file, or a missing manifest entry. Although the tag manifest is optional in the BagIt spec, at Wellcome Collection we made it a required file. Every bag had to have at least one tag manifest file, or our storage service would refuse to ingest it. The Bag Metadata bag-info.txt This is an optional metadata file that describes the bag and its contents (RFC 8493 §2.2.2). It’s a list of metadata elements, as simple label-value pairs, one per line. Here’s an example of a bag metadata file: Source-Organization: Lumon Industries Organization-Address: 100 Main Street, Kier, PE, 07043 Contact-Name: Harmony Cobel Unlike the manifest files, this is primarily intended for human readers. You can put arbitrary metadata in here, so you can add fields specific to your organisation. Although this file is more flexible, there are still ways it can be invalid: What if the bag metadata is the wrong format? It should have one metadata entry per line, with a label-value pair that’s separated by a colon. What if the Payload-Oxum is incorrect? The Payload-Oxum contains some concise statistics about the payload files: their total size in bytes, and how many there are. For example: Payload-Oxum: 517114.42 This tells us that the bag contains 42 payload files, and their total size is 517,114 bytes. If these stats don’t match the rest of the bag, something is wrong. What if non-repeatable metadata element names are repeated? The BagIt RFC defines a small number of reserved metadata element names which have a standard meaning. Although most metadata element names can be repeated, there are some which can’t, because they can only have one value. In particular: Bagging-Date, Bag-Size, Payload-Oxum and Bag-Group-Identifier. Although the bag metadata file is optional in a general BagIt bag, you may want to add your own rules based on how you use it. For example, at Wellcome Collection, we required all bags to have an External-Identifier value, that matched a specific schema. This allowed us to link bags to records in other databases, and our bag verifier would reject bags that didn’t include it. The Fetch File fetch.txt This is an optional element that allows you to reference files stored elsewhere (RFC 8493 §2.2.3). It tells the person reading the bag that a file hasn’t been included in this copy of the bag; they have to go and fetch it from somewhere else. 
The file is still recorded in the payload manifest (with a checksum you can verify), but you don’t have a complete bag until you’ve downloaded all the files. Here’s an example of a fetch.txt: https://topekastar.com/~daria/article.txt 1841 data/article.txt This tells us that data/article.txt isn’t included in this copy of the bag, but we can download it from https://topekastar.com/~daria/article.txt. (The number 1841 is the size of the file in bytes. It’s optional.) Using fetch.txt allows you to send a bag with “holes”, which saves disk space and network bandwidth, but at a cost – we’re now relying on the remote location to remain available. From a preservation standpoint, this is scary! If topekastar.com goes away, this bag will be broken. I know some people don’t use fetch.txt for precisely this reason. If you do use fetch.txt, here are some things to consider: What if the fetch file is the wrong format? There’s a prescribed format – one file per line, with a URL, optional file size, and file path. What if the fetch file lists a file which isn’t in the payload manifest? The fetch.txt should only tell us that a file is stored elsewhere, and shouldn’t be introducing otherwise unreferenced files. If a file appears in fetch.txt but not the payload manifest, then we can’t verify the remote file because we don’t have a checksum for it. There’s either an erroneous fetch file entry or a missing manifest entry. What if the fetch file points to a file at an unusable URL? The URL is only useful if the person who receives the bag can use it to download the file. If they can’t, the bag might technically be valid, but it’s functionally broken. For example, you might reject URLs that don’t start with http:// or https://. What if the fetch file points to a file with the wrong length? The fetch.txt can optionally specify the size of a file, so you know how much storage you need to download it. If you download the file, the actual size should match the stated size. What if the fetch file points to a file that’s already included in the bag? Now you have two ways to get this file: you can read it from the bag, or from the remote URL. If a file is listed in both fetch.txt and included in the bag, either that file isn’t meant to be in the bag, or the fetch file has an erroneous entry. We used fetch files at Wellcome Collection to implement versioning, and we added extra rules about what remote URLs were allowed. In particular, we didn’t allow fetching a file from just anywhere – you could fetch from our S3 buckets, but not the general Internet. The bag verifier would reject a fetch file entry that pointed elsewhere. These examples illustrate just how many ways a BagIt bag can be invalid, from simple structural issues to complex edge cases. Remember: the key is to understand your specific needs and requirements. By considering your context – who creates your bags, where they’ll be stored, and how they fit into your wider systems – you can build a validation process to catch the issues that matter to you, while avoiding unnecessary complexity. I can give you my ideas, but only you can build your system.

Servers can last a long time

We bought sixty-one servers for the launch of Basecamp 3 back in 2015. Dell R430s and R630s, packing thousands of cores and terabytes of RAM. Enough to fill all the app, job, cache, and database duties we needed. The entire outlay for this fleet was about half a million dollars, and it's only now, almost a decade later, that we're finally retiring the bulk of them for a full hardware refresh. What a bargain! That's over 3,500 days of service from this fleet, at a fully amortized cost of just $142/day. For everything needed to run Basecamp. A software service that has grossed hundreds of millions of dollars in that decade. We've of course had other expenses beyond hardware from operating Basecamp over the past decade. The ops team, the bandwidth, the power, and the cabinet rental across both our data centers. But none the less, owning our own iron has been a fantastically profitable proposition. Millions of dollars saved over renting in the cloud. And we aren't even done deriving value from this venerable fleet! The database servers, Dell R630s w/ Xeon E5-2699 CPUs and 768G of RAM, are getting handed down to some of our heritage apps. They will keep on trucking until they give up the ghost. When we did the public accounting for our cloud exit, it was based on five years of useful life from the hardware. But as this example shows, that's pretty conservative. Most servers can easily power your applications much longer than that. Owning your own servers has easily been one of our most effective cost advantages. Together with running a lean team. And managing our costs remains key to reaping the profitable fruit from the business. The dollar you keep at the end of the year is just as real whether you earn it or save it. So you just might want to run those cloud-exit numbers once more with a longer server lifetime value. It might just tip the equation, and motivate you to become a server owner rather than a renter.

How should we control access to user data?

At some point in a startup’s lifecycle, they decide that they need to be ready to go public in 18 months, and a flurry of IPO-readiness activity kicks off. This strategy focuses on a company working on IPO readiness, which has identified a gap in their internal controls for managing access to their users’ data. It’s a company that wants to meaningfully improve their security posture around user data access, but which has had a number of failed security initiatives over the years. Most of those initiatives have failed because they significantly degraded internal workflows for teams like customer support, such that the initial progress was reverted and subverted over time, to little long-term effect. This strategy represents the Chief Information Security Officer’s (CISO) attempt to acknowledge and overcome those historical challenges while meeting their IPO readiness obligations, and–most importantly–doing right by their users. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts. Reading this document To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore, then Diagnose and so on. Relative to the default structure, this document has been refactored in two ways to improve readability: first, Operation has been folded into Policy; second, Refine has been embedded in Diagnose. More detail on this structure in Making a readable Engineering Strategy document. Policy & Operations Our new policies, and the mechanisms to operate them are: Controls for accessing user data must be significantly stronger prior to our IPO. Senior leadership, legal, compliance and security have decided that we are not comfortable accepting the status quo of our user data access controls as a public company, and must meaningfully improve the quality of resource-level access controls as part of our pre-IPO readiness efforts. Our Security team is accountable for the exact mechanisms and approach to addressing this risk. We will continue to prioritize a hybrid solution to resource-access controls. This has been our approach thus far, and the fastest available option. Directly expose the log of our resource-level accesses to our users. We will build towards a user-accessible log of all company accesses of user data, and ensure we are comfortable explaining each and every access. In addition, it means that each rationale for access must be comprehensible and reasonable from a user perspective. This is important because it aligns our approach with our users’ perspectives. They will be able to evaluate how we access their data, and make decisions about continuing to use our product based on whether they agree with our use. Good security discussions don’t frame decisions as a compromise between security and usability. We will pursue multi-dimensional tradeoffs to simultaneously improve security and efficiency. Whenever we frame a discussion on trading off between security and utility, it’s a sign that we are having the wrong discussion, and that we should rethink our approach. We will prioritize mechanisms that can both automatically authorize and automatically document the rationale for accesses to customer data. 
Measure progress on percentage of customer data access requests justified by a user-comprehensible, automated rationale. This will anchor our approach on simultaneously improving the security of user data and the usability of our colleagues’ internal tools. If we only expand requirements for accessing customer data, this metric won’t show progress, because the new requirements won’t be automated (and consequently are likely to encourage workarounds as teams try to solve problems quickly). Similarly, if we only improve usability, the metric won’t show progress, because we won’t have increased the number of requests covered by automated rationales. As part of this effort, we will create a private channel where the security and compliance team has visibility into all manual rationales for user-data access, and will directly message the manager of any individual who relies on a manual justification for accessing user data.

Expire unused roles to move towards the principle of least privilege. Today we have a number of roles granted in our role-based access control (RBAC) system to users who do not use the granted permissions. To address that issue, we will automatically remove roles from colleagues after 90 days of not using the role’s permissions. Engineers in an active on-call rotation are the exception to this automated permission pruning. (A sketch of this pruning, together with the progress metric above, appears at the end of this section.)

Weekly reviews until we see progress; monthly access reviews in perpetuity. Starting now, there will be a weekly sync between the security engineering team, teams working on customer data access initiatives, and the CISO. This meeting will focus on rapid iteration and problem solving. This is explicitly a forum for ongoing strategy testing, with the CISO serving as the meeting’s sponsor, and their Principal Security Engineer serving as the meeting’s guide. It will continue until we have clarity on the path to 100% coverage of user-comprehensible, automated rationales for access to customer data. Separately, we are also starting a monthly review of sampled accesses to customer data to ensure the proper usage and function of the rationale-creation mechanisms we build. This meeting’s goal is to review access rationales for quality and appropriateness, both by reviewing sampled rationales in the short term, and by identifying more automated mechanisms for flagging high-risk accesses to review in the future.

Exceptions must be granted in writing by the CISO. While our overarching Engineering Strategy states that we follow an advisory architecture process as described in Facilitating Software Architecture, the customer data access policy is an exception and must be explicitly approved, with documentation, by the CISO. Start that process in the #ciso channel.
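Two of the mechanisms above lend themselves to small sketches: the 90-day pruning of unused roles and the coverage metric for automated rationales. The types and field names here are hypothetical stand-ins for whatever the RBAC system and audit logs actually expose, with on-call engineers exempted from pruning per the policy:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    UNUSED_ROLE_TTL = timedelta(days=90)

    @dataclass
    class RoleGrant:
        user_id: str
        role: str
        last_used_at: datetime  # derived from the audit log of permission use

    def roles_to_revoke(grants: list[RoleGrant], on_call: set[str],
                        now: datetime) -> list[RoleGrant]:
        """Grants whose permissions haven't been exercised in 90 days,
        skipping engineers in an active on-call rotation."""
        return [g for g in grants
                if g.user_id not in on_call
                and now - g.last_used_at > UNUSED_ROLE_TTL]

    @dataclass
    class RecordedAccess:
        automated_rationale: bool  # True when the rationale was generated, not typed

    def automated_rationale_coverage(accesses: list[RecordedAccess]) -> float:
        """Progress metric: share of user-data accesses justified by an
        automated, user-comprehensible rationale."""
        if not accesses:
            return 1.0
        return sum(a.automated_rationale for a in accesses) / len(accesses)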
Diagnose

We have a strong baseline of role-based access controls (RBAC) and audit logging. However, we have limited mechanisms for ensuring assigned roles follow the principle of least privilege. This is particularly true in cases where individuals change teams or roles over the course of their tenure at the company: some individuals have collected numerous unused roles over five-plus years at the company. Similarly, our audit logs are durable and pervasive, but we have limited proactive mechanisms for identifying anomalous usage. Instead, they are typically used to understand what occurred after an incident has been identified by other mechanisms.

For resource-level access controls, we rely on a hybrid approach between a 3rd-party platform for incoming user requests and approval mechanisms within our own product. Providing a rationale for access across these two systems requires manual work, and those rationales are later manually reviewed for appropriateness in a batch fashion. There are two major ongoing problems with our current approach to resource-level access controls. First, the teams making requests view them as a burdensome obligation with little benefit to them or to the user. Second, because the rationale review steps are manual, there is no verifiable evidence of the quality of the review.

We’ve found no evidence of misuse of user data. When colleagues do access user data, we have uniformly and consistently found that there is a clear and reasonable rationale for that access, for example a ticket in the user support system where the user has raised an issue. However, the quality of our documented rationales is consistently low, because it depends on busy people manually copying over significant information many times a day. Because the rationales are of low quality, the verification of these rationales is somewhat arbitrary. From a literal compliance perspective, we do provide rationales and auditing of these rationales, but it’s unclear whether the majority of these audits increase the security of our users’ data.

Historically, we’ve made significant security investments that caused temporary spikes in our security posture. However, looking at those initiatives a year later, in many cases we see a pattern of increased scrutiny followed by a gradual repeal or avoidance of the new mechanisms. We have found that most of them involved increased friction for essential work performed by other internal teams. In the natural course of performing their work, those teams would subtly subvert the improvements because they interfered with their immediate goals (e.g. supporting customer requests). As such, we have high conviction from our track record that our historical approach can create optical wins internally. We have limited conviction that it can create long-term improvements outside of significant, unlikely internal changes (e.g. colleagues being markedly less busy a year from now than they are today). It seems likely we need a new approach to meaningfully shift our stance on these kinds of problems.

Explore

Our experience is that best practices around managing internal access to user data are widely available through our networks, but otherwise hard to find. The exact reason for this is hard to determine, but it seems possible that it’s a topic folks are generally uncomfortable discussing in public on account of potential future liability and compliance issues. In our exploration, we found two standardized dimensions (role-based access controls, audit logs) and one highly divergent dimension (resource-specific access controls).

Role-based access controls (RBAC) are a highly standardized approach at this point. The core premise is that users are mapped to one or more roles, and each role is granted a certain set of permissions. For example, a role representing a customer support agent might be granted permission to deactivate an account, whereas a role representing a sales engineer might be able to configure a new account.
Audit logs are similarly standardized. All access and mutation of resources should be tied, in a durable log, to the human who performed the action, and these logs should be accumulated in a centralized, queryable solution. One of the core challenges is determining how to utilize these logs proactively to detect issues, rather than reactively once an issue has already been flagged.

Resource-level access controls are significantly less standardized than RBAC or audit logs. We found three distinct patterns adopted by companies, with little consistency across companies on which is adopted. Those three patterns for resource-level access control are:

3rd-party enrichment, where access to resources is managed in a 3rd-party system such as Zendesk. This requires enriching objects within those systems with data and metadata from the product(s) where those objects live. It also requires implementing product actions, such as archiving or configuration, on the platform, allowing them to live entirely in that platform’s permission structure. The downsides of this approach are tight coupling with the platform vendor, any limitations inherent to that platform, and the overhead of maintaining engineering teams familiar with both your internal technology stack and the platform vendor’s technology stack.

1st-party tool implementation, where all activity, including creation and management of user issues, is managed within the core product itself. This pattern is most common in earlier-stage companies or companies whose customer support leadership “grew up” within the organization without much exposure to the approach taken by peer companies. The advantage of this approach is that there is a single, tightly integrated and infinitely extensible platform for managing interactions. The downside is that you have to build and maintain all of that tooling internally, rather than pushing it to a vendor that ought to be able to invest more heavily in it.

Hybrid solutions, where a 3rd-party platform is used for most actions and is further used to permit resource-level access within the 1st-party system. For example, you might be able to access a user’s data only while there is an open ticket created by that user, and assigned to you, in the 3rd-party platform. The advantage of this approach is that it supports complex workflows that don’t fit within the platform’s limitations, while avoiding complex coupling between your product and the vendor platform.

Generally, our experience is that all companies implement RBAC, audit logs, and one of the resource-level access control mechanisms. Most companies pursue either 3rd-party enrichment, with a sizable, long-standing team owning the platform implementation, or rely on a hybrid solution, where they can avoid a long-standing dedicated team by folding that work into existing teams.
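To make the two standardized dimensions concrete, here is a compact sketch of a role-to-permission mapping and a durable audit log record. The specific roles, permissions, and fields are illustrative assumptions rather than a reference implementation:

    from dataclasses import dataclass
    from datetime import datetime

    # RBAC: users map to roles, roles map to permissions.
    ROLE_PERMISSIONS: dict[str, set[str]] = {
        "customer_support_agent": {"deactivate_account"},
        "sales_engineer": {"configure_account"},
    }

    def has_permission(user_roles: set[str], permission: str) -> bool:
        """A user holds a permission if any of their roles grants it."""
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in user_roles)

    # Audit log: every access or mutation is durably tied to the human who
    # performed it and accumulated in a centralized, queryable store.
    @dataclass
    class AuditLogEntry:
        actor_id: str
        action: str        # e.g. "deactivate_account"
        resource_id: str   # the user or account that was touched
        rationale: str     # ideally the automated, user-comprehensible kind
        occurred_at: datetime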
