Aahh… LinkedIn. I’ve been struggling with the platform for years. On the one hand, I’ve made some great connections on there, and it has helped me tremendously in increasing my visibility, or, as I sometimes put it, as a platform for shameless self-promotion. On the other hand, it has become a pointless waste of time. The constant scrolling through my feed in the hunt for something mildly interesting. The getting sucked into ‘discussions’ with people who only want to see their point of view confirmed, instead of being open to opinions that are different from theirs. The constant rehashing of the same old content and the same old conversations, over and over again. I’ve tried spending less time on the platform a couple of times. I even tried a long break last year. Did that work? Well, no. I did spend less time on LinkedIn for a while, but not for as long, or by as much, as I planned and wanted. Before I knew it, I was back to spending way too much time, attention and energy on consuming...
4 months ago

More from On Test Automation

My career and a thought experiment

As is the case every year, 2025 is starting off relatively slowly. There aren’t a lot of training courses to run yet, and since a few of the projects I worked on wrapped up in December, I find myself with a little bit of extra time and headspace on my hands. I actually enjoy these slower moments, because they give me some time to think about where my professional career is going, whether I’m still happy with the direction it’s going in, and what I would like to see changed.

Last year, I quit doing full time projects as an individual contributor to development teams in favour of part-time consultancy work and more focus on my training services. 2024 has been a great year overall, and I would be happy to continue working in this way in 2025. However, as a thought experiment, I took some time to think about what it would take for me to go back to full time roles, or maybe (maybe!) even consider joining a company on a permanent basis.

Please note that this post is not intended as an ‘I need a job!’ cry for help. My pipeline for 2025 is slowly but surely filling up, and again, I am very happy with the direction my career is going at the moment. However, I have learned that it never hurts to leave your options open, and even though I love the variety in my working days these days, I think I would enjoy working with one team, on one goal, for an extended amount of time, too, under the right conditions. If nothing else, this post might serve as a reference to send to people and companies that reach out to me with a full time contract opportunity or even a permanent job opening. This is also not a list of requirements that is set in stone. As my views on what would make a great job change (and they will), I will update this post to reflect those views.

So, to even consider joining a company on a full-time contract or even a permanent basis, there are basically three things I will and should consider:

- What does the job look like? What will I be doing on a day-to-day basis?
- What are the must-haves regarding terms and conditions?
- What are the nice-to-haves that would provide the icing on the cake for me?

Let’s take a closer look at each of these things.

What I look for in a job

As I mentioned before, I am not looking for a job as an individual contributor to a development team. I have done that for many years, and it does not really give me the energy that it used to. On the other hand, I am definitely not looking for a hands-off, managerial kind of role, as I’d like to think I would make an atrocious manager. Plus, I simply enjoy being hands-on and writing code way too much to let that go.

I would like to be responsible for designing and implementing the testing and automation strategy for a product I believe in. It would be a lead role, but, as mentioned, with plenty of (as in daily) opportunities to get hands-on and contribute to the code. The work would have to be technically and mentally challenging enough to keep me motivated in the long term. Getting bored quickly is something I suffer from, which is the main driver behind only doing part-time projects and working on multiple different things in parallel right now. I don’t want to work for a consultancy and be ‘farmed out’ to their clients. I’ve done that for pretty much my entire career, and if that’s what the job will look like, I’d rather keep working the way I’m working now.
The must-haves

There are (quite) a few things that are non-negotiable for me to even consider joining a company full time, no matter if it’s on a contract or a permanent basis.

- The pay must be excellent. Let’s not beat around the bush here: people work to make money. I do, too. I’m doing very well right now, and I don’t want that to change.
- The company should be output-focused, as in: they don’t care when I work, how many hours I put in or where I work from, as long as the job gets done. I am sort of spoiled by my current way of working, I fully realise that, but I’ve grown to love the flexibility. By the way, please don’t read ‘flexible’ as ‘working willy-nilly’. Most work is not done in a vacuum, and you will have to coordinate with others. The key word here is ‘balance’.
- Collaboration should be part of the company culture. I enjoy working in pair programming and pair testing setups. What I do not like are pointless meetings, and that includes having Scrum ceremonies ‘just because’.
- The company should be a remote-first company. I don’t mind the occasional office day, but I value my time too much to spend hours per week on commuting. I’ve done that for years, and it is time I’ll never get back.
- The company should actively encourage me to contribute to conferences and meetups. Public speaking is an important part of my career at the moment, and I get a lot of value from it. I don’t want to give that up.
- There should be plenty of opportunities for teaching others. This is what I do for a living right now, I really enjoy it, and I’d like to think I’m pretty good at it, too. Just like with the public speaking, I don’t want to give that up. This teaching can take many forms, though. Running workshops and regular pairing with others are just two examples.
- The job should scratch my travel itch. I travel abroad for work about 5-6 times per year on average these days, and I would like to keep doing that, as I get a lot of energy from seeing different places and meeting people. Please note that ‘traveling’ and ‘commuting’ are two completely different things.

Yes, I realize this is quite a long list, but I really enjoy my career at the moment, and there are a lot of aspects to it that I’m not ready to give up.

The nice-to-haves

There are also some things that are not strictly necessary, but would be very nice to have in a job or full time contract:

- The opportunity to continue working on side gigs. I have a few returning customers that I’ve been working with for years, and I would really appreciate the opportunity to continue doing that. I realise that I would have to give up some things, but there are a few clients that I would really like to keep working with. By the way, this is only a nice-to-have for permanent jobs. For contracting gigs, it is a must-have.
- It would be very nice if the technology stack the company uses were based on C#. I’ve been doing quite a bit of work in this stack over the years and I would like to go even deeper.
- If the travel itch I mentioned under the must-haves could be scratched with regular travel to Canada, Norway or South Africa, three of my favourite destinations in the world, that would be a very big plus.

I realize that the list of requirements above is a long one. I don’t think there is a single job out there that ticks all the boxes. But, again, I really like what I’m doing at the moment, and most of those boxes are already ticked.
I would absolutely consider going full time with a client or even an employer, but I want it to be a step forward, not a step back. After all, this is mostly a thought experiment at the moment, and until that perfect contract or job comes along, I’ll happily continue what I’m doing right now.

2 weeks ago 29 votes
RestAssured .NET in 2024 - a review

As a (sort of) follow-up to my yearly review for 2024, in this post I would like to go over the changes, bug fixes and new features that were introduced in RestAssured .NET in 2024. This year, I released 7 new versions of the library, and while none of them included changes worthy of a blog post of their own, I thought it would be a good idea to wrap them all up in a single overview. Basically, this blog post is an extended version of the library’s CHANGELOG. I’ll go through the new versions chronologically, starting with the first release of 2024.

Version 4.2.2 - released April 23

RestAssured .NET 4.2.2 fixes a bug that prevented JSON responses that are an array from being properly verified. In other words, if the JSON response body looks like this:

    [
      { "id": 1, "text": "Do the dishes" },
      { "id": 2, "text": "Clean out the trash" },
      { "id": 3, "text": "Read the newspaper" }
    ]

I would expect this test to pass:

    [Test]
    public void JsonArrayResponseBodyElementCanBeVerifiedUsingNHamcrestMatcher()
    {
        Given()
        .When()
        .Get("http://localhost:9876/json-array-response-body")
        .Then()
        .StatusCode(200)
        .Body("$[1].text", NHamcrest.Is.EqualTo("Clean out the trash"));
    }

but prior to this version, it threw a Newtonsoft.Json.JsonReaderException. The solution? Adding a try-catch that first tries to parse the JSON response as a JObject (the existing behaviour), catches the JsonReaderException if that fails, and then tries again, parsing the JSON response into a JArray. That made the newly added test pass without failing any other tests.

Another demonstration of the added value of having a decent set of tests: RestAssured .NET is slowly growing and becoming more complex, and having a test suite that I can run locally, and that always runs when I push code to GitHub, is an invaluable safety net for me. These tests run in a few seconds, yet they give me invaluable feedback on the effect of new features, bug fixes and code refactoring efforts. I haven’t heard back from the person who submitted the original issue, but I assume that this fixed their issue.

Version 4.3.0 - released August 16

I love learning about how people use RestAssured .NET, because invariably they will use it in ways I haven’t foreseen. I was unfamiliar with the concept of server-sent events (SSE) in APIs, for example, yet there are people looking to test these kinds of APIs using RestAssured .NET. It turned out that what this user was looking for was a way to set the HttpCompletionOption value on the System.Net.Http.HttpClient that is wrapped by RestAssured .NET. To enable this, I added a method to the DSL that looks like this:

    Given()
    .UseHttpCompletionOption(HttpCompletionOption.ResponseHeadersRead)

I also added the option to specify the HttpCompletionOption to be used in a RequestSpecification as well as in the global config. A straightforward fix that solved the problem for this specific user. The only thing I don’t like here is that I don’t know of a way to test this locally. Do you? I would love to hear it.

Version 4.3.1 - released August 22

Another user pointed out to me that trying to verify that the value of a JSON response body element is an empty array also threw an exception.
So, if the JSON response body looks like this:

    {
      "success": true,
      "errors": []
    }

this test should pass, but instead it threw a Newtonsoft.Json.JsonSerializationException:

    [Test]
    public void JsonResponseBodyElementEmptyArrayValueCanBeVerifiedUsingNHamcrestMatcher()
    {
        Given()
        .When()
        .Get("http://localhost:9876/json-empty-array-response-body")
        .Then()
        .StatusCode(200)
        .Body("$.errors", NHamcrest.Is.OfLength(0));
    }

The fix? Adding some code that checks whether the element returned when evaluating the JsonPath expression is a JArray or a JObject, and using the right matching logic accordingly. I used my preferred procedure here:

- first, write a failing test that reproduces the issue
- then, make the test pass without breaking any other tests
- finally, refactor the code, document and release

Does this procedure sound familiar to you?

Version 4.4.0 - released October 21

As you can probably tell from the semantic versioning, this version introduced a new feature to RestAssured .NET: the ability to use NTLM authentication when making an HTTP call. To enable this, I added a new method to the DSL:

    Given()
    .NtlmAuth() // This one uses default NTLM credentials for the current user
    .NtlmAuth("username", "password", "domain") // This one uses custom NTLM credentials

As I had no idea how to write a proper test for this, even though I had tested it before releasing using Fiddler, I released a beta version first that the person submitting the issue could use to verify the solution. I’m happy to say that it worked for them and that the solution could be released properly. Again, if someone can think of a way to add a proper test for NTLM authentication to the test suite, I would love to hear it. All that the current tests do is run the code and see if no exception is thrown. Not a good test, but until I find a better way, it will have to do.

Version 4.5.0 - released November 19

This version introduced not one, but two changes. First, since .NET 9 was officially released earlier that week (or maybe the week before, I forget), I needed to release a RestAssured .NET version that targets .NET 9, so I did. Just like with .NET 8, I didn’t really have to change anything in the code other than adding net9.0 to the TargetFrameworks and adding .NET 9 to the build pipeline for the library, to make sure that every change is tested on .NET 9, too. Happy to say it all ‘just worked’.

The other change took more effort: a user reported that they could not override the ResponseLogLevel set in a RequestSpecification at the individual test level. The reason? In the existing code, the response was logged directly after the HTTP call completed, so before any calls to Log() for the response. When Log() was then called on the response, it was logged again. I have no idea how I completely overlooked this until now, but I did. Rewriting the code to make this work took longer than I expected, but I managed in the end, through quite a bit of trial and error and lots of human-centered testing (again, no idea how to write automated tests for this).
The logging functionality of RestAssured .NET is something I intend to rewrite in the future, for a couple of reasons:

- it’s impossible to write automated tests for it (or at least I don’t know how to do this)
- ideally, I want the logging to be more configurable and extensible, to give users more flexibility than they have at the moment

Version 4.5.1 - released November 20

As one does, I found an issue with the updated logging logic almost immediately after releasing 4.5.0 to the public: masking of sensitive headers and cookies didn’t work anymore when they were specified as part of a RequestSpecification. Lucky for me, this was a quick fix, but a bit embarrassing nonetheless. Had I had proper automated tests for the logging in place, I probably would have caught this before releasing 4.5.0… Anyway, it’s fixed now, as far as I can tell.

Version 4.6.0 - released December 9

The final RestAssured .NET release of 2024 added the capability to strip the ; charset=<some_charset> suffix from the Content-Type header in a request. It turns out that some APIs explicitly expect this header not to contain the charset suffix, but the way I create a request, or rather, the way .NET creates a StringContent object, will add it by default. This issue was a great example of one of the main reasons why I started this project: there is so much I don’t know yet about HTTP, APIs, C#/.NET and other technologies, and working on these issues and improving RestAssured .NET gives me an opportunity to learn about them. I make a habit of writing down what I learned in the issue on GitHub, so I can review it later, and so I can point others to these links and thoughts, too.

So, if you’re looking for a way to strip the charset identifier from the Content-Type header in the request, you can now do that by passing an optional second boolean argument to Body() (it defaults to false):

    Given()
    .Body(your_body_goes_here, stripCharset: true)

That’s it! As you can see, lots of small changes, bug fixes and new features have been added to RestAssured .NET this year. Oh, and before I forget: with every release, I also made sure to update the dependencies I use to create and test RestAssured .NET to their latest versions. I consider that good housekeeping, and it’s all part of keeping a library up to date. I am looking forward to seeing the library evolve and improve further in 2025.
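For those curious about what these fixes look like under the hood: the fallback idea from 4.2.2 and 4.3.1 essentially comes down to letting Newtonsoft.Json tell you what kind of JSON you are dealing with. This is a simplified sketch of that idea, not the exact code that lives in the library:

    using Newtonsoft.Json;
    using Newtonsoft.Json.Linq;

    public static class JsonResponseParser
    {
        public static JToken Parse(string responseBody)
        {
            try
            {
                // Existing behaviour: most JSON response bodies are objects
                return JObject.Parse(responseBody);
            }
            catch (JsonReaderException)
            {
                // The body is a top-level array, e.g. [ { "id": 1, ... }, ... ]
                return JArray.Parse(responseBody);
            }
        }
    }

The verification logic can then branch on whether the value returned for a JsonPath expression is a JObject or a JArray, which is roughly what the 4.3.1 fix does. And the charset suffix that 4.6.0 strips is the one that the StringContent constructor adds by default; removing it boils down to something like this (again a rough illustration, not RestAssured .NET’s internal code):

    using System.Net.Http;
    using System.Text;

    var requestBody = "{ \"id\": 1 }";
    var content = new StringContent(requestBody, Encoding.UTF8, "application/json");
    // At this point, content.Headers.ContentType is "application/json; charset=utf-8".
    // Clearing CharSet leaves just "application/json".
    content.Headers.ContentType.CharSet = null;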

a month ago 48 votes
2024 - A year in review

Well, I guess it’s true: time does fly when you’re having fun! 2024 is coming to an end soon, and since I’m deliberately slowing down this week and will be away from work for two weeks after that, this is a great time for me to look back on 2024 and what it has brought me professionally. When I wrote my yearly review for 2023, I mentioned that I wanted to focus on a couple of different things in 2024:

- Training
- Mentoring
- Consulting
- Public speaking

Therefore, it makes sense to use this list as a structure for this annual review and go over each item to see what I’ve done, what went well and what I would like to improve in 2025.

Training

The plan for 2024 was to spend more time and effort promoting my training services, with the goal of bringing the amount of training I do back towards the 2022 level. Overall, in 2024, I ran:

- 29 full day training sessions (10 on site, 19 online)
- 28 half day sessions (22 on site, 6 online)
- 7 free public workshops (all online)
- 3 conference workshops (all on site)

This is significantly more than in 2023, but still not quite at the level of 2022. Still, I’m very happy with these numbers, especially because I have managed to significantly diversify my training client base. I have run training sessions with more clients (14) than in 2023 (10), and these clients have come from more countries (6 in 2024 versus 5 in 2023). Several of these clients are returning clients, and I am confident that I will keep working with most if not all of them in 2025. However, I will spend (and have to spend!) even more time and effort to keep my training business growing. This is one of the main reasons I have returned to LinkedIn after a much needed break, but I am going to look at other channels and ways to find new training clients, too, preferably using channels other than social media.

Mentoring

In 2024, I worked with one corporate mentoring client, and I really, really enjoyed that. Over three months, I worked with a group of four testers, as well as the wider organization, to help them define an automation strategy and teach them how to build their first tests, bring them under version control and make those tests part of a build pipeline. It was once again amazing to see how these testers grew more confident and knowledgeable by the week, simply by helping them define the next small step forward, giving them the confidence to take that step, having them learn from it and then looking ahead for the step after that. This kind of engagement is something I would like to do again with a different company in 2025, because it is both a lot of fun and very rewarding. Even though it is more expensive for a client than buying their staff a book or a Pluralsight account, or even than bringing me in for a day or two of training, the results are spectacularly more effective, too.

Consulting

I spent most of 2024 doing billed-by-the-hour consulting, for a variety of clients both in the Netherlands and abroad. A lot of that consulting was focused on implementing contract testing, by the way. It seems like 2024 has been the year that many companies decided they wanted to get serious about their contract testing and could use some help in that area. For 2025, while I still want to keep doing consulting, I am thinking about offering my consulting services in a slightly different way.
I have one ongoing consulting client that just extended my contract for another three months, using the same billed-by-the-hour agreement, but for future clients, I want to see if it’s possible to move towards value-based pricing instead. We’ll see how that works out as 2025 progresses.

Public speaking

One of the highlights of the year was my first trip to the US in 18 years, for the 2024 PNSQC conference in Portland, Oregon. I really enjoyed delivering both a keynote and a half-day workshop at that conference, as well as having the opportunity to do a bit of sightseeing in the area. Including the PNSQC keynote, I delivered 30 talks in 2024, 8 more than in 2023. Many of these talks were online, but 12 of them were on site, which I think is a good number. About half of those were public events, with the other half being in-company presentations. A lot of these talks, too, were focused on contract testing, and for 2025, I would like to diversify a little more in terms of the topics I speak about. I’m definitely looking to keep up the public speaking frequency, and my first talk for 2025 has been confirmed already.

Other facts and figures

Apart from what I mentioned above, I have been spending time on a couple of other things, too:

- I was part of the program committee for EuroSTAR 2024 and for the Test Automation Days
- I co-organized this year’s edition of the Dutch Testing Day
- I released 8 new versions of RestAssured .NET
- I wrote and published 15 blog posts (including this one), as well as one article for another website
- I traveled to 6 different countries for work (Belgium, Spain, Sweden, Austria, Switzerland and the US)
- The trip to the US added a third year to my ‘at least one work trip outside Europe per year’ streak, a streak I’m of course hoping to extend in 2025 (Africa again? South America maybe?)
- I was a guest on 7 different podcasts (5 more than in 2023)
- I started working on a public, self-paced course on contract testing, which will be complete in February 2025

All in all, I’ve worked on lots of different things, just the way I like it. I’m definitely looking forward to 2025 providing as much, if not more, variety.

Some final words

While I would like to keep the variety I enjoyed in 2024, I also want to spend more time outside of work in 2025. My goal for next year can be summarized as “Physical, mental and financial fitness”. Physical fitness means spending more time outside, mostly during walks and long rides on my racing bike. Mental fitness means shutting down from work more often in the evenings, on weekends and during holidays to spend time on other activities, like family, reading, and playing and studying chess. Financial fitness is an enabler for the first two and means a move away from billed-by-the-hour work towards selling more productized services and implementing value-based pricing where I can. I can’t wait to see what 2025 has in store, but first, I wish you all a wonderful festive season.

a month ago 39 votes
The test automation quadrant, or a different way to look at your tests

Like many others working in software testing, and more specifically in automation, I was introduced to the concept of the test automation pyramid early on in my career. While this model has received its share of criticism in the testing community over the years, I still use it from time to time. What I think this model is still somewhat useful for is introducing people who are relatively new to software testing and automation to testing scopes, layers and, most importantly, to thinking and talking about finding the right balance between different test scopes. After all, they will encounter this model at some point in time, and I’d rather they learn about it in context.

However, recently I have started to think and talk about the classification of automated tests in a different way, using a different kind of mental model, and I thought it might be useful to share this model with you. Please note that whenever I use the word ‘test’ in the remainder of this blog post, I’m referring to an automated test / check that confirms or falsifies an expectation about the behaviour of our product. I don’t think the model applies equally well to exploratory testing activities (but I’m happy to be proven wrong).

Why a different model?

Because I think that while the test automation pyramid still has its use in certain contexts, there are a couple of things about it that I feel are missing:

- Regarding the model itself: what is completely missing in the pyramid model is a representation of the value of a test. I know, I know, it’s just a model, and all models are wrong, but there is so much talk about the amount of testing one should do in general, what part of that testing should be automated, what part of that automation should be at the unit / integration / E2E level and what is a good amount of line / branch / method / requirement coverage, yet so rarely do we talk about the value of our tests. I think we need a model that at least takes the value of a test into account.
- Regarding how people talk about and use the model: as I said, there is a lot of talk about the shape of the model and what that says about the ratio of unit to integration to E2E tests. And you know what? I don’t particularly care. I don’t think it is interesting at all what your ratios look like, no matter if it is a pyramid (well, a triangle, really), an hourglass, an ice cream cone, or some other shape. It’s all good, as long as your tests are efficient and valuable.

In other words, I think there are better ways to have a conversation about your tests and the value they provide, and we need a model that supports that.

So, then what? My current mental model of (classification of) tests

Let’s take a quick step back first: why do we use automation? Why do we use tools to help us when we test? If you ask me, it’s because we want to retrieve and present valuable information about the state of our product and potential risks in a way that is efficient. Following this, there are two factors that play a key role in test automation: information, specifically information that is valuable, and efficiency, or the amount of resources we need to spend to get the information that we’re looking for. With these two things in mind, here’s what my mental model looks like these days when I talk about automated tests:

You could call this model a test automation quadrant. In its essence, it’s a very simple model, and yes, there are lots of nuances that are missing. That’s why it’s called a model.
Now, let me clarify why this model looks the way it does and what my thought process behind it is.

On the horizontal axis, there’s information value. As testers, we are in the business of uncovering and presenting information, more specifically, information about the state of our product. Not all information is equal, though, with some pieces of information being more important than others. Tests that uncover and present valuable information are inherently more valuable themselves than tests that uncover and present less valuable information. Also, the value of information is undeniably related to risk. Information related to high-risk problems is more valuable than information related to low-risk problems. By information, I mean the confirmation or falsification of assumptions, beliefs or expectations about the state or the behaviour of the product we’re building. Since we’re talking about automation here, most effort will focus on confirming pre-existing, codified expectations through assertions, but automation is not limited to executing assertions and demonstrating that something might work. Oh, and the value of the information is not limited to the information itself and what it tells you about the state of your product; it also includes the reliability of that information. There’s no point in tests that tell you something is broken when you can’t trust that test.

On the vertical axis, we have efficiency. All other things being equal, tests that are more efficient are generally preferable over tests that are less efficient. As time typically equals money, efficiency includes the time to read, write, run and maintain a test, and also the time it takes to analyze the root cause in case of a failure. Besides time, we might also want to consider the cost of hardware and software required to write and run the test, as well as other things related to efficiency.

Please note that unlike the test automation pyramid, I’m leaving the scope or size of the test itself out of the equation. E2E tests can be more efficient or less efficient, and the information produced by these tests can be more valuable or less valuable. The same applies to unit and integration tests. Oh, and the model applies to automated performance testing, security testing and other types of test automation, too. All of these can produce information that is more or less valuable, and they can be done in a more or a less efficient way.

So, now that you understand the reasoning behind this model and the aspects of tests considered in it, let’s take a look at how you can use this test automation quadrant to your benefit.

Use the test automation quadrant to assess your current situation

When you decide to use this model, the first step I recommend you take is to place your tests somewhere in one of these quadrants. Of course, the exact spot differs for different types of tests, and even for individual tests. That’s a perk of this model, if you ask me: you can apply it to your entire test suite, to separate types of tests and even to individual test cases. Ideally, most of your tests should find a place somewhere in the top right quadrant, i.e., these are efficient tests that provide valuable information. Will all your tests be in the top right quadrant? Probably not, but that doesn’t necessarily have to be a bad thing. Remember, we’re simply taking stock of our current situation for now. Tests that are in the bottom right quadrant, for example, still produce high value information, just not in a very efficient way.
Maybe the tests take a long time to write or to run. Maybe you need to do a significant amount of setup in dependent systems. Maybe there’s another reason your test isn’t as efficient as you would want it to be. Still, they produce valuable information, so it is probably in your best interest to keep them.

When we move to the left two quadrants, things get a little trickier. Tests in the top left quadrant can be written and run efficiently, but the information they produce is not very valuable. I do not want to imply that you shouldn’t write these tests, but what I do recommend is to give them a (much) lower priority than the tests on the right hand side of the quadrant, and only work on these when you have the time and resources available to do so. Or, if you want to be a little more ruthless, consider not spending any time on them at all.

As for tests in the bottom left corner of the quadrant: I strongly recommend you stop spending time on them. At all. Don’t write new tests that fall into this part of the quadrant, and strongly consider throwing away existing tests here. They likely cost you a lot of time and effort to write and run, and they don’t produce a lot of value in return. Summarizing the above in a picture:

Use the test automation quadrant to improve your automation efforts

After you have assessed your current situation, for an individual test, a group of related tests or even for your entire test suite, the second step is to identify and carry out steps to bring some or all of your tests closer to the top right corner of the quadrant, either by moving them up, moving them to the right, or both.

Moving tests up implies making existing tests more efficient in a way that does not negatively impact the value of the information they provide. How to do this exactly is outside the scope of this blog post, and the exact steps to take heavily depend on context, but here are some suggestions:

- Breaking down existing E2E tests into smaller, more focused tests that run more efficiently
- Refactoring application code that is hard to test by applying principles such as single responsibility and dependency inversion
- Using a simulated version of a third-party dependency instead of ‘the real thing’ for cases that are hard or even impossible to set up

Moving tests to the right implies improving tests that produce low value information or information that is not reliable. Here are some examples of actions you may consider taking to achieve this:

- Gather data from flaky tests, try to find the root cause of their flakiness and address those issues
- Test your tests, for example using techniques like mutation testing, to identify potential false negatives and improve the effectiveness of your test suite, for example around boundary values
- Improve the reporting of your test suite to make it clearer what exactly happened in case of failing tests

Again, this is a very generic list of suggestions, and one that is far from complete. In a follow-up blog post, I’ll go through different realistic examples and show you how to apply these techniques to actual tests to move them further up and / or to the right in the test automation quadrant. This blog post is just a first introduction to the test automation quadrant model, and there is much more to unpack here. Still, I’m very much looking forward to your feedback and comments.
In the meantime, I’m not just working on a post with actual examples, but also on a brand-new talk on this model that I am looking to deliver at events, meetups and conferences in 2025. Maybe at your event, too?

a month ago 26 votes
On ditching hourly and productizing my services

In the last couple of weeks, I’ve spent much more time commuting than normal. I mostly work remotely these days, for clients both in the Netherlands and abroad, and I like it that way. Don’t get me wrong, I like to drive, but commuting takes up a lot of time, time I would rather spend in another way. Reading. Running. Sleeping. Finally getting back into chess.

Sometimes, spending time in the same room with the people you are working with is just much more fun and more efficient, though. Especially when I’m running a workshop or training session, or when I’m doing a talk, I strongly prefer to be in the same room as the participants or the audience. As I have been doing a lot of that in the last few weeks, spending more time on the road was inevitable.

A benefit of spending a lot of time in the car is finally having the time to catch up on some podcasts, and that’s what I did this time as well. One podcast I had heard a lot of good things about, but hadn’t really listened to yet, was Jonathan Stark’s Ditching Hourly podcast. I was familiar with his work and some of his thoughts on moving away from hourly billing as an independent consultant, but again, I hadn’t found the time yet to really listen to what he and his guests have to say. Recently I did have that time, and some of the things that were discussed on the podcast really struck a chord with me.

I have been an independent consultant for 10 years now, and during those years pretty much all my work, except my training courses, has been compensated on an hourly basis. Why? Because it is what every company is used to when they’re hiring someone, especially over here in the Netherlands. So, why move away from that? If it works, it works, right? There are a few problems I’ve got with hourly billing:

- First of all, hourly billing disincentivizes me from working efficiently. Why finish something in 20 hours when I can do it in 40 and bill twice as much?
- Second, I think dragging out the work and spending hours doing nothing much just to maximize billing is not fair towards my clients.
- Third, I don’t like getting into conversations that turn into a haggle… ‘But another consultant said they could do it for EUR 10 per hour less…’
- Finally, having to explain through time sheets the hours I did and did not spend with a certain client is annoying, even though I haven’t really had a problem getting my time sheets signed off.

So, from today on, I’m going to work on moving away from purely hourly billing and towards what is commonly known as value-based pricing. How? Well, I haven’t figured out every detail, but here are some thoughts and ideas:

- Focusing more on selling my training courses. While the pricing of my training services is still technically time-based, as I use a more or less fixed half day / full day rate for training, the problems with hourly billing I outlined above are pretty much non-existent for this type of work.
- Improving my negotiation skills and getting to the bottom of what clients and prospects are looking for when they approach me with a request. What’s the result they have in mind? What is the added value of that result? And then basing my price on that. This is something I definitely still need to learn.
- Selling my services in packages. I’ve done this with one client this year, and that worked really well. They told me their budget, I outlined what I could do for that price, we agreed, did the paperwork and moved on to the interesting stuff. Really refreshing.
- Working on a retainer basis. As in: ‘for EUR XYZ per month, I will be available to do A, B and C for you’. Typically, this is consulting or being available for questions, but it could be other things, too. Helping organizations set up their test automation, for example. Contrary to hourly billing, this would incentivize me to work efficiently.

Again, I don’t expect to completely get rid of hourly billing soon. This transition will take time, and I will likely make mistakes along the way. But I’m ready to take that time and learn from those mistakes, because I can see how this would be a much better and more fulfilling way of working. Oh, and there’s the added benefit that when I work this way, it is much less likely to be seen as pseudo-employment, which, as I recently wrote, is a pretty hot topic here in the Netherlands at the moment.

To be continued.

a month ago 25 votes

More in programming

Adding auto-generated cover images to EPUBs downloaded from AO3

I was chatting with a friend recently, and she mentioned an annoyance when reading fanfiction on her iPad. She downloads fic from AO3 as EPUB files, and reads it in the Kindle app – but the files don’t have a cover image, and so the preview thumbnails aren’t very readable. She’s downloaded several hundred stories, and these thumbnails make it difficult to find things in the app’s “collections” view.

This felt like a solvable problem. There are tools to add cover images to EPUB files, if you already have the image. The EPUB file embeds some key metadata, like the title and author. What if you had a tool that could extract that metadata, auto-generate an image, and use it as the cover? So I built that. It’s a small site where you upload EPUB files you’ve downloaded from AO3, the site generates a cover image based on the metadata, and it gives you an updated EPUB to download. The new covers show the title and author in large text on a coloured background, so they’re much easier to browse in the Kindle app.

If you’d find this helpful, you can use it at alexwlchan.net/my-tools/add-cover-to-ao3-epubs/ Otherwise, I’m going to explain how it works, and what I learnt from building it. There are three steps to this process:

1. Open the existing EPUB to get the title and author
2. Generate an image based on that metadata
3. Modify the EPUB to insert the new cover image

Let’s go through them in turn.

Open the existing EPUB

I’ve not worked with EPUB before, and I don’t know much about it. My first instinct was to look for Python EPUB libraries on PyPI, but there was nothing appealing. The results were either very specific tools (convert EPUB to/from format X) or very unmaintained (the top result was last updated in April 2014). I decided to try writing my own code to manipulate EPUBs, rather than using somebody else’s library.

I had a vague memory that EPUB files are zips, so I changed the extension from .epub to .zip and tried unzipping one – and it turns out that yes, it is a zip file, and the internal structure is fairly simple. I found a file called content.opf which contains metadata as XML, including the title and author I’m looking for:

    <?xml version='1.0' encoding='utf-8'?>
    <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="uuid_id">
      <metadata xmlns:opf="http://www.idpf.org/2007/opf" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:calibre="http://calibre.kovidgoyal.net/2009/metadata">
        <dc:title>Operation Cameo</dc:title>
        <meta name="calibre:timestamp" content="2025-01-25T18:01:43.253715+00:00"/>
        <dc:language>en</dc:language>
        <dc:creator opf:file-as="alexwlchan" opf:role="aut">alexwlchan</dc:creator>
        <dc:identifier id="uuid_id" opf:scheme="uuid">13385d97-35a1-4e72-830b-9757916d38a7</dc:identifier>
        <meta name="calibre:title_sort" content="operation cameo"/>
        <dc:description><p>Some unusual orders arrive at Operation Mincemeat HQ.</p></dc:description>
        <dc:publisher>Archive of Our Own</dc:publisher>
        <dc:subject>Fanworks</dc:subject>
        <dc:subject>General Audiences</dc:subject>
        <dc:subject>Operation Mincemeat: A New Musical - SpitLip</dc:subject>
        <dc:subject>No Archive Warnings Apply</dc:subject>
        <dc:date>2023-12-14T00:00:00+00:00</dc:date>
      </metadata>
      …

That dc: prefix was instantly familiar from my time working at Wellcome Collection – this is Dublin Core, a standard set of metadata fields used to describe books and other objects.
I’m unsurprised to see it in an EPUB; this is exactly how I’d expect it to be used. I found an article that explains the structure of an EPUB file, which told me that I can find the content.opf file by looking at the root-path element inside the mandatory META-INF/container.xml file, which every EPUB has to contain. I wrote some code to find the content.opf file, then a few XPath expressions to extract the key fields, and I had the metadata I needed.

Generate a cover image

I sketched a simple cover design which shows the title and author. I wrote the first version of the drawing code in Pillow, because that’s what I’m familiar with. It was fine, but the code was quite flimsy – it didn’t wrap properly for long titles, and I couldn’t get custom fonts to work. Later I rewrote the app in JavaScript, so I had access to the HTML canvas element. This is another tool that I haven’t worked with before, so a fun chance to learn something new. The API felt fairly familiar, similar to other APIs I’ve used to build HTML elements.

This time I did implement some line wrapping – there’s a measureText() API for canvas, so you can see how much space text will take up before you draw it. I break the text into words, and keep adding words to a line until measureText tells me the line is going to overflow the page. I have lots of ideas for how I could improve the line wrapping, but it’s good enough for now. I was also able to get fonts working, so I picked Georgia to match the font used for titles on AO3. Here are some examples:

I had several ideas for choosing the background colour. I’m trying to help my friend browse her collection of fic, and colour would be a useful way to distinguish things – so how do I use it? I realised I could get the fandom from the EPUB file, so I decided to use that. I use the fandom name as a seed to a random number generator, then I pick a random colour. This means that all the fics in the same fandom will get the same colour – for example, all the Star Wars stories are a shade of red, while Star Trek are a bluey-green. This was a bit harder than I expected, because it turns out that JavaScript doesn’t have a built-in seeded random number generator – I ended up using some snippets from a Stack Overflow answer, where bryc has written several pseudorandom number generators in plain JavaScript.

I didn’t realise until later, but I designed something similar to the placeholder book covers in the Apple Books app. I don’t use Apple Books that often, so it wasn’t a deliberate choice to mimic this style, but clearly it was somewhere in my subconscious. One difference is that Apple’s app seems to be picking from a small selection of background colours, whereas my code can pick from a much wider variety of colours. Apple’s choices will have been pre-approved by a designer to look good, but I think mine is more fun.

Add the cover image to the EPUB

My first attempt to add a cover image used pandoc:

    pandoc input.epub --output output.epub --epub-cover-image cover.jpeg

This approach was no good: although it added the cover image, it destroyed the formatting in the rest of the EPUB. This made it easier to find the fic, but harder to read once you’d found it.

An EPUB file I downloaded from AO3, before/after it was processed by pandoc.

So I tried to do it myself, and it turned out to be quite easy! I unzipped another EPUB which already had a cover image. I found the cover image in OPS/images/cover.jpg, and then I looked for references to it in content.opf.
I found two elements that referred to cover images:

    <?xml version="1.0" encoding="UTF-8"?>
    <package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="PrimaryID">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">
        <meta name="cover" content="cover-image"/>
        …
      </metadata>
      <manifest>
        <item id="cover-image" href="images/cover.jpg" media-type="image/jpeg" properties="cover-image"/>
        …
      </manifest>
    </package>

This gave me the steps for adding a cover image to an EPUB file: add the image file to the zipped bundle, then add these two elements to the content.opf.

Where am I going to deploy this?

I wrote the initial prototype of this in Python, because that’s the language I’m most familiar with. Python has all the libraries I need:

- The zipfile module can unpack and modify the EPUB/ZIP
- The xml.etree or lxml modules can manipulate XML
- The Pillow library can generate images

I built a small Flask web app: you upload the EPUB to my server, my server does some processing, and sends the EPUB back to you. But for such a simple app, do I need a server? I tried rebuilding it as a static web page, doing all the processing in client-side JavaScript. That’s simpler for me to host, and it doesn’t involve a round-trip to my server. That has lots of other benefits – it’s faster, less of a privacy risk, and doesn’t require a persistent connection. I love static websites, so can they do this? Yes! I just had to find a different set of libraries:

- The JSZip library can unpack and modify the EPUB/ZIP, and is the only third-party code I’m using in the tool
- Browsers include DOMParser for manipulating XML
- I’ve already mentioned the HTML <canvas> element for rendering the image

This took a bit longer because I’m not as familiar with JavaScript, but I got it working. As a bonus, this makes the tool very portable. Everything is bundled into a single HTML file, so if you download that file, you have the whole tool. If my friend finds this tool useful, she can save the file and keep a local copy of it – she doesn’t have to rely on my website to keep using it.

What should it look like?

My first design was very “engineer brain” – I just put the basic controls on the page. It was fine, but it wasn’t good. That might be okay, because the only person I need to be able to use this app is my friend – but wouldn’t it be nice if other people were able to use it? If they’re going to do that, they need to know what it is – most people aren’t going to read a 2,500 word blog post to understand a tool they’ve never heard of. (Although if you have read this far, I appreciate you!)

I started designing a proper page, including some explanations and descriptions of what the tool is doing. I got something that felt pretty good, including FAQs and acknowledgements, and I added a grey area for the part where you actually upload and download your EPUBs, to draw the user’s eye and make it clear this is the important stuff. But even with that design, something was missing. I realised I was telling you I’d create covers, but not showing you what they’d look like. Aha! I sat down and made up a bunch of amusing titles for fanfic and fanfic authors, so now you see a sample of the covers before you upload your first EPUB. This makes it clearer what the app will do, and was a fun way to wrap up the project.

What did I learn from this project?

Don’t be scared of new file formats

My first instinct was to look for a third-party library that could handle the “complexity” of EPUB files.
In hindsight, I’m glad I didn’t find one – it forced me to learn more about how EPUBs work, and I realised I could write my own code using built-in libraries. EPUB files are essentially ZIP files, and I only had basic needs, so I was able to write my own code. Because I didn’t rely on a library, I now know more about EPUBs, I have code that’s simpler and easier for me to understand, and I don’t have a dependency that may cause problems later. There are definitely some file formats where I need existing libraries (I’m not going to write my own JPEG parser, for example) – but I should be more open to writing my own code, and not jump straight to adding a dependency.

Static websites can handle complex file manipulations

I love static websites and I’ve used them for a lot of tasks, but mostly read-only display of information – not anything more complex or interactive. But modern JavaScript is very capable, and you can do a lot of things with it. Static pages aren’t just for static data. One of the first things I made that got popular was find untagged Tumblr posts, which was built as a static website because that’s all I knew how to build at the time. Somewhere in the intervening years, I forgot just how powerful static sites can be. I want to build more tools this way.

Async JavaScript calls require careful handling

The JSZip library I’m using has a lot of async functions, and this is my first time using async JavaScript. I got caught out several times, because I forgot to wait for async calls to finish properly. For example, I’m using canvas.toBlob to render the image, which is an async function. I wasn’t waiting for it to finish, and so the zip would be repackaged before the cover image was ready to add, and I got an EPUB with a missing image. Oops. I think I’ll always prefer the simplicity of synchronous code, but I’m sure I’ll get better at async JavaScript with practice.

Final thoughts

I know my friend will find this helpful, and that feels great. Writing software that’s designed for one person is my favourite software to write. It’s not hyper-scale, it won’t launch the next big startup, and it’s usually not breaking new technical ground – but it is useful. I can see how I’m making somebody’s life better, and isn’t that what computers are for? If other people like it, that’s a nice bonus, but I’m really thinking about that one person. Normally the one person I’m writing software for is me, so it’s extra nice when I can do it for somebody else.

If you want to try this tool yourself, go to alexwlchan.net/my-tools/add-cover-to-ao3-epubs/ If you want to read the code, it’s all on GitHub.
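As a closing aside: the “add the image, then add two elements to content.opf” recipe from earlier roughly comes down to the following. This is a simplified sketch rather than the tool’s actual code, and it hard-codes the OPF path and file names for illustration; in a real EPUB, the OPF location comes from META-INF/container.xml.

    // Assumes JSZip is loaded, epubBytes is the uploaded EPUB as an ArrayBuffer,
    // and coverBlob is a JPEG produced elsewhere (e.g. via canvas.toBlob).
    async function addCoverToEpub(epubBytes, coverBlob, opfPath = "content.opf") {
      const zip = await JSZip.loadAsync(epubBytes);

      // Step 1: add the image file to the zipped bundle
      zip.file("images/cover.jpg", coverBlob);

      // Step 2: declare it in content.opf – a <meta name="cover"> in <metadata>
      // and an <item> with properties="cover-image" in <manifest>
      const opfXml = await zip.file(opfPath).async("string");
      const doc = new DOMParser().parseFromString(opfXml, "application/xml");
      const opfNs = "http://www.idpf.org/2007/opf";

      const meta = doc.createElementNS(opfNs, "meta");
      meta.setAttribute("name", "cover");
      meta.setAttribute("content", "cover-image");
      doc.getElementsByTagName("metadata")[0].appendChild(meta);

      const item = doc.createElementNS(opfNs, "item");
      item.setAttribute("id", "cover-image");
      item.setAttribute("href", "images/cover.jpg");
      item.setAttribute("media-type", "image/jpeg");
      item.setAttribute("properties", "cover-image");
      doc.getElementsByTagName("manifest")[0].appendChild(item);

      zip.file(opfPath, new XMLSerializer().serializeToString(doc));

      // Step 3: repackage – generateAsync is async too, so remember to await it!
      return zip.generateAsync({ type: "blob" });
    }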

2 hours ago 2 votes
Non-alcoholic apéritifs

I’ve been doing Dry January this year. One thing I missed was something for apéro hour, a beverage to mark the start of the evening. Something complex and maybe bitter, not like a drink you’d have with lunch. I found some good options.

Ghia sodas are my favorite. Ghia is an NA apéritif based on grape juice but with enough bitterness (gentian) and sourness (yuzu) to be interesting. You can buy a bottle and mix it with soda yourself but I like the little cans with extra flavoring. The Ginger and the Sumac & Chili are both great.

Another thing I like is low-sugar fancy soda pops. Not diet drinks, they still have a little sugar, but typically 50 calories a can. De La Calle Tepache is my favorite. Fermented pineapple is delicious and they have some fun flavors. Culture Pop is also good.

A friend gave me the Zero book, a drinks cookbook from the fancy restaurant Alinea. This book is a little aspirational but the recipes are doable, it’s just a lot of labor. Very fancy high end drink mixing, really beautiful flavor ideas. The only thing I made was their gin substitute (mostly junipers extracted in glycerin) and it was too sweet for me. Need to find the right use for it, a martini definitely ain’t it.

An easier homemade drink is this Nonalcoholic Dirty Lemon Tonic. It’s basically a lemonade heavily flavored with salted preserved lemons, then mixed with tonic. I love the complexity and freshness of this drink and enjoy it on its own merits.

Finally, non-alcoholic beer has gotten a lot better in the last few years thanks to manufacturing innovations. I’ve been enjoying NA Black Butte Porter, Stella Artois 0.0, Heineken 0.0. They basically all taste just like their alcoholic uncles, no compromise.

One thing to note about non-alcoholic substitutes is they are not cheap. They’ve become a big high end business. Expect to pay the same for an NA drink as one with alcohol even though they aren’t taxed nearly as much.

2 days ago 5 votes
It burns

The first time we had to evacuate Malibu this season was during the Franklin fire in early December. We went to bed with our bags packed, thinking they'd probably get it under control. But by 2am, the roaring blades of fire choppers shaking the house got us up. As we sped down the canyon towards Pacific Coast Highway (PCH), the fire had reached the ridge across from ours, and flames were blazing large out the car windows. It felt like we had left the evacuation a little too late, but they eventually did get Franklin under control before it reached us.

Humans have a strange relationship with risk and disasters. We're so prone to wishful thinking and bad pattern matching. I remember people being shocked when the flames jumped the PCH during the Woolsey fire in 2018. IT HAD NEVER DONE THAT! So several friends of ours had to suddenly escape a nightmare scenario, driving through burning streets, in heavy smoke, with literally their lives on the line. Because the past had failed to predict the future.

I fell into that same trap for a moment with the dramatic proclamations of wind and fire weather in the days leading up to January 7. Warning after warning of "extremely dangerous, life-threatening wind" coming from the City of Malibu, and that overly-bureaucratic-but-still-ominous "Particularly Dangerous Situation" designation. Because, really, how much worse could it be? Turns out, a lot.

It was a little before noon on the 7th when we first saw the big plumes of smoke rise from the Palisades fire. And immediately the pattern matching ran astray. Oh, it's probably just like Franklin. It's not big yet, they'll get it out. They usually do. Well, they didn't. By the late afternoon, we had once more packed our bags, and by then it was also clear that things actually were different this time. Different worse. Different enough that even Santa Monica didn't feel like it was assured to be safe. So we headed far north, to be sure that we wouldn't have to evacuate again. Turned out to be a good move. Because by now, into the evening, few people in the connected world hadn't started to see the catastrophic images emerging from the Palisades and Eaton fires. Well over 10,000 houses would ultimately burn. Entire neighborhoods leveled. Pictures that could be mistaken for World War II. Utter and complete destruction.

By the night of the 7th, the fire reached our canyon, and it tore through the chaparral and brush that'd been building since the last big fire that area saw in 1993. Out of some 150 houses in our immediate vicinity, nearly a hundred burned to the ground. Including the first house we moved to in Malibu back in 2009. But thankfully not ours. That's of course a huge relief. This was and is our Malibu Dream House. The site of that gorgeous home office I'm so fond of sharing views from. Our home.

But a house left standing in a disaster zone is still a disaster. The flames reached all the way up to the base of our construction, incinerated much of our landscaping, and devoured the power poles around it to the point of dysfunction. We have burnt-out buildings every which way the eye looks. The national guard is still stationed at road blocks on the access roads. Utility workers are tearing down the entire power grid to rebuild it from scratch. It's going to be a long time before this is comfortably habitable again. So we left. That in itself feels like defeat. There's an urge to stay put, and to help, in whatever helpless ways you can.
But with three school-age children who've already missed over a month's worth of learning from power outages, fire threats, actual fires, and now mudslide dangers, it was time to go. None of this came as a surprise, mind you. After Woolsey in 2018, Malibu life always felt like living on borrowed time to us. We knew it, even accepted it. Beautiful enough to be worth the risk, we said.

But even if it wasn't a surprise, it's still a shock. The sheer devastation, especially in the Palisades, went far beyond our normal range of comprehension. Bounded, as it always is, by past experiences.

Thus, we find ourselves back in Copenhagen. A safe haven for calamities of all sorts. We lived here for three years during the pandemic, so it just made sense to use it for refuge once more. The kids' old international school accepted them right back in, and past friendships were quickly rebooted.

I don't know how long it's going to be this time. And that's an odd feeling to have, just as America has been turning a corner, and just as the optimism is back in so many areas. Of the twenty years I've spent in America, this feels like the most exciting time to be part of the exceptionalism that the US of A offers. And of course we still are. I'll still be in the US all the time on business, racing, and family trips. But it won't be exclusively so for a while, and it won't be from our Malibu Dream House. And that burns.

2 days ago 6 votes
Slow, flaky, and failing

Thou shalt not suffer a flaky test to live, because it’s annoying, counterproductive, and dangerous: one day it might fail for real, and you won’t notice. Here’s what to do.

3 days ago 6 votes
Name that Ware, January 2025

The ware for January 2025 is shown below. Thanks to brimdavis for contributing this ware! …back in the day when you would get wares that had “blue wires” in them… One thing I wonder about this ware is…where are the ROMs? Perhaps I’ll find out soon! Happy year of the snake!

3 days ago 4 votes