Last weekend, I wrote a more or less casual post on LinkedIn containing the ‘rules’ (it’s more of a list of terms and conditions, really) I set for myself when it comes to using AI. That post received some interesting comments that made me think and refine my thoughts on when (not) to use AI to support me in my work. Thank you to all of you who commented for doing so, and for showing me that there still is value in being active on LinkedIn in between all the AI-generated ‘content’. I really appreciate it. Now, AI and LLMs like ChatGPT or Claude can be very useful, that is, when used prudently. I think it is very important to be conscious and cautious when it comes to using AI, though, which is why I wrote that post. I wrote it mostly for myself, to structure my thoughts around AI, but also because I think it is important that others are at least conscious of what they’re doing and working with. That doesn’t mean you have to adhere to or even agree with my views and the way I use these...
3 days ago


More from On Test Automation

Improving the tests for RestAssured.Net with mutation testing and Stryker.NET

When I build and release new features or bug fixes for RestAssured.Net, I rely heavily on the acceptance tests that I wrote over time. Next to serving as living documentation for the library, I run these tests both locally and on every push to GitHub to see if I didn't accidentally break something, for different versions of .NET. But how reliable are these tests really? Can I trust them to pass and fail when they should? Did I cover all the things that are important? I speak, write and teach about the importance of testing your tests on a regular basis, so it makes sense to start walking the talk and get more insight into the quality of the RestAssured.Net test suite.

One approach to learning more about the quality of your tests is through a technique called mutation testing. I speak about and demo testing your tests and using mutation testing to do so on a regular basis (you can watch a recent talk here), but until now, I've pretty much exclusively used PITest for Java. As RestAssured.Net is a C# library, I can't use PITest, but I'd heard many good things about Stryker.NET, so this would be a perfect opportunity to finally use it.

Adding Stryker.NET to the RestAssured.Net project

The first step was to add Stryker.NET to the RestAssured.Net project. Stryker.NET is a dotnet tool, so installing it is straightforward: run dotnet new tool-manifest to create a new, project-specific tool manifest (this was the first local dotnet tool for this project) and then dotnet tool install dotnet-stryker to add Stryker.NET as a dotnet tool to the project.

Running mutation tests for the first time

Running mutation tests with Stryker.NET is just as straightforward: dotnet stryker --project RestAssured.Net.csproj from the tests project folder is all it takes. Because both my test suite (about 200 tests) and the project itself are relatively small code bases, and because my test suite runs quickly, running mutation tests for my entire project works for me. It still took around five minutes for the process to complete. If you have a larger code base and longer-running test suites, you'll see that mutation testing will take much, much longer. In that case, it's probably best to start on a subset of your code base and a subset of your test suite.

After five minutes and change, the results are in: Stryker.NET created 538 mutants from my application code base. Of these:

390 were killed, that is, at least one test failed because of this mutation,
117 survived, that is, the change did not make any of the tests fail, and
31 resulted in a timeout, which I'll need to investigate further, but I suspect it has something to do with HTTP timeouts (RestAssured.Net is an HTTP API testing library, and all acceptance tests perform actual HTTP requests).

This leads to an overall mutation testing score of 59.97%. Is that good? Is that bad? In all honesty, I don't know, and I don't care. Just like with code coverage, I am not a fan of setting fixed targets for this type of metric, as these will typically lead to writing tests for the sake of improving a score rather than for actual improvement of the code. What I am much more interested in is the information that Stryker.NET produced during the mutation testing process.

Opening the HTML report

I was surprised to see that out of the box, Stryker.NET produces a very good-looking and incredibly helpful HTML report. It provides a high-level overview of the results, as well as in-depth detail for every mutant that was killed or that survived.
It offers a breakdown of the results per namespace and per class, and it is the starting point for further drilling down into results for individual mutants. Let's have a look and see if the report provides some useful, actionable information for us to improve the RestAssured.Net test suite.

Missing coverage

Like many other mutation testing tools, Stryker.NET provides code coverage information along with mutation coverage information. That is, if there is code in the application code base that was mutated, but that is not covered by any of the tests, Stryker.NET will inform you about it. Here's an example: Stryker.NET changed the message of an exception thrown when RestAssured.Net is asked to deserialize a response body that is either null or empty. Apparently, there is no test in the test suite that covers this path in the code. As this particular code path deals with exception handling, it's probably a good idea to add a test for it:

[Test]
public void EmptyResponseBodyThrowsTheExpectedException()
{
    var de = Assert.Throws<DeserializationException>(() =>
    {
        Location responseLocation = (Location)Given()
            .When()
            .Get($"{MOCK_SERVER_BASE_URL}/empty-response-body")
            .DeserializeTo(typeof(Location));
    });

    Assert.That(de?.Message, Is.EqualTo("Response content is null or empty."));
}

I added the corresponding test in this commit.

Removed code blocks

Another type of mutant that Stryker.NET generates is the removal of a code block. Going by the mutation testing report, it seems like there are a few of these mutants that are not detected by any of the tests. Here's an example: the return statement for the Put() method body, which is used to perform an HTTP PUT operation, is replaced with an empty method body, but this is not picked up by any of the tests. The same applies to the methods for HTTP PATCH, DELETE, HEAD and OPTIONS. Looking at the tests that cover the different HTTP verbs, this makes sense. While I do call each of these HTTP methods in a test, I don't assert on the result for the aforementioned HTTP verbs. When I say 'it works', I am basically relying on the fact that no exception is thrown when I call Put(). Let's change that by at least asserting on a property of the response that is returned when these HTTP verbs are used:

[Test]
public void HttpPutCanBeUsed()
{
    Given()
        .When()
        .Put($"{MOCK_SERVER_BASE_URL}/http-put")
        .Then()
        .StatusCode(200);
}

These assertions were added to the RestAssured.Net test suite in this commit.

Improving testability

The next signal I received from this initial mutation testing run is an interesting one. It tells me that even though I have acceptance tests that add cookies to the request and that only pass when the request contains the cookies I set, I'm not properly covering some logic that I added. To understand what is going on here, it is useful to know that a Cookie in C# offers a constructor that creates a Cookie specifying only a name and a value, but that a cookie has to have a domain value set. To enforce that, I added the logic you see in the screenshot. However, Stryker.NET tells me I'm not properly testing this logic, because changing its implementation doesn't cause any tests to fail. Now, I might be able to test this specific logic with a few added acceptance tests, but it really is only a small piece of logic, and I should be able to test that logic in isolation, right?
Well, not with the code written the way it currently is… So, time to extract that piece of logic into a class of its own, which will both improve the modularity of the code and allow me to test it in isolation. First, let's extract the logic into a CookieUtils class:

internal class CookieUtils
{
    internal Cookie SetDomainFor(Cookie cookie, string hostname)
    {
        if (string.IsNullOrEmpty(cookie.Domain))
        {
            cookie.Domain = hostname;
        }

        return cookie;
    }
}

I deliberately made this class internal as I don't want it to be directly accessible to RestAssured.Net users. However, as I do need to access it in the tests, I have to add this little snippet to the RestAssured.Net.csproj file:

<ItemGroup>
    <InternalsVisibleTo Include="$(MSBuildProjectName).Tests" />
</ItemGroup>

Now, I can add unit tests that should cover both paths in the SetDomainFor() logic:

[Test]
public void CookieDomainIsSetToDefaultValueWhenNotSpecified()
{
    Cookie cookie = new Cookie("cookie_name", "cookie_value");

    CookieUtils cookieUtils = new CookieUtils();

    cookie = cookieUtils.SetDomainFor(cookie, "localhost");

    Assert.That(cookie.Domain, Is.EqualTo("localhost"));
}

[Test]
public void CookieDomainIsUnchangedWhenSpecifiedAlready()
{
    Cookie cookie = new Cookie("cookie_name", "cookie_value", "/my_path", "strawberry.com");

    CookieUtils cookieUtils = new CookieUtils();

    cookie = cookieUtils.SetDomainFor(cookie, "localhost");

    Assert.That(cookie.Domain, Is.EqualTo("strawberry.com"));
}

These changes were added to the RestAssured.Net source and test code in this commit.

An interesting mutation

So far, all the signals that appeared in the mutation testing report generated by Stryker.NET have been valuable: they have pointed me at code that isn't covered by any tests yet and at tests that could be improved, and they have led to code refactoring to improve testability. Using Stryker.NET (and mutation testing in general) does sometimes lead to some, well, interesting mutations, like this one: I'm checking that a certain string is either null or an empty string, and if either condition is true, RestAssured.Net throws an exception. Perfectly valid. However, Stryker.NET changes the logical OR to a logical AND (a common mutation), which makes it impossible for the condition to evaluate to true. Is that even a useful mutation to make? Well, to some extent, it is. Even if the code doesn't make sense anymore after it has been mutated, it does tell you that your tests for this logical condition probably need some improvement. In this case, I don't have to add more tests, as we discussed this exact statement earlier (remember that it had no test coverage at all). It did make me look at this statement once again, though, and only then did I realize that I could simplify this code snippet to

if (string.IsNullOrEmpty(responseBodyAsString))
{
    throw new DeserializationException("Response content is null or empty.");
}

Instead of a custom-built logical OR, I am now using a construct built into C#, which is arguably the safer choice. In general, if your mutation testing tool generates several (or even many) mutants for the same code statement or block, it might be a good idea to have another look at that code and see if it can be simplified. This was just a very small example, but I think this observation holds true in general. This change was added to the RestAssured.Net source and test code in this commit.
Running mutation tests again and inspecting the results

Now that several (supposed) improvements to the tests and the code have been made, let's run the mutation tests another time to see if the changes improved our score. In short:

397 mutants were killed now, up from 390 (that's good),
111 mutants survived, down from 117 (that's also good), and
there were 32 timeouts, up from 31 (that needs some further investigation).

Overall, the mutation testing score went up from 59.97% to 61.11%. This might not seem like much, but it is definitely a step in the right direction. The most important thing for me right now is that my tests for RestAssured.Net have improved, my code has improved and I learned a lot about mutation testing and Stryker.NET in the process.

Am I going to run mutation tests every time I make a change? Probably not. There is quite a lot of information to go through, and that takes time, time that I don't want to spend for every build. For that reason, I'm also not going to make these mutation tests part of the build and test pipeline for RestAssured.Net, at least not any time soon. This was nonetheless both a very valuable and a very enjoyable exercise, and I'll definitely keep improving the tests and the code for RestAssured.Net using the suggestions that Stryker.NET presents.

a week ago 10 votes
On working and contributing to conferences abroad

This blog post is another one in the ‘writing things down to structure my thinking on where I want my career to go’ series. I will get back to writing technical and automation blog posts soon, but I need to finish my contract testing course first. One of the things I like to do most in life is traveling and seeing new places. Well, seeing new places, mostly, as the novelty of waiting, flying and staying in hotel rooms has definitely worn off by now. I am in the privileged position (really, that is what it is: I’m privileged, and I fully realize that) that I get to scratch this travel itch professionally on a regular basis these days. Over the last few years, I have been invited to contribute to meetups and conferences abroad, and I also get to run in-house training sessions with companies outside the Netherlands a couple of times per year. Most of this traveling takes place within Europe, but for the last three years, I have been able to travel outside of Europe once every year (South Africa in 2022, Canada in 2023 and the United States in 2024), and needless to say I have enjoyed those opportunities very much. To give you an idea of the amount of traveling I do: for 2025, I now have four work-related trips abroad scheduled, and I am pretty sure at least a few more will be added to that before the year ends (it’s only just February…). That might not be much travel by some people’s standards, but for me, it is. And it seems the number of opportunities I get for traveling increase year over year, to the point where I have to say ‘no’ to several of these opportunities. Say no? Why? I thought you just said you loved to travel? Yes, that’s true. I do love to travel. But I also love spending time at home with my family, and that comes first. Always. Now, my sons are getting older, and being away from home for a few days doesn’t put as much pressure on them and on my wife as it did a few years ago. Still, I always need to find a balance between spending time with them and spending time at work. I am away from home for work not just when I’m abroad. I run evening training sessions with clients here in the Netherlands on a regular basis, too, as well as training sessions in my evenings for clients in different time zones, mainly US-based clients. And all that adds up. I try to only be away from home one night per week, but often, it’s two. When I travel abroad, it’s even more than that. Again, I’m not complaining. Not at all. It is an absolute privilege to get to travel for work and get paid to do that, but I cannot do that indefinitely, and that’s why I have made a decision: With a few exceptions (more on those below), I am going to say ‘no’ to conferences abroad from now on. This is a tough decision for me to make, but sometimes that’s exactly what you need to do. Tough, because I have very fond memories of all the conferences and meetups abroad I have contributed to. My first one, Romanian Testing Conference in 2017. My first keynote abroad, UKStar in 2019. My first one outside of Europe, Targeting Quality in 2023. They were all amazing, because of the travel and sightseeing (when time allowed), but also because of all the people I have met at these conferences. Yet, I can meet at least some of these people at conferences here in the Netherlands, too. Test Automation Days, the TestNet events, the Dutch Testing Day and TestMass all provide a great opportunity for me to catch up with my network. Sometimes, international conferences come to the Netherlands, too, like AutomationSTAR this year. 
And then there are plenty of smaller meetups here in the Netherlands (and Belgium) where I can meet and catch up with people as well.

Plus, the money. I am not going to be a hypocrite and say that money doesn't play into this. For the reasons mentioned above, I have a limited number of opportunities to travel every year, and I prefer to spend those on in-house training sessions with clients abroad, simply because the pay is much better. Even when a conference compensates flights and hotel (as they should) and offers a speaker or workshop facilitator fee (a nice bonus), it will be significantly less of a payday than when I run a training session with a client. That's not the fault of those conferences, not at all, especially when they're compensating their speakers fairly, but this is simply a matter of numbers and budgets.

At the moment, I have one, maybe two contributions to conferences abroad coming up, and I gave them my word, so I'll be there. That's the SAST 30-year anniversary conference in October, plus one other conference that I'm talking to but haven't received a 'yes' or 'no' from yet. Other than that, if conferences reach out to me, it's likely to be a 'no' from now on, unless:

the event pays a fee comparable to my rate for in-house training,
I can combine the event with paid in-house training (for example with a sponsor), or
it is a country or region I really, really want to visit, either for personal reasons or because I want to grow my professional network there.

I don't see the first one happening soon, and the list of destinations for the third one is very short (Norway, Canada, New Zealand, that's pretty much it), so unless we can arrange paid in-house training alongside the conference, the answer will be a 'no' from me.

Will this reduce the number of travel opportunities for me? Maybe. Maybe not. Again, I see the number of requests I get for in-house training abroad growing, too, and if that dies down, it'll be a sign for me that I'll have to work harder to create those opportunities. For 2025, things are looking pretty good, with trips for training to Romania, North Macedonia and Denmark already scheduled, and several leads for more in the pipeline. And if the number of opportunities does go down, that's fine, too. I'm happy to spend that time with family, working on other things, or riding my bike. And I'm sure there will be a few opportunities to speak at online meetups, events and webinars, too.

2 weeks ago 14 votes
My career and a thought experiment

As is the case every year, 2025 is starting off relatively slowly. There aren't a lot of training courses to run yet, and since a few of the projects I worked on wrapped up in December, I find myself with a little bit of extra time and headspace on my hands. I actually enjoy these slower moments, because they give me some time to think about where my professional career is going, if I'm still happy with the direction it is going in, and what I would like to see changed.

Last year, I quit doing full time projects as an individual contributor to development teams in favour of part-time consultancy work and more focus on my training services. 2024 has been a great year overall, and I would be happy to continue working in this way in 2025. However, as a thought experiment, I took some time to think about what it would take for me to go back to full time roles, or maybe (maybe!) even consider joining a company on a permanent basis.

Please note that this post is not intended as an 'I need a job!' cry for help. My pipeline for 2025 is slowly but surely filling up, and again, I am very happy with the direction my career is going at the moment. However, I have learned that it never hurts to leave your options open, and even though I love the variety in my working days these days, I think I would enjoy working with one team, on one goal, for an extended amount of time, too, under the right conditions. If nothing else, this post might serve as a reference post to send to people and companies that reach out to me with a full time contract opportunity or even a permanent job opening. This is also not a list of requirements that is set in stone. As my views on what would make a great job change (and they will), I will update this post to reflect those views.

So, to even consider joining a company on a full-time contract or even a permanent basis, there are basically three things I will and should consider:

What does the job look like? What will I be doing on a day-to-day basis?
What are the must-haves regarding terms and conditions?
What are the nice to haves that would provide the icing on the cake for me?

Let's take a closer look at each of these things.

What I look for in a job

As I mentioned before, I am not looking for a job as an individual contributor to a development team. I have done that for many years, and it does not really give me the energy that it used to. On the other hand, I am definitely not looking for a hands-off, managerial kind of role, as I'd like to think I would make an atrocious manager. Plus, I simply enjoy being hands-on and writing code way too much to let that go. I would like to be responsible for designing and implementing the testing and automation strategy for a product I believe in. It would be a lead role, but, as mentioned, with plenty (as in daily) opportunities to get hands-on and contribute to the code.

The work would have to be technically and mentally challenging enough to keep me motivated in the long term. Getting bored quickly is something I suffer from, which is the main driver behind only doing part-time projects and working on multiple different things in parallel right now. I don't want to work for a consultancy and be 'farmed out' to their clients. I've done that pretty much my entire career, and if that's what the job will look like, I'd rather keep working the way I'm working now.
The must-haves

There are (quite) a few things that are non-negotiable for me to even consider joining a company full time, no matter if it's on a contract or a permanent basis.

The pay must be excellent. Let's not beat around the bush here: people work to make money. I do, too. I'm doing very well right now, and I don't want that to change.

The company should be output-focused, as in they don't care when I work, how many hours I put in and where I work from, as long as the job gets done. I am sort of spoiled by my current way of working, I fully realise that, but I've grown to love the flexibility. By the way, please don't read 'flexible' as 'working willy-nilly'. Most work is not done in a vacuum, and you will have to coordinate with others. The key word here is 'balance'.

Collaboration should be part of the company culture. I enjoy working in pair programming and pair testing setups. What I do not like are pointless meetings, and that includes having Scrum ceremonies 'just because'.

The company should be a remote-first company. I don't mind the occasional office day, but I value my time too much to spend hours per week on commuting. I've done that for years, and it is time I'll never get back.

The company should actively stimulate me contributing to conferences and meetups. Public speaking is an important part of my career at the moment, and I get a lot of value from it. I don't want to give that up.

There should be plenty of opportunities for teaching others. This is what I do for a living right now, I really enjoy it, and I'd like to think I'm pretty good at it, too. Just like with the public speaking, I don't want to give that up. This teaching can take many forms, though. Running workshops and regular pairing with others are just two examples.

The job should scratch my travel itch. I travel abroad for work on average about 5-6 times per year these days, and I would like to keep doing that, as I get a lot of energy from seeing different places and meeting people. Please note that 'traveling' and 'commuting' are two completely different things.

Yes, I realize this is quite a long list, but I really enjoy my career at the moment, and there are a lot of aspects to it that I'm not ready to give up.

The nice to haves

There are also some things that are not strictly necessary, but would be very nice to have in a job or full time contract:

The opportunity to continue working on side gigs. I have a few returning customers that I've been working with for years, and I would really appreciate the opportunity to continue doing that. I realise that I would have to give up some things, but there are a few clients that I would really like to keep working with. By the way, this is only a nice to have for permanent jobs. For contracting gigs, it is a must-have.

It would be very nice if the technology stack that the company is using is based on C#. I've been doing quite a bit of work in this stack over the years and I would like to go even deeper.

If the travel itch I mentioned under the must-haves could be scratched with regular travel to Canada, Norway or South Africa, three of my favourite destinations in the world, that would be a very big plus.

I realize that the list of requirements above is a long one. I don't think there is a single job out there that ticks all the boxes. But, again, I really like what I'm doing at the moment, and most of the boxes are ticked at the moment.
I would absolutely consider going full time with a client or even an employer, but I want it to be a step forward, not a step back. After all, this is mostly a thought experiment at the moment, and until that perfect contract or job comes along, I’ll happily continue what I’m doing right now.

a month ago 35 votes
RestAssured .NET in 2024 - a review

As a (sort of) follow-up post to my yearly review for 2024, in this post, I would like to go over the changes, bug fixes and new features that have been introduced in RestAssured .NET in 2024. This year, I released 7 new versions of the library, and while none of the versions included changes that were worthy of a blog post on their own, I thought it would be a good idea to wrap them all up in a single overview. Basically, this blog post is an extended version of the library's CHANGELOG. I'll go through the new versions chronologically, starting with the first release of 2024.

Version 4.2.2 - released April 23

RestAssured .NET 4.2.2 fixes a bug that prevented JSON responses that are an array from being properly verified. In other words, if the JSON response body looks like this:

[
  { "id": 1, "text": "Do the dishes" },
  { "id": 2, "text": "Clean out the trash" },
  { "id": 3, "text": "Read the newspaper" }
]

I would expect this test to pass:

[Test]
public void JsonArrayResponseBodyElementCanBeVerifiedUsingNHamcrestMatcher()
{
    Given()
        .When()
        .Get("http://localhost:9876/json-array-response-body")
        .Then()
        .StatusCode(200)
        .Body("$[1].text", NHamcrest.Is.EqualTo("Clean out the trash"));
}

but prior to this version, it threw a Newtonsoft.Json.JsonReaderException. The solution? Adding a try-catch that first tries to parse the JSON response as a JObject (equal to existing behaviour), catches the JsonReaderException and tries again, now parsing the JSON response into a JArray. That made the newly added test pass without failing any other tests. Another demonstration of the added value of having a decent set of tests. RestAssured .NET is slowly growing and becoming more complex, and having a test suite I can run locally, and that always runs when I push code to GitHub, is an invaluable safety net for me. These tests run in a few seconds, yet they give me invaluable feedback on the effect of new features, bug fixes and code refactoring efforts. I haven't heard back from the person submitting the original issue, but I assume that this fixed their issue.
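The post doesn't show the fallback code itself, but the general shape of a JObject-first, JArray-second parse might look something like the sketch below. This is only an illustration using Newtonsoft.Json's JObject.Parse() and JArray.Parse(), not the actual RestAssured .NET implementation, and the helper name is made up:

using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

internal static class ResponseBodyParser
{
    // Hypothetical helper: try to parse the response body as a JSON object first,
    // and fall back to parsing it as a JSON array when that fails.
    internal static JToken Parse(string responseBodyAsString)
    {
        try
        {
            // Existing behaviour: assume the response body is a JSON object.
            return JObject.Parse(responseBodyAsString);
        }
        catch (JsonReaderException)
        {
            // Fallback: the response body is a JSON array, e.g. [ {...}, {...} ].
            return JArray.Parse(responseBodyAsString);
        }
    }
}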
Version 4.3.0 - released August 16

I love learning about how people use RestAssured .NET, because invariably they will use it in ways I haven't foreseen. I was unfamiliar with the concept of server-sent events (SSE) in APIs, for example, yet there are people looking to test these kinds of APIs using RestAssured .NET. It turned out that what this user was looking for was a way to set the HttpCompletionOption value on the System.Net.Http.HttpClient that is wrapped by RestAssured .NET. To enable this, I added a method to the DSL that looks like this:

Given()
    .UseHttpCompletionOption(HttpCompletionOption.ResponseHeadersRead)

I also added the option to specify the HttpCompletionOption to be used in a RequestSpecification as well as in the global config. A straightforward fix that solved the problem for this specific user. The only thing I don't like here is that I don't know of a way to test this locally. Do you? I would love to hear it.

Version 4.3.1 - released August 22

Another user pointed out to me that trying to verify that the value of a JSON response body element is an empty array also threw an exception. So, if the JSON response body looks like this:

{
  "success": true,
  "errors": []
}

this test should pass, but instead it threw a Newtonsoft.Json.JsonSerializationException:

[Test]
public void JsonResponseBodyElementEmptyArrayValueCanBeVerifiedUsingNHamcrestMatcher()
{
    Given()
        .When()
        .Get("http://localhost:9876/json-empty-array-response-body")
        .Then()
        .StatusCode(200)
        .Body("$.errors", NHamcrest.Is.OfLength(0));
}

The fix? Adding some code that checks if the element returned when evaluating the JsonPath expression is a JArray or a JObject and using the right matching logic accordingly. I used my preferred procedure here:

first, write a failing test that reproduces the issue,
then, make the test pass without breaking any other tests,
refactor the code, document and release.

Does this procedure sound familiar to you?

Version 4.4.0 - released October 21

As you can probably tell from the semantic versioning, this version introduced a new feature to RestAssured .NET: the ability to use NTLM authentication when making an HTTP call. To enable this, I added a new method to the DSL:

Given()
    .NtlmAuth() // This one uses default NTLM credentials for the current user
    .NtlmAuth("username", "password", "domain") // This one uses custom NTLM credentials

As I had no idea how to write a proper test for this, even though I had tested it before releasing using Fiddler, I released a beta version first that the person submitting the issue could use to verify the solution. I'm happy to say that it worked for them and that the solution could be released properly. Again, if someone can think of a way to add a proper test for NTLM authentication to the test suite, I would love to hear it. All that the current tests do is run the code and see if no exception is thrown. Not a good test, but until I find a better way, it will have to do.
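Under the hood, NTLM support in .NET typically comes down to attaching credentials to the HttpClientHandler. I haven't checked how RestAssured .NET wires this up exactly, so treat the snippet below as a rough sketch of the general mechanism rather than the library's internals:

using System.Net;
using System.Net.Http;

// Plain HttpClient setup with NTLM credentials; not the RestAssured .NET internals.
var handler = new HttpClientHandler
{
    // Default NTLM credentials for the current user...
    Credentials = CredentialCache.DefaultNetworkCredentials,

    // ...or custom credentials for a specific account:
    // Credentials = new NetworkCredential("username", "password", "domain"),
};

using var client = new HttpClient(handler);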
Version 4.5.0 - released November 19

This version introduced not one, but two changes. First, since .NET 9 was officially released earlier that week (or maybe the week before, I forgot), I needed to release a RestAssured .NET version that targets .NET 9, so I did. Just like with .NET 8, I didn't really have to change anything to the code other than adding net9.0 to the TargetFrameworks and adding .NET 9 to the build pipeline for the library to make sure that every change is tested on .NET 9, too. Happy to say it all 'just worked'.

The other change took more effort: a user reported that they could not override the ResponseLogLevel set in a RequestSpecification at the individual test level. The reason? In the existing code, the response was logged directly after the HTTP call completed, so before any calls to Log() for the response. When Log() was called on the response, it was then logged again. I have no idea how I completely overlooked this until now, but I did. Rewriting the code to make this work took longer than I expected, but I managed in the end, through quite a bit of trial and error and lots of human-centered testing (again, no idea how to write automated tests for this).

The logging functionality of RestAssured .NET is something I intend to rewrite in the future, for a couple of reasons:

It's impossible to write automated tests for it (or at least I don't know how to do this).
Ideally, I want the logging to be more configurable and extensible to give users more flexibility than they have at the moment.

Version 4.5.1 - released November 20

As one does, I found an issue with the updated logging logic almost immediately after releasing 4.5.0 to the public: masking of sensitive headers and cookies didn't work anymore when specified as part of a RequestSpecification. Lucky for me, this was a quick fix, but a bit embarrassing nonetheless. Had I had proper automated tests for the logging in place, I probably would have caught this before releasing 4.5.0… Anyway, it's fixed now, as far as I can tell.

Version 4.6.0 - released December 9

The final RestAssured .NET release of 2024 added the capability to strip the ; charset=<some_charset> suffix from the Content-Type header in a request. It turns out that some APIs explicitly expect this header to not contain the charset suffix, but the way I create a request, or rather, the way .NET creates a StringContent object, will add it by default. This issue was a great example of one of the main reasons why I started this project: there is so much I don't know yet about HTTP, APIs, C#/.NET and other technologies, and working on these issues and improving RestAssured .NET gives me an opportunity to learn them. I make a habit of writing down what I learned in the issue on GitHub, so I can review it later, and so I can point others to these links and thoughts, too. So, if you're looking for a way to strip the charset identifier from the Content-Type header in the request, you can now do that by passing an optional second boolean argument to Body() (defaults to false):

Given()
    .Body(your_body_goes_here, stripCharset: true)

That's it! As you can see, lots of small changes, bug fixes and new features have been added to RestAssured .NET this year. Oh, and before I forget: with every release, I also made sure to update the dependencies I use to create and test RestAssured .NET to their latest versions. I consider that good housekeeping, and it's all part of keeping a library up to date. I am looking forward to seeing the library evolve and improve further in 2025.

2 months ago 55 votes

More in programming

Five Kinds of Nondeterminism

No newsletter next week, I'm teaching a TLA+ workshop. Speaking of which: I spend a lot of time thinking about formal methods (and TLA+ specifically) because it's the source of almost all my revenue. But I don't share most of the details because 90% of my readers don't use FM and never will. I think it's more interesting to talk about ideas from FM that would be useful to people outside that field. For example, the idea of "property strength" translates to the idea that some tests are stronger than others. Another possible export is how FM approaches nondeterminism.

A nondeterministic algorithm is one that, from the same starting conditions, has multiple possible outputs. This is nondeterministic:

# Pseudocode
def f() {
  return rand()+1;
}

When specifying systems, I may not encounter nondeterminism more often than in real systems, but I am definitely more aware of its presence. Modeling nondeterminism is a core part of formal specification. I mentally categorize nondeterminism into five buckets. Caveat: this is specifically about nondeterminism from the perspective of system modeling, not computer science as a whole. If I tried to include stuff on NFAs and amb operations this would be twice as long. [1]

1. True Randomness

Programs that literally make calls to a random function and then use the results. This is the simplest type of nondeterminism and one of the most ubiquitous. Most of the time, random isn't truly nondeterministic. Most of the time computer randomness is actually pseudorandom, meaning we seed a deterministic algorithm that behaves "randomly-enough" for some use. You could "lift" a nondeterministic random function into a deterministic one by adding a fixed seed to the starting state.

# Python
from random import random, seed

def f(x):
  seed(x)
  return random()

>>> f(3)
0.23796462709189137
>>> f(3)
0.23796462709189137

Often we don't do this because the point of randomness is to provide nondeterminism! We deliberately abstract out the starting state of the seed from our program, because it's easier to think about it as locally nondeterministic. (There's also "true" randomness, like using thermal noise as an entropy source, which I think is mainly used for cryptography and seeding PRNGs.)

Most formal specification languages don't deal with randomness (though some deal with probability more broadly). Instead, we treat it as a nondeterministic choice:

# software
if rand > 0.001 then return a else crash

# specification
either return a or crash

This is because we're looking at worst-case scenarios, so it doesn't matter if crash happens 50% of the time or 0.0001% of the time, it's still possible.

2. Concurrency

# Pseudocode
global x = 1, y = 0;

def thread1() { x++; x++; x++; }

def thread2() { y := x; }

If thread1() and thread2() run sequentially, then (assuming the sequence is fixed) the final value of y is deterministic. If the two functions are started and run simultaneously, then depending on when thread2 executes y can be 1, 2, 3, or 4. Both functions are locally sequential, but running them concurrently leads to global nondeterminism. Concurrency is arguably the most dramatic source of nondeterminism. Small amounts of concurrency lead to huge explosions in the state space. We have words for the specific kinds of nondeterminism caused by concurrency, like "race condition" and "dirty write". Often we think about it as a separate topic from nondeterminism.
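To make that concrete outside of pseudocode, here is a small, runnable C# version of the same two threads (my own illustration, not from the original post). Which value y ends up with depends entirely on how the scheduler interleaves the two tasks:

using System;
using System.Threading.Tasks;

int x = 1, y = 0;

// Two "threads" with a deliberate, unsynchronized race on x and y.
var thread1 = Task.Run(() => { x++; x++; x++; });
var thread2 = Task.Run(() => { y = x; });

Task.WaitAll(thread1, thread2);

// Depending on when thread2 runs, y can be 1, 2, 3, or 4 across runs.
Console.WriteLine(y);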
To some extent it "overshadows" the other kinds: I have a much easier time teaching students about concurrency in models than nondeterminism in models. Many formal specification languages have special syntax/machinery for the concurrent aspects of a system, and generic syntax for other kinds of nondeterminism. In P that's choose. Others don't special-case concurrency, instead representing it as nondeterministic choices by a global coordinator. This is more flexible but also more inconvenient, as you have to implement process-local sequencing code yourself.

3. User Input

One of the most famous and influential programming books is The C Programming Language by Kernighan and Ritchie. The first example of a nondeterministic program appears on page 14. For the newsletter readers who get text only emails, [2] here's the program:

#include <stdio.h>

/* copy input to output; 1st version */
main()
{
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}

Yup, that's nondeterministic. Because the user can enter any string, any call of main() could have any output, meaning the number of possible outcomes is infinity. Okay, that seems a little cheap, and I think it's because we tend to think of determinism in terms of how the user experiences the program. Yes, main() has an infinite number of user inputs, but for each input the user will experience only one possible output.

It starts to feel more nondeterministic when modeling a long-standing system that's reacting to user input, for example a server that runs a script whenever the user uploads a file. This can be modeled with nondeterminism and concurrency: we have one execution that's the system, and one nondeterministic execution that represents the effects of our user. (One intrusive thought I sometimes have: any "yes/no" dialogue actually has three outcomes: yes, no, or the user getting up and walking away without picking a choice, permanently stalling the execution.)

4. External forces

The more general version of "user input": anything where either 1) some part of the execution outcome depends on retrieving external information, or 2) the external world can change some state outside of your system. I call the distinction between internal and external components of the system the world and the machine. Simple examples: code that at some point reads an external temperature sensor. Unrelated code running on a system which quits programs if it gets too hot. API requests to a third party vendor. Code processing files but users can delete files before the script gets to them.

Like with PRNGs, some of these cases don't have to be nondeterministic; we can argue that "the temperature" should be a virtual input into the function. Like with PRNGs, we treat it as nondeterministic because it's useful to think in that way. Also, what if the temperature changes between starting a function and reading it?

External forces are also a source of nondeterminism as uncertainty. Measurements in the real world often come with errors, so repeating a measurement twice can give two different answers. Sometimes operations fail for no discernable reason, or for a non-programmatic reason (like something physically blocks the sensor). All of these situations can be modeled in the same way as user input: a concurrent execution making nondeterministic choices.

5. Abstraction

This is where nondeterminism in system models and in "real software" differ the most. I said earlier that pseudorandomness is arguably deterministic, but we abstract it into nondeterminism.
More generally, nondeterminism hides implementation details of deterministic processes. In one consulting project, we had a machine that received a message, parsed a lot of data from the message, went into a complicated workflow, and then entered one of three states. The final state was totally deterministic on the content of the message, but the actual process of determining that final state took tons and tons of code. None of that mattered at the scope we were modeling, so we abstracted it all away: "on receiving message, nondeterministically enter state A, B, or C."

Doing this makes the system easier to model. It also makes the model more sensitive to possible errors. What if the workflow is bugged and sends us to the wrong state? That's already covered by the nondeterministic choice! Nondeterministic abstraction gives us the potential to pick the worst-case scenario for our system, so we can prove it's robust even under those conditions.

I know I beat the "nondeterminism as abstraction" drum a whole lot, but that's because it's the insight from formal methods I personally value the most: that nondeterminism is a powerful tool to simplify reasoning about things. You can see the same approach in how I approach modeling users and external forces: complex realities black-boxed and simplified into nondeterministic forces on the system.

Anyway, I hope this collection of ideas I got from formal methods is useful to my broader readership. Lemme know if it somehow helps you out!

[1] I realized after writing this that I already wrote an essay about nondeterminism in formal specification just under a year ago. I hope this one covers enough new ground to be interesting! ↩

[2] There is a surprising number of you. ↩

16 hours ago 4 votes
When to give up

Most of our cultural virtues, celebrated heroes, and catchy slogans align with the idea of "never give up". That's a good default! Most people are inclined to give up too easily, as soon as the going gets hard. But it's also worth remembering that sometimes you really should fold, admit defeat, and accept that your plan didn't work out. But how to distinguish between a bad plan and insufficient effort? It's not easy. Plenty of plans look foolish at first glance, especially to people without skin in the game. That's the essence of a disruptive startup: The idea ought to look a bit daft at first glance or it probably doesn't carry the counter-intuitive kernel needed to really pop. Yet it's also obviously true that not every daft idea holds the potential to be a disruptive startup. That's why even the best venture capital investors in the world are wrong far more than they're right. Not because they aren't smart, but because nobody is smart enough to predict (the disruption of) the future consistently. The best they can do is make long bets, and then hope enough of them pay off to fund the ones that don't. So far, so logical, so conventional. A million words have been written by a million VCs about how their shrewd eyes let them see those hidden disruptive kernels before anyone else could. Good for them. What I'm more interested in knowing more about is how and when you pivot from a promising bet to folding your hand. When do you accept that no amount of additional effort is going to get that turkey to soar? I'm asking because I don't have any great heuristics here, and I'd really like to know! Because the ability to fold your hand, and live to play your remaining chips another day, isn't just about startups. It's also about individual projects. It's about work methods. Hell, it's even about politics and societies at large. I'll give you just one small example. In 2017, Rails 5.1 shipped with new tooling for doing end-to-end system tests, using a headless browser to validate the functionality, as a user would in their own browser. Since then, we've spent an enormous amount of time and effort trying to make this approach work. Far too much time, if you ask me now. This year, we finished our decision to fold, and to give up on using these types of system tests on the scale we had previously thought made sense. In fact, just last week, we deleted 5,000 lines of code from the Basecamp code base by dropping literally all the system tests that we had carried so diligently for all these years. I really like this example, because it draws parallels to investing and entrepreneurship so well. The problem with our approach to system tests wasn't that it didn't work at all. If that had been the case, bailing on the approach would have been a no brainer long ago. The trouble was that it sorta-kinda did work! Some of the time. With great effort. But ultimately wasn't worth the squeeze. I've seen this trap snap on startups time and again. The idea finds some traction. Enough for the founders to muddle through for years and years. Stuck with an idea that sorta-kinda does work, but not well enough to be worth a decade of their life. That's a tragic trap. The only antidote I've found to this on the development side is time boxing. Programmers are just as liable as anyone to believe a flawed design can work if given just a bit more time. And then a bit more. And then just double of what we've already spent. The time box provides a hard stop. In Shape Up, it's six weeks. Do or die. Ship or don't. That works. 
But what's the right amount of time to give a startup or a methodology or a societal policy? There's obviously no universal answer, but I'd argue that whatever the answer, it's "less than you think, less than you want". Having the grit to stick with the effort when the going gets hard is a key trait of successful people. But having the humility to give up on good bets turned bad might be just as important.

3 hours ago 2 votes
How I create static websites for tiny archives

Last year I wrote about using static websites for tiny archives. The idea is that I create tiny websites to store and describe my digital collections. There are several reasons I like this approach: HTML is flexible and lets me display data in a variety of ways; it's likely to remain readable for a long time; it lets me add more context than a folder full of files. I'm converting more and more of my local data to be stored in static websites – paperwork I've scanned, screenshots I've taken, and web pages I've bookmarked. I really like this approach.

I got a lot of positive feedback, but the most common reply was "please share some source code". People wanted to see examples of the HTML and JavaScript I was using. I deliberately omitted any code from the original post, because I wanted to focus on the concept, not the detail. I was trying to persuade you that static websites are a good idea for storing small archives and data sets, and I didn't want to get distracted by the implementation. There's also no single code base I could share – every site I build is different, and the code is often scrappy or poorly documented. I've built dozens of small sites this way, and there's no site that serves as a good example of this approach – they're all built differently, implement a subset of my ideas, or have hard-coded details. Even if I shared some source code, it would be difficult to read or understand what's going on.

However, there's clearly an appetite for that sort of explanation, so this follow-up post will discuss the "how" rather than the "why". There's a lot of code, especially JavaScript, which I'll explain in small digestible snippets. That's another reason I didn't describe this in the original post – I didn't want anyone to feel overwhelmed or put off. A lot of what I'm describing here is nice-to-have, not essential. You can get started with something pretty simple.

I'll go through a feature at a time, as if we were building a new static site. I'll use bookmarks as an example, but there's nothing in this post that's specific to bookmarking. If you'd like to see everything working together, check out the demo site. It includes the full code for all the sections in this post. Let's dive in!

Start with a hand-written HTML page (demo)
Reduce repetition with JavaScript templates (demo)
Add filtering to find specific items (demo)
Introduce sorting to bring order to your data (demo)
Use pagination to break up long lists (demo)
Provide feedback with loading states and error handling (demo 1, demo 2)
Test the code with QUnit and Playwright
Manipulate the metadata with Python
Store the website code in Git
Closing thoughts

Start with a hand-written HTML page (demo)

A website can be a single HTML file you edit by hand. Open a text editor like TextEdit or Notepad, copy-paste the following text, and save it in a file named bookmarks.html.

<h1>Bookmarks</h1>
<ul>
  <li><a href="https://estherschindler.medium.com/the-old-family-photos-project-lessons-in-creating-family-photos-that-people-want-to-keep-ea3909129943">Lessons in creating family photos that people want to keep, by Esther Schindler (2018)</a></li>
  <li><a href="https://www.theatlantic.com/technology/archive/2015/01/why-i-am-not-a-maker/384767/">Why I Am Not a Maker, by Debbie Chachra (The Atlantic, 2015)</a></li>
  <li><a href="https://meyerweb.com/eric/thoughts/2014/06/10/so-many-nevers/">So Many Nevers, by Eric Meyer (2014)</a></li>
</ul>

If you open this file in your web browser, you'll see a list of three links. You can also check out my demo page to see this in action.
This is an excellent way to build a website. If you stop here, you've got all the flexibility and portability of HTML, and this file will remain readable for a very long time. I build a lot of sites this way. I like it for small data sets that I know are never going to change, or which change very slowly. It's simple, future-proof, and easy to edit if I ever need to.

Reduce repetition with JavaScript templates (demo)

As you store more data, it gets a bit tedious to keep copying the HTML markup for each item. Wouldn't it be useful if we could push it into a reusable template? When a site gets bigger, I convert the metadata into JSON, then I use JavaScript and template literals to render it on the page. Let's start with a simple example of metadata in JSON. My real data has more fields, like date saved or a list of keyword tags, but this is enough to get the idea:

const bookmarks = [
  {
    "url": "https://estherschindler.medium.com/the-old-family-photos-project-lessons-in-creating-family-photos-that-people-want-to-keep-ea3909129943",
    "title": "Lessons in creating family photos that people want to keep, by Esther Schindler (2018)"
  },
  {
    "url": "https://www.theatlantic.com/technology/archive/2015/01/why-i-am-not-a-maker/384767/",
    "title": "Why I Am Not a Maker, by Debbie Chachra (The Atlantic, 2015)"
  },
  {
    "url": "https://meyerweb.com/eric/thoughts/2014/06/10/so-many-nevers/",
    "title": "So Many Nevers, by Eric Meyer (2014)"
  }
];

Then I have a function that renders the data for a single bookmark as HTML:

function Bookmark(bookmark) {
  return `
    <li>
      <a href="${bookmark.url}">${bookmark.title}</a>
    </li>
  `;
}

Having a function that returns HTML is inspired by React and Next.js, where code is split into "components" that each render part of the web app. This function is simpler than what you'd get in React. Part of React's behaviour is that it will re-render the page if the data changes, but my function won't do that. That's okay, because my data isn't going to change. The HTML gets rendered once when the page loads, and that's enough.

I'm using a template literal because I find it simple and readable. It looks pretty close to the actual HTML, so I have a pretty good idea of what's going to appear on the page. Template literals are dangerous if you're getting data from an untrusted source – it could allow somebody to inject arbitrary HTML into your page – but I'm writing all my metadata, so I trust it. I know there are other ways to construct HTML in JavaScript, like document.createElement(), the <template> element, or Web Components – but template literals have always been sufficient for me, and I've never had a reason to explore other options.

Now we have to call this function when the page loads, and render the list of bookmarks. Here's the rest of the code:

<script>
  window.addEventListener("DOMContentLoaded", () => {
    document.querySelector("#listOfBookmarks").innerHTML =
      bookmarks.map(Bookmark).join("");
  });
</script>

<h1>Bookmarks</h1>

<ul id="listOfBookmarks"></ul>

I'm listening for the DOMContentLoaded event, which occurs when the HTML page has been fully parsed. When that event occurs, it looks for <ul id="listOfBookmarks"> in the page, and inserts the HTML for the list of bookmarks. We have to wait for this event so the <ul> actually exists. If we tried to run it immediately, it might run before the <ul> exists, and then it wouldn't know where to insert the HTML.

I'm using querySelector() to find the <ul> I want to modify – this is a newer alternative to functions like getElementById().
It's quite flexible, because I can target any CSS selector, and I find CSS rules easier to remember than the family of getElementBy* functions. Although it's slightly slower in benchmarks, the difference is negligible and it's easier for me to remember.

If you want to see this page working, check out the demo page. I use this pattern as a starting point for a lot of my static sites – metadata in JSON, some functions that render HTML, and an event listener that renders the whole page after it loads. Once I have the basic site, I add data, render more HTML, and write CSS styles to make it look pretty. This is where I can have fun, and really customise each site. I keep tweaking until I have something I like. I'm ignoring CSS because that could be a whole other post, and there's a vintage charm to unstyled HTML – it's fine for what we're discussing today. What else can we do?

Add filtering to find specific items (demo)

As the list gets even longer, it's useful to have a way to find specific items in the list – I don't want to scroll the whole thing every time. I like adding keyword tags to my data, and then filtering for items with particular tags. If I add other metadata fields, I could filter on those too. Here's a brief sketch of the sort of interface I like: I like to be able to define a series of filters, and apply them to focus on a specific subset of items. I like to combine multiple filters to refine my search, and to see a list of applied filters with a way to remove them, if I've filtered too far. I like to apply filters from a global menu, or to use controls on each item to find similar items.

I use URL query parameters to store the list of currently-applied filters, for example:

bookmarks.html?tag=animals&tag=wtf&publicationYear=2025

This means that any UI element that adds or removes a filter is a link to a new URL, so clicking it loads a new page, which triggers a complete re-render with the new filters.

When I write filtering code, I try to make it as easy as possible to define new filters. Every site needs a slightly different set of filters, but the overall principle is always the same: here's a long list of items, find the items that match these rules.

Let's start by expanding our data model to include a couple of new fields:

const bookmarks = [
  {
    "url": "https://estherschindler.medium.com/the-old-family-photos-project-lessons-in-creating-family-photos-that-people-want-to-keep-ea3909129943",
    "title": "Lessons in creating family photos that people want to keep, by Esther Schindler (2018)",
    "tags": ["photography", "preservation"],
    "publicationYear": "2018"
  },
  …
];

Then we can define some filters we might use to narrow the list:

const bookmarkFilters = [
  {
    id: 'tag',
    label: 'tagged with',
    filterFn: (bookmark, tagName) => bookmark.tags.includes(tagName),
  },
  {
    id: 'publicationYear',
    label: 'published in',
    filterFn: (bookmark, year) => bookmark.publicationYear === year,
  },
];

Each filter has three fields:

id matches the name of the associated URL query parameter
label is how the filter will be described in the list of applied filters
filterFn is a function that takes two arguments: a bookmark, and a filter value, and returns true/false depending on whether the bookmark matches this filter

This list is the only place where I need to customise the filters for a particular site; the rest of the filtering code is completely generic. This means there's only one place I need to make changes if I want to add or remove filters.
The next piece of the filtering code is a generic function that filters a list of items, and takes the list of filters as an argument:

    /*
     * Filter a list of items.
     *
     * This function takes the list of items and available filters, and the
     * URL query parameters passed to the page.
     *
     * This function returns a list with the items that match these filters,
     * and a list of filters that have been applied.
     */
    function filterItems({ items, filters, params }) {
      // By default, all items match, and no filters are applied.
      var matchingItems = items;
      var appliedFilters = [];

      // Go through the URL query params one by one, and look to
      // see if there's a matching filter.
      for (const [key, value] of params) {
        console.debug(`Checking query parameter ${key}`);
        const matchingFilter = filters.find(f => f.id === key);

        if (typeof matchingFilter === 'undefined') {
          continue;
        }

        // There's a matching filter!  Go ahead and filter the
        // list of items to only those that match.
        console.debug(`Detected filter ${JSON.stringify(matchingFilter)}`);
        matchingItems = matchingItems.filter(
          item => matchingFilter.filterFn(item, value)
        );

        // Construct a new query string that doesn't include
        // this filter.
        const altQuery = new URLSearchParams(params);
        altQuery.delete(key, value);
        const linkToRemove = "?" + altQuery.toString();

        appliedFilters.push({
          type: matchingFilter.id,
          label: matchingFilter.label,
          value,
          linkToRemove,
        })
      }

      return { matchingItems, appliedFilters };
    }

This function doesn’t care what sort of items I’m passing, or what the actual filters are, so I can reuse it between different sites.

It returns the list of matching items, and the list of applied filters. The latter allows me to show that list on the page. linkToRemove is a link to the same page with this filter removed, but keeping any other filters. This lets us provide a button that removes the filter.

The final step is to wire this filtering into the page render. We need to make sure we only show items that match the filter, and show the user a list of applied filters. Here’s the new code:

    <script>
    window.addEventListener("DOMContentLoaded", () => {
      const params = new URLSearchParams(window.location.search);

      const { matchingItems: matchingBookmarks, appliedFilters } = filterItems({
        items: bookmarks,
        filters: bookmarkFilters,
        params: params,
      });

      document.querySelector("#appliedFilters").innerHTML =
        appliedFilters
          .map(f => `<li>${f.label}: ${f.value} <a href="${f.linkToRemove}">(remove)</a></li>`)
          .join("");

      document.querySelector("#listOfBookmarks").innerHTML =
        matchingBookmarks.map(Bookmark).join("");
    });
    </script>

    <h1>Bookmarks</h1>

    <p>Applied filters:</p>
    <ul id="appliedFilters"></ul>

    <p>Bookmarks:</p>
    <ul id="listOfBookmarks"></ul>

I stick to simple filters that can be phrased as a yes/no question, and I rely on my past self to have written sufficiently useful metadata. At least in static sites, I’ve never implemented anything like a fuzzy text search, where it’s less obvious whether a particular item should match.

You can check out the filtering code on the demo page.

The next feature I usually implement is sorting. I build a dropdown menu with all the options, and picking one reloads the page with the new sort order.

For example, I often sort by the date I saved an item, so I can find an item I saved recently. Another sort order I often use is “random”, which shuffles the items and is a fun way to explore the data.
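That one barely needs any code. As a quick sketch of my own (not from the demo pages), a compare function that returns a random sign will jumble the list – it isn’t a statistically uniform shuffle, but it’s fine for casual browsing:

    // Sketch: a "random" compare function.  Returning a random positive or
    // negative number makes sort() reorder the items unpredictably.
    // (Not a uniform shuffle – use Fisher-Yates if that matters to you.)
    const randomCompareFn = () => Math.random() - 0.5;

    // e.g. bookmarks.sort(randomCompareFn);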
As with filters, I put the current sort order in a query parameter, for example:

    bookmarks.html?sortOrder=titleAtoZ

As before, I want to write this in a generic way and share code between different sites.

Let’s start by defining a list of sort options:

    const bookmarkSortOptions = [
      {
        id: 'titleAtoZ',
        label: 'title (A to Z)',
        compareFn: (a, b) => a.title > b.title ? 1 : -1,
      },
      {
        id: 'publicationYear',
        label: 'publication year (newest first)',
        compareFn: (a, b) => Number(b.publicationYear) - Number(a.publicationYear),
      },
    ];

Each sort option has three fields:

- id is the value that will appear in the URL query parameter
- label is the human-readable label that will appear in the dropdown
- compareFn(a, b) is a function that compares two items, and will be passed directly to the JavaScript sort function. If it returns a negative value, then a sorts before b. If it returns a positive value, then a sorts after b.

Next, we can define a function that will sort a list of items:

    /*
     * Sort a list of items.
     *
     * This function takes the list of items and available options, and the
     * URL query parameters passed to the page.
     *
     * It returns a list with the items in sorted order, and the
     * sort order that was applied.
     */
    function sortItems({ items, sortOptions, params }) {
      // Did the user pass a sort order in the query parameters?
      const sortOrderId = getSortOrder(params);

      // What sort order are we using?
      //
      // Look for a matching sort option, or use the default if the sort
      // order is null/unrecognised.  For now, use the first defined
      // sort order as the default.
      const defaultSort = sortOptions[0];
      const selectedSort =
        sortOptions.find(s => s.id === sortOrderId) || defaultSort;

      console.debug(`Selected sort: ${JSON.stringify(selectedSort)}`);

      // Now apply the sort to the list of items.
      const sortedItems = items.sort(selectedSort.compareFn);

      return { sortedItems, appliedSortOrder: selectedSort };
    }

    /* Get the current sort order from the URL query parameters. */
    function getSortOrder(params) {
      return params.get("sortOrder");
    }

This function works with any list of items and sort orders, making it easy to reuse across different sites. I only have to define the list of sort orders once.

This approach makes it easy to add new sort orders, and to write a component that renders a dropdown menu to pick the sort order:

    /*
     * Create a dropdown control to choose the sort order.  When you pick
     * a different value, the page reloads with the new sort.
     */
    function SortOrderDropdown({ sortOptions, appliedSortOrder }) {
      return `
        <select onchange="setSortOrder(this.value)">
          ${
            sortOptions
              .map(({ id, label }) => `
                <option value="${id}" ${id === appliedSortOrder.id ? 'selected' : ''}>
                  ${label}
                </option>
              `)
              .join("")
          }
        </select>
      `;
    }

    function setSortOrder(sortOrderId) {
      const params = new URLSearchParams(window.location.search);
      params.set("sortOrder", sortOrderId);
      window.location.search = params.toString();
    }

Finally, we can wire the sorting code into the rest of the app. After filtering, we sort the items and then render the sorted list.
We also show the sort controls on the page:

    <script>
    window.addEventListener("DOMContentLoaded", () => {
      const params = new URLSearchParams(window.location.search);

      const { matchingItems: matchingBookmarks, appliedFilters } = filterItems(…);

      …

      const { sortedItems: sortedBookmarks, appliedSortOrder } = sortItems({
        items: matchingBookmarks,
        sortOptions: bookmarkSortOptions,
        params,
      });

      document.querySelector("#sortOrder").innerHTML +=
        SortOrderDropdown({ sortOptions: bookmarkSortOptions, appliedSortOrder });

      document.querySelector("#listOfBookmarks").innerHTML =
        sortedBookmarks.map(Bookmark).join("");
    });
    </script>

    <p id="sortOrder">Sort by:</p>

You can check out the sorting code on the demo page.

If you have a really long list of items, you may want to break them into multiple pages.

This isn’t something I do very often. Modern web browsers are very performant, and you can put thousands of elements on the page without breaking a sweat. I’ve only had to add pagination in a couple of very image-heavy sites – if it’s a text-based site, I just show everything. (You may notice that, for example, there are no paginated lists anywhere on this site. By writing lean HTML, I can fit all my lists on a single page.)

If I do want pagination, I stick to a classic design: previous/next links and the current page number.

As with other features, I use a URL query parameter to track the current page number:

    bookmarks.html?pageNumber=2

This code can be written in a completely generic way – it doesn’t have to care what sort of items we’re paginating.

First, let’s write a function that will select a page of items for us. If we’re on page N, what items should we be showing?

    /*
     * Get a page of items.
     *
     * This function will reduce the list of items to the items that should
     * be shown on this particular page.
     */
    function paginateItems({ items, pageNumber, pageSize }) {
      // Page numbers are 1-indexed, so page 1 corresponds to
      // the indices 0…(pageSize - 1).
      const startOfPage = (pageNumber - 1) * pageSize;
      const endOfPage = pageNumber * pageSize;
      const thisPage = items.slice(startOfPage, endOfPage);

      return {
        thisPage,
        totalPages: Math.ceil(items.length / pageSize),
      };
    }

In some of my sites, the page size is a suggestion rather than a hard rule. If there are 27 items and the page size is 25, I think it’s nicer to show all the items on one page than push a few items onto a second page which barely has anything on it. But that might reflect my general dislike of pagination, and it’s definitely a nice-to-have rather than a required feature.

Once we know what page we’re on and how many pages there are, we can create a component to render some basic pagination controls:

    /*
     * Renders a list of pagination controls.
     *
     * This includes links to prev/next pages and the current page number.
     */
    function PaginationControls({ pageNumber, totalPages, params }) {
      // If there are no pages, we don't need pagination controls.
      if (totalPages === 1) {
        return "";
      }

      // Do we need a link to the previous page?  Only if we're past page 1.
      let prevPageLink;
      if (pageNumber > 1) {
        const prevPageUrl = setPageNumber({ params, pageNumber: pageNumber - 1 });
        prevPageLink = `<a href="${prevPageUrl}">&larr; prev</a>`;
      } else {
        prevPageLink = null;
      }

      // Do we need a link to the next page?  Only if we're before
      // the last page.
      let nextPageLink;
      if (pageNumber < totalPages) {
        const nextPageUrl = setPageNumber({ params, pageNumber: pageNumber + 1 });
        nextPageLink = `<a href="${nextPageUrl}">next &rarr;</a>`;
      } else {
        nextPageLink = null;
      }

      const pageText = `Page ${pageNumber} of ${totalPages}`;

      // Construct the final result.
      return [prevPageLink, pageText, nextPageLink]
        .filter(p => p !== null)
        .join(" / ");
    }

    /* Returns a URL that points to the new page number. */
    function setPageNumber({ params, pageNumber }) {
      const updatedParams = new URLSearchParams(params);
      updatedParams.set("pageNumber", pageNumber);
      return `?${updatedParams.toString()}`;
    }

Finally, let’s wire this code into the rest of the app. We get the page number from the URL query parameters, paginate the list of filtered and sorted items, and show some pagination controls:

    <script>
    /* Get the current page number. */
    function getPageNumber(params) {
      return Number(params.get("pageNumber")) || 1;
    }

    window.addEventListener("DOMContentLoaded", () => {
      const params = new URLSearchParams(window.location.search);

      const { matchingItems: matchingBookmarks, appliedFilters } = filterItems(…);
      const { sortedItems: sortedBookmarks, appliedSortOrder } = sortItems(…);

      const pageNumber = getPageNumber(params);

      const { thisPage: thisPageOfBookmarks, totalPages } = paginateItems({
        items: sortedBookmarks,
        pageNumber,
        pageSize: 25,
      });

      document.querySelector("#paginationControls").innerHTML +=
        PaginationControls({ pageNumber, totalPages, params });

      document.querySelector("#listOfBookmarks").innerHTML =
        thisPageOfBookmarks.map(Bookmark).join("");
    });
    </script>

    <p id="paginationControls">Pagination controls: </p>

One thing that makes pagination a little tricky is that it affects filtering and sorting as well – when you change either of those, you probably want to reset to the first page. For example, if you’re filtering for animals on page 3 and then add a second filter for giraffes, you should reset to page 1. If you stayed on page 3, it might be confusing if there are fewer than 3 pages of results with the new filter. The key to this is calling params.delete("pageNumber") when you update the URL query parameters.

You can play with the pagination on the demo page.

One problem with relying on JavaScript to render the page is that sometimes JavaScript goes wrong. For example, I write a lot of my metadata by hand, and a typo can create invalid JSON and break the page. There are also people who disable JavaScript, or sometimes it just doesn’t work.

If I’m using the site, I can open the Developer Tools in my web browser and start debugging there – but that’s not a great experience. If you’re not expecting something to go wrong, it will just look like the page is taking a long time to load. We can do better.

To start, we can add a <noscript> element that explains to users that they need to enable JavaScript. This will only be shown if they’ve disabled JavaScript:

    <noscript>
      <strong>You need to enable JavaScript to use this site!</strong>
    </noscript>

I have a demo page which disables JavaScript, so you can see how the noscript tag behaves.

This won’t help if JavaScript is broken rather than disabled, so we also need to add error handling. We can listen for the error event on the window, and report an error to the user – for example, if a script fails to load.

    <div id="errors"></div>

    <script>
    window.addEventListener("error", function(event) {
      document
        .querySelector('#errors')
        .innerHTML = `<strong>Something went wrong when loading the page!</strong>`;
    });
    </script>

We can also attach an onerror handler to specific script tags, which allows us to customise the error message – we can tell the user that a particular file failed to load.
<script src="app.js" onerror="alert('Something went wrong while loading app.js')"></script> I have another demo page which has a basic error handler. Finally, I like to include a loading indicator, or some placeholder text that will be replaced when the page will finish loading – this tells the user where they can expect to see something load in. <ul id="listOfBookmarks">Loading…</ul> It’s somewhat rare for me to add a loading indicator or error handling, just because I’m the only user of my static sites, and it’s easier for me to use the developer tools when something breaks. But providing mechanisms for the user to understand what’s going on is crucial if you want to build static sites like this that other people will use. Test the code with QUnit and Playwright If I’m writing a very complicated viewer, it’s helpful to have tests. I’ve found two test frameworks that I particularly like for this purpose. QUnit is a JavaScript library that I use for unit testing – to me, that means testing individual functions and components. For example, QUnit was very helpful when I was writing the early iterations of the sorting and filtering code, and writing tests caught a number of mistakes. You can run QUnit in the browser, and it only requires two files, so I can test a project without creating a whole JavaScript build system or dependency tree. Here’s an example of a QUnit test: QUnit.test("sorts bookmarks by title", function(assert) { // Create three bookmarks with different titles const bookmarkA = { title: "Almanac for apples" }; const bookmarkC = { title: "Compendium of coconuts" }; const bookmarkP = { title: "Page about papayas" }; const params = new URLSearchParams("sortOrder=titleAtoZ"); // Pass the bookmarks in the wrong order, so they can't be sorted // correctly "by accident" const { sortedItems, appliedSortOrder } = sortItems({ items: [bookmarkC, bookmarkA, bookmarkP], sortOptions: bookmarkSortOptions, params, }); // Check the bookmarks have been sorted in the right order assert.deepEqual(sortedItems, [bookmarkA, bookmarkC, bookmarkP]); }); You can see this test running in the browser in my demo page. Playwright is a testing library that can open a web app in a real web browser, interact with the page, and check that the app behaves correctly. It’s often used for dynamic web apps, but it works just as well for static pages. For example, it can test that if you select a new sort order, the page reloads and show results in the correct order. Here’s an example of a simple test written with Playwright in Python: from playwright.sync_api import expect, sync_playwright with sync_playwright() as p: browser = p.webkit.launch() # Open the HTML file in the browser page = browser.new_page() page.goto('file:///Users/alexwlchan/Sites/sorting.html') # Look for an <li> element with one of the bookmarks -- this will # only appear if the page has rendered correctly. expect(page.get_by_text("So Many Nevers")).to_be_visible() browser.close() These tools are a great safety net for catching mistakes, but I don’t always need them. I only write tests for my more complicated sites – when the sorting/filtering code is particularly complex, there’s a lot of rendering code, or I anticipate making major changes in future. I don’t bother with tests when the site is simple and unlikely to change, and I can just do manual checks when I write it the first time. Tests are less useful if I know I’ll never make changes. 
This is getting away from the idea of a self-contained static website, because now I’m relying on third-party code, and for Playwright I need to maintain a working Python environment. I’m okay with this, because the website is still usable even if I can no longer run the tests. These are useful sidecar tools, but I only need them if I’m making changes. If I finish a site and I know I won’t change it again, I don’t need to worry about whether the tests will still work years later.

Manipulate the metadata with Python

For small sites, we could write all this JavaScript directly in <script> tags or in a single file. As we get more data, splitting the metadata and application logic makes everything easier to manage.

One pattern I’ve adopted is to put all the item metadata into a single, standalone JavaScript file that assigns a single variable:

    const bookmarks = […];

and then load that file in the HTML page with a <script src="metadata.js"> element.

I use JavaScript rather than pure JSON because browsers don’t allow fetching local JSON files via file://. If you open an HTML page without a web server, the browser will block requests to fetch a JSON file because of security restrictions. By storing data in a JavaScript file instead, I can load it with a simple <script> tag.

I wrote a small Python library, javascript-data-files, that lets me interact with JSON stored this way. This allows me to write scripts that add data to the metadata file (like saving a new bookmark) or that verify the existing metadata (like checking that I have an archived copy of every bookmark). I’ll write more about this in future posts, because this one is long enough already.

For example, let’s add a new bookmark to the metadata.js file:

    from javascript_data_files import read_js, write_js

    bookmarks = read_js("metadata.js", varname="bookmarks")

    bookmarks.append({
        "url": "https://www.theguardian.com/lifeandstyle/2019/jan/13/ella-risbridger-john-underwood-friendship-life-new-family",
        "title": "When my world fell apart, my friends became my family, by Ella Risbridger (2019)"
    })

    write_js("metadata.js", varname="bookmarks", value=bookmarks)

We’re starting to blur the line between a static site and a static site generator. These scripts only work if I have a working Python environment, which is less future-proof than pure HTML. I’m happy with this compromise, because the website is fully functional without them – I only need to run these scripts if I’m modifying the metadata. If I stop making changes and the Python environment breaks, I can still read everything I’ve already saved.

Store the website code in Git

I create Git repositories for all of my local websites. This allows me to track changes, and it means I can experiment freely – I can always roll back if I break something.

These Git repositories only live on my local machine. I run git init . in the folder, I create commits to record any changes, and that’s it. I don’t push the repository to GitHub or another remote Git server. (Although I do have backups of every site, of course.) Git has a lot of features for writing code in a collaborative environment, but I don’t need any of those here – I’m the only person working on these sites.

Most of the time, I just use two commands:

    $ git add bookmarks.html
    $ git commit -m "Add filtering by author"

This creates a labelled snapshot of my latest changes to bookmarks.html.

I only track the text files in Git – the HTML, CSS, and JavaScript. I don’t track binary files like images and videos. Git struggles with those larger files, and I don’t edit them as much as the text files, so having them in version control is less useful. I write a gitignore file to ignore all of them.
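The exact contents vary per site, but it’s usually just a handful of patterns for the media folders – something like this sketch (the folder names are examples, not the layout of any particular site):

    # .gitignore – keep large binary files out of version control
    images/
    videos/
    *.jpg
    *.png
    *.mp4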
Closing thoughts

There are lots of ideas here, but you don’t need to use all of them – most of my sites only use a few. Every site is different, and you can pick what makes most sense for your project.

If you’re building a static site for a tiny archive, start with a simple HTML file. Add features like templates, sorting, and filtering incrementally as they become useful. You don’t need to add them all upfront – that can make things more complicated than they need to be.

This approach can scale from simple collections to sophisticated archives. A static website built with HTML and JavaScript is easy to maintain and modify, has no external dependencies, and is future-proof against a lot of technological changes.

I’ve come to love using static websites to store my local data. They’re flexible, resilient, and surprisingly powerful. I hope you’ll consider it too, and that these ideas help you get started.

[If the formatting of this post looks odd in your feed reader, visit the original article]

Nobody Profits

Intellectual property is a really dumb idea. “But piracy is theft. Clean and simple. It’s smash and grab. It ain’t no different than smashing a window at Tiffany’s and grabbing merchandise.” - Joe Biden, 46th president of the USA Except it isn’t and Joe Biden is a senile moron. Because when you smash the windows and grab the stuff, Tiffany’s no longer has the stuff. With piracy, everyone has the stuff. It’s a lot more like taking a picture, which Tiffany’s probably encourages. Win-win cooperation. Wealth is being increasingly concentrated. What’s shocking to me is how much everyone still cares about money. Even the die-hard complain-about-capitalism type deeply cares, because the opposite of love isn’t hate, it’s indifference. I hate scammers, but I’m pretty indifferent to money. The best outcome of AI is if it delivers huge amounts of value to society but no profit to anyone. The old days of the Internet were this goldmine. The Internet delivered huge value but no profit, and that’s why it was good. Suddenly we had all these new powers. Then people figured out how to monetize it. It was a race to extract every tiny bit of value, and now we have today’s Internet. Can this play out differently with AI? Let’s build technology and open source software that market-breaks everything. Let’s demoralize the scammers so hard that they don’t even try. Every loser and grifter will be gone from technology because there’s nothing to be gained there. They can play golf all day or something. If I ever figure out how to channel power like Elon, I will do this. Spin up open source projects in every sector to eliminate all the capturable value. This is what I’m trying to do with comma.ai and tinygrad. I dream of a day when company valuations halve when I create a GitHub repo. Someday.

My Top 15 OS Books: From Theory and Implementation to Systems Programming

A personal guide to the most useful books for understanding operating systems
