A few weeks ago, I ran a pair programming / mentoring session with someone who reached out to me because they felt they could use some support. When I first saw the code they wrote, I was pretty impressed. Sure, there were some things I would have done differently, but most of that was personal preference, not a matter of my way being objectively better than theirs. Instead of working on their code directly, we therefore decided to build up some test code together from zero, discussing and applying good programming principles and patterns along the way.

As the tests were using Playwright in TypeScript, and were heavily oriented towards the graphical user interface, we decided to start building a Page Object-based structure for a key component in their application: a UI component that enables an end user to create a report in the system. The exact type of system, or even the domain itself, isn't really important for the purpose of this blog post, by the way.

The component looked somewhat like this, heavily simplified: at the top, there is a radio button with three options that selects between different report layouts. Every report layout consists of multiple form fields. Most form fields are text areas, each with a lock button that opens a dropdown-like structure where you can edit the permissions for that field by selecting one or more roles that are allowed to view its contents (this is a privacy feature). And of course, there's a save button to save the report, as well as a print button. The actual UI component had a few other types of components, but for the sake of brevity, let's stick to these for now.

Iteration 0 - creating an initial Page Object

My approach whenever I start from scratch, either on my own or when working with someone else, is to take small steps and gradually introduce complexity. It might be tempting to immediately create a Page Object containing fields for all the elements and methods to interact with them, but that is going to get messy very quickly. Instead, we started with the simplest Page Object we could think of: one that allowed us to create a standard report, without considering the lock buttons that set permissions. Let's assume that a standard report consists of only a title and a summary text field.
The first iteration of that Page Object turned out to look something like this:

```typescript
import { Locator, Page } from '@playwright/test';

export class StandardReportPage {
  readonly page: Page;
  readonly radioSelectStandard: Locator;
  readonly textfieldTitle: Locator;
  readonly textfieldSummary: Locator;
  readonly buttonSaveReport: Locator;
  readonly buttonPrintReport: Locator;

  constructor(page: Page) {
    this.page = page;
    this.radioSelectStandard = page.getByLabel('Standard report');
    this.textfieldTitle = page.getByPlaceholder('Title');
    this.textfieldSummary = page.getByPlaceholder('Summary');
    this.buttonSaveReport = page.getByRole('button', { name: 'Save' });
    this.buttonPrintReport = page.getByRole('button', { name: 'Print' });
  }

  async select() {
    await this.radioSelectStandard.click();
  }

  async setTitle(title: string) {
    await this.textfieldTitle.fill(title);
  }

  async setSummary(summary: string) {
    await this.textfieldSummary.fill(summary);
  }

  async save() {
    await this.buttonSaveReport.click();
  }

  async print() {
    await this.buttonPrintReport.click();
  }
}
```

which makes the test using this Page Object look like this:

```typescript
import { expect, test } from '@playwright/test';

test('Creating a standard report', async ({ page }) => {
  const standardReportPage = new StandardReportPage(page);
  await standardReportPage.select();
  await standardReportPage.setTitle('My new report title');
  await standardReportPage.setSummary('Summary of the report');
  await standardReportPage.save();
  await expect(page.getByTestId('standard-report-save-success')).toBeVisible();
});
```

Iteration 1 - grouping element interactions

My first question after we implemented and used this Page Object was: 'how do you feel about the readability of this test?'. Of course, we had just written this code, and it's a small example, but imagine you're working with Page Objects that are all written like this and that offer many more element interactions. This quickly leads to very procedural test code ('enter this, enter that, click here, check there') that doesn't show the intent of the test very clearly. In other words, this coding style does not do a great job of hiding the implementation of the page (even though it hides the locators) and focusing only on behaviour. To improve this, I suggested grouping the element interactions that together form a logical end user interaction into a single method, and exposing that instead. When I read or write a test, I'm not particularly interested in the sequence of individual element interactions needed to perform a higher-level action. I'm not interested in 'filling a text field' or 'clicking a button', I'm interested in 'creating a standard report'.
This led us to refactor the Page Object into this:

```typescript
export class StandardReportPage {
  readonly page: Page;
  readonly radioSelectStandard: Locator;
  readonly textfieldTitle: Locator;
  readonly textfieldSummary: Locator;
  readonly buttonSaveReport: Locator;
  readonly buttonPrintReport: Locator;

  constructor(page: Page) {
    this.page = page;
    this.radioSelectStandard = page.getByLabel('Standard report');
    this.textfieldTitle = page.getByPlaceholder('Title');
    this.textfieldSummary = page.getByPlaceholder('Summary');
    this.buttonSaveReport = page.getByRole('button', { name: 'Save' });
    this.buttonPrintReport = page.getByRole('button', { name: 'Print' });
  }

  async select() {
    await this.radioSelectStandard.click();
  }

  async create(title: string, summary: string) {
    await this.textfieldTitle.fill(title);
    await this.textfieldSummary.fill(summary);
    await this.buttonSaveReport.click();
  }

  async print() {
    await this.buttonPrintReport.click();
  }
}
```

which in turn made the test look like this:

```typescript
test('Creating a standard report', async ({ page }) => {
  const standardReportPage = new StandardReportPage(page);
  await standardReportPage.select();
  await standardReportPage.create('My new report title', 'Summary of the report');
  await expect(page.getByTestId('standard-report-save-success')).toBeVisible();
});
```

Much better already when it comes to readability and 'expose behaviour, hide implementation'. Doing this is not unique to UI automation, or even to test automation in general, by the way. This principle is called encapsulation, and it is one of the fundamental principles of object-oriented programming. It is a very useful principle to know when you're writing test code, if you want to keep that code readable, that is.

Iteration 2 - adding the ability to set permissions on a form field

For our next step, we decided to introduce the ability to set the access permissions for every text field. As explained and shown in the graphical representation of the form at the top of this post, every form field in the standard form has an associated lock button that opens a small dialog where the user can select which user roles can and cannot see the report field. Our initial idea was to simply add additional fields to the Page Object representing the standard report. However, that would lead to a lot of repetitive work, and to the standard report Page Object containing a long list of element locators. So, we decided to see if we could treat the combination of a report text field and its associated permission lock button as a Page Component, i.e., a separate class that encapsulates the behaviour of a group of related elements on a specific page. Setting this up in a reusable manner is a lot easier when the HTML for these Page Components has the same structure across the entire application. The good news is that this is often the case, especially when frontend designers and developers design and implement frontends using tools like Storybook.
So, the relevant part of the HTML for the standard form might look like this (again, simplified):

```html
<div id="standard_form">
  <div data-testid="form_field_subject">
    <div data-testid="form_field_subject_textfield"></div>
    <div data-testid="form_field_subject_lock"></div>
  </div>
  <div data-testid="form_field_summary">
    <div data-testid="form_field_summary_textfield"></div>
    <div data-testid="form_field_summary_lock"></div>
  </div>
</div>
```

An example reusable Page Component class might then look something like this:

```typescript
export class ReportFormField {
  readonly page: Page;
  readonly textfield: Locator;
  readonly buttonLockPermissions: Locator;

  constructor(page: Page, formFieldName: string) {
    this.page = page;
    this.textfield = page.getByTestId(`${formFieldName}_textfield`);
    this.buttonLockPermissions = page.getByTestId(`${formFieldName}_lock`);
  }

  async complete(text: string, roles: string[]) {
    await this.textfield.fill(text);
    await this.buttonLockPermissions.click();
    // handle setting permissions for the form field
  }
}
```

Note how the constructor of this Page Component class uses (in fact, relies on) the predictable, repetitive structure of the component in the application and the presence of data-testid attributes. If your components do not have these, find a way to add them, or find another generic way to locate the individual elements of the component on the page.

Now that we have defined our Page Component class, we need to define the relationship between these Page Components and the Page Objects that contain them. In the past, I would default to creating base Page classes that contained the reusable Page Components, as well as other utility methods. The more specific Page Objects would then inherit from these base Pages, allowing them to use the methods defined in the parent base Page class. Almost invariably, at some point that would lead to very messy base Page classes, with lots of fields and methods that were only tangentially related, at best. The cause of this mess? Me not thinking clearly about the type of relation between the different Page Objects and Components. You see, creating base classes and using inheritance for reusability creates 'is-a' relations. These are useful when the relation between objects really is of an 'is-a' nature. In our case, however, there is no 'is-a' relation, there is a 'has-a' relation: a Page Object has a certain Page Component. In other words, we need to define the relationship in a different way, and that's by using composition instead of inheritance.
We define Page Components as components of our Page Objects, which makes for a far more natural relationship between the two, and for code that is much more clearly structured:

```typescript
export class StandardReportPage {
  readonly page: Page;
  readonly radioSelectStandard: Locator;
  readonly reportFormFieldTitle: ReportFormField;
  readonly reportFormFieldSummary: ReportFormField;
  readonly buttonSaveReport: Locator;
  readonly buttonPrintReport: Locator;

  constructor(page: Page) {
    this.page = page;
    this.radioSelectStandard = page.getByLabel('Standard report');
    this.reportFormFieldTitle = new ReportFormField(this.page, 'title');
    this.reportFormFieldSummary = new ReportFormField(this.page, 'summary');
    this.buttonSaveReport = page.getByRole('button', { name: 'Save' });
    this.buttonPrintReport = page.getByRole('button', { name: 'Print' });
  }

  async select() {
    await this.radioSelectStandard.click();
  }

  async create(title: string, summary: string, roles: string[]) {
    await this.reportFormFieldTitle.complete(title, roles);
    await this.reportFormFieldSummary.complete(summary, roles);
    await this.buttonSaveReport.click();
  }

  async print() {
    await this.buttonPrintReport.click();
  }
}
```

Reading this code feels far more natural than cramming everything into one or more parent classes or base Page Objects. Lesson learned here: the way objects are related in your code should reflect the relationship between those objects in real life, that is, in your application.

Iteration 3 - what about the other report types?

The development and refactoring steps we have gone through so far led us to a point where we were pretty happy with the code. However, we still only have Page Objects for a single type of form, and as you have seen in the sketch at the top of this blog post, there are different types of forms. How do we deal with those, especially when we know that these forms share some components and behaviour, but not all of them? It is tempting to immediately jump to conclusions and start throwing patterns and structures at the problem, but in pair programming sessions like this, I typically try to avoid finding and implementing the 'final' solution right away. Why? Because the best learning happens when you see (or create, in this case) a suboptimal situation, discuss the problems with that situation, investigate potential solutions and only then implement them. Sure, it will initially take longer, but this is made up for in spades by a much better understanding of what suboptimal code looks like and how to improve it. So, first we create separate classes for the individual report types, each similar to the implementation for the standard report we created before.
Here is an example for an extended report, containing more form fields (well, just one more, but you get the idea):

```typescript
export class ExtendedReportPage {
  readonly page: Page;
  readonly radioSelectExtended: Locator;
  readonly reportFormFieldTitle: ReportFormField;
  readonly reportFormFieldSummary: ReportFormField;
  readonly reportFormFieldAdditionalInfo: ReportFormField;
  readonly buttonSaveReport: Locator;
  readonly buttonPrintReport: Locator;

  constructor(page: Page) {
    this.page = page;
    this.radioSelectExtended = page.getByLabel('Extended report');
    this.reportFormFieldTitle = new ReportFormField(this.page, 'title');
    this.reportFormFieldSummary = new ReportFormField(this.page, 'summary');
    this.reportFormFieldAdditionalInfo = new ReportFormField(this.page, 'additionalInfo');
    this.buttonSaveReport = page.getByRole('button', { name: 'Save' });
    this.buttonPrintReport = page.getByRole('button', { name: 'Print' });
  }

  async select() {
    await this.radioSelectExtended.click();
  }

  async create(title: string, summary: string, additionalInfo: string, roles: string[]) {
    await this.reportFormFieldTitle.complete(title, roles);
    await this.reportFormFieldSummary.complete(summary, roles);
    await this.reportFormFieldAdditionalInfo.complete(additionalInfo, roles);
    await this.buttonSaveReport.click();
  }

  async print() {
    await this.buttonPrintReport.click();
  }
}
```

Obviously, there's a good amount of duplication between this class and the Page Object for the standard report. What to do with it? Contrary to the situation with the Page Components, here it is a good idea to reduce the duplication by creating a base report Page Object, because this time we are talking about an 'is-a' relationship (inheritance), not a 'has-a' relationship (composition). A standard report is a report. That means that in this case we can, and we should, create a base report Page Object, move some (or maybe even all) of the duplicated code there, and have the specific report Page Objects derive from that base report class. My recommendation here is to make the base report Page Object an abstract class to prevent people from instantiating it directly. This leads to more expressive and clearer code, as we can only instantiate the concrete report subtypes, which makes it immediately clear to the reader of the code what type of report they're dealing with. In the abstract class, we declare the members that are shared between all reports. This applies to methods, but also to web elements that appear in all report types.
This is what the abstract base class might look like:

```typescript
export abstract class ReportBasePage {
  readonly page: Page;
  readonly reportFormFieldTitle: ReportFormField;
  readonly reportFormFieldSummary: ReportFormField;
  readonly buttonSaveReport: Locator;
  readonly buttonPrintReport: Locator;

  abstract readonly radioSelect: Locator;

  protected constructor(page: Page) {
    this.page = page;
    this.reportFormFieldTitle = new ReportFormField(this.page, 'title');
    this.reportFormFieldSummary = new ReportFormField(this.page, 'summary');
    this.buttonSaveReport = page.getByRole('button', { name: 'Save' });
    this.buttonPrintReport = page.getByRole('button', { name: 'Print' });
  }

  async select() {
    await this.radioSelect.click();
  }

  async print() {
    await this.buttonPrintReport.click();
  }
}
```

and a concrete class for the extended report, extending the abstract class, now looks like this:

```typescript
export class ExtendedReportPage extends ReportBasePage {
  readonly page: Page;
  readonly radioSelect: Locator;
  readonly reportFormFieldAdditionalInfo: ReportFormField;

  constructor(page: Page) {
    super(page);
    this.page = page;
    this.radioSelect = page.getByLabel('Extended report');
    this.reportFormFieldAdditionalInfo = new ReportFormField(this.page, 'additionalInfo');
  }

  async create(title: string, summary: string, additionalInfo: string, roles: string[]) {
    await this.reportFormFieldTitle.complete(title, roles);
    await this.reportFormFieldSummary.complete(summary, roles);
    await this.reportFormFieldAdditionalInfo.complete(additionalInfo, roles);
    await this.buttonSaveReport.click();
  }
}
```

The abstract class takes care of the methods that are shared between all reports, such as the print() and select() methods. It also defines which elements and methods must be implemented by the concrete classes. For now, that's only the radioSelect locator. Note that at the moment, because the data required for the different types of reports is not the same, we cannot yet add an abstract create() method requirement, one that all report Page Objects should implement, to our abstract class. This is a temporary drawback, and one that we will address in a moment. Also note that the test code doesn't change, but we can now create both a standard report and an extended report that, behind the scenes, share a significant amount of code. Definitely a step in the right direction.

Iteration 4 - dealing with test data

Our tests already look pretty good. They are easy to read, and the way the code is structured aligns with the structure of the parts of the application they represent. Are we done yet? Well, maybe. As a final improvement to our tests, let's have a look at the way we handle our test data. Right now, the test data we use in our test methods is simply an unstructured collection of strings, integers, booleans and so on. For small tests and a simple domain that is easy to understand, you might get away with this, but as soon as your test suite grows and your domain becomes more complex, this will get confusing. What does that string value represent, exactly? Why is that variable a boolean, and what happens when it is set to true (or false)? This is where test data objects can help out. Test data objects are simple classes, often nothing fancier than a Data Transfer Object (DTO), that represent a domain entity. In this situation, that domain entity might be a report, for example. Having types that represent domain entities greatly improves the readability of our code and makes it much easier to understand what exactly we're doing.
The implementation of these test data objects is often straightforward. In TypeScript, we can use a simple interface for this purpose. I chose to create a single ReportContent type that contains the data for all of our report types. As they diverge, I might choose to refactor it into separate interfaces, but for now, this is fine. Defining this test data object also has the additional benefit of allowing me to move the definition of the create() method for the different report Page Objects to the abstract base class, a step that we were unable to perform previously. This is what my interface looks like:

```typescript
export interface ReportContent {
  title: string;
  summary: string;
  additionalInfo?: string;
  roles: string[];
}
```

The additionalInfo field is marked as optional, as it only appears in an extended report, not in a standard report. In some cases, to further improve the flexibility and readability of our code, we might add a builder or a factory that helps us create instances of our test data objects using a fluent syntax. This also allows us to set sensible default values for properties, to avoid having to assign the same values in every test (see the sketch at the end of this post). In this specific case, that's not really necessary, because object creation based on an interface in TypeScript is really straightforward, and our ReportContent object is small anyway. Your mileage may vary.

Now that we have defined a type for our report data, we can change the signature and the implementation of the create() methods in our Page Objects to use this type. Here's an example for the extended report:

```typescript
async create(report: ReportContent) {
  await this.reportFormFieldTitle.complete(report.title, report.roles);
  await this.reportFormFieldSummary.complete(report.summary, report.roles);
  await this.reportFormFieldAdditionalInfo.complete(report.additionalInfo, report.roles);
  await this.buttonSaveReport.click();
}
```

and we can now add the following line to the abstract ReportBasePage class:

```typescript
abstract create(report: ReportContent): void;
```

to enforce that all report Page Objects implement a create() method that takes an argument of type ReportContent. We can do the same for other test data objects. Oh, and if you're storing your tests in the same repository as your application code, these test data objects might even exist already, in which case you might be able to reuse them. This is definitely worth checking, because why would we reinvent the wheel?

That was a lot of work, but it has led to code that is, in my opinion, well-structured and easy to read and maintain. As this blog post has hopefully shown, it is very useful to have a good working knowledge of common object-oriented programming principles and patterns when you're writing test code. This is especially true for UI automation, but many of the principles we have seen in this blog post can be applied to other types of test automation, too.

There are many other patterns out there to explore. This blog post is not an attempt to list them all, nor does it show 'the one true way' of writing Page Objects. Hopefully, though, it has shown you my thought process when I write test automation code, and how understanding the fundamentals of object-oriented programming helps me do this better.

A massive 'thank you' to Olena for participating in the pair programming session I discussed and for reviewing this blog post. I really appreciate it.
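PS: Earlier, I mentioned that a builder can help create test data objects using a fluent syntax, with sensible defaults. We didn't need one in this session, but as promised, here is a minimal sketch of what such a builder for ReportContent could look like. The class name and the default values are made up for illustration, they are not part of the session code:

```typescript
export class ReportContentBuilder {
  // Sensible defaults (hypothetical values, adjust to your own domain)
  private report: ReportContent = {
    title: 'Default report title',
    summary: 'Default report summary',
    roles: ['Administrator'],
  };

  withTitle(title: string): this {
    this.report.title = title;
    return this;
  }

  withSummary(summary: string): this {
    this.report.summary = summary;
    return this;
  }

  withAdditionalInfo(additionalInfo: string): this {
    this.report.additionalInfo = additionalInfo;
    return this;
  }

  withRoles(roles: string[]): this {
    this.report.roles = roles;
    return this;
  }

  build(): ReportContent {
    // Return a copy so the builder can be reused safely
    return { ...this.report };
  }
}
```

In a test, this would read like new ReportContentBuilder().withTitle('My new report title').build(), with all other properties falling back to their defaults.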
Last weekend, I wrote a more or less casual post on LinkedIn containing the 'rules' (it's more of a list of terms and conditions, really) I set for myself when it comes to using AI. That post received some interesting comments that made me think and refine my thoughts on when (not) to use AI to support me in my work. Thank you to all of you who commented for doing so, and for showing me that there still is value in being active on LinkedIn in between all the AI-generated 'content'. I really appreciate it.

Now, AI and LLMs like ChatGPT or Claude can be very useful, that is, when used prudently. I think it is very important to be conscious and cautious when it comes to using AI, though, which is why I wrote that post. I wrote it mostly for myself, to structure my thoughts around AI, but also because I think it is important that others are at least conscious of what they're doing and working with. That doesn't mean you have to adhere to or even agree with my views and the way I use these tools, by the way. Different strokes for different folks.

Because of the ephemeral nature of these LinkedIn posts, and the importance of the topic to me, I want to repeat the 'rules' (again, more of a T&C list) here. This is the original, unchanged list from the post I wrote on February 15:

- I only use it to support me in completing tasks I understand. I need to be able to scrutinize the output the AI system produces and see if it is both sound and fit for the purpose I want to use it for.
- I never use it to explain to me something I don't know yet or don't understand well enough. I have seen and read about too many hallucinations to trust these systems to teach me what I don't understand. Instead, I use books, articles, and other content from authors and sources I do trust if I'm looking to learn something new.
- I never EVER use it for creative work. I don't use AI-generated images anywhere, and all of my blogs, LinkedIn posts, comments, course material and other written text are 100% my own, warts and all. My views, my ideas, my voice.

Interestingly, at the time I wrote this blog post, most of the comments were written in reaction to the first two bullet points. I don't know exactly why that is. It might be because the people who read it agree with the third point (which I doubt, seeing the tsunami of AI-generated content that's around these days), or maybe because there's a bit of a stigma around admitting to using AI for content generation. I don't know. What I do know is that it is an important principle to me. I wrote about the reasons for that in an earlier blog post, so I won't repeat myself here.

Like so many terms and conditions, the list I wrote down in this post will probably evolve over time, but what will not change is me remaining very careful about where I do and where I don't use AI to help me in my work. Especially now that new developments in the AI space are presented to us ever faster and the claims about what it can and will do only get bigger, I think it is wise to remain cautious and look at these developments with a critical and very much human view.
When I build and release new features or bug fixes for RestAssured.Net, I rely heavily on the acceptance tests that I wrote over time. Next to serving as living documentation for the library, I run these tests both locally and on every push to GitHub, against different versions of .NET, to see if I didn't accidentally break something. But how reliable are these tests, really? Can I trust them to pass and fail when they should? Did I cover all the things that are important?

I speak, write and teach about the importance of testing your tests on a regular basis, so it makes sense to start walking the talk and get more insight into the quality of the RestAssured.Net test suite. One approach to learning more about the quality of your tests is a technique called mutation testing. I speak about and demo testing your tests and using mutation testing to do so on a regular basis (you can watch a recent talk here), but until now, I've pretty much exclusively used PITest for Java. As RestAssured.Net is a C# library, I can't use PITest, but I'd heard many good things about Stryker.NET, so this would be a perfect opportunity to finally use it.

Adding Stryker.NET to the RestAssured.Net project

The first step was to add Stryker.NET to the RestAssured.Net project. Stryker.NET is a dotnet tool, so installing it is straightforward: run

```
dotnet new tool-manifest
```

to create a new, project-specific tool manifest (this was the first local dotnet tool for this project), and then

```
dotnet tool install dotnet-stryker
```

to add Stryker.NET as a dotnet tool to the project.

Running mutation tests for the first time

Running mutation tests with Stryker.NET is just as straightforward: running

```
dotnet stryker --project RestAssured.Net.csproj
```

from the tests project folder is all it takes. Because both my test suite (about 200 tests) and the project itself are relatively small code bases, and because my test suite runs quickly, running mutation tests for my entire project works for me. It still took around five minutes for the process to complete. If you have a larger code base and longer-running test suites, you'll see that mutation testing takes much, much longer. In that case, it's probably best to start with a subset of your code base and a subset of your test suite.

After five minutes and change, the results were in. Stryker.NET created 538 mutants from my application code base. Of these:

- 390 were killed, that is, at least one test failed because of the mutation,
- 117 survived, that is, the change did not make any of the tests fail, and
- 31 resulted in a timeout, which I'll need to investigate further, but I suspect it has something to do with HTTP timeouts (RestAssured.Net is an HTTP API testing library, and all acceptance tests perform actual HTTP requests).

This leads to an overall mutation testing score of 59.97%. Is that good? Is that bad? In all honesty, I don't know, and I don't care. Just like with code coverage, I am not a fan of setting fixed targets for this type of metric, as such targets typically lead to writing tests for the sake of improving a score rather than for actually improving the code. What I am much more interested in is the information that Stryker.NET produced during the mutation testing process.

Opening the HTML report

I was surprised to see that, out of the box, Stryker.NET produces a very good-looking and incredibly helpful HTML report. It provides both a high-level overview of the results and in-depth detail for every mutant that was killed or that survived.
It offers a breakdown of the results per namespace and per class, and it is the starting point for further drilling down into the results for individual mutants. Let's have a look and see if the report provides some useful, actionable information to improve the RestAssured.Net test suite.

Missing coverage

Like many other mutation testing tools, Stryker.NET provides code coverage information along with mutation coverage information. That is, if there is code in the application code base that was mutated but that is not covered by any of the tests, Stryker.NET will inform you about it. Here's an example: Stryker.NET changed the message of an exception thrown when RestAssured.Net is asked to deserialize a response body that is either null or empty. Apparently, there is no test in the test suite that covers this path in the code. As this particular code path deals with exception handling, it's probably a good idea to add a test for it:

```csharp
[Test]
public void EmptyResponseBodyThrowsTheExpectedException()
{
    var de = Assert.Throws<DeserializationException>(() =>
    {
        Location responseLocation = (Location)Given()
            .When()
            .Get($"{MOCK_SERVER_BASE_URL}/empty-response-body")
            .DeserializeTo(typeof(Location));
    });

    Assert.That(de?.Message, Is.EqualTo("Response content is null or empty."));
}
```

I added the corresponding test in this commit.

Removed code blocks

Another type of mutant that Stryker.NET generates is the removal of a code block. Going by the mutation testing report, there are a few of these mutants that are not detected by any of the tests. Here's an example: the return statement of the Put() method, which is used to perform an HTTP PUT operation, is replaced with an empty method body, but this is not picked up by any of the tests. The same applies to the methods for HTTP PATCH, DELETE, HEAD and OPTIONS. Looking at the tests that cover the different HTTP verbs, this makes sense. While I do call each of these HTTP methods in a test, I don't assert on the result for the aforementioned verbs. I am basically relying on the fact that no exception is thrown when I call Put() when I say 'it works'. Let's change that by at least asserting on a property of the response that is returned when these HTTP verbs are used:

```csharp
[Test]
public void HttpPutCanBeUsed()
{
    Given()
        .When()
        .Put($"{MOCK_SERVER_BASE_URL}/http-put")
        .Then()
        .StatusCode(200);
}
```

These assertions were added to the RestAssured.Net test suite in this commit.

Improving testability

The next signal I received from this initial mutation testing run is an interesting one. It tells me that even though I have acceptance tests that add cookies to the request, and that only pass when the request contains the cookies I set, I'm not properly covering some logic that I added. To understand what is going on here, it is useful to know that the Cookie class in C# offers a constructor that creates a Cookie from only a name and a value, but that a cookie has to have a domain value set. To enforce that, I added a small piece of fallback logic (you'll see it in the extracted class below). However, Stryker.NET tells me I'm not properly testing this logic, because changing its implementation doesn't cause any tests to fail. Now, I might be able to test this specific logic with a few added acceptance tests, but it really is only a small piece of logic, and I should be able to test it in isolation, right?
Well, not with the code written the way it currently is. So, time to extract that piece of logic into a class of its own, which will improve the modularity of the code and allow me to test the logic in isolation. First, let's extract it into a CookieUtils class:

```csharp
internal class CookieUtils
{
    internal Cookie SetDomainFor(Cookie cookie, string hostname)
    {
        if (string.IsNullOrEmpty(cookie.Domain))
        {
            cookie.Domain = hostname;
        }

        return cookie;
    }
}
```

I deliberately made this class internal, as I don't want it to be directly accessible to RestAssured.Net users. However, as I do need to access it in the tests, I have to add this little snippet to the RestAssured.Net.csproj file:

```xml
<ItemGroup>
  <InternalsVisibleTo Include="$(MSBuildProjectName).Tests" />
</ItemGroup>
```

Now, I can add unit tests that cover both paths in the SetDomainFor() logic:

```csharp
[Test]
public void CookieDomainIsSetToDefaultValueWhenNotSpecified()
{
    Cookie cookie = new Cookie("cookie_name", "cookie_value");
    CookieUtils cookieUtils = new CookieUtils();

    cookie = cookieUtils.SetDomainFor(cookie, "localhost");

    Assert.That(cookie.Domain, Is.EqualTo("localhost"));
}

[Test]
public void CookieDomainIsUnchangedWhenSpecifiedAlready()
{
    Cookie cookie = new Cookie("cookie_name", "cookie_value", "/my_path", "strawberry.com");
    CookieUtils cookieUtils = new CookieUtils();

    cookie = cookieUtils.SetDomainFor(cookie, "localhost");

    Assert.That(cookie.Domain, Is.EqualTo("strawberry.com"));
}
```

These changes were added to the RestAssured.Net source and test code in this commit.

An interesting mutation

So far, all the signals that appeared in the mutation testing report generated by Stryker.NET have been valuable: they have pointed me at code that isn't covered by any tests yet and at tests that could be improved, and they have led to code refactoring that improves testability. Using Stryker.NET (and mutation testing in general) does sometimes lead to some, well, interesting mutations, though. In one spot, I'm checking that a certain string is either null or an empty string, and if either condition is true, RestAssured.Net throws an exception. Perfectly valid. However, Stryker.NET changes the logical OR to a logical AND (a common mutation), which makes it impossible for the condition to evaluate to true. Is that even a useful mutation to make? Well, to some extent, it is. Even if the code doesn't make sense anymore after it has been mutated, it does tell you that your tests for this logical condition probably need some improvement. In this case, I don't have to add more tests, as we discussed this exact statement earlier (remember that it had no test coverage at all). It did make me look at the statement once again, though, and only then did I realize that I could simplify it to:

```csharp
if (string.IsNullOrEmpty(responseBodyAsString))
{
    throw new DeserializationException("Response content is null or empty.");
}
```

Instead of a hand-rolled logical OR, I am now using a construct built into C#, which is arguably the safer choice. In general, if your mutation testing tool generates several (or even many) mutants for the same statement or block, it might be a good idea to have another look at that code and see if it can be simplified. This was just a very small example, but I think the observation holds true in general. This change was added to the RestAssured.Net source and test code in this commit.
Running mutation tests again and inspecting the results

Now that several (supposed) improvements to the tests and the code have been made, let's run the mutation tests another time to see if the changes improved our score. In short:

- 397 mutants were killed, up from 390 (that's good),
- 111 mutants survived, down from 117 (that's also good), and
- there were 32 timeouts, up from 31 (that needs some further investigation).

Overall, the mutation testing score went up from 59.97% to 61.11%. This might not seem like much, but it is definitely a step in the right direction. The most important thing for me right now is that my tests for RestAssured.Net have improved, my code has improved, and I learned a lot about mutation testing and Stryker.NET in the process.

Am I going to run mutation tests every time I make a change? Probably not. There is quite a lot of information to go through, and that takes time, time that I don't want to spend on every build. For that reason, I'm also not going to make these mutation tests part of the build and test pipeline for RestAssured.Net, at least not any time soon. This was nonetheless both a very valuable and a very enjoyable exercise, and I'll definitely keep improving the tests and the code for RestAssured.Net using the suggestions that Stryker.NET presents.
This blog post is another one in the 'writing things down to structure my thinking on where I want my career to go' series. I will get back to writing technical and automation blog posts soon, but I need to finish my contract testing course first.

One of the things I like to do most in life is traveling and seeing new places. Well, seeing new places, mostly, as the novelty of waiting, flying and staying in hotel rooms has definitely worn off by now. I am in the privileged position (really, that is what it is: I'm privileged, and I fully realize that) that I get to scratch this travel itch professionally on a regular basis these days. Over the last few years, I have been invited to contribute to meetups and conferences abroad, and I also get to run in-house training sessions with companies outside the Netherlands a couple of times per year. Most of this traveling takes place within Europe, but for the last three years, I have been able to travel outside of Europe once every year (South Africa in 2022, Canada in 2023 and the United States in 2024), and needless to say, I have enjoyed those opportunities very much.

To give you an idea of the amount of traveling I do: for 2025, I now have four work-related trips abroad scheduled, and I am pretty sure at least a few more will be added before the year ends (it's only just February…). That might not be much travel by some people's standards, but for me, it is. And it seems the number of opportunities I get for traveling increases year over year, to the point where I have to say 'no' to several of them.

Say no? Why? I thought you just said you loved to travel? Yes, that's true. I do love to travel. But I also love spending time at home with my family, and that comes first. Always. Now, my sons are getting older, and being away from home for a few days doesn't put as much pressure on them and on my wife as it did a few years ago. Still, I always need to find a balance between spending time with them and spending time at work. I am away from home for work not just when I'm abroad. I run evening training sessions with clients here in the Netherlands on a regular basis, too, as well as training sessions in my evenings for clients in different time zones, mainly US-based ones. And all that adds up. I try to be away from home only one night per week, but often, it's two. When I travel abroad, it's even more than that. Again, I'm not complaining. Not at all. It is an absolute privilege to get to travel for work and get paid to do it, but I cannot do that indefinitely, and that's why I have made a decision: with a few exceptions (more on those below), I am going to say 'no' to conferences abroad from now on.

This is a tough decision for me to make, but sometimes that's exactly what you need to do. Tough, because I have very fond memories of all the conferences and meetups abroad I have contributed to. My first one, Romanian Testing Conference in 2017. My first keynote abroad, UKStar in 2019. My first one outside of Europe, Targeting Quality in 2023. They were all amazing, because of the travel and sightseeing (when time allowed), but also because of all the people I have met at these conferences. Yet, I can meet at least some of these people at conferences here in the Netherlands, too. Test Automation Days, the TestNet events, the Dutch Testing Day and TestMass all provide a great opportunity for me to catch up with my network. Sometimes, international conferences come to the Netherlands, too, like AutomationSTAR this year.
And then there are plenty of smaller meetups here in the Netherlands (and Belgium) where I can meet and catch up with people as well.

Plus, the money. I am not going to be a hypocrite and say that money doesn't play into this. For the reasons mentioned above, I have a limited number of opportunities to travel every year, and I prefer to spend those on in-house training sessions with clients abroad, simply because the pay is much better. Even when a conference compensates flights and hotel (as they should) and offers a speaker or workshop facilitator fee (a nice bonus), it is a significantly smaller payday than running a training session with a client. That's not the fault of those conferences, not at all, especially when they're compensating their speakers fairly; it is simply a matter of numbers and budgets.

At the moment, I have one, maybe two contributions to conferences abroad coming up, and I gave them my word, so I'll be there. That's the SAST 30-year anniversary conference in October, plus one other conference that I'm talking to but haven't received a 'yes' or 'no' from yet. Other than that, if conferences reach out to me, it's likely to be a 'no' from now on, unless:

- the event pays a fee comparable to my rate for in-house training,
- I can combine the event with paid in-house training (for example with a sponsor), or
- it is a country or region I really, really want to visit, either for personal reasons or because I want to grow my professional network there.

I don't see the first one happening soon, and the list of destinations for the third one is very short (Norway, Canada, New Zealand, that's pretty much it), so unless we can arrange paid in-house training alongside the conference, the answer will be a 'no' from me.

Will this reduce the number of travel opportunities for me? Maybe. Maybe not. Again, I see the number of requests I get for in-house training abroad growing, too, and if that dies down, it'll be a sign that I'll have to work harder to create those opportunities. For 2025, things are looking pretty good, with trips for training to Romania, North Macedonia and Denmark already scheduled, and several leads for more in the pipeline. And if the number of opportunities does go down, that's fine, too. I'm happy to spend that time with family, working on other things, or riding my bike. And I'm sure there will be a few opportunities to speak at online meetups, events and webinars, too.
As is the case every year, 2025 is starting off relatively slowly. There are not a lot of training courses to run yet, and since a few of the projects I worked on wrapped up in December, I find myself with a little bit of extra time and headspace on my hands. I actually enjoy these slower moments, because they give me some time to think about where my professional career is going, whether I'm still happy with the direction it is going in, and what I would like to see changed.

Last year, I quit doing full-time projects as an individual contributor to development teams in favour of part-time consultancy work and more focus on my training services. 2024 has been a great year overall, and I would be happy to continue working in this way in 2025. However, as a thought experiment, I took some time to think about what it would take for me to go back to full-time roles, or maybe (maybe!) even consider joining a company on a permanent basis.

Please note that this post is not intended as an 'I need a job!' cry for help. My pipeline for 2025 is slowly but surely filling up, and again, I am very happy with the direction my career is going at the moment. However, I have learned that it never hurts to leave your options open, and even though I love the variety in my working days these days, I think I would also enjoy working with one team, on one goal, for an extended amount of time, under the right conditions. If nothing else, this post might serve as a reference to send to people and companies that reach out to me with a full-time contract opportunity or even a permanent job opening. This is also not a list of requirements that is set in stone. As my views on what would make a great job change (and they will), I will update this post to reflect them.

So, to even consider joining a company on a full-time contract or even a permanent basis, there are basically three things I will and should consider:

- What does the job look like? What will I be doing on a day-to-day basis?
- What are the must-haves regarding terms and conditions?
- What are the nice-to-haves that would provide the icing on the cake for me?

Let's take a closer look at each of these.

What I look for in a job

As I mentioned before, I am not looking for a job as an individual contributor to a development team. I have done that for many years, and it does not really give me the energy it used to. On the other hand, I am definitely not looking for a hands-off, managerial kind of role, as I'd like to think I would make an atrocious manager. Plus, I simply enjoy being hands-on and writing code way too much to let that go. I would like to be responsible for designing and implementing the testing and automation strategy for a product I believe in. It would be a lead role, but, as mentioned, with plenty of (as in daily) opportunities to get hands-on and contribute to the code. The work would have to be technically and mentally challenging enough to keep me motivated in the long term. Getting bored quickly is something I suffer from, which is the main driver behind only doing part-time projects and working on multiple different things in parallel right now. I don't want to work for a consultancy and be 'farmed out' to their clients. I've done that pretty much my entire career, and if that's what the job will look like, I'd rather keep working the way I'm working now.
The must-haves

There are (quite) a few things that are non-negotiable for me to even consider joining a company full time, no matter whether it's on a contract or a permanent basis.

- The pay must be excellent. Let's not beat around the bush here: people work to make money. I do, too. I'm doing very well right now, and I don't want that to change.
- The company should be output-focused, as in: they don't care when I work, how many hours I put in or where I work from, as long as the job gets done. I am sort of spoiled by my current way of working, I fully realise that, but I've grown to love the flexibility. By the way, please don't read 'flexible' as 'working willy-nilly'. Most work is not done in a vacuum, and you will have to coordinate with others. The key word here is 'balance'.
- Collaboration should be part of the company culture. I enjoy working in pair programming and pair testing setups. What I do not like are pointless meetings, and that includes having Scrum ceremonies 'just because'.
- The company should be a remote-first company. I don't mind the occasional office day, but I value my time too much to spend hours per week on commuting. I've done that for years, and it is time I'll never get back.
- The company should actively stimulate me to contribute to conferences and meetups. Public speaking is an important part of my career at the moment, and I get a lot of value from it. I don't want to give that up.
- There should be plenty of opportunities for teaching others. This is what I do for a living right now, I really enjoy it, and I'd like to think I'm pretty good at it, too. Just like with the public speaking, I don't want to give that up. This teaching can take many forms, though. Running workshops and regularly pairing with others are just two examples.
- The job should scratch my travel itch. I travel abroad for work about 5-6 times per year on average these days, and I would like to keep doing that, as I get a lot of energy from seeing different places and meeting people. Please note that 'traveling' and 'commuting' are two completely different things.

Yes, I realize this is quite a long list, but I really enjoy my career at the moment, and there are a lot of aspects to it that I'm not ready to give up.

The nice-to-haves

There are also some things that are not strictly necessary, but that would be very nice to have in a job or full-time contract:

- The opportunity to continue working on side gigs. I have a few returning customers that I've been working with for years, and I would really appreciate the opportunity to continue doing that. I realise that I would have to give up some things, but there are a few clients that I would really like to keep working with. By the way, this is only a nice-to-have for permanent jobs. For contracting gigs, it is a must-have.
- It would be very nice if the technology stack that the company is using is based on C#. I've been doing quite a bit of work in this stack over the years, and I would like to go even deeper.
- If the travel itch I mentioned under the must-haves could be scratched with regular travel to Canada, Norway or South Africa, three of my favourite destinations in the world, that would be a very big plus.

I realize that the list of requirements above is a long one. I don't think there is a single job out there that ticks all the boxes. But, again, I really like what I'm doing at the moment, and most of the boxes are currently ticked.
I would absolutely consider going full time with a client or even an employer, but I want it to be a step forward, not a step back. After all, this is mostly a thought experiment at the moment, and until that perfect contract or job comes along, I’ll happily continue what I’m doing right now.
Kubernetes is not exactly the most fun piece of technology around. Learning it isn't easy, and learning the surrounding ecosystem is even harder. Even those who have managed to tame it are still afraid of getting paged by an ETCD cluster corruption, a Kubelet certificate expiration, or the DNS breaking down (and somehow, it's always the DNS).

If you're like me, the thought of making your own orchestrator has crossed your mind a few times. The result would, of course, be a magical piece of technology that is both simple to learn and wouldn't break down every weekend. Sadly, the task seems daunting: Kubernetes is a multi-million-line project that has been worked on for more than a decade. The good thing is that someone wrote a book that can serve as a good starting point for exploring the idea of building our own container orchestrator. That book is "Build an Orchestrator in Go", written by Tim Boring, published by Manning.

The tasks

The basic unit of our container orchestrator is called a "task". A task represents a single container. It contains configuration data, like the container's name, image and exposed ports. Most importantly, it indicates the container's state, and so acts as a state machine: the state of a task can be Pending, Scheduled, Running, Completed or Failed (a sketch of this state machine is included at the end of this post). Each task needs to interact with a container runtime through a client. In the book, we use Docker (aka Moby). The client gets its configuration from the task and then proceeds to pull the image, create the container and start it. When it is time to finish the task, it stops the container and removes it.

The workers

Above the task, we have workers. Each machine in the cluster runs a worker. Workers expose an API through which they receive commands. Those commands are added to a queue to be processed asynchronously. When the queue gets processed, the worker starts or stops tasks using the container client. In addition to exposing the ability to start and stop tasks, the worker must be able to list all the tasks running on it. This demands keeping a task database in the worker's memory and updating it every time a task changes state. The worker also needs to be able to provide information about its resources, like the available CPU and memory. The book suggests reading the /proc Linux file system using goprocinfo, but since I use a Mac, I used gopsutil.

The manager

On top of our cluster of workers, we have the manager. The manager also exposes an API, which allows us to start, stop, and list tasks on the cluster. Every time we want to create a new task, the manager calls a scheduler component. The scheduler has to list the workers that can accept more tasks, assign them a score by suitability and return the best one. When this is done, the manager sends the work to be done using the worker's API. The author also suggests that the manager component should keep track of every task's state by performing regular health checks. Health checks typically consist of querying an HTTP endpoint (e.g. /ready) and checking if it returns 200. In case a health check fails, the manager asks the worker to restart the task. I'm not sure I agree with this idea. It could lead to the manager and the worker having differing opinions about a task's state. It will also cause scaling issues: the manager's workload will have to grow linearly as we add tasks, and not just when we add workers.
As far as I know, in Kubernetes it is the Kubelet (the equivalent of the worker here) that is responsible for performing health checks.

The CLI

The last part of the project is to create a CLI to make sure our new orchestrator can be used without having to resort to firing up curl. The CLI needs to implement the following features:

- start a worker
- start a manager
- run a task in the cluster
- stop a task
- get the task status
- get the worker node status

Using cobra makes this part fairly straightforward. It lets you create very modern-feeling command-line apps, with properly formatted help commands and easy argument parsing. Once this is done, we almost have a fully functional orchestrator. We just need to add authentication. And maybe some kind of DaemonSet implementation would be nice. And a way to handle mounting volumes…
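PS: To make the task abstraction a bit more concrete, here is a minimal sketch of the state machine described in 'The tasks' section above. The book implements this in Go; I'm sketching the idea in TypeScript here, and the exact set of allowed transitions is my assumption based on the description, not code taken from the book:

```typescript
// The five task states described in the book
type TaskState = 'Pending' | 'Scheduled' | 'Running' | 'Completed' | 'Failed';

// Assumed valid transitions: e.g. a task must be Scheduled onto a
// worker before it can be Running
const validTransitions: Record<TaskState, TaskState[]> = {
  Pending: ['Scheduled'],
  Scheduled: ['Running', 'Failed'],
  Running: ['Completed', 'Failed'],
  Completed: [],
  Failed: [],
};

// A task wraps a single container plus its lifecycle state
interface Task {
  name: string;
  image: string;
  exposedPorts: number[];
  state: TaskState;
}

// Move a task to a new state, rejecting transitions the state
// machine does not allow (e.g. Pending -> Running)
function transition(task: Task, to: TaskState): void {
  if (!validTransitions[task.state].includes(to)) {
    throw new Error(`invalid transition: ${task.state} -> ${to}`);
  }
  task.state = to;
}
```

The same idea applies on both the worker and the manager side: each keeps its view of a task's state consistent by moving tasks through this machine rather than assigning states freely.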
'The unexamined life is not worth living', said Socrates. I don't know about that, but to become a better, faster, more productive programmer, it pays to examine what makes you unproductive. Fixing bugs is one of those unproductive activities. You have to fix them, but it would be even better if you didn't write them in the first place. Therefore it's good to reflect after fixing a bug. Why did the bug happen? Could I have done something to not write the bug in the first place? If I did write the bug, could I do something to diagnose or fix it faster? This seems like a great idea that I wasn't doing. Until now.

Here's a random selection of bugs I found and fixed in SumatraPDF, with some reflections. SumatraPDF is a C++ win32 Windows app. It's a small, fast, open-source, multi-format PDF/eBook/Comic Book reader. To keep the app small and fast, I generally avoid using other people's code. As a result, most code is mine and most bugs are mine. Let's reflect on those bugs.

TabWidth doesn't work

A user reported that the TabWidth advanced setting doesn't work in 3.5.2 but worked in 3.4.6. I looked at the code and indeed: the setting was not used anywhere. The fix was to use it.

Why did the bug happen? It was a refactoring. I heavily refactored the tabs control. Somehow during the rewrite I forgot to use the advanced setting when creating the new tabs control, even though I did write the code to support it in the control. I guess you could call it sloppiness.

How could I not write the bug? I could review the changes more carefully. There's no one else working on this project, so there's no one else to do additional code reviews. I typically do a code review by myself with webdiff, but let's face it: reviewing changes right after writing them is the worst possible time. I'm biased to think that the code I just wrote is correct, and I'm often mentally exhausted. Maybe I should adopt a process where I review changes made yesterday with fresh, untired eyes?

How could I detect the bug earlier? The 3.5.2 release happened over a year ago. Could I have found it sooner? I knew I was refactoring the tabs code. I knew I have a setting for changing the look of tabs. If I had connected the dots at the time, I could have tested whether the setting still worked. I don't make releases too often. I could do more testing before each release and at the very least verify that all advanced settings work as expected.

The real problem

In retrospect, I shouldn't have implemented that feature at all. I like Sumatra's customizability, and I think it's a non-trivial contributor to its popularity, but it took over a year for someone to notice and report this particular bug. It's clear it's not a frequently used feature. I implemented it because someone asked and it was easy. I should have said no to that particular request.

Fix printing crash by correctly ref-counting engine

Bugs can crash your program. Users rarely report crashes, even though I did put effort into making it easy. When a crash happens, I have a crash handler that saves the diagnostic info to a file, and I show a message box asking users to report the crash; with the press of a button I launch a notepad with the diagnostic info and a browser with a page describing how to submit that as a GitHub issue. The other button is to ignore my pleas for help. Most users overwhelmingly choose to ignore. I know that because I also have a crash reporting system that sends me a crash report. I get thousands of crash reports for every crash reported by a user.
Therefore I’m convinced that the single most impactful thing for making software that doesn’t crash is to have a crash reporting system, look at the crashes and fix them. This is not a perfect system because all I have is a call stack of crashed thread, info about the computer and very limited logs. Nevertheless, sometimes all it takes is a look at the crash call stack and inspection of the code. I saw a crash in printing code which I fixed after some code inspection. The clue was that I was accessing a seemingly destroyed instance of Engine. That was easy to diagnose because I just refactored the code to add ref-counting to Engine so it was easy to connect the dots. I’m not a fan of ref-counting. It’s easy to mess up ref-counting (add too many refs, which leads to memory leaks or too many releases which leads to premature destruction). I’ve seen codebases where developers were crazy in love with ref-counting: every little thing, even objects with obvious lifetimes. In contrast,, that was the first ref-counted object in over 100k loc of SumatraPDF code. It was necessary in this case because I would potentially hand off the object to a printing thread so its lifetime could outlast the lifetime of the window for which it was created. How could I not write the bug? It’s another case of sloppiness but I don’t feel bad. I think the bug existed there before the refactoring and this is the hard part about programming: complex interactions between distant, in space and time, parts of the program. Again, more time spent reviewing the change could have prevented it. As a bonus, I managed to simplify the logic a bit. Writing software is an incremental process. I could feel bad about not writing the perfect code from the beginning but I choose to enjoy the process of finding and implementing improvements. Making the code and the program better over time. Tracking down a chm thumbnail crash Not all crashes can be fixed given information in crash report. I saw a report with crash related to creating a thumbnail crash. I couldn’t figure out why it crashes but I could add more logging to help figure out the issue if it happens again. If it doesn’t happen again, then I win. If it does happen again, I will have more context in the log to help me figure out the issue. Update: I did fix the crash. Fix crash when viewing favorites menu A user reported a crash. I was able to reproduce the crash and fix it. This is the bast case scenario: a bug report with instructions to reproduce a crash. If I can reproduce the crash when running debug build under the debugger, it’s typically very easy to figure out the problem and fix it. In this case I’ve recently implemented an improved version of StrVec (vector of strings) class. It had a compatibility bug compared to previous implementation in that StrVec::InsertAt(0) into an empty vector would crash. Arguably it’s not a correct usage but existing code used it so I’ve added support to InsertAt() at the end of vector. How could I not write the bug? I should have written a unit test (which I did in the fix). I don’t blindly advocate unit tests. Writing tests has a productivity cost but for such low-level, relatively tricky code, unit tests are good. I don’t feel too bad about it. I did write lots of tests for StrVec and arguably this particular usage of InsertAt() was borderline correct so it didn’t occur to me to test that condition. Use after free I saw a crash in crash reports, close to DeleteThumbnailForFile(). 
How could I not write the bug? Same story: be more careful when reviewing the changes, and test the changes more. If I fail at that, crash reporting saves my ass. The bug didn't last more than a few days and affected only one user. I immediately fixed it and published an update.

Summary of being more productive and writing bug-free software

If many people use your software, a crash reporting system is a must. Crashes happen, and few of them are reported by users. Code reviews can catch bugs, but they are also costly, and reviewing your own code right after you write it is not a good time: you're tired and biased to think your code is correct. Maybe reviewing the code a day later, with fresh eyes, would be better. I don't know, I haven't tried it.
A little while back I heard about the White House launching their version of a Drudge Report-style website called White House Wire. According to Axios, a White House official said the site's purpose was to serve as "a place for supporters of the president's agenda to get the real news all in one place". So a link blog, if you will.

As a self-professed connoisseur of websites and link blogs, this got me thinking: "I wonder what kind of links they're considering as 'real news' and what they're linking to?" So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2D interface of rows and columns. I wrote some JavaScript to:

- Fetch the HTML page at whitehouse.gov/wire
- Parse it with cheerio
- Select all the external links on the page
- Return a list of links and their headline text

(A sketch of this scraping step is at the end of this post.) In a few minutes I had a quick analysis of what kind of links were on the page. This immediately sparked my curiosity to know more about the meta information around the links, like:

- If you grouped all the links together, which sites get linked to the most?
- What kind of interesting data could you pull from the headlines they're writing, like the most frequently used words?
- What if you did this analysis with snapshots of the website over time (rather than just the current moment)?

So I got to building. Quadratic today doesn't yet have the ability for your spreadsheet to run in the background on a schedule and append data, so I had to look elsewhere for a little extra functionality. My mind went to val.town, which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API. After a quick read of their docs, I figured out how to write a little script that runs once a day, scrapes the site, and saves the resulting HTML page in their key/value storage.

From there, I was back in Quadratic, writing code to talk to val.town's API, retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

- Fine-tune how I select the editorial links on the page from the source HTML (I didn't want, for example, to include external links to the White House's social pages, which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page.
- Parse the links and pull out the top-level domains so I could group links by domain occurrence.
- Create charts and graphs to visualize the structured data I had created.

Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2D grid in real time, which made for a super fast feedback loop! Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating!
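Here's roughly what that first scraping step can look like (a sketch, not the actual Quadratic code; the link-selection logic in particular is a guess):

    import * as cheerio from "cheerio";

    // fetch the Wire page and pull out external links plus their headline text
    const res = await fetch("https://www.whitehouse.gov/wire/");
    const $ = cheerio.load(await res.text());

    const links = [];
    $("a[href^='http']").each((_, el) => {
      const href = $(el).attr("href");
      // keep only external links, not whitehouse.gov's own pages
      if (href && !href.includes("whitehouse.gov")) {
        links.push({ href, headline: $(el).text().trim() });
      }
    });
    console.log(links);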
It's been about a month and a half since I started this, and I have about fifty days' worth of data. The results? Here are the top 10 domains the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

1. youtube.com (133)
2. foxnews.com (72)
3. thepostmillennial.com (67)
4. foxbusiness.com (66)
5. breitbart.com (64)
6. x.com (63)
7. reuters.com (51)
8. truthsocial.com (48)
9. nypost.com (47)
10. dailywire.com (36)

And from the links, here are the most commonly recurring words in the link headlines (the makings of a word cloud):

"trump" (343), "president" (145), "us" (134), "big" (131), "bill" (127), "beautiful" (113), "trumps" (92), "one" (72), "million" (57), "house" (56)

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool!

If you want to check out the spreadsheet — sorry! My API key for val.town is in it ("secrets management" is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it), which you can check out here at your convenience.
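For the curious, a per-domain tally like the one above takes only a few lines with tldts (a sketch building on the links array from the earlier snippet, not the author's actual spreadsheet code):

    import { getDomain } from "tldts";

    // count occurrences of each registrable domain
    const counts = new Map();
    for (const { href } of links) {
      const domain = getDomain(href); // e.g. "foxnews.com"
      if (domain) counts.set(domain, (counts.get(domain) ?? 0) + 1);
    }

    // top 10 domains by occurrence
    const top10 = [...counts.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, 10);
    console.log(top10);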
As I slowly but surely work towards the next release of my setcmd project for the Amiga (see the 68k branch for the gory details and my total noob-like C flailing around), I've made heavy use of documentation in the AmigaGuide format. Despite its age, it's a great Amiga-native format, and there's a wealth of great information out there for things like the C API, as well as language guides and tutorials for tools like the Installer utility, and the AmigaGuide markup syntax itself.

The only snag is that I had to have access to an Amiga (real or emulated), or install one of the various viewer programs on my laptops. Because, like many, I spend a lot of time in a web browser and occasionally want to check something on my mobile phone, this is less than convenient. Fortunately, there's a great AmigaGuideJS online viewer which renders AmigaGuide format documents using JavaScript. I've started building up a collection of useful developer guides and other files in my own reference library so that I can access this documentation whenever I'm not at my Amiga or am coding in my "modern" dev environment. It's really just for my own personal use, but I'll be adding to it whenever I come across a useful piece of documentation, so I hope it's of some use to others as well!

And on a related note, I now have a "unified" codebase, so SetCmd now builds and runs on 68k-based OS 3.x systems as well as OS 4.x PPC systems like my X5000. I still need to:

- Tidy up my code and fix all the "TODO" stuff
- Update the Installer to run on OS 3.x systems
- Update the documentation
- Build a new package and upload it to Aminet/OS4Depot

Hopefully I'll get that done in the next month or so. With the pressures of work and family life (and my other hobbies), progress has been a lot slower these last few years, but I'm still really enjoying working on Amiga code, and it's great to have a fun personal project that's there for me whenever I want to hack away at something for the sheer hell of it. I've learned a lot along the way, and AmigaOS is still an absolute joy to develop for. I even brought my X5000 to the most recent Kickstart Amiga User Group BBQ/meetup and had a fun day working on the code with fellow Amigans and enjoying some classic gaming & demos. There was also a MorphOS machine there, which I think will be my next target as the codebase is slowly becoming more portable. Just got to find some room in the "retro cave" now… This stuff is addictive :)