More from alexwlchan
I’ve been working on a project where I need to plot points on a map. I don’t need an interactive or dynamic visualisation – just a static map with coloured dots for each coordinate. I’ve created maps on the web using Leaflet.js, which load map data from OpenStreetMap (OSM) and support zooming and panning – but for this project, I want a standalone image rather than something I embed in a web page. I want to put in coordinates, and get a PNG image back.

This feels like it should be straightforward. There are lots of Python libraries for data visualisation, but it’s not an area I’ve ever explored in detail. I don’t know how to use these libraries, and despite trying I couldn’t work out how to accomplish this seemingly simple task. I made several attempts with libraries like matplotlib and plotly, but I felt like I was fighting the tools. Rather than persist, I wrote my own solution with “lower level” tools.

The key was a page on the OpenStreetMap wiki explaining how to convert lat/lon coordinates into the pixel system used by OSM tiles. In particular, it allowed me to break the process into two steps:

1. Get a “base map” image that covers the entire world
2. Convert lat/lon coordinates into xy coordinates that can be overlaid on this image

Let’s go through those steps.

Get a “base map” image that covers the entire world

Let’s talk about how OpenStreetMap works, and in particular their image tiles. If you start at the most zoomed-out level, OSM represents the entire world with a single 256×256 pixel square. This is the Web Mercator projection, and you don’t get much detail – just a rough outline of the world.

We can zoom in, and this tile splits into four new tiles of the same size. There are twice as many pixels along each edge, and each tile has more detail. Notice that country boundaries are visible now, but we can’t see any names yet.

We can zoom in even further, and each of these tiles splits again. There still aren’t any text labels, but the map is getting more detailed and we can see small features that weren’t visible before.

You get the idea – we could keep zooming, and we’d get more and more tiles, each with more detail. This tile system means you can get detailed information for a specific area, without loading the entire world. For example, if I’m looking at street information in Britain, I only need the detailed tiles for that part of the world. I don’t need the detailed tiles for Bolivia at the same time.

OpenStreetMap will only give you 256×256 pixels at a time, but we can download every tile and stitch them together, one-by-one. Here’s a Python script that enumerates all the tiles at a particular zoom level, downloads them, and uses the Pillow library to combine them into a single large image:

    #!/usr/bin/env python3
    """
    Download all the map tiles for a particular zoom level from OpenStreetMap,
    and stitch them into a single image.
    """

    import io
    import itertools

    import httpx
    from PIL import Image

    zoom_level = 2

    width = 256 * 2**zoom_level
    height = 256 * 2**zoom_level

    im = Image.new("RGB", (width, height))

    for x, y in itertools.product(range(2**zoom_level), range(2**zoom_level)):
        resp = httpx.get(f"https://tile.openstreetmap.org/{zoom_level}/{x}/{y}.png", timeout=50)
        resp.raise_for_status()

        im_buffer = Image.open(io.BytesIO(resp.content))
        im.paste(im_buffer, (x * 256, y * 256))

    out_path = f"map_{zoom_level}.png"
    im.save(out_path)
    print(out_path)

The higher the zoom level, the more tiles you need to download, and the larger the final image will be.
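If you run a script like this yourself, it’s worth identifying yourself to the tile server – as I understand OpenStreetMap’s tile usage policy, clients should send a valid User-Agent identifying the application. Here’s a minimal sketch of that change; the application name and contact address are placeholders you’d swap for your own:

    resp = httpx.get(
        f"https://tile.openstreetmap.org/{zoom_level}/{x}/{y}.png",
        # Identify the script to the tile server (placeholder values).
        headers={"User-Agent": "my-map-stitcher/1.0 (you@example.com)"},
        timeout=50,
    )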
I ran this script up to zoom level 6, and this is the data involved:

    Zoom level   Number of tiles   Pixels        File size
    0            1                 256×256       17.1 kB
    1            4                 512×512       56.3 kB
    2            16                1024×1024     155.2 kB
    3            64                2048×2048     506.4 kB
    4            256               4096×4096     2.7 MB
    5            1,024             8192×8192     13.9 MB
    6            4,096             16384×16384   46.1 MB

I can just about open that zoom level 6 image on my computer, but it’s struggling. I didn’t try opening zoom level 7 – that includes 16,384 tiles, and I’d probably run out of memory.

For most static images, zoom level 3 or 4 should be sufficient – I ended up using a base map from zoom level 4 for my project. It takes a minute or so to download all the tiles from OpenStreetMap, but you only need to download them once, and then you have a static image you can use again and again.

This is a particularly good approach if you want to draw a lot of maps. OpenStreetMap is provided for free, and we want to be a respectful user of the service. Downloading all the map tiles once is more efficient than making repeated requests for the same data.

Overlay lat/lon coordinates on this base map

Now we have an image with a map of the whole world, we need to overlay our lat/lon coordinates as points on this map. I found instructions on the OpenStreetMap wiki which explain how to convert GPS coordinates into a position on the unit square, which we can in turn add to our map. They outline a straightforward algorithm, which I implemented in Python:

    import math


    def convert_gps_coordinates_to_unit_xy(
        *, latitude: float, longitude: float
    ) -> tuple[float, float]:
        """
        Convert GPS coordinates to positions on the unit square, which
        can be plotted on a Web Mercator projection of the world.

        This expects the coordinates to be specified in **degrees**.

        The result will be (x, y) coordinates:

        -   x will fall in the range (0, 1).
            x=0 is the left (180° west) edge of the map.
            x=1 is the right (180° east) edge of the map.
            x=0.5 is the middle, the prime meridian.

        -   y will fall in the range (0, 1).
            y=0 is the top (north) edge of the map, at 85.0511 °N.
            y=1 is the bottom (south) edge of the map, at 85.0511 °S.
            y=0.5 is the middle, the equator.
        """
        # This is based on instructions from the OpenStreetMap Wiki:
        # https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#Example:_Convert_a_GPS_coordinate_to_a_pixel_position_in_a_Web_Mercator_tile
        # (Retrieved 16 January 2025)

        # Convert the coordinate to the Web Mercator projection
        # (https://epsg.io/3857)
        #
        #     x = longitude
        #     y = arsinh(tan(latitude))
        #
        x_webm = longitude
        y_webm = math.asinh(math.tan(math.radians(latitude)))

        # Transform the projected point onto the unit square
        #
        #     x = 0.5 + x / 360
        #     y = 0.5 - y / 2π
        #
        x_unit = 0.5 + x_webm / 360
        y_unit = 0.5 - y_webm / (2 * math.pi)

        return x_unit, y_unit

Their documentation includes a worked example using the coordinates of the Hachiko Statue. We can run our code, and check we get the same results:

    >>> convert_gps_coordinates_to_unit_xy(latitude=35.6590699, longitude=139.7006793)
    (0.8880574425, 0.39385379958274735)

Most users of OpenStreetMap tiles will use these unit positions to select the tiles they need, and then download those images – but we can also position these points directly on the global map.
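(That tile-selection step isn’t something we need here, but it falls out of the same maths. Here’s a sketch of a hypothetical helper, based on the same wiki page: at zoom level z there are 2**z tiles along each edge, so you scale the unit position by 2**z and take the integer part.)

    def tile_for_unit_xy(x_unit: float, y_unit: float, zoom: int) -> tuple[int, int]:
        """
        Work out which OSM tile contains a given unit position, e.g. the
        Hachiko Statue (x=0.888..., y=0.3938...) at zoom level 4 is in
        tile (14, 6).
        """
        n = 2**zoom

        # min() clamps the edge case where x_unit or y_unit is exactly 1.0,
        # which would otherwise point at a tile that doesn't exist.
        x_tile = min(int(x_unit * n), n - 1)
        y_tile = min(int(y_unit * n), n - 1)

        return x_tile, y_tile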
I wrote some more Pillow code that converts GPS coordinates to these unit positions, scales those unit positions to the size of the entire map, then draws a coloured circle at each point on the map. Here’s the code:

    from PIL import Image, ImageDraw

    gps_coordinates = [
        # Hachiko Memorial Statue in Tokyo
        {"latitude": 35.6590699, "longitude": 139.7006793},
        # Greyfriars Bobby in Edinburgh
        {"latitude": 55.9469224, "longitude": -3.1913043},
        # Fido Statue in Tuscany
        {"latitude": 43.955101, "longitude": 11.388186},
    ]

    im = Image.open("base_map.png")
    draw = ImageDraw.Draw(im)

    for coord in gps_coordinates:
        x, y = convert_gps_coordinates_to_unit_xy(**coord)

        radius = 32

        draw.ellipse(
            [
                x * im.width - radius,
                y * im.height - radius,
                x * im.width + radius,
                y * im.height + radius,
            ],
            fill="red",
        )

    im.save("map_with_dots.png")

and here’s the map it produces:

The nice thing about writing this code in Pillow is that it’s a library I already know how to use, and so I can customise it if I need to. I can change the shape and colour of the points, or crop to specific regions, or add text to the image. I’m sure more sophisticated data visualisation libraries can do all this, and more – but I wouldn’t know how.

The downside is that if I need more advanced features, I’ll have to write them myself. I’m okay with that – trading sophistication for simplicity. I didn’t need to learn a complex visualisation library – I was able to write code I can read and understand. In a world full of AI-generated code, writing something I know I understand feels more important than ever.
I’m sitting in a small coffee shop in Brooklyn. I have a warm drink, and it’s just started to snow outside. I’m visiting New York to see Operation Mincemeat on Broadway – I was at the dress rehearsal yesterday, and I’ll be at the opening preview tonight. I’ve seen this show more times than I care to count, and I hope US theater-goers love it as much as Brits.

The people who make the show will tell you that it’s about a bunch of misfits who thought they could do something ridiculous, who had the audacity to believe in something unlikely. That’s certainly one way to see it. The musical tells the true story of a group of British spies who tried to fool Hitler with a dead body, fake papers, and an outrageous plan that could easily have failed.

Decades later, the show’s creators would mirror that same spirit of unlikely ambition. Four friends, armed with their creativity, determination, and a wardrobe full of hats, created a new musical in a small London theatre. And after a series of transfers, they’re about to open the show under the bright lights of Broadway.

But when I watch the show, I see a story about friendship. It’s about how we need our friends to help us, to inspire us, to push us to be the best versions of ourselves. I see the swaggering leader who needs a team to help him truly achieve. The nervous scientist who stands up for himself with the support of his friends. The enthusiastic secretary who learns wisdom and resilience from her elder.

And so, I suppose, it’s fitting that I’m not in New York on my own. I’m here with friends – dozens of wonderful people who I met through this ridiculous show.

At first, I was just an audience member. I sat in my seat, I watched the show, and I laughed and cried in equal measure. After the show, I waited at stage door to thank the cast. Then I came to see the show a second time. And a third. And a fourth.

After a few trips, I started to see familiar faces waiting with me at stage door. So before the cast came out, we started chatting. Those conversations became a Twitter community, then a Discord, then a WhatsApp. We swapped fan art, merch, and stories of our favourite moments. We went to other shows together, and we hung out outside the theatre. I spent New Year’s Eve with a few of these friends, sitting on somebody’s floor and laughing about a bowl of limes like it was the funniest thing in the world.

And now we’re together in New York. Meeting this kind, funny, and creative group of people might seem as unlikely as the premise of Mincemeat itself. But I believed it was possible, and here we are. I feel so lucky to have met these people, to take this ridiculous trip, to share these precious days with them.

I know what a privilege this is – the time, the money, the ability to say let’s do this and make it happen. How many people can gather a dozen friends for even a single evening, let alone a trip halfway round the world?

You might think it’s silly to travel this far for a theatre show, especially one we’ve seen plenty of times in London. Some people would never see the same show twice, and most of us are comfortably into double or triple-figures. Whenever somebody asks why, I don’t have a good answer. Because it’s fun? Because it’s moving? Because I enjoy it? I feel the need to justify it, as if there’s some logical reason that will make all of this okay. But maybe I don’t have to. Maybe joy doesn’t need justification.

A theatre show doesn’t happen without people who care. Neither does a friendship.
So much of our culture tells us that it’s not cool to care. It’s better to be detached, dismissive, disinterested. Enthusiasm is cringe. Sincerity is weakness. I’ve certainly felt that pressure – the urge to play it cool, to pretend I’m above it all. To act as if I only enjoy something a “normal” amount.

Well, fuck that.

I don’t know where the drive to be detached comes from. Maybe it’s to protect ourselves, a way to guard against disappointment. Maybe it’s to seem sophisticated, as if having passions makes us childish or less mature. Or perhaps it’s about control – if we stay detached, we never have to depend on others, we never have to trust in something bigger than ourselves. Being detached means you can’t get hurt – but you’ll also miss out on so much joy.

I’m a big fan of being a big fan of things. So many of the best things in my life have come from caring, from letting myself be involved, from finding people who are a big fan of the same things as me. If I pretended not to care, I wouldn’t have any of that.

Caring – deeply, foolishly, vulnerably – is how I connect with people. My friends and I care about this show, we care about each other, and we care about our joy. That care and love for each other is what brought us together, and without it we wouldn’t be here in this city.

I know this is a once-in-a-lifetime trip. So many stars had to align – for us to meet, for the show we love to be successful, for us to be able to travel together. But if we didn’t care, none of those stars would have aligned.

I know so many other friends who would have loved to be here but can’t be, for all kinds of reasons. Their absence isn’t for lack of caring, and they want the show to do well whether or not they’re here. I know they care, and that’s the important thing.

To butcher Tennyson: I think it’s better to care about something you cannot affect, than to care about nothing at all. In a world that’s full of cynicism and spite and hatred, I feel that now more than ever.

I’d recommend you go to the show if you haven’t already, but that’s not really the point of this post. Maybe you’ve already seen Operation Mincemeat, and it wasn’t for you. Maybe you’re not a theatre kid. Maybe you aren’t into musicals, or history, or war stories. That’s okay. I don’t mind if you care about different things to me. (Imagine how boring the world would be if we all cared about the same things!)

But I want you to care about something. I want you to find it, find people who care about it too, and hold on to them. Because right now, in this city, with these people, at this show? I’m so glad I did. And I hope you find that sort of happiness too.

Some of the people who made this trip special. Photo by Chloe, and taken from her Twitter.

Timing note: I wrote this on February 15th, but I delayed posting it because I didn’t want to highlight the fact I was away from home.
As well as changing the way I organise my writing, last year I made some cosmetic improvements to this site. I design everything on this site myself, and I write the CSS by hand – I don’t use any third-party styles or frameworks. I don’t have any design training, and I don’t do design professionally, so I use this site as a place to learn and practice my design skills. It’s a continual work-in-progress, but I’d like to think it’s getting better over time.

I design this site for readers. I write long, text-heavy posts with the occasional illustration or diagram, so I want something that will be comfortable to read and look good on a wide variety of browsers and devices. I get a lot of that “for free” by using semantic HTML and the default styles – most of my CSS is just cosmetic.

Let’s go through some of the changes.

Cleaning up the link styles

This is what links used to look like:

Every page has a tint colour, and then I was deriving different shades to style different links – a darker shade for visited links, a lighter shade for visited links in dark mode, and a background that appears on hover. I was generating these new colours programmatically, and I was so proud of getting that code working that I didn’t stop to think whether it was a good idea.

In hindsight, I see several issues. The tint colour is meant to give the page a consistent visual appearance, but the different shades diluted that effect. I don’t think their meaning was especially obvious. How many readers ever worked it out? And the hover styles are actively unhelpful – just as you hover over a link you’re interested in, I’m making it harder to read! (At least in light mode – in dark mode, the hover style is barely legible.)

One thing I noticed is that for certain tint colours, the “visited” colour I generated was barely distinguishable from the text colour. So I decided to lean into that in the new link styles: visited links are now the same colour as regular text.

This new set of styles feels more coherent. I’m only using one shade of the tint colour, and I think the meaning is a bit clearer – only new-to-you links will get the pop of colour to stand out from the rest of the text. I’m happy to rely on underlines for the links you’ve already visited. And when you hover, the thick underline means you can see where you are, but the link text remains readable.

Swapping out the font

I swapped out the font, replacing Georgia with Charter. The difference is subtle, so I’d be surprised if anyone noticed:

I’ve always used web safe fonts for this site – the fonts that are built into web browsers, and don’t need to be downloaded first. I’ve played with custom fonts from time to time, but there’s no font I like enough to justify the hassle of loading a custom font.

I still like Georgia, but I felt it was showing its age – it was designed in 1993 to look good on low-resolution screens, but looks a little chunky on modern displays. I think Charter looks nicer on high-resolution screens, but if you don’t have it installed then I fall back to Georgia.

Making all the roundrects consistent

I use a lot of rounded rectangles for components on this site, including article cards, blockquotes, and code blocks. For a long time they had similar but not identical styles, because I designed them all at different times. There were weird inconsistencies. For example, why does one roundrect have a 2px border, but another one is 3px? These are small details that nobody will ever notice directly, but undermine the sense of visual together-ness.
I’ve done a complete overhaul of these styles, to make everything look more consistent. I’m leaning heavily on CSS variables, a relatively new CSS feature that I’ve really come to like. Variables make it much easier to use consistent values in different rules.

I also tweaked the appearance: I’ve removed another two shades of the tint colour. (Yes, those shades were different from the ones used in links.) Colour draws your attention, so I’m trying to use it more carefully. A link says “click here”. A heading says “start here”. What does a blockquote or code snippet say? It’s just part of the text, so it shouldn’t be grabbing your attention. I think the neutral background also makes the syntax highlighting easier to read, because the tint colour isn’t clashing with the code colours. I could probably consolidate the shades of grey I’m using, but that’s a task for another day.

I also removed the left indent on blockquotes and code blocks – I think it looks nicer to have a flush left edge for everything, and it means you can read more text on mobile screens. (That’s where I really felt the issues with the old design.)

What’s next?

By tidying up the design and reducing the number of unique elements, I’ve got a bit of room to add something new. For a while now I’ve wanted a place at the bottom of posts for common actions, or links to related and follow-up posts. As I do more and more long-form, reflective writing, I want to be able to say “if you liked this, you should read this too”. I want something that catches your eye, but doesn’t distract from the article you’re already reading.

Louie Mantia has a version of this that I quite like:

I’ve held off designing this because the existing pages felt too busy, but now I feel like I have space to add this – there aren’t as many clashing colours and components to compete for your attention. I’m still sketching out designs – my current idea is to use my rounded rectangle blocks, but with a coloured border instead of a subtle grey. When I made a prototype, though, it felt like it was missing something, so I need to try a few more ideas. Watch this space!
Last year I wrote about using static websites for tiny archives. The idea is that I create tiny websites to store and describe my digital collections. There are several reasons I like this approach: HTML is flexible and lets me display data in a variety of ways; it’s likely to remain readable for a long time; it lets me add more context than a folder full of files. I’m converting more and more of my local data to be stored in static websites – paperwork I’ve scanned, screenshots I’ve taken, and web pages I’ve bookmarked. I really like this approach.

I got a lot of positive feedback, but the most common reply was “please share some source code”. People wanted to see examples of the HTML and JavaScript I was using.

I deliberately omitted any code from the original post, because I wanted to focus on the concept, not the detail. I was trying to persuade you that static websites are a good idea for storing small archives and data sets, and I didn’t want to get distracted by the implementation.

There’s also no single code base I could share – every site I build is different, and the code is often scrappy or poorly documented. I’ve built dozens of small sites this way, and there’s no site that serves as a good example of this approach – they’re all built differently, implement a subset of my ideas, or have hard-coded details. Even if I shared some source code, it would be difficult to read or understand what’s going on.

However, there’s clearly an appetite for that sort of explanation, so this follow-up post will discuss the “how” rather than the “why”. There’s a lot of code, especially JavaScript, which I’ll explain in small digestible snippets. That’s another reason I didn’t describe this in the original post – I didn’t want anyone to feel overwhelmed or put off. A lot of what I’m describing here is nice-to-have, not essential. You can get started with something pretty simple.

I’ll go through a feature at a time, as if we were building a new static site. I’ll use bookmarks as an example, but there’s nothing in this post that’s specific to bookmarking. If you’d like to see everything working together, check out the demo site. It includes the full code for all the sections in this post. Let’s dive in!

- Start with a hand-written HTML page (demo)
- Reduce repetition with JavaScript templates (demo)
- Add filtering to find specific items (demo)
- Introduce sorting to bring order to your data (demo)
- Use pagination to break up long lists (demo)
- Provide feedback with loading states and error handling (demo 1, demo 2)
- Test the code with QUnit and Playwright
- Manipulate the metadata with Python
- Store the website code in Git
- Closing thoughts

Start with a hand-written HTML page (demo)

A website can be a single HTML file you edit by hand. Open a text editor like TextEdit or Notepad, copy-paste the following text, and save it in a file named bookmarks.html.

    <h1>Bookmarks</h1>

    <ul>
      <li><a href="https://estherschindler.medium.com/the-old-family-photos-project-lessons-in-creating-family-photos-that-people-want-to-keep-ea3909129943">Lessons in creating family photos that people want to keep, by Esther Schindler (2018)</a></li>
      <li><a href="https://www.theatlantic.com/technology/archive/2015/01/why-i-am-not-a-maker/384767/">Why I Am Not a Maker, by Debbie Chachra (The Atlantic, 2015)</a></li>
      <li><a href="https://meyerweb.com/eric/thoughts/2014/06/10/so-many-nevers/">So Many Nevers, by Eric Meyer (2014)</a></li>
    </ul>

If you open this file in your web browser, you’ll see a list of three links. You can also check out my demo page to see this in action.
This is an excellent way to build a website. If you stop here, you’ve got all the flexibility and portability of HTML, and this file will remain readable for a very long time. I build a lot of sites this way. I like it for small data sets that I know are never going to change, or which change very slowly. It’s simple, future-proof, and easy to edit if I ever need to.

Reduce repetition with JavaScript templates (demo)

As you store more data, it gets a bit tedious to keep copying the HTML markup for each item. Wouldn’t it be useful if we could push it into a reusable template?

When a site gets bigger, I convert the metadata into JSON, then I use JavaScript and template literals to render it on the page. Let’s start with a simple example of metadata in JSON. My real data has more fields, like date saved or a list of keyword tags, but this is enough to get the idea:

    const bookmarks = [
      {
        "url": "https://estherschindler.medium.com/the-old-family-photos-project-lessons-in-creating-family-photos-that-people-want-to-keep-ea3909129943",
        "title": "Lessons in creating family photos that people want to keep, by Esther Schindler (2018)"
      },
      {
        "url": "https://www.theatlantic.com/technology/archive/2015/01/why-i-am-not-a-maker/384767/",
        "title": "Why I Am Not a Maker, by Debbie Chachra (The Atlantic, 2015)"
      },
      {
        "url": "https://meyerweb.com/eric/thoughts/2014/06/10/so-many-nevers/",
        "title": "So Many Nevers, by Eric Meyer (2014)"
      }
    ];

Then I have a function that renders the data for a single bookmark as HTML:

    function Bookmark(bookmark) {
      return `
        <li>
          <a href="${bookmark.url}">${bookmark.title}</a>
        </li>
      `;
    }

Having a function that returns HTML is inspired by React and Next.js, where code is split into “components” that each render part of the web app. This function is simpler than what you’d get in React. Part of React’s behaviour is that it will re-render the page if the data changes, but my function won’t do that. That’s okay, because my data isn’t going to change. The HTML gets rendered once when the page loads, and that’s enough.

I’m using a template literal because I find it simple and readable. It looks pretty close to the actual HTML, so I have a pretty good idea of what’s going to appear on the page. Template literals are dangerous if you’re getting data from an untrusted source – it could allow somebody to inject arbitrary HTML into your page – but I’m writing all my metadata, so I trust it.

I know there are other ways to construct HTML in JavaScript, like document.createElement(), the <template> element, or Web Components – but template literals have always been sufficient for me, and I’ve never had a reason to explore other options.

Now we have to call this function when the page loads, and render the list of bookmarks. Here’s the rest of the code:

    <script>
      window.addEventListener("DOMContentLoaded", () => {
        document.querySelector("#listOfBookmarks").innerHTML =
          bookmarks.map(Bookmark).join("");
      });
    </script>

    <h1>Bookmarks</h1>

    <ul id="listOfBookmarks"></ul>

I’m listening for the DOMContentLoaded event, which occurs when the HTML page has been fully parsed. When that event occurs, it looks for <ul id="listOfBookmarks"> in the page, and inserts the HTML for the list of bookmarks. We have to wait for this event so the <ul> actually exists. If we tried to run it immediately, it might run before the <ul> exists, and then it wouldn’t know where to insert the HTML.

I’m using querySelector() to find the <ul> I want to modify – this is a newer alternative to functions like getElementById().
It’s quite flexible, because I can target any CSS selector, and I find CSS selectors easier to remember than the family of getElementBy* functions. It’s slightly slower in benchmarks, but the difference is negligible.

If you want to see this page working, check out the demo page.

I use this pattern as a starting point for a lot of my static sites – metadata in JSON, some functions that render HTML, and an event listener that renders the whole page after it loads. Once I have the basic site, I add data, render more HTML, and write CSS styles to make it look pretty. This is where I can have fun, and really customise each site. I keep tweaking until I have something I like. I’m ignoring CSS because that could be a whole other post, and there’s a vintage charm to unstyled HTML – it’s fine for what we’re discussing today.

What else can we do?

Add filtering to find specific items (demo)

As the list gets even longer, it’s useful to have a way to find specific items in the list – I don’t want to scroll the whole thing every time. I like adding keyword tags to my data, and then filtering for items with particular tags. If I add other metadata fields, I could filter on those too.

Here’s a brief sketch of the sort of interface I like:

I like to be able to define a series of filters, and apply them to focus on a specific subset of items. I like to combine multiple filters to refine my search, and to see a list of applied filters with a way to remove them, if I’ve filtered too far. I like to apply filters from a global menu, or to use controls on each item to find similar items.

I use URL query parameters to store the list of currently-applied filters, for example:

    bookmarks.html?tag=animals&tag=wtf&publicationYear=2025

This means that any UI element that adds or removes a filter is a link to a new URL, so clicking it loads a new page, which triggers a complete re-render with the new filters.

When I write filtering code, I try to make it as easy as possible to define new filters. Every site needs a slightly different set of filters, but the overall principle is always the same: here’s a long list of items, find the items that match these rules.

Let’s start by expanding our data model to include a couple of new fields:

    const bookmarks = [
      {
        "url": "https://estherschindler.medium.com/the-old-family-photos-project-lessons-in-creating-family-photos-that-people-want-to-keep-ea3909129943",
        "title": "Lessons in creating family photos that people want to keep, by Esther Schindler (2018)",
        "tags": ["photography", "preservation"],
        "publicationYear": "2018"
      },
      …
    ];

Then we can define some filters we might use to narrow the list:

    const bookmarkFilters = [
      {
        id: 'tag',
        label: 'tagged with',
        filterFn: (bookmark, tagName) => bookmark.tags.includes(tagName),
      },
      {
        id: 'publicationYear',
        label: 'published in',
        filterFn: (bookmark, year) => bookmark.publicationYear === year,
      },
    ];

Each filter has three fields:

- id matches the name of the associated URL query parameter
- label is how the filter will be described in the list of applied filters
- filterFn is a function that takes two arguments (a bookmark, and a filter value), and returns true/false depending on whether the bookmark matches this filter

This list is the only place where I need to customise the filters for a particular site; the rest of the filtering code is completely generic. This means there’s only one place I need to make changes if I want to add or remove filters.
The next piece of the filtering code is a generic function that filters a list of items, and takes the list of filters as an argument:

    /*
     * Filter a list of items.
     *
     * This function takes the list of items and available filters, and the
     * URL query parameters passed to the page.
     *
     * This function returns a list with the items that match these filters,
     * and a list of filters that have been applied.
     */
    function filterItems({ items, filters, params }) {
      // By default, all items match, and no filters are applied.
      var matchingItems = items;
      var appliedFilters = [];

      // Go through the URL query params one by one, and look to
      // see if there's a matching filter.
      for (const [key, value] of params) {
        console.debug(`Checking query parameter ${key}`);
        const matchingFilter = filters.find(f => f.id === key);

        if (typeof matchingFilter === 'undefined') {
          continue;
        }

        // There's a matching filter!  Go ahead and filter the
        // list of items to only those that match.
        console.debug(`Detected filter ${JSON.stringify(matchingFilter)}`);
        matchingItems = matchingItems.filter(
          item => matchingFilter.filterFn(item, value)
        );

        // Construct a new query string that doesn't include
        // this filter.
        const altQuery = new URLSearchParams(params);
        altQuery.delete(key, value);
        const linkToRemove = "?" + altQuery.toString();

        appliedFilters.push({
          type: matchingFilter.id,
          label: matchingFilter.label,
          value,
          linkToRemove,
        })
      }

      return { matchingItems, appliedFilters };
    }

This function doesn’t care what sort of items I’m passing, or what the actual filters are, so I can reuse it between different sites. It returns the list of matching items, and the list of applied filters. The latter allows me to show that list on the page. linkToRemove is a link to the same page with this filter removed, but keeping any other filters. This lets us provide a button that removes the filter.

The final step is to wire this filtering into the page render. We need to make sure we only show items that match the filter, and show the user a list of applied filters. Here’s the new code:

    <script>
      window.addEventListener("DOMContentLoaded", () => {
        const params = new URLSearchParams(window.location.search);

        const { matchingItems: matchingBookmarks, appliedFilters } = filterItems({
          items: bookmarks,
          filters: bookmarkFilters,
          params: params,
        });

        document.querySelector("#appliedFilters").innerHTML =
          appliedFilters
            .map(f => `<li>${f.label}: ${f.value} <a href="${f.linkToRemove}">(remove)</a></li>`)
            .join("");

        document.querySelector("#listOfBookmarks").innerHTML =
          matchingBookmarks.map(Bookmark).join("");
      });
    </script>

    <h1>Bookmarks</h1>

    <p>Applied filters:</p>
    <ul id="appliedFilters"></ul>

    <p>Bookmarks:</p>
    <ul id="listOfBookmarks"></ul>

I stick to simple filters that can be phrased as a yes/no question, and I rely on my past self to have written sufficiently useful metadata. At least in static sites, I’ve never implemented anything like a fuzzy text search, where it’s less obvious whether a particular item should match.

You can check out the filtering code on the demo page.

Introduce sorting to bring order to your data (demo)

The next feature I usually implement is sorting. I build a dropdown menu with all the options, and picking one reloads the page with the new sort order. Here’s a quick design sketch:

For example, I often sort by the date I saved an item, so I can find an item I saved recently. Another sort order I often use is “random”, which shuffles the items and is a fun way to explore the data.
As with filters, I put the current sort order in a query parameter, for example:

    bookmarks.html?sortOrder=titleAtoZ

As before, I want to write this in a generic way and share code between different sites. Let’s start by defining a list of sort options:

    const bookmarkSortOptions = [
      {
        id: 'titleAtoZ',
        label: 'title (A to Z)',
        compareFn: (a, b) => a.title > b.title ? 1 : -1,
      },
      {
        id: 'publicationYear',
        label: 'publication year (newest first)',
        compareFn: (a, b) => Number(b.publicationYear) - Number(a.publicationYear),
      },
    ];

Each sort option has three fields:

- id is the value that will appear in the URL query parameter
- label is the human-readable label that will appear in the dropdown
- compareFn(a, b) is a function that compares two items, and will be passed directly to the JavaScript sort function. If it returns a negative value, then a sorts before b. If it returns a positive value, then a sorts after b.

Next, we can define a function that will sort a list of items:

    /*
     * Sort a list of items.
     *
     * This function takes the list of items and available options, and the
     * URL query parameters passed to the page.
     *
     * It returns a list with the items in sorted order, and the
     * sort order that was applied.
     */
    function sortItems({ items, sortOptions, params }) {
      // Did the user pass a sort order in the query parameters?
      const sortOrderId = getSortOrder(params);

      // What sort order are we using?
      //
      // Look for a matching sort option, or use the default if the sort
      // order is null/unrecognised.  For now, use the first defined
      // sort order as the default.
      const defaultSort = sortOptions[0];
      const selectedSort =
        sortOptions.find(s => s.id === sortOrderId) || defaultSort;

      console.debug(`Selected sort: ${JSON.stringify(selectedSort)}`);

      // Now apply the sort to the list of items.
      const sortedItems = items.sort(selectedSort.compareFn);

      return { sortedItems, appliedSortOrder: selectedSort };
    }

    /* Get the current sort order from the URL query parameters. */
    function getSortOrder(params) {
      return params.get("sortOrder");
    }

This function works with any list of items and sort orders, making it easy to reuse across different sites. I only have to define the list of sort orders once.

This approach makes it easy to add new sort orders, and to write a component that renders a dropdown menu to pick the sort order:

    /*
     * Create a dropdown control to choose the sort order.  When you pick
     * a different value, the page reloads with the new sort.
     */
    function SortOrderDropdown({ sortOptions, appliedSortOrder }) {
      return `
        <select onchange="setSortOrder(this.value)">
          ${
            sortOptions
              .map(({ id, label }) => `
                <option value="${id}" ${id === appliedSortOrder.id ? 'selected' : ''}>
                  ${label}
                </option>
              `)
              .join("")
          }
        </select>
      `;
    }

    function setSortOrder(sortOrderId) {
      const params = new URLSearchParams(window.location.search);
      params.set("sortOrder", sortOrderId);
      window.location.search = params.toString();
    }

Finally, we can wire the sorting code into the rest of the app. After filtering, we sort the items and then render the sorted list.
We also show the sort controls on the page:

    <script>
      window.addEventListener("DOMContentLoaded", () => {
        const params = new URLSearchParams(window.location.search);

        const { matchingItems: matchingBookmarks, appliedFilters } = filterItems(…);

        …

        const { sortedItems: sortedBookmarks, appliedSortOrder } = sortItems({
          items: matchingBookmarks,
          sortOptions: bookmarkSortOptions,
          params,
        });

        document.querySelector("#sortOrder").innerHTML += SortOrderDropdown({
          sortOptions: bookmarkSortOptions,
          appliedSortOrder,
        });

        document.querySelector("#listOfBookmarks").innerHTML =
          sortedBookmarks.map(Bookmark).join("");
      });
    </script>

    <p id="sortOrder">Sort by:</p>

You can check out the sorting code on the demo page.

Use pagination to break up long lists (demo)

If you have a really long list of items, you may want to break them into multiple pages.

This isn’t something I do very often. Modern web browsers are very performant, and you can put thousands of elements on the page without breaking a sweat. I’ve only had to add pagination in a couple of very image-heavy sites – if it’s a text-based site, I just show everything. (You may notice that, for example, there are no paginated lists anywhere on this site. By writing lean HTML, I can fit all my lists on a single page.)

If I do want pagination, I stick to a classic design:

As with other features, I use a URL query parameter to track the current page number:

    bookmarks.html?pageNumber=2

This code can be written in a completely generic way – it doesn’t have to care what sort of items we’re paginating. First, let’s write a function that will select a page of items for us. If we’re on page N, what items should we be showing?

    /*
     * Get a page of items.
     *
     * This function will reduce the list of items to the items that should
     * be shown on this particular page.
     */
    function paginateItems({ items, pageNumber, pageSize }) {
      // Page numbers are 1-indexed, so page 1 corresponds to
      // the indices 0…(pageSize - 1).
      const startOfPage = (pageNumber - 1) * pageSize;
      const endOfPage = pageNumber * pageSize;
      const thisPage = items.slice(startOfPage, endOfPage);

      return {
        thisPage,
        totalPages: Math.ceil(items.length / pageSize),
      };
    }

In some of my sites, the page size is a suggestion rather than a hard rule. If there are 27 items and the page size is 25, I think it’s nicer to show all the items on one page than push a few items onto a second page which barely has anything on it. But that might reflect my general dislike of pagination, and it’s definitely a nice-to-have rather than a required feature.

Once we know what page we’re on and how many pages there are, we can create a component to render some basic pagination controls:

    /*
     * Renders a list of pagination controls.
     *
     * This includes links to prev/next pages and the current page number.
     */
    function PaginationControls({ pageNumber, totalPages, params }) {
      // If there's only one page, we don't need pagination controls.
      if (totalPages === 1) {
        return "";
      }

      let prevPageLink;
      let nextPageLink;

      // Do we need a link to the previous page?  Only if we're past page 1.
      if (pageNumber > 1) {
        const prevPageUrl = setPageNumber({ params, pageNumber: pageNumber - 1 });
        prevPageLink = `<a href="${prevPageUrl}">← prev</a>`;
      } else {
        prevPageLink = null;
      }

      // Do we need a link to the next page?  Only if we're before
      // the last page.
      if (pageNumber < totalPages) {
        const nextPageUrl = setPageNumber({ params, pageNumber: pageNumber + 1 });
        nextPageLink = `<a href="${nextPageUrl}">next →</a>`;
      } else {
        nextPageLink = null;
      }

      const pageText = `Page ${pageNumber} of ${totalPages}`;

      // Construct the final result.
      return [prevPageLink, pageText, nextPageLink]
        .filter(p => p !== null)
        .join(" / ");
    }

    /* Returns a URL that points to the new page number. */
    function setPageNumber({ params, pageNumber }) {
      const updatedParams = new URLSearchParams(params);
      updatedParams.set("pageNumber", pageNumber);
      return `?${updatedParams.toString()}`;
    }

Finally, let’s wire this code into the rest of the app. We get the page number from the URL query parameters, paginate the list of filtered and sorted items, and show some pagination controls:

    <script>
      /* Get the current page number. */
      function getPageNumber(params) {
        return Number(params.get("pageNumber")) || 1;
      }

      window.addEventListener("DOMContentLoaded", () => {
        const params = new URLSearchParams(window.location.search);

        const { matchingItems: matchingBookmarks, appliedFilters } = filterItems(…);

        const { sortedItems: sortedBookmarks, appliedSortOrder } = sortItems(…);

        const pageNumber = getPageNumber(params);

        const { thisPage: thisPageOfBookmarks, totalPages } = paginateItems({
          items: sortedBookmarks,
          pageNumber,
          pageSize: 25,
        });

        document.querySelector("#paginationControls").innerHTML +=
          PaginationControls({ pageNumber, totalPages, params });

        document.querySelector("#listOfBookmarks").innerHTML =
          thisPageOfBookmarks.map(Bookmark).join("");
      });
    </script>

    <p id="paginationControls">Pagination controls: </p>

One thing that makes pagination a little tricky is that it affects filtering and sorting as well – when you change either of those, you probably want to reset to the first page. For example, if you’re filtering for animals and you’re on page 3, then you add a second filter for giraffes, you should reset to page 1. If you stay on page 3, it might be confusing if there are fewer than 3 pages of results with the new filter. The key to this is calling params.delete("pageNumber") when you update the URL query parameters.

You can play with the pagination on the demo page.

Provide feedback with loading states and error handling (demo 1, demo 2)

One problem with relying on JavaScript to render the page is that sometimes JavaScript goes wrong. For example, I write a lot of my metadata by hand, and a typo can create invalid JSON and break the page. There are also people who disable JavaScript, or sometimes it just doesn’t work.

If I’m using the site, I can open the Developer Tools in my web browser and start debugging there – but that’s not a great experience. If you’re not expecting something to go wrong, it will just look like the page is taking a long time to load. We can do better.

To start, we can add a <noscript> element that explains to users that they need to enable JavaScript. This will only be shown if they’ve disabled JavaScript:

    <noscript>
      <strong>You need to enable JavaScript to use this site!</strong>
    </noscript>

I have a demo page which disables JavaScript, so you can see how the noscript tag behaves.

This won’t help if JavaScript is broken rather than disabled, so we also need to add error handling. We can listen for the error event on the window, and report an error to the user – for example, if a script fails to load.

    <div id="errors"></div>

    <script>
      window.addEventListener("error", function(event) {
        document
          .querySelector('#errors')
          .innerHTML = `<strong>Something went wrong when loading the page!</strong>`;
      });
    </script>

We can also attach an onerror handler to specific script tags, which allows us to customise the error message – we can tell the user that a particular file failed to load.
    <script src="app.js" onerror="alert('Something went wrong while loading app.js')"></script>

I have another demo page which has a basic error handler.

Finally, I like to include a loading indicator, or some placeholder text that will be replaced when the page finishes loading – this tells the user where they can expect to see something load in.

    <ul id="listOfBookmarks">Loading…</ul>

It’s somewhat rare for me to add a loading indicator or error handling, just because I’m the only user of my static sites, and it’s easier for me to use the developer tools when something breaks. But providing mechanisms for the user to understand what’s going on is crucial if you want to build static sites like this that other people will use.

Test the code with QUnit and Playwright

If I’m writing a very complicated viewer, it’s helpful to have tests. I’ve found two test frameworks that I particularly like for this purpose.

QUnit is a JavaScript library that I use for unit testing – to me, that means testing individual functions and components. For example, QUnit was very helpful when I was writing the early iterations of the sorting and filtering code, and writing tests caught a number of mistakes. You can run QUnit in the browser, and it only requires two files, so I can test a project without creating a whole JavaScript build system or dependency tree.

Here’s an example of a QUnit test:

    QUnit.test("sorts bookmarks by title", function(assert) {
      // Create three bookmarks with different titles
      const bookmarkA = { title: "Almanac for apples" };
      const bookmarkC = { title: "Compendium of coconuts" };
      const bookmarkP = { title: "Page about papayas" };

      const params = new URLSearchParams("sortOrder=titleAtoZ");

      // Pass the bookmarks in the wrong order, so they can't be sorted
      // correctly "by accident"
      const { sortedItems, appliedSortOrder } = sortItems({
        items: [bookmarkC, bookmarkA, bookmarkP],
        sortOptions: bookmarkSortOptions,
        params,
      });

      // Check the bookmarks have been sorted in the right order
      assert.deepEqual(sortedItems, [bookmarkA, bookmarkC, bookmarkP]);
    });

You can see this test running in the browser in my demo page.

Playwright is a testing library that can open a web app in a real web browser, interact with the page, and check that the app behaves correctly. It’s often used for dynamic web apps, but it works just as well for static pages. For example, it can test that if you select a new sort order, the page reloads and shows results in the correct order.

Here’s an example of a simple test written with Playwright in Python:

    from playwright.sync_api import expect, sync_playwright

    with sync_playwright() as p:
        browser = p.webkit.launch()

        # Open the HTML file in the browser
        page = browser.new_page()
        page.goto('file:///Users/alexwlchan/Sites/sorting.html')

        # Look for an <li> element with one of the bookmarks -- this will
        # only appear if the page has rendered correctly.
        expect(page.get_by_text("So Many Nevers")).to_be_visible()

        browser.close()
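A sketch of the interaction test described above – selecting a new sort order and checking the page reloads with it – might look something like this. (The file path is the same placeholder as before, and the selector and option id assume the sort dropdown from the sorting section.)

    import re

    from playwright.sync_api import expect, sync_playwright

    with sync_playwright() as p:
        browser = p.webkit.launch()
        page = browser.new_page()
        page.goto('file:///Users/alexwlchan/Sites/sorting.html')

        # Pick a new sort order from the dropdown.  This fires the
        # onchange handler, which updates the query string and
        # reloads the page.
        page.select_option("select", "publicationYear")

        # The new sort order should be reflected in the URL.
        expect(page).to_have_url(re.compile("sortOrder=publicationYear"))

        browser.close()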
These tools are a great safety net for catching mistakes, but I don’t always need them. I only write tests for my more complicated sites – when the sorting/filtering code is particularly complex, there’s a lot of rendering code, or I anticipate making major changes in future. I don’t bother with tests when the site is simple and unlikely to change, and I can just do manual checks when I write it the first time. Tests are less useful if I know I’ll never make changes.

This is getting away from the idea of a self-contained static website, because now I’m relying on third-party code, and for Playwright I need to maintain a working Python environment. I’m okay with this, because the website is still usable even if I can no longer run the tests. These are useful sidecar tools, but I only need them if I’m making changes. If I finish a site and I know I won’t change it again, I don’t need to worry about whether the tests will still work years later.

Manipulate the metadata with Python

For small sites, we could write all this JavaScript directly in <script> tags or in a single file. As we get more data, splitting the metadata and application logic makes everything easier to manage.

One pattern I’ve adopted is to put all the item metadata into a single, standalone JavaScript file that assigns a single variable:

    const bookmarks = […];

and then load that file in the HTML page with a <script src="metadata.js"> element.

I use JavaScript rather than pure JSON because browsers don’t allow fetching local JSON files via file://. If you open an HTML page without a web server, the browser will block requests to fetch a JSON file because of security restrictions. By storing data in a JavaScript file instead, I can load it with a simple <script> tag.

I wrote a small Python library javascript-data-files that lets me interact with JSON stored this way. This allows me to write scripts that add data to the metadata file (like saving a new bookmark) or to verify the existing metadata (like checking that I have an archived copy of every bookmark). I’ll write more about this in future posts, because this one is long enough already.

For example, let’s add a new bookmark to the metadata.js file:

    from javascript_data_files import read_js, write_js

    bookmarks = read_js("metadata.js", varname="bookmarks")

    bookmarks.append({
        "url": "https://www.theguardian.com/lifeandstyle/2019/jan/13/ella-risbridger-john-underwood-friendship-life-new-family",
        "title": "When my world fell apart, my friends became my family, by Ella Risbridger (2019)"
    })

    write_js("metadata.js", varname="bookmarks", value=bookmarks)
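A verification script follows the same read-check-report shape. Here’s a sketch of the sort of thing I mean; the archivePath field is a hypothetical piece of metadata, not one of the fields from the examples above:

    import os

    from javascript_data_files import read_js

    bookmarks = read_js("metadata.js", varname="bookmarks")

    for bookmark in bookmarks:
        # Hypothetical field: the local path where this bookmark's
        # archived copy is saved.
        archive_path = bookmark.get("archivePath")

        if archive_path is None or not os.path.exists(archive_path):
            print(f"Missing archived copy of {bookmark['url']}")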
We’re starting to blur the line between a static site and a static site generator. These scripts only work if I have a working Python environment, which is less future-proof than pure HTML. I’m happy with this compromise, because the website is fully functional without them – I only need to run these scripts if I’m modifying the metadata. If I stop making changes and the Python environment breaks, I can still read everything I’ve already saved.

Store the website code in Git

I create Git repositories for all of my local websites. This allows me to track changes, and it means I can experiment freely – I can always roll back if I break something.

These Git repositories only live on my local machine. I run git init . in the folder, I create commits to record any changes, and that’s it. I don’t push the repository to GitHub or another remote Git server. (Although I do have backups of every site, of course.) Git has a lot of features for writing code in a collaborative environment, but I don’t need any of those here – I’m the only person working on these sites.

Most of the time, I just use two commands:

    $ git add bookmarks.html
    $ git commit -m "Add filtering by author"

This creates a labelled snapshot of my latest changes to bookmarks.html.

I only track the text files in Git – the HTML, CSS, and JavaScript. I don’t track binary files like images and videos. Git struggles with those larger files, and I don’t edit those as much as the text files, so having them in version control is less useful. I write a gitignore file to ignore all of them.

Closing thoughts

There are lots of ideas here, but you don’t need to use all of them – most of my sites only use a few. Every site is different, and you can pick what makes most sense for your project.

If you’re building a static site for a tiny archive, start with a simple HTML file. Add features like templates, sorting, and filtering incrementally as they become useful. You don’t need to add them all upfront – that can make things more complicated than they need to be.

This approach can scale from simple collections to sophisticated archives. A static website built with HTML and JavaScript is easy to maintain and modify, has no external dependencies, and is future-proof against a lot of technological changes.

I’ve come to love using static websites to store my local data. They’re flexible, resilient, and surprisingly powerful. I hope you’ll consider it too, and that these ideas help you get started.
More in programming
Since November 2023, I’ve been living in Karuizawa, a small resort town that’s 70 minutes away from Tokyo by Shinkansen. The elevation is approximately 1000 meters above sea level, making the summers relatively mild. Unlike other colder places in Japan, it doesn’t get much snow, and has the same sunny winters I came to love in Tokyo. With COVID and the remote work boom, it’s also become popular among professionals such as myself who want to live somewhere with an abundance of nature, but who still need to commute into Tokyo on a semi-regular basis.

While I have a home office, I sometimes like to work outside. So I thought I’d share my impressions of the coworking spaces in town that I’ve personally visited, and a few other places where you can get some work done when you’re in town.

Sawamura Roastery

11am on a Friday morning and there was only one other customer.

Sawamura Roastery is technically a cafe, but it’s my personal favourite coworking space. It has free wifi, outlets, and comfortable chairs. While their coffees are on the expensive side, at about 750 yen for a cafe latte, they are also some of Karuizawa’s best. It’s empty enough on weekday mornings that I feel fine about staying there for hours, making it a deal compared to official drop-in coworking spaces.

Another bonus is that it opens early: 7 a.m. (or 8 a.m. during the winter months). This allows me to start working right after I drop off my kids at daycare, rather than having 20 odd minutes to kill before heading to the other places that open at 9 a.m. If you’re having an online meeting, you can make use of the outdoor seating. It’s perfect when the weather is nice, but they also have heating for when it isn’t.

The downsides are that their playlist is rather short, so I’m constantly hearing the same songs, and their roasting machine sometimes gets quite noisy.

Gokalab

Gokalab is my favourite dedicated coworking space in Karuizawa. Technically it is in Miyota, the next town over, which is sometimes called “Nishikaruizawa”. But it’s the only coworking space in the area I’ve been to that feels like it has a real community.

When you want to work here, you have three options: buy a drink (600 yen for a cafe au lait—no cafe lattes, unfortunately, but if you prefer black coffee they have a good selection) and work out of the cafe area on the first floor; pay their daily drop-in fee of 1,000 yen; or become a “researcher” (研究員, kenkyuin) for 3,000 yen per month and enjoy unlimited usage.

Now you may be thinking that the last option is a steal. That’s because it is. However, to become a researcher you need to go through a workshop that involves making something out of LEGO, and submit an essay about why you want to use the space. The thinking behind this is that they want to support people who actually share their vision, and aren’t just after a cheap space to work or study. Kind of zany, but that sort of out-of-the-box thinking is exactly what I want in a coworking space.

When I first moved to Karuizawa, my youngest child couldn’t get into the local daycare. However, we found out that in Miyota, Suginoko Kindergarten had part-time spots available for two year olds. My wife and I ended up taking turns driving my kid there, and then spending the morning working out of Gokalab. Since my youngest is now in a local daycare, I haven’t made it out to Gokalab much. It’s just a bit too far for me (about a 15-minute drive from my house, while other options on this list are at most a 15-minute bicycle ride).
But if I was living closer, I’d be a regular there.

232 Coworking Space & Hotel

Noon on a Monday morning at 232 Coworking Space.

If you’re looking for a coworking space near Karuizawa station, 232 Coworking Space & Hotel is the best option I’ve come across. The “hotel” part of the name made me think they were focused on “workcations,” but the space seems like it caters to locals as well.

The space offers free coffee via an automatic espresso machine, along with other drinks, and a decent number of desks. When I used it on a Monday morning in the off-season, it was moderately occupied at perhaps a quarter capacity. Everyone spoke in whispers, so it felt a bit like a library. There were two booths for calls, but unfortunately they were both occupied when I wanted to have mine, so I had to sit in the hall instead. If the weather was a bit warmer I would have taken it outside, as there was some nice covered seating available.

The decor was nice, though the chairs weren’t that comfortable. After a couple of hours I was getting sore. It was also too dimly lit for me, without much natural light.

The price for drop-ins is reasonable, starting at 1,500 yen for four hours. They also have monthly plans starting from 10,000 yen for five days per month.

What I found missing was a feeling of community. I didn’t see any small talk between the people working there, though I was only there for a couple hours, and maybe this occurs at other times. Their webpage also mentioned that they host events, but apparently they don’t have any upcoming ones planned and haven’t had any in a while.

Shozo Coffee Karuizawa

The latte is just okay here, but the atmosphere is nice.

Shozo Coffee Karuizawa is a cafe on the first floor of the bookstore in Karuizawa Commongrounds. The second floor has a dedicated coworking space, but for me personally, the cafe is a better deal. Their cafe latte is mid-tier and 700 yen. In the afternoons I’ll go for their chai to avoid over-caffeination. They offer free wifi and have signs posted asking you not to hold online meetings, implicitly making it clear that otherwise they don’t mind you working there.

Location-wise, this place is very convenient for me, but it suffers from a fatal flaw that prevents me from working there for an extended amount of time: the tables are way too low for me to type comfortably. I’m tall though (190 cm), so they aren’t designed with me in mind.

Sheridan

Coffee and a popover – my entrance fee to this “coworking space”.

Sheridan is a western breakfast and brunch restaurant. They aren’t that busy on weekdays and have free wifi, plus the owner was happy to let me work there. The coffee comes in a pot with enough for at least one refill. There’s also some covered outdoor seating.

I used this spot to get some work done when my child was sick and being looked after at the wonderful Hochi Lodge (ほっちのロージ). It’s a clinic and sick childcare facility that does its best to not let on that it’s a medical facility. The doctors and nurses don’t wear uniforms, and appointments there feel more like you’re visiting someone’s home. Sheridan is within walking distance of it.

Natural Cafeina

An excellent cappuccino but only an okay place to work.

If you’d like to get a bit of work done over an excellent cappuccino, Natural Cafeina is a good option. This cafe feels a bit cramped, and as there isn’t much seating, I wouldn’t want to use it for an extended period of time. Also, the music was a bit loud.
But they do have free wifi, and when I visited, there were a couple of other customers besides me working there.

Nakakaruizawa Library

The Nakakaruizawa Library is a beautiful space with plenty of desks facing the windows and free wifi. Anyone can use it for free, making it the most economical coworking space in town. I’ve tried working out of it, but found that, for me personally, it wasn’t conducive to work. It’s still a library, and there’s something about the vibes that just doesn’t inspire me.

Karuizawa Commongrounds Bookstore Coworking Space

The renowned bookstore Tsutaya operates Karuizawa Books in the Karuizawa Commongrounds development. The second floor has a coworking space that features the “cheap chic” look common among hip coworking spaces: unfinished plywood is everywhere, as are books.

I’d never actually worked at this space until writing this article. The price is just too high for me to justify: it starts at 1,100 yen for a mere hour, rising to a maximum of 4,000 yen per day. At 22,000 yen per month, it’s a more reasonable price for someone using it as an office full time. But I already have a home office and just want somewhere I can drop in at occasionally.

There are a couple of options, seating-wise. Most of the seats are in booths, which I found rather dark but with comfortable chairs. Then there’s a row of stools next to the window, which offer a good view but are too uncomfortable for me. Depending on your height, the bar there may work as a standing desk. Lastly, there are two coveted seats with office chairs by a window, but they were both occupied when I visited.

The emphasis here seems to be on individual deep work, and though there were a number of other people working, I’d have felt uncomfortable striking up a conversation with one of them. That’s enough to make me give it a pass.

Coworking Space Ikoi Villa

Coworking Space Ikoi Villa is located in Naka-Karuizawa, relatively close to my home, though I’ve only used it once. It’s part of a hotel, and they converted the lobby into a coworking space by putting a bunch of desks and chairs in it. If all you need is wifi and space to work, it gets the job done. But it’s a shame they didn’t invest a bit more in making it feel like a nice place to work.

I went during the summer on one of the hottest days. My house only had one AC unit and couldn’t keep up, so I was hoping to find somewhere cooler to work. But they just had the windows open with some fans going, which left me disappointed. This was ostensibly the peak season for Karuizawa, but only a couple of other people were working there that day. Maybe the regulars knew it’d be too hot, but it felt kind of lonely for a coworking space.

The drop-in fee starts at 1,000 yen for four hours. It comes with free drinks from a machine: green tea, coffee, and water, if I recall correctly.

Karuizawa Prince The Workation Core

[Photo: Do you like corporate vibes? Then this is the place for you.]

Karuizawa Prince The Workation Core is a coworking space located in my least favourite part of town: the outlet mall. The throngs of shoppers and rampant commercialism are in stark contrast to the serenity found farther away from the station.

This is another coworking space I visited expressly for this article. The fee is 660 yen per 30 minutes, up to a maximum of 6,336 yen per day. Even now, just reading that maximum, my heart skips a beat. This is certainly the most expensive coworking space I’ve ever worked from – I’d better get this article done fast.
The facilities include a large open space with reasonably comfortable seating, and there are a number of booths with monitors. As these are 23.8-inch monitors with 1920×1080 resolution, they’re a step down from the displays of modern laptops, and so not of much use. Though there was room for 40-plus people, I was the only person working there. Granted, this was on a Sunday morning, so not when most people would typically attend.

I don’t think I’ll be back here again. The price and sterile corporate vibe just aren’t for me. If you’re staying at The Prince Hotel, I think you get a discount. In that case, maybe it’s worth it, but otherwise I think there are better options.

Sawamura Bakery & Restaurant Kyukaruizawa

Sawamura Bakery & Restaurant is across the street from the Roastery. It offers slightly cheaper prices, with about 100 yen off the cafe latte, though the quality is worse, as is the vibe of the place as a whole. They do have a bigger selection of baked goods, though.

As a cafe for doing some work, there’s nothing wrong with it per se. The upstairs cafe area has ample seating outside of peak hours. But I just don’t have a good reason to work here over the Roastery.

The Pie Hole Los Angeles Karuizawa

[Photo: The best (and only) pecan pie that I’ve had in Japan.]

The name of this place is a mouthful. Technically, it shouldn’t be on this list because I’ve never worked out of it. But they have wonderful pie, free wifi, and not many customers, so I could see working here. The chairs are a bit uncomfortable, though, so I wouldn’t want to stop by for more than an hour or two.

While this place had been on my radar for a while, I’d avoided it because there’s no good bicycle parking nearby – or so I thought. I just found out that the relatively close Church Street shopping street has a bit of bicycle parking off to the side.

If you come to Karuizawa…

When I was living in Tokyo, there were just too many opportunities to meet people, and so I found myself frequently turning down offers to go out for coffee. Since moving here, I’ve made some local connections, but the pace has been a lot slower. If you’re ever passing through Karuizawa, do get in touch, and I’d be happy to meet up for a cafe latte and possibly some pie.
Sometimes, feels have to be trusted over figures
When the iPhone first appeared in 2007, Microsoft was sitting pretty with their mobile strategy. They’d been early to the market with Windows CE, and they were fast-following the iPod with their Zune. They also had the dominant operating system, the dominant office package, and control of the enterprise. The future on mobile must have looked so bright!

But of course, now we know it wasn’t. Steve Ballmer infamously dismissed the iPhone with a chuckle, believing all of Microsoft’s past glory would guarantee them mobile victory. He wasn’t worried at all. He clearly should have been!

After reliving that Ballmer moment, it’s uncanny to watch this CNBC interview from one year ago with Johny Srouji and John Ternus from Apple on their AI strategy. Ternus even repeats the chuckle, exuding the same delusional confidence that cost Ballmer’s Microsoft any serious part in the mobile game.

But somehow, Apple’s problems with AI seem even more dire, because there’s apparently no one steering the ship. Apple has been promising customers a bag of vaporware since last fall, and they’re nowhere close to being able to deliver on the shiny concept demos – the ones that were going to make Apple Intelligence worthy of its name, and not just terrible image generation that’s years behind the state of the art.

Nobody at Apple seems able or courageous enough to face the music: Apple Intelligence sucks. Siri sucks. None of the vaporware is anywhere close to happening. Yet as late as last week, you have Cook promoting the new MacBook Air with “Apple Intelligence”. Yikes.

This is partly down to the org chart. John Giannandrea is Apple’s SVP of ML/AI, and he reports directly to Tim Cook. He’s been in the seat since 2018. But Cook evidently does not have the product savvy to tell bullshit from benefit, so he keeps giving Giannandrea more rope. Now the fella has hung Apple’s reputation on vaporware, promised all iPhone 16 customers something magical that just won’t happen, and even spec-bumped all their devices with more RAM for nothing but diminished margins. Ouch.

This is what regression to the mean looks like. This is what fiefdom management looks like. This is what having a company run by a logistics guy looks like. Apple needs a leadership reboot, stat. That asterisk is a stain.