More from Jim Nielsen’s Blog
A friend gave me a copy of the book “Perfect Wave” by Dave Hickey. I’ve been slowly reading through each essay and highlighting parts with my red pencil. When I got to the chapter “Cool on Cool”, this passage stood out. I want to write it down and share it:

    There was this perfect, luminous pop single by the Carpenters that just blew me away. And, believe me, the Carpenters were the farthest thing from my kind of thing. But when something that is not your thing blows you away, that’s one of the best things that can happen. It means you are something more and something other than you thought you were.

I find this beautiful. I should take more time to wonder at moments of surprise like this. What a beautiful thing that I can be plowing through my existence and suddenly be surprised by something outside my taste, my beliefs, even my identity, that reaches in past all those things and rearranges me.

Perhaps my boundaries are more porous than I assume. In an instant, I can become something different, something more than I ever believed was possible. Just think: that ability is lying there inside all of us. I don’t have to think of myself as a walled garden but as an open field. Who knows where my boundaries will expand to next? All it takes is someone walking by and tossing out a seed I would’ve never chosen to plant myself.

(Tangential: I love this interaction between Jerry Seinfeld and Brian Regan talking about being “blown away”.)
Jeremy Keith, Chris Coyier, and others (see Jeremy’s post) have written about the idea of “pace layers”, and now I’m going to take a stab at applying it to user interface primitives.

First, let’s start with a line of reasoning:

Common user interface controls — such as checkboxes or radios — should be visually and functionally consistent. This provides users a uniform, predictable interface for common interactions across various applications (turning things on/off, selecting an option from a pre-defined list, etc.).

Designers and developers should use the primitives afforded by the lower levels they’re building on. This gives application and website users a consistent, predictable interface within their context and environment. For web designers, every person accessing your site has a particular piece of hardware with a particular operating system and design language to boot. You can build on top of those primitives, rather than re-create them yourself, which gives end users consistency within their chosen environment. For example, a radio button on one website or in one native app is the same radio button on another website or in another app (rather than every single website and application rolling its own radios that might be identical, similar, or drastically different).

To achieve this, user interfaces can be built in “layers” where each layer builds on top of the layer below it, providing a level of integration and consistency within its environment. And where lower levels don’t provide the primitives necessary for a user interface, designers and developers can create their own.

In this world, individual websites are free to explore patterns and interactions which don’t yet exist (or are only half-implemented by lower layers). However, where a lower-level dependency exists, they can leverage it, which gives end users a more consistent experience in their chosen environment while also giving designers and developers more time to focus on building UI controls and patterns that don’t yet exist.

UI Primitives We Build Ourselves Are a Liability

Every UI control you roll yourself is a liability. You have to design it, test it, ship it, document it, debug it, maintain it — the list goes on. It makes you wonder why we insist on rolling (or styling) our own common UI controls so often. Perhaps we’d be better off asking: what is the smallest number of components we have to build to deliver value to our users?

When creating user interfaces, you can leverage the existing controls and patterns of the layers you’re building on top of. This helps you build, maintain, and debug less, because you’re using primitives built and maintained by the makers of the levels below you (which are generally more stable and change less). And it helps your users, because the experience of your app or website is more consistent and predictable within whatever particular hardware and operating system they’re using.

Adopting UI Primitives: Addition or Subtraction

When designers and developers set out to create a user interface, they have to ask themselves: are we going to use something that already exists, or create our own? When you use a checkbox or radio control, are you adopting those controls by 1) leveraging existing APIs of lower-level layers, or 2) re-implementing them yourself?

Approach (1) makes it easier for you to maintain and easier for your users to use, as it’s a pattern consistent with the shared language and functionality of their bespoke computing experience. Approach (2) does neither of these.
It’s more work for you to build and maintain, and it’s more cognitive work for your users to learn yet another visual and functional variant of an otherwise standard UI primitive.

An Example: The Switch Control

For a long time, a checkbox was all you had on the web, so people built their own “switch” controls. Eventually, browsers got around to providing an API to the existing switch control of lower-level systems. So the question for many websites and design systems became: do we adopt the switch control that the browser (and lower-level layers) now provide us, or do we keep our hand-rolled switch?

In this sense, there are two approaches to building a design system:

1. Build everything that’s needed.
2. Build only what is not already provided by lower layers (and trade variance in your system for consistency in your users’ systems).

In approach (1), you build and maintain everything yourself. In approach (2), you build what isn’t provided, and you maintain by deleting previous implementations now provided by lower layers.

Priority Says: Brands > People

In a world of layers built on top of each other, you would see updates to UI primitives change in lower levels and “bubble up” to websites:

OS -> Native -> Browser -> Website -> Form control

However, the world we’ve constructed with many of our websites and design systems sits outside of this flow of updates. There’s the OS-level stuff:

OS -> Native -> Browser

And then, tangential to that stream of updates, is this flow, which requires manual intervention and updates:

Design system -> Website -> Form control

For example, if the OS changes its radios, websites only get those updates if individual design system and website owners decide to pick up those changes, leaving users’ experiences inconsistent in their chosen computing environment. In other words, the UI layering of operating systems and websites diverges.

We opt to make the experience of our brands primary over the users’. Whereas we could be choosing to make our brands fit into users’ choices. But then we’d have to value honoring user choice over brand consistency, and I just don’t know if the world is ready for that, because brands pay the bills.
John Gruber has a post about how Google’s search results now require JavaScript[1]. Why? Here’s Google:

    the change is intended to “better protect” Google Search against malicious activity, such as bots and spam

Lol, the irony. Let’s turn to JavaScript for protection, as if the entire ad-based tracking/analytics world born out of JavaScript’s capabilities isn’t precisely what led to a less secure, less private, more exploited web. But whatever, “the web” is Google’s product, so they can do what they want with it — right? Here’s John:

    Old original Google was a company of and for the open web. Post 2010-or-so Google is a company that sees the web as a de facto proprietary platform that it owns and controls. Those who experience the web through Google Chrome and Google Search are on that proprietary not-closed-per-se-but-not-really-open web.

Search that requires JavaScript won’t cause the web to die. But it’s a sign of what’s to come (emphasis mine):

    Requiring JavaScript for Google Search is not about the fact that 99.9 percent of humans surfing the web have JavaScript enabled in their browsers. It’s about taking advantage of that fact to tightly control client access to Google Search results. But the nature of the true open web is that the server sticks to the specs for the HTTP protocol and the HTML content format, and clients are free to interpret that as they see fit. Original, novel, clever ways to do things with website output is what made the web so thrilling, fun, useful, and amazing. This JavaScript mandate is Google’s attempt at asserting that it will only serve search results to exactly the client software that it sees fit to serve.

Requiring JavaScript is all about control. The web was founded on the idea of open access for all. But since that’s been completely and utterly abused (see LLM training datasets), we’re gonna lose it. The whole “freemium with ads” model that underpins the web was exploited for profit by AI at an industrial scale, and that’s causing the “free and open web” to become the “paid and private web”. Universal access is quickly becoming select access — Google search results included.

[1]: If you want to go down a rabbit hole of reading more about this, there’s the TechCrunch article John cites, a Hacker News thread, and this post from a company founded on providing search APIs.
Let me tell you about one of the best feelings. You have a problem. You bang your head on it for a while. Through the banging, you formulate a string of keywords describing the problem. You put those words into a search engine. You land on a forum or a blog post and read someone else’s words containing those keywords and more. Their words resonate with you deeply. They’re saying the exact same things you were saying to yourself in your head. You immediately know, “This person gets it!” You know they have an answer to your problem. They’ve seen what you’re seeing. And on top of it all, they provide a solution which fixes your problem!

A sense of connection is now formed. You feel validated, understood, seen. They’ve been through what you’re going through, and they wrote about it to reach out to you — across time and space. I fell in love with the web for this reason, this feeling of connection. You could search the world and find someone who saw what you see, felt what you feel, went through what you’re going through.

Contrast that with today. Today you have a problem. You bang your head on it. You ask a question in a prompt. And you get back something. But there’s no human behind it. Just a machine which takes human voices and de-personalizes them until the individual point of view is annihilated. And so too, with it, goes the sense of connection — the feeling of being validated, understood, seen.

Every prompt, a connection that could have been. A world of missed connections.
This is a note to my future self, as I’ve set up HTML minification on a few different projects and each time I ask myself, “How did I do that again?” So here’s your guide, future Jim (and anyone else on the internet who finds this).

I use html-minifier to minify HTML files created by my static site generator. Personally, I use the CLI tool because it’s easy to add a CLI command as an npm postbuild step. Example package.json:

    {
      "scripts": {
        "build": "<BUILD-COMMAND>",
        "postbuild": "html-minifier --input-dir <BUILD-DIR> --output-dir <BUILD-DIR> --file-ext html <OPTIONS>"
      }
    }

All the minification options are off by default, so you have to turn them on one by one (HTML minification is a tricky concern). Me personally, I’m using the ones exemplified in the project README:

    --collapse-whitespace
    --remove-comments
    --remove-optional-tags
    --remove-redundant-attributes
    --remove-script-type-attributes
    --remove-tag-whitespace
    --use-short-doctype
    --minify-css true
    --minify-js true

So, for a site folder named build, the entire command looks like this:

    html-minifier --input-dir ./build --output-dir ./build --file-ext html --collapse-whitespace --remove-comments --remove-optional-tags --remove-redundant-attributes --remove-script-type-attributes --remove-tag-whitespace --use-short-doctype --minify-css true --minify-js true

That’s it — that’s the template.

What Kind of Results Do I Get?

I use this on a few of my sites, including my notes site and this blog. When testing it locally for my blog’s build, I:

1. Run a build, which puts files in ./build
2. Copy ./build to ./build-min (command: cp -R build build-min)
3. Run html-minifier on ./build-min and compare the resulting folders in macOS Finder

Here are my results for my blog (2,501 items in ./build):

Directory size:
- Before: 37MB
- After: 28.4MB
- Difference: ▼ -8.6MB (-23.24%)

Main index.html file, lines of code:
- Before: 1,484
- After: 15
- Difference: ▼ -1,469 lines (-99%)

Main index.html file size over the network:
- Before: 30.6kB
- After: 17.6kB
- Difference: ▼ -13kB (-42.48%)

And the results for my notes site (one big index.html file):

File size:
- Before: 1.5MB
- After: 1.1MB
- Difference: ▼ -0.4MB (-26.67%)

Lines of code:
- Before: 25,974
- After: 1
- Difference: ▼ -25,973 lines (-99.996%)
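(Aside for future me: if you’d rather script the before/after comparison than eyeball folder sizes in Finder, here’s a minimal sketch. The compare_sizes.py helper is hypothetical, not part of my actual workflow.)

    # compare_sizes.py -- hypothetical helper for measuring minification wins.
    # Walks two directory trees and reports total size and percent difference.
    import os
    import sys

    def dir_size(path: str) -> int:
        """Total size in bytes of all files under `path`."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    if __name__ == "__main__":
        before, after = sys.argv[1], sys.argv[2]  # e.g. ./build ./build-min
        b, a = dir_size(before), dir_size(after)
        print(f"Before: {b / 1e6:.1f}MB  After: {a / 1e6:.1f}MB")
        print(f"Difference: {(a - b) / 1e6:.1f}MB ({(a - b) / b:+.2%})")

Run it as python3 compare_sizes.py ./build ./build-min after minifying ./build-min in place.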
More in programming
For some purpose, the DOGE people are burrowing their way into all US federal systems. Their complete control over the Treasury Department is entirely insane. Unless you intend to destroy everything, making arbitrary changes to complex computer systems will result in destruction, even if that was not your intention. No
Entering 2025, I decided to spend some time exploring the topic of agents. I started reading Anthropic’s Building effective agents, followed by Chip Huyen’s AI Engineering. I kicked off a major workstream at work on using agents, and I also decided to do a personal experiment of sorts. This is a general commentary on building that project.

What I wanted to build was a simple chat interface where I could write prompts, select models, and have the model use tools as appropriate. My side goal was to build this using Cursor and generally avoid writing code directly as much as possible, but I found that generally slower than writing code in emacs while relying on 4o-mini to provide working examples to pull from. Similarly, while I initially envisioned building this in fullstack TypeScript via Cursor, I ultimately bailed into a stack that I’m more comfortable with, and ended up using Python3, FastAPI, PostgreSQL, and SQLAlchemy with the async psycopg3 driver. It’s been a… while… since I started a brand new Python project, and I used this project as an opportunity to get comfortable with Python3’s async/await mechanisms, along with Python3’s typing and mypy. Finally, I also wanted to experiment with Tailwind, and ended up using TailwindUI’s components to build the site.

The working version supports everything I wanted: creating chats with models, and allowing those models to use function calling to use tools that I provide. The models are allowed to call any number of tools in pursuit of the problem they are solving. The tool usage is the most interesting part here for sure.

The simplest tool I created was a get_temperature tool that provided a fake temperature for your location. This allowed me to ask questions like “What should I wear tomorrow in San Francisco, CA?” and get a useful response. The code to add this function to my project was pretty straightforward: just three lines of Python and 25 lines of metadata to pass to the OpenAI API.

    import random

    def tool_get_current_weather(location: str | None = None, format: str | None = None) -> str:
        "Simple proof of concept tool."
        temp = random.randint(40, 90) if format == 'fahrenheit' else random.randint(10, 25)
        return f"It's going to be {temp} degrees {format} tomorrow."

    FUNCTION_REGISTRY['get_current_weather'] = tool_get_current_weather

    TOOL_USAGE_REGISTRY['get_current_weather'] = {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                },
                "required": ["location", "format"],
            },
        }
    }

After getting this tool working, the next tool I added was a simple URL retriever tool, which allowed the agent to grab a URL and use the content of that URL in its prompt. The implementation for this tool was similarly quite simple.
    import requests
    from bs4 import BeautifulSoup
    from markdownify import markdownify

    def tool_get_url(url: str | None = None) -> str:
        if url is None:
            return ''
        url = str(url)
        response = requests.get(url)
        soup = BeautifulSoup(response.content, 'html.parser')
        # Prefer the main content of the page, falling back to the whole body.
        content = soup.find('main') or soup.find('article') or soup.body
        if not content:
            return str(response.content)
        markdown = markdownify(str(content), heading_style="ATX").strip()
        return str(markdown)

    FUNCTION_REGISTRY['get_url'] = tool_get_url

    TOOL_USAGE_REGISTRY['get_url'] = {
        "type": "function",
        "function": {
            "name": "get_url",
            "description": "Retrieve the contents of a website via its URL.",
            "parameters": {
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "The complete URL, including protocol, to retrieve. For example: \"https://lethain.com\"",
                    }
                },
                "required": ["url"],
            },
        }
    }

What’s pretty amazing is how much power you can add to your agent by adding such a trivial tool as retrieving a URL. You can similarly imagine adding tools for retrieving and commenting on GitHub pull requests and so on, which could allow a very simple agent tool like this to become quite useful.

Working on this project gave me a moderately compelling view of a near-term future where most engineers have a simple application like this running that they can pipe events into from various systems (email, text, GitHub pull requests, calendars, etc.), create triggers that map events to templates that feed into prompts, and execute those prompts with tool-aware agents. Combine that with the ability for other agents to register themselves with you and expose the tools that they have access to (e.g. schedule an event with the tool’s owner), and a bunch of interesting things become very accessible with a very modest amount of effort:

- You could schedule events between two busy people’s calendars, as if both of them had an assistant managing their calendar
- Reply to your own pull requests with new blog posts, providing feedback on typos and grammatical issues
- Crawl websites you care about and identify posts you might be interested in
- Ask the model to generate a system model using lethain:systems, run that model, then chart the responses
- Add a “planning tool” which allows the model to generate a plan to guide subsequent steps in a complex task (e.g. getting my calendar, getting a friend’s calendar, suggesting a time we could meet)

None of these are exactly lifesaving, but each is somewhat useful, and I imagine there are many more fairly obvious ideas that become easy once you have the necessary scaffolding to make this sort of thing easy.

Altogether, I think I am convinced at this point that agents, using current foundational models, are going to create a number of very interesting experiences that improve our day-to-day lives in small ways that are, in aggregate, pretty transformational. I’m less convinced that this is the way all software should work going forward, though; more thoughts on that over time. (A bunch of fun experiments happening at work, but early days on those.)
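(For illustration: a minimal sketch of how FUNCTION_REGISTRY and TOOL_USAGE_REGISTRY from the snippets above might get wired into a chat loop. This is a hypothetical reconstruction, not the project’s actual code; the run_turn helper, the model choice, and the loop structure are all assumptions.)

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def run_turn(messages: list) -> str:
        """One conversational turn: keep dispatching tool calls until the
        model returns a plain answer."""
        tools = list(TOOL_USAGE_REGISTRY.values())
        while True:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # assumption; any tool-capable model works
                messages=messages,
                tools=tools,
            )
            message = response.choices[0].message
            if not message.tool_calls:
                return message.content or ""  # no tool requested: final answer
            messages.append(message)
            # The model may request several tool calls; run each one and
            # append its result so the model can see it on the next pass.
            for call in message.tool_calls:
                fn = FUNCTION_REGISTRY[call.function.name]
                result = fn(**json.loads(call.function.arguments))
                messages.append({
                    "role": "tool",
                    "tool_call_id": call.id,
                    "content": result,
                })

    print(run_turn([{"role": "user", "content": "What should I wear tomorrow in San Francisco, CA?"}]))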
One of my recent home organisation projects has been sorting out my LEGO collection. I have a bunch of sets which are mixed together in one messy box, and I’m trying to separate bricks back into distinct sets. My collection is nowhere near large enough to be worth sorting by individual parts, and I hope that breaking down by set will make it all easier to manage and store.

I’ve been creating spreadsheets to track the parts in each set, and count them out as I find them. I briefly hinted at this in my post about looking at images in spreadsheets, where I included a screenshot of one of my inventory spreadsheets. These spreadsheets have been invaluable – I can see exactly what pieces I need, and what pieces I’m missing. Without them, I wouldn’t even attempt this. I’m about to pause this cleanup and work on some other things, but first I wanted to write some notes on how I’m creating these spreadsheets – I’ll probably want them again in the future.

Getting a list of parts in a set

There are various ways to get a list of parts in a LEGO set:

- Newer LEGO sets include a list of parts at the back of the printed instructions
- You can get a list from a LEGO-owned website like LEGO.com or BrickLink
- There are community-maintained databases on sites like Rebrickable

I decided to use the community-maintained lists from Rebrickable – they seem very accurate in my experience, and you can download daily snapshots of their entire catalog database. The latter is very powerful, because now I can load the database into my tools of choice, and slice and dice the data in fun and interesting ways. Downloading their entire database is less than 15MB – which is to say, two-thirds the size of just opening the LEGO.com homepage. Bargain!

Putting Rebrickable data in a SQLite database

My tool of choice is SQLite. I slept on this for years, but I’ve come to realise just how powerful and useful it can be. A big part of what made me realise the power of SQLite is seeing Simon Willison’s work with Datasette, and some of the cool things he’s built on top of SQLite. Simon also publishes a command-line tool, sqlite-utils, for manipulating SQLite databases, and that’s what I’ve been using to create my spreadsheets.

Here’s my process:

1. Create a Python virtual environment, and install sqlite-utils:

    python3 -m venv .venv
    source .venv/bin/activate
    pip install sqlite-utils

At time of writing, the latest version of sqlite-utils is 3.38.

2. Download the Rebrickable database tables I care about, uncompress them, and load them into a SQLite database:

    curl -O 'https://cdn.rebrickable.com/media/downloads/colors.csv.gz'
    curl -O 'https://cdn.rebrickable.com/media/downloads/parts.csv.gz'
    curl -O 'https://cdn.rebrickable.com/media/downloads/inventories.csv.gz'
    curl -O 'https://cdn.rebrickable.com/media/downloads/inventory_parts.csv.gz'

    gunzip colors.csv.gz
    gunzip parts.csv.gz
    gunzip inventories.csv.gz
    gunzip inventory_parts.csv.gz

    sqlite-utils insert lego_parts.db colors colors.csv --csv
    sqlite-utils insert lego_parts.db parts parts.csv --csv
    sqlite-utils insert lego_parts.db inventories inventories.csv --csv
    sqlite-utils insert lego_parts.db inventory_parts inventory_parts.csv --csv

The inventory_parts table describes how many of each part there are in a set: “Set S contains 10 of part P in colour C.” The parts and colors tables contain detailed information about each part and color. The inventories table matches the official LEGO set numbers to the inventory IDs in Rebrickable’s database.
“The set sold by LEGO as 6616-1 has ID 4159 in the inventory table.”

3. Run a SQLite query that gets information from the different tables to tell me about all the parts in a particular set:

    SELECT ip.img_url, ip.quantity, ip.is_spare, c.name as color, p.name, ip.part_num
    FROM inventory_parts ip
    JOIN inventories i ON ip.inventory_id = i.id
    JOIN parts p ON ip.part_num = p.part_num
    JOIN colors c ON ip.color_id = c.id
    WHERE i.set_num = '6616-1';

Or use sqlite-utils to export the query results as a spreadsheet:

    sqlite-utils lego_parts.db "
      SELECT ip.img_url, ip.quantity, ip.is_spare, c.name as color, p.name, ip.part_num
      FROM inventory_parts ip
      JOIN inventories i ON ip.inventory_id = i.id
      JOIN parts p ON ip.part_num = p.part_num
      JOIN colors c ON ip.color_id = c.id
      WHERE i.set_num = '6616-1';" --csv > 6616-1.csv

Here are the first few lines of that CSV:

    img_url,quantity,is_spare,color,name,part_num
    https://cdn.rebrickable.com/media/parts/photos/9999/23064-9999-e6da02af-9e23-44cd-a475-16f30db9c527.jpg,1,False,[No Color/Any Color],Sticker Sheet for Set 6616-1,23064
    https://cdn.rebrickable.com/media/parts/elements/4523412.jpg,2,False,White,Flag 2 x 2 Square [Thin Clips] with Chequered Print,2335pr0019
    https://cdn.rebrickable.com/media/parts/photos/15/2335px13-15-33ae3ea3-9921-45fc-b7f0-0cd40203f749.jpg,2,False,White,Flag 2 x 2 Square [Thin Clips] with Octan Logo Print,2335pr0024
    https://cdn.rebrickable.com/media/parts/elements/4141999.jpg,4,False,Green,Tile Special 1 x 2 Grille with Bottom Groove,2412b
    https://cdn.rebrickable.com/media/parts/elements/4125254.jpg,4,False,Orange,Tile Special 1 x 2 Grille with Bottom Groove,2412b

4. Import that spreadsheet into Google Sheets, then add a couple of columns:

- I add a column image where every cell has the formula =IMAGE(…) that references the image URL. This gives me an inline image, so I know what that brick looks like.
- I add a new column quantity I have where every cell starts at 0, which is where I’ll count bricks as I find them.
- I add a new column remaining to find which counts the difference between quantity and quantity I have. Then I can highlight or filter for rows where this is non-zero, so I can see the bricks I still need to find.

If you’re interested, here’s an example spreadsheet that has a clean inventory.

It took me a while to refine the SQL query, but now I have it, I can create a new spreadsheet in less than a minute. One of the things I’ve realised over the last year or so is how powerful “get the data into SQLite” can be – it opens the door to all sorts of interesting queries and questions, with a relatively small amount of code required. I’m sure I could write a custom script just for this task, but it wouldn’t be as concise or flexible.
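(If you’d rather run the export from Python than the sqlite-utils CLI, here’s a roughly equivalent sketch using only the standard library. The export_set helper is my own illustration, assuming the lego_parts.db file built above.)

    # My own sketch: stdlib equivalent of the sqlite-utils CSV export above.
    import csv
    import sqlite3

    QUERY = """
    SELECT ip.img_url, ip.quantity, ip.is_spare, c.name AS color, p.name, ip.part_num
    FROM inventory_parts ip
    JOIN inventories i ON ip.inventory_id = i.id
    JOIN parts p ON ip.part_num = p.part_num
    JOIN colors c ON ip.color_id = c.id
    WHERE i.set_num = ?
    """

    def export_set(set_num: str, db_path: str = "lego_parts.db") -> None:
        """Write the parts list for `set_num` to a CSV named after the set."""
        conn = sqlite3.connect(db_path)
        cursor = conn.execute(QUERY, (set_num,))
        with open(f"{set_num}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cursor.description])  # header row
            writer.writerows(cursor)
        conn.close()

    export_set("6616-1")  # produces 6616-1.csv, same shape as the CLI output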
A lieutenant colonel in the Soviet Air Defense Forces prevented the end of human civilization on September 26th, 1983. His name was Stanislav Petrov.

Protocol dictated that the Soviet Union would retaliate against any nuclear strikes sent by the United States. This was a policy of mutually assured destruction, a doctrine that compels a horrifying logical conclusion. The second- and third-stage effects of this type of exchange would be even more catastrophic. Allies for each side would likely be pulled into the conflict. The resulting nuclear winter was projected to lead to 2 billion deaths due to starvation. This is to say nothing about those who would have been unfortunate enough to have survived.

Petrov’s job was to monitor Oko, the computerized warning system built to centralize Soviet satellite communications. Around midnight, he received a report that one of the satellites had detected the infrared signature of a single launch of a United States ICBM. While Petrov was deciding what to do about this report, the system detected four more incoming missile launches. He had minutes to make a choice about what to do. It is impossible to imagine the amount of pressure placed on him at this moment. Source: Stanislav Petrov, Soviet officer credited with averting nuclear war, dies at 77 by Schwartzreport.

Petrov lived in a world of deterministic systems. The technologies that powered these warning systems have outputs that are guaranteed, provided the proper inputs are provided. However, deterministic does not mean infallible. The only reason you are alive and reading this is because Petrov understood that the systems he observed were capable of error. He was suspicious of what he was seeing reported, and chose not to escalate a retaliatory strike. There were two factors guiding his decision:

- A surprise attack would most likely have used hundreds of missiles, and not just five.
- The allegedly foolproof Oko system was new and prone to errors.

An error in a deterministic system can still lead to expected outputs being generated. For the Oko system, infrared reflections of the sun shining off the tops of clouds created a false positive that was interpreted as detection of a nuclear launch event. Source: US-K History by Kosmonavtika.

The concept of erroneous truth is a deep thing to internalize, as computerized systems are presented as omniscient, indefective, and absolute. Petrov’s rewards for this action were reprimands, reassignment, and denial of promotion. This was likely for embarrassing his superiors by the politically inconvenient shedding of light on issues with the Oko system. A coerced early retirement caused a nervous breakdown, likely from having to grapple with the weight of his decision. It was only in the 1990s—after the fall of the Soviet Union—that his actions were discovered internationally and celebrated.

Stanislav Petrov was given the recognition he deserved, including being honored by the United Nations, awarded the Dresden Peace Prize, featured in a documentary, and being able to visit a Minuteman missile silo in the United States.

On January 31st, 2025, OpenAI struck a deal with the United States government to use its AI product for nuclear weapon security. It is unclear how this technology will be used, where, and to what extent. It is also unclear how OpenAI’s systems function, as they are black box technologies. What is known is that LLM-generated responses—the product OpenAI sells—are non-deterministic.
Non-deterministic systems don’t have guaranteed outputs from their inputs. In addition, LLM-based technology hallucinates—it invents content with no self-knowledge that it is a falsehood. Non-deterministic systems that are computerized are also perceived as authoritative, the same as their deterministic peers. It is not a question of how the output is generated; it is one of the output being perceived to come from a machine.

These are terrifying things to know. Consider not only the systems this technology is being applied to, but also the thoughtless speed of their integration. Then consider how we’ve historically been conditioned and rewarded to interpret the output of these systems, and then how we perceive and treat skeptics. We don’t live in a purely deterministic world of technology anymore.

Stanislav Petrov died on September 18th, 2017, before this change occurred. I would be incredibly curious to know his thoughts about our current reality, as well as the increasing abdication of human monitoring of automated systems in favor of notably biased, supposed “AI solutions.”

In acknowledging Petrov’s skepticism in a time of mania and political instability, we acknowledge a quote from former U.S. Secretary of Defense William J. Perry’s memoir about the incident:

    [Oko’s false positives] illustrates the immense danger of placing our fate in the hands of automated systems that are susceptible to failure and human beings who are fallible.
In our *Ambsheets* project, we are exploring a small extension to the familiar spreadsheet: **what if a single spreadsheet cell could hold multiple values at once**?
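(As a toy illustration of the semantics this question implies, assuming cartesian-product evaluation over multi-valued cells; this is my own sketch, not actual Ambsheets code.)

    # Toy sketch: a cell holds one or more values, and a formula over cells
    # yields one result per combination of its inputs (a cartesian product).
    from itertools import product

    Cell = list[float]  # a cell is just a list of possible values

    def combine(op, *cells: Cell) -> Cell:
        """Apply `op` to every combination of values drawn from the cells."""
        return [op(*combo) for combo in product(*cells)]

    price: Cell = [10.0, 12.0, 15.0]  # one cell, three scenario values
    quantity: Cell = [100.0]          # an ordinary single-valued cell

    revenue = combine(lambda p, q: p * q, price, quantity)
    print(revenue)  # [1000.0, 1200.0, 1500.0]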