More from Liz Denys
I've been biking in Brooklyn for a few years now! It's hard for me to believe it, but I'm now one of the people other bicyclists ask questions to. I decided to make a zine that answers the most common of those questions: Bike Brooklyn! is a zine that touches on everything I wish I knew when I started biking in Brooklyn. A lot of this information can be found in other resources, but I wanted to collect it in one place. I hope to update this zine as we get significantly more safe bike infrastructure in Brooklyn and as laws change to make streets safer for bicyclists (and everyone) over time, but it's still important to note that each release will reflect a specific snapshot in time of bicycling in Brooklyn. All text and illustrations in the zine are my own. Thank you to Matt Denys, Geoffrey Thomas, Alex Morano, Saskia Haegens, Vishnu Reddy, Ben Turndorf, Thomas Nayem-Huzij, and Ryan Christman for suggestions for content and help with proofreading. This zine is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, so you can copy and distribute it for noncommercial purposes in unadapted form as long as you give credit to me. Check out the Bike Brooklyn! zine on the web or download PDFs to read digitally or print here!
I found inspiration for this pitcher's glaze design in the night sky. Whenever I feel lost, I know I can always look up and be under the same night sky, no matter where I am. Whenever I feel alone, I know I can always look up and feel connected to humanity, everyone else looking up at the same sky. Whenever I feel all is lost, the vast darkness in the night sky reminds me there are so many possibilities out there that I haven't even thought of yet. My studio practice is on a partial pause for an unknown amount of time right now; every piece I make is stuck in the greenware stage as I continue to save up to buy kilns and build out the glaze and kiln area. In some moments, this pause feels like a rare opportunity to take time to make more experimental and labor-intensive pieces, but in other moments, I am overwhelmed by the feeling that pieces without a completion timeline on the horizon are just not worth doing. It's easy to bask in fleeting bursts of inspiration; it's harder to push through the periods where nothing feels worth doing. It's especially hard when the waves of anxiety about the unknown future of my studio practice and the waves of anxiety about the direction of the US government and the future of my country come at me at the same time. I try to ground myself, to keep myself from spiraling. I name things I can see, smell, hear. At night, I look to the dark sky. When I can, I reread Rebecca Solnit's Hope in the Dark: "Hope locates itself in the premises that we don't know what will happen and that in the spaciousness of uncertainty is room to act. When you recognize uncertainty, you recognize that you may be able to influence the outcomes–you alone or you in concert with a few dozen or several million others. Hope is an embrace of the unknown and the unknowable, an alternative to the certainty of both optimists and pessimists. Optimists think it will all be fine without our involvement; pessimists take the opposite position; both excuse themselves from acting. It's the belief that what we do matters even though how and when it may matter, who and what it may impact, are not things we can know beforehand. We may not, in fact, know them afterward either, but they matter all the same, and history is full of people whose influence was most powerful after they were gone." May we all find hope in the dark and choose to act.
When I was glazing this v60-style cone, I was thinking of rising sea levels, eroding beaches, and melting ice caps. Trying to tackle large challenges like climate change is overwhelming in the best of times, and these are not the best of times. There are many things we can personally do to reduce our carbon footprints and fight climate change, but if we want to have any chance to succeed, we need to join together and organize. If you're new to organizing, connect with local groups already doing the work you're interested in, and don't forget to look for groups pushing for change outside of just the national stage. Creating denser, walkable, transit-oriented communities is one of our strongest tools for a sustainable, climate-friendly future. Generally, the bulk of this work in the US happens at the state and local levels. In addition to the climate benefits, it's essential work to keep communities together and fight displacement. I personally spend a lot of my spare time organizing locally around this issue to help ensure NYC and New York State stay places everyone can thrive. I focus especially on pro-housing policies and improving transportation options and reliability so climate-friendly, less car-dependent lifestyles - and New York's relative safety - can be for everyone.
Clay shrinks as it dries and even more as it's fired, so it's useful to have a way to estimate the final size of in-progress work - especially if you're making multiples or trying to fit pieces together. One way to do this is with shrinkage rulers. I figured I'd design my own shrinkage rulers and provide a way for folks to make them themselves since ceramic tool costs can add up.

To make your shrinkage rulers:
1. Download either the colorful printable shrinkage rulers or the black and white printable shrinkage rulers.
2. Print at 100% size. (These files are both 400 dpi.)
3. Verify that the 0% shrinkage standard ruler at the top matches the size of an existing regular ruler you have. This quick calibration step will make sure nothing ended up out of scale during printing!
4. Cut out your rulers.
5. Optionally, laminate them or cover them in packing tape to help them last longer.

To use your shrinkage rulers:
1. If you're using commercial clay, look up how much your clay is estimated to shrink. If you're using a blend of clays or a custom clay, you'll have to calculate how much your clay shrinks. An easy way to do this is to measure the length of a wet piece right after you form it and again after it's been through its glaze firing. You can then calculate the estimated shrinkage rate: (wet length − fired length) ÷ wet length × 100. (There's a worked example at the end of this post.)
2. Pick the shrinkage ruler that corresponds to your clay's shrinkage rate. If you're between shrinkage rates, you can estimate with a nearby size. Remember that shrinkage rates are estimates, and a piece's actual shrinkage depends on many variables, including how wet your clay is and how close it is to its original composition (this can change with repeated recycling).
3. Measure your wet piece with the shrinkage ruler! The length shown is the expected length of your piece's dimension when fired.

The fine print: Reminder that shrinkage rulers only give estimated lengths! You're welcome to print these shrinkage rulers for yourself or your business. You may use the printed shrinkage rulers privately, even in commercial applications (I hope they help your ceramic art and business!), provided you do not redistribute or resell the shrinkage rulers themselves in any form, digital or physical.

Footnotes: If you're working on a jar or something else that needs to fit together tightly, it's better not to rely on shrinkage rulers to get a perfect fit. In my experience, you ideally want to make the vessel and the lid as close in time as possible and have them dry together and fire together through as many phases as possible.↩
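A quick worked example of the shrinkage-rate calculation above, with made-up numbers rather than measurements from the post: if a test piece is 10 cm long when freshly formed and measures 8.8 cm after its glaze firing, the estimated shrinkage rate is (10 − 8.8) ÷ 10 × 100 = 12%, so you'd reach for the 12% shrinkage ruler when measuring wet work in that clay body.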
I'm continuing my clay body reviews series with two very heavily grogged "sculpture" clays I've used. Note that I currently practice in a community studio that glaze fires to cone 6 in oxidation, so my observations reflect that.

Standard 420 Sculpture:
- Cone 6: average shrinkage 8.0%, absorption 1.5%
- Light straw when fired to cone 6: more yellow/beige than most white stonewares, so the color is something to consider in your final vision (or engobe in something else)
- So much grog that it's best described as working with wet sand, non-derogatory
- I've made complicated open coil-based structures with this clay that were formed across many studio sessions over a couple days, and they've survived without cracking! Wet clay attaches readily to leather hard and even slightly dry clay. Wrapping my works in dry cleaning bags until done and dry before bisque was enough - I was worried I'd have to make a damp box, but not with this clay!
- The grog is white and grey, and it comes in a variety of sizes, including some that are visually rather large. The grog really shows if you sand to smooth the surface. I typically dislike how this looks - the result ends up looking more like concrete than clay.
- If you use this for functional ware or anything you move around a lot, you'll certainly want to sand the bottom to protect tables and counters, since the groggy surface is extra rough. Burnishing alone doesn't usually make this clay smooth.
- Can be thrown when very soft, but your hands will feel scratched if you're not used to it!
- Angled slab joins join readily, and support coils press in quickly and easily.
- Some members of my studio prefer to make plates with this clay because the high level of grog significantly reduces warping. I personally prefer to make plates with clays with far less grog that I dry very slowly. High palpable grog content means a weaker object, and I prefer more strength in objects that are handled frequently.
- Can be marbled with 798, but needs to dry slowly.

Standard 420's straw color shows in the unglazed section of this planter's drip tray, and there's also some flashing from the glaze near the edges. I sanded the base of this piece so the slightly rough surface of Standard 420 wouldn't scratch tables, and you can see the contrast between the sanded bottom (outside) layer where the varied grogs are revealed and the rougher surfaces of the other layers where they are still covered by clay particles.

This handbuilt planter was made of Standard 798 over multiple studio sessions. The sculptural coil structures attached readily with my regular slip and score process, and it dried evenly enough to not crack with my regular process of drying under a single plastic dry-cleaning bag.

This coiled wall art piece was made out of equal parts Standard 112 and Standard 420 wedged fully together. There's still ample grog in this hybrid clay body to work the same as the Standard 798 planter's coiled structure.

Standard 798 Black Sculpture:
- Cone 6: average shrinkage 10%, absorption 1.0%
- Dark brown when wet, fires to a gorgeous black at cone 6 when unglazed. Clear glazes will make this clay look brown, so you need to use a black like Coyote Black or Amaco Obsidian to preserve the black color if you want to glaze it.
- So much grog that it's best described as working with wet sand, non-derogatory.
- The grog is white, and provides a lovely contrast when on the surface or sanded to be revealed.
- Like 420, you'll probably want to sand the bottom of anything you'll pick up and put down more than once.
- Very similar working qualities to 420 - a true joy for handbuilding!
- Can be marbled with 420, but needs to dry slowly.
More in programming
One of my side quests at work is to get a simple feedback loop going where we can create knowledge bases that comment on Notion documents. I was curious if I could hook this together following these requirements:
- No custom code hosting
- Prompt is editable within Notion rather than requiring understanding of Zapier
- Should be fairly quick to set up

Ultimately, I was able to get it working. So here's a quick summary of how it works, some comments on why I don't particularly like this approach, then some more detailed comments on getting it working.

General approach
1. Create a Notion database of prompts.
2. Create a specific prompt for providing feedback on RFCs.
3. Create a Notion database for all RFCs.
4. Add an automation into this database that calls a Zapier webhook.
5. The Zapier webhook does a variety of things that culminate in using the RFC prompt to provide feedback on the specific RFC as a top-level comment in the RFC.

Altogether this works fairly well.

The challenges with this approach
The best thing about this approach is that it actually works, and it works fairly well. However, as we dig into the implementation details, you'll also see that a series of things are unnaturally difficult with Zapier:
- Managing rich text in Notion, because it requires navigating the blocks data structure
- Allowing looping API constructs, such as making it straightforward to leave multiple comments on specific blocks rather than a single top-level comment
- Notion only allows up to 2,000 characters per block, but chunking into multiple blocks is moderately unnatural (there's a rough sketch of this chunking at the end of this post). In a true Python environment, it would be trivial to translate to and from Markdown using something like md2notion.

Ultimately, I could only recommend this approach as an initial validation. It's definitely not the right long-term resting place for this kind of approach.

Zapier implementation
I already covered the Notion side of the integration, so let's dig into the Zapier pieces a bit. Overall it had eight steps. I've skipped the first step, which was just a default webhook receiver.

The second step was retrieving a statically defined Notion page containing the prompt. (In later steps I just use the Notion API directly, which I would do here if I was redoing this, but this worked too. The advantage of the API is that it returns a real JSON object; this doesn't, probably because I didn't specify the content-type header or some such.) This is the configuration page of step 2, where I specify the prompt's page explicitly.

Probably because I didn't set content-type, I think I was getting POST-formatted data here, so I just regular expressed the data out. It's a bit sloppy, but hey, it worked, so there's that.

Here is using the Notion API request tool to retrieve the updated RFC (as opposed to the prompt, which we already retrieved). The API request returns a JSON object that you can navigate without writing regular expressions, so that's nice.

Then we send both the prompt as system instructions and the RFC as the user message to OpenAI.

Then we pass the response from OpenAI to json.dumps to encode it for inclusion in an API call. This is mostly solving for newlines being \n rather than literal newlines.

Then we format the response into an API request to add a comment to the document.

Anyway, this wasn't beautiful, and I think you could do a much better job by just doing all of this in Python, but it's a workable proof of concept.
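To make the chunking point above concrete, here's a rough, hypothetical sketch (not part of the original Zapier setup, which avoided custom code) of what the comment-posting step could look like once you can write code: split the LLM response into pieces of at most 2,000 characters and send them as multiple rich text items in one Notion comment. The endpoint and payload shape follow Notion's public create-comment API as I understand it; names like postComment are made up.

// Hypothetical sketch: post a long LLM response as one Notion comment
// built from multiple rich text items of at most 2,000 characters each.
function splitIntoChunks(text, maxLen = 2000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxLen) {
    chunks.push(text.slice(i, i + maxLen));
  }
  return chunks;
}

async function postComment(pageId, longText, notionToken) {
  // one rich text item per chunk, all attached to a single comment
  const richText = splitIntoChunks(longText).map((content) => ({
    type: "text",
    text: { content },
  }));
  const res = await fetch("https://api.notion.com/v1/comments", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${notionToken}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ parent: { page_id: pageId }, rich_text: richText }),
  });
  return res.json();
}

In a code step or a small script this is a dozen lines; recreating the same loop out of Zapier's no-code building blocks is what makes it feel unnatural.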
When working on big JavaScript web apps, you can split the bundle into multiple chunks and import selected chunks lazily, only when needed. That makes the main bundle smaller, faster to load and parse.

How to lazy import a module?

let hljs = (await import("highlight.js")).default;

is the equivalent of:

import hljs from "highlight.js";

Now:

let libZip = await import("@zip.js/zip.js");
let blobReader = new libZip.BlobReader(blob);

is equivalent to:

import { BlobReader } from "@zip.js/zip.js";

It's simple if we call it from an async function, but sometimes we want to lazy load from a non-async function, so things might get more complicated:

let isLazyImporting = false;
let hljs;
let markdownIt;
let markdownItAnchor;

async function lazyImports() {
  if (isLazyImporting) return;
  isLazyImporting = true;
  let modules = await Promise.all([
    import("highlight.js"),
    import("markdown-it"),
    import("markdown-it-anchor"),
  ]);
  hljs = modules[0].default;
  markdownIt = modules[1].default;
  markdownItAnchor = modules[2].default;
}

We can run it from a non-async function:

function doit() {
  lazyImports().then(() => {
    if (hljs) {
      // use hljs to do something
    }
  });
}

I've included protection against kicking off the lazy import more than once. That means on the second and n-th call we might not yet have the modules loaded, so hljs might still be undefined (one way around that is sketched below).
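A small tweak of my own (not from the post above) that avoids that gap: cache the promise from the first call instead of using a boolean flag. Every caller then awaits the same in-flight import, so the top-level module variables are guaranteed to be assigned by the time the then callback runs:

let lazyImportsPromise;

function lazyImports() {
  // reuse the same promise so repeat and concurrent callers share one import
  if (!lazyImportsPromise) {
    lazyImportsPromise = Promise.all([
      import("highlight.js"),
      import("markdown-it"),
      import("markdown-it-anchor"),
    ]).then(([hljsMod, mdMod, mdAnchorMod]) => {
      // assign the same top-level variables declared earlier
      hljs = hljsMod.default;
      markdownIt = mdMod.default;
      markdownItAnchor = mdAnchorMod.default;
    });
  }
  return lazyImportsPromise;
}

function doit() {
  lazyImports().then(() => {
    // hljs, markdownIt and markdownItAnchor are all set here
  });
}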
A step-by-step walkthrough of the toy kill program using raw Linux syscalls.
For managers who have spent a long time reporting to a specific leader or working in an organization with well-understood goals, it's easy to develop skill gaps without realizing it. Usually this happens because those skills were not particularly important in the environment you grew up in. You may become extremely confident in your existing skills, enter a new organization that requires a different mix of competencies, and promptly fall on your face. There are a few common varieties of this, but the one I want to discuss here is when managers grow up in an organization that operates from top-down plans ("orchestration-heavy roles") and then find themselves in a sufficiently senior role, or in a bottom-up organization, that expects them to lead rather than orchestrate ("leadership-heavy roles").

Orchestration versus leadership

You can break the components of solving a problem down in a number of ways, and I'm not saying this is the perfect way to do it, but here are six important components of directing a team's work:
- Problem discovery: Identifying which problems to work on
- Problem selection: Aligning with your stakeholders on the problems you've identified
- Solution discovery: Identifying potential solutions to the selected problem
- Solution selection: Aligning with your stakeholders on the approach you've chosen
- Execution: Implementing the selected solution
- Ongoing revision: Keeping your team and stakeholders aligned as you evolve the plan

In an orchestration-heavy management role, you might focus only on the second half of these steps. In a leadership-heavy management role, you work on all six steps. Folks who've only worked in orchestration-heavy roles often have no idea that they are expected to perform all of these. So, yes, there's a skill gap in performing the work, but more importantly there's an awareness gap that the work actually exists to be done.

Here are a few ways you can identify an orchestration-heavy manager who doesn't quite understand their current, leadership-heavy circumstances:
- Focuses on prioritization as "solution of first resort." When you're not allowed to change the problem or the approach, prioritization becomes one of the best tools you have.
- Accepts problems and solutions as presented. If a stakeholder asks for something, questions are around priority rather than whether the project makes sense to do at all, or suggestions of alternative approaches. There's no habit of questioning whether the request makes sense—that's left to the stakeholder or to more senior functional leadership.
- Focuses on sprint planning and process. With the problem and approach fixed, protecting your team from interruption and disruption is one of your most valuable tools. Operating strictly to a sprint cadence (changing plans only at the start of each sprint) is a powerful technique.

All of these things are still valuable in a leadership-heavy role, but they just aren't necessarily the most valuable things you could be doing.

Operating in a leadership-heavy role

There is a steep learning curve for managers who find themselves in a leadership-heavy role, because it's a much more expansive role. However, it's important to realize that there are no senior engineering leadership roles focused solely on orchestration. You either learn this leadership style or you get stuck in mid-level roles (even in organizations that lean orchestration-heavy). Further, the technology industry generally believes it overinvested in orchestration-heavy roles in the 2010s.
Consequently, companies are eliminating many of those roles and preventing similar roles from being created in the next generation of firms. There's a pervasive narrative attributing this shift to the increased productivity brought by LLMs, but I'm skeptical of that relationship—this change was already underway before LLMs became prominent.

My advice for folks working through the leadership-heavy role learning curve is:

Think of your job's core loop as four steps:
1. Identify the problems your team should be working on
2. Decide on a destination that solves those problems
3. Explain to your team, stakeholders, and executives the path the team will follow to reach that destination
4. Communicate both data and narratives that provide evidence you're walking that path successfully
If you are not doing these four things, you are not performing your full role, even if people say you do some parts well. Similarly, if you want to get promoted or secure more headcount, those four steps are the path to doing so (I previously discussed this in How to get more headcount).

Ask your team for priorities and problems to solve. Mining for bottom-up projects is a critical part of your role. If you wait only for top-down and lateral priorities, you aren't performing the first step of the core loop. It's easy to miss this expectation—it's invisible to you but obvious to everyone else, so they don't realize it needs to be said. If you're not sure, ask. If your leadership chain is running the core loop for your team, it's because they lack evidence that you can run it yourself. That's a bad sign. What's "business as usual" in an orchestration-heavy role actually signals a struggling manager in a leadership-heavy role.

Get your projects prioritized by following the core loop. If you have a major problem on your team and wonder why it isn't getting solved, that's on you. Leadership-heavy roles won't have someone else telling you how to frame your team's work—unless they think you're doing a bad job.

Picking the right problems and solutions is your highest-leverage work. No, this is not only your product manager's job or your tech lead's—it is your job. It's also theirs, but leadership overlaps because getting it right is so valuable. Generalizing a bit, your focus now is effectiveness of your team's work, not efficiency in implementing it. Moving quickly on the wrong problem has no value.

Understand your domain and technology in detail. You don't have to write all the software—but you should have written some simple pull requests to verify you can reason about the codebase. You don't have to author every product requirement or architecture proposal, but you should write one occasionally to prove you understand the work. If you don't feel capable of that, that's okay. But you need to urgently write down steps you'll take to close that gap and share that plan with your team and manager. They currently see you as not meeting expectations and want to know how you'll start meeting them. If you think that gap cannot be closed or that it's unreasonable to expect you to close it, you misunderstand your role. Some organizations will allow you to misunderstand your role for a long time, provided you perform parts of it well, but they rarely promote you under those circumstances—and most won't tolerate it for senior leaders.

Align with your team and cross-functional stakeholders as much as you align with your executive.
If your executive is wrong and you follow them, it is your fault that your team and stakeholders are upset: part of your job is changing your executive’s mind. Yes, it can feel unfair if you’re the type to blame everything on your executive. But it’s still true: expecting your executive to get everything right is a sure way to feel superior without accomplishing much. Now that I’ve shared my perspective, I admit I’m being a bit extreme on purpose—people who don’t pick up on this tend to dispute its validity strongly unless there is no room to debate. There is room for nuance, but if you think my entire point is invalid, I encourage you to have a direct conversation with your manager and team about their expectations and how they feel you’re meeting them.
When building a large web app it's possible to split the .js bundle into chunks and lazy load certain parts only when needed. For example, in Edna I use the markdown-it and highlight.js libraries only in a certain scenario. By putting them in their own separate chunk, I save almost 1 MB of uncompressed JavaScript in the main bundle. Faster to download, faster to run.

../dist/assets/markdownit-hljs-DbctGXX9.js 1,087.33 kB │ gzip: 358.42 kB

To split into chunks you configure rollup in vite.config.js:

function manualChunks(id) {
  // partition files into chunks
}

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: manualChunks,
      },
    },
  },
});

The manualChunks() function takes the path of the file (I don't know why everyone calls it an id). If you return a string for a given path, you tell rollup to bundle that file in a chunk of that name. If you return nothing (i.e. undefined), rollup will decide how to chunk automatically, most likely putting everything into a single chunk. It gets called for .css files, .js files, and probably others.

Here's my hard-won wisdom:
- console.log(id) when working on manualChunks() to see what files are processed
- chunk specific modules from node_modules that are to be lazy loaded
- everything else from node_modules goes into a vendor chunk
- the rest is my own code and goes into the main chunk as decided by rollup

Seems simple enough:

function manualChunks(id) {
  console.log(id);
  const chunksDef = [
    ["/@zip.js/zip.js/", "zipjs"],
    ["/prettier/", "prettier"],
    // markdown-it and highlight.js are used together in askai.svelte
    [
      "/markdown-it/",
      "/markdown-it-anchor/",
      "/highlight.js/",
      "/entities/",
      "/linkify-it/",
      "/mdurl/",
      "/punycode.js/",
      "/uc.micro/",
      "markdownit-hljs",
    ],
  ];
  for (let def of chunksDef) {
    let n = def.length;
    for (let i = 0; i < n - 1; i++) {
      if (id.includes(def[i])) {
        return def[n - 1];
      }
    }
  }
  // bundle all other 3rd-party modules into a single vendor.js module
  if (id.includes("/node_modules/")) {
    return "vendor";
  }
  // when we return undefined, rollup will decide
}

This is a real example from Edna. I've put zip.js, prettier, and markdown-it + markdown-it-anchor + highlight.js into their own chunks, which I lazily import. Things to note:
- Order is important. If I matched /node_modules/ first, everything would end up in the vendor bundle.
- id is the full path of the bundled file in Unix format, e.g. C:/Users/kjk/src/elaris/node_modules/prettier/standalone.mjs. People seem to match the path against just the package name, like prettier. I match against /prettier/ so that if some file happens to have the string prettier in it, it won't be accidentally put in the prettier chunk.
- /entities/, /mdurl/ etc. are used by markdown-it so they should be included in its chunk. That's where console.log(id) is helpful. I saw modules that I didn't explicitly put in package.json, which means they are implicit dependencies. I used bun.lock to see which package depends on those mysterious packages, and that's how I found what is used by markdown-it.

There were 2 remaining problems. I also had this in my code:

import "highlight.js/styles/github.css";

manualChunks was called for "highlight.js/styles/github.css", to which I returned markdownit-hljs, so it was put in its own chunk. Which was too much, because I didn't want to lazy import a small CSS file, so I told rollup to put all .css files in the main CSS chunk:

function manualChunks(id) {
  // pack all .css files in the same chunk
  if (id.endsWith(".css")) {
    return;
  }
  // ... rest of the code
}

There was one more thing that was a big pain in the ass to debug.
To verify things are properly chunked I opened Dev Tools in Chrome and looked at the network tab. To my surprise, markdownit-hljs was loaded immediately. After lots of debugging and research, it turns out that vite bundles some helper functions. Because I didn't specify explicitly which chunk they should go into, rollup decided to put them in the markdownit-hljs chunk. Because the main chunk was using those helpers, it had to import that chunk, defeating my cunning plan to load it lazily later. The fix was to direct those known helper functions into the vendor bundle:

function manualChunks(id) {
  // ... other code

  // bundle all other 3rd-party modules into a single vendor.js module
  if (
    id.includes("/node_modules/") ||
    ROLLUP_COMMON_MODULES.some((commonModule) => id.includes(commonModule))
  ) {
    return "vendor";
  }
  // ... rest of the code
}

You can read more at:
https://github.com/vitejs/vite/issues/17823
https://github.com/vitejs/vite/issues/19758
https://github.com/vitejs/vite/issues/5189

Things might still go wrong. I got one invalid build that created a chunk that wouldn't parse in the browser. For that reason I suggest starting simple: an empty manualChunks() function that packs everything in one chunk. Then add the desired chunks one by one and verify the build after each change.

And how to lazy import things?

let markdownIt = (await import("markdown-it")).default;

is equivalent to the static import:

import markdownIt from "markdown-it";

Read more about lazy imports.