Pure CSS Bar Graphs with Graceful Mobile Fallbacks 2020-12-08

I recently published a new open source project, Flexbox Bar Graphs, and wanted to share a simple breakdown of how it was built. It isn't anything mind-blowing, but I like the idea of placing bar graphs in a web page with zero JavaScript.

So in the end, this is what our bar graphs will look like on desktop: The flexbox bar graph in desktop view

And this is how it will look on smaller devices: The flexbox bar graph on mobile devices

Let's get into the details!

The HTML

The main "secret" of this project is that our graphs are constructed out of HTML tables. Now before you freak out - this is perfectly fine and works in our favor quite well.

If the user has JS disabled --> they will still see our graphs
If the user has CSS disabled --> they will see a standard data table set

All bases are covered!

<!-- Using a basic table with our custom data-id -->
<table...
over a year ago


More from Making software better without sacrificing user experience.

Bringing dwm Shortcuts to GNOME

Bringing dwm Shortcuts to GNOME 2023-11-02

The dwm window manager is my standard "go-to" for most of my personal laptop environments. For desktops with larger, higher resolution monitors I tend to lean towards using GNOME. The GNOME DE is fairly solid for my own purposes. This article isn't going to deep dive into GNOME itself, but instead highlight some minor configuration changes I make to mimic a few dwm shortcuts. For reference, I'm running GNOME 45.0 on Ubuntu 23.10.

Setting Up Fixed Workspaces

When I use dwm I tend to have a hard-set number of tags to cycle through (normally 4-5). Unfortunately, dynamic rendering is the default for workspaces (i.e. tags) in GNOME. My personal preference is to set this to fixed. We can achieve this by opening Settings > Multitasking and selecting "Fixed number of workspaces". Screenshot of GNOME's Multitasking Settings GUI

Setting Our Keybindings

Now all that is left is to mimic dwm keyboard shortcuts, in this case: ALT + $num for switching between workspaces and ALT + SHIFT + $num for moving windows across workspaces. These keyboard shortcuts can be altered under Settings > Keyboard > View and Customize Shortcuts > Navigation. You'll want to make edits to both "Switch to workspace n" and "Move window to workspace n". Screenshot of GNOME's keyboard shortcut GUI: switch to workspace / Screenshot of GNOME's keyboard shortcut GUI: move window to workspace

That's it. You're free to include even more custom keyboard shortcuts (open web browser, lock screen, hibernate, etc.) but this is a solid starting point. Enjoy tweaking GNOME!
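As an aside, if you would rather script these tweaks than click through the Settings panels, gsettings can likely do the same job. Treat this as a sketch, not gospel: the keys below are the ones recent GNOME/Mutter releases expose, so verify them on your own install (gsettings list-recursively | grep workspace) before relying on it.

# Use a fixed number of workspaces instead of dynamic ones
gsettings set org.gnome.mutter dynamic-workspaces false
gsettings set org.gnome.desktop.wm.preferences num-workspaces 4

# dwm-style bindings: Alt+N switches to workspace N, Alt+Shift+N moves the focused window there
for i in 1 2 3 4; do
  gsettings set org.gnome.desktop.wm.keybindings "switch-to-workspace-$i" "['<Alt>$i']"
  gsettings set org.gnome.desktop.wm.keybindings "move-to-workspace-$i" "['<Shift><Alt>$i']"
done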

a year ago 83 votes
The X220 ThinkPad is the Best Laptop in the World

The X220 ThinkPad is the Best Laptop in the World 2023-09-26

The X220 ThinkPad is the greatest laptop ever made and you're wrong if you think otherwise. No laptop hardware has since surpassed the nearly perfect build of the X220. New devices continue to get thinner and more fragile. Useful ports are constantly discarded for the sake of "design". Functionality is no longer important to manufacturers. Repairability is purposefully removed to prevent users from truly "owning" their hardware. It's a mess out there. But thank goodness I still have my older, second-hand X220.

Specs

Before I get into the details explaining why this laptop is the very best of its kind, let's first take a look at my machine's basic specifications:

CPU: Intel i7-2640M (4) @ 3.500GHz
GPU: Intel 2nd Generation Core Processor
Memory: 16GB DDR3
OS: Arch Linux / OpenBSD
Resolution: 1366x768

With that out of the way, I will break down my thoughts on the X220 into five major sections: build quality, available ports, the keyboard, battery life, and repairability.

Build Quality

The X220 (like most of Lenovo's older X/T models) is built like a tank. Although made mostly of plastic, the device is still better equipped to handle drops and mishandling than more fragile devices (such as the MacBook Air or Framework). This is made further impressive since the X220 is actually composed of many smaller interconnected pieces (more on this later). A good litmus test I perform on most laptops is the "corner test": grab the base corner of a laptop in its open state and see if the device displays any noticeable give or flex. In the X220's case, it feels rock solid. The base remains stiff and bobbing the device causes no movement on the opened screen. I'm aware that holding a laptop in this position is certainly not a normal use case, but knowing it is built well enough to do so speaks volumes of its construction.

The X220 is also not a lightweight laptop. This might be viewed as a negative by most users, but I actually prefer it. I often become too cautious and end up "babying" thinner laptops out of fear of breakage. A minor drop from even the smallest height will severely damage these lighter devices. I have no such worries with my X220.

As for the laptop's screen and resolution: your mileage may vary. I have zero issues with the default display or the smaller aspect ratio. I wrote about how I stopped using an external monitor, so I might be a little biased. Overall, this laptop is a device you can snatch up off your desk, whip into your travel bag and be on your way. The rugged design and bulkier weight help put my mind at ease - which is something I can't say for newer laptop builds.

Ports

Ports. Ports everywhere. I don't think I need to explain how valuable it is to have functional ports on a laptop. Needing to carry around a bunch of dongles for ports that should already be on the device just seems silly. The X220 comes equipped with:

3 USB ports (one of them USB 3.0 on the i7 model)
DisplayPort
VGA
Ethernet
SD card reader
3.5mm jack
Ultrabay (SATA)
Wi-Fi hardware kill-switch

Incredibly versatile and ready for anything I throw at it!

Keyboard

The classic ThinkPad keyboards are simply that: classic. I don't think anyone could argue against these keyboards being the gold standard for laptops. It's commendable how Lenovo managed to package so much functionality into such a small amount of real estate. Most modern laptops lack helpful keys such as Print Screen, Home, End, and Screen Lock.
They're also an absolute joy to type on. The fact that so many people go out of their way to mod X230 ThinkPad models with X220 keyboards should tell you something... Why Lenovo moved away from these keyboards will always baffle me. (I know why they did it - I just think it's stupid). Did I mention these classic keyboards come with the extremely useful TrackPoint as well?

Battery Life

Author's Note: This section is very subjective. The age, quality, and size of the X220's battery can have a massive impact on benchmarks. I should also mention that I run very lightweight operating systems and use dwm as opposed to a heavier desktop environment. Just something to keep in mind.

The battery life on my own X220 is fantastic. I have a brand-new 9-cell that lasts for roughly 5-6 hours of daily work. Obviously these numbers don't come close to the incredible battery life of Apple's M1/M2 chip devices, but it's still quite competitive against other "newer" laptops on the market. Although, even if the uptime were lower than 5-6 hours, you have the ability to carry extra batteries with you. The beauty of swapping out your laptop's battery without needing to open up the device itself is fantastic. Others might whine about the annoyance of carrying an extra battery in their travel bag, but doing so is completely optional. A core part of what makes the X220 so wonderful is user control and choice. The X220's battery is another great example of that.

Repairability

The ability to completely disassemble and replace almost everything on the X220 has to be one of its biggest advantages over newer laptops. No glue to rip apart. No special proprietary tools required. Just some screws and plastic snaps. If someone as monkey-brained as me can completely strip down this laptop and put it back together again without issue, then the hardware designers have done something right! Best of all, Lenovo provides a very detailed hardware maintenance manual to help guide you through the entire process. My disassembled X220 when I was reapplying the CPU thermal paste.

Bonus Round: Price

I didn't list this in my initial section "breakdown" but it's something to consider. I purchased my X220 off eBay for $175 Canadian. While this machine came with an HDD instead of an SSD and only 8GB of total memory, that was still an incredible deal. I simply swapped out the hard drive for an SSD I had on hand, along with upgrading the DDR3 memory to its max of 16GB. Even if you needed to buy those components separately you would be hard-pressed to find such a good deal for a decent machine. Not to mention you would be helping to prevent more e-waste!

What More Can I Say?

Obviously the title and tone of this article are all in good fun. Try not to take things so seriously! But I still personally believe the X220 is one of, if not the best laptop in the world.

a year ago 108 votes
Installing Older Versions of MongoDB on Arch Linux

Installing Older Versions of MongoDB on Arch Linux 2023-09-11

I've recently been using Arch Linux for my main work environment on my ThinkPad X260. It's been great. As someone who is constantly drawn to minimalist operating systems such as Alpine or OpenBSD, it's nice to use something like Arch that boasts that same minimalist approach but with greater documentation/support. Another major reason for the switch was the need to run older versions of "services" locally. Most people would simply suggest using Docker or vmm, but I personally run projects in self-contained, personalized directories on my system itself. I am aware of the irony in that statement... but that's just my personal preference. So I thought I would share my process of setting up an older version of MongoDB (3.4 to be precise) on Arch Linux.

AUR to the Rescue

You will need to target the specific version of MongoDB using the very awesome AUR packages:

yay -S mongodb34-bin

Follow the instructions and you'll be good to go. Don't forget to create the /data/db directory and give it proper permissions:

mkdir -p /data/db/
chmod -R 777 /data/db

What About My "Tools"?

If you plan to use MongoDB, then you most likely want to utilize the core database tools (restore, dump, etc). The problem is you can't use the default mongodb-tools package when trying to work with older versions of MongoDB itself. The package will complain about conflicts and ask you to override your existing version. This is not what we want. So, you'll have to build from source locally:

git clone https://github.com/mongodb/mongo-tools
cd mongo-tools
./make build

Then you'll need to copy the built executables into the proper directory in order to use them from the terminal:

cp bin/* /usr/local/bin/

And that's it! Now you can run mongod directly or use systemctl to enable it by default. Hopefully this helps anyone else curious about running older (or even outdated!) versions of MongoDB.
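For reference, here is a rough sketch of that last step. The systemd unit name is an assumption on my part (the AUR package should install a mongodb.service, but double check with systemctl list-unit-files | grep mongo first):

# Run the server directly in the foreground against our data directory
mongod --dbpath /data/db

# Or hand it off to systemd and start it on every boot (unit name assumed, verify it exists)
sudo systemctl enable --now mongodb.service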

a year ago 51 votes
Converting HEIF Images with macOS Automator

Converting HEIF Images with macOS Automator 2023-07-21

Photos saved or exported from iOS to iCloud often end up in HEIF or HEIC format. Both macOS and iOS have no problem working with these formats, but a lot of software programs will not even recognize these filetypes. The obvious step would just be to convert them via an application or online service, right? Not so fast! Wouldn't it be much cleaner if we could simply right-click our HEIF or HEIC files and convert them directly in Finder? Well, I've got some good news for you...

Basic Requirements

You will need to have Homebrew installed
You will need to install the libheif package through Homebrew: brew install libheif

Creating our custom Automator script

For this example script we are going to convert the image to JPG format. You can freely change this to whatever format you wish (PNG, TIFF, etc.). We're just keeping things basic for this tutorial. Don't worry if you've never worked with Automator before because setting things up is incredibly simple.

1. Open the macOS Automator from the Applications folder
2. Select Quick Action from the first prompt
3. Set "Workflow receives current" to image files
4. Set the label "in" to Finder
5. From the left pane, select Library > Utilities
6. From the presented choices in the next pane, drag and drop Run Shell Script into the far right pane
7. Set "Pass input" to as arguments
8. Enter the following code as your script and type ⌘-S to save (name it something like "Convert HEIC/HEIF to JPG")

for f in "$@"
do
  /opt/homebrew/bin/heif-convert "$f" "${f%.*}.jpg"
done

Making Edits

If you ever need to edit this script (for example, changing the default format to PNG), navigate to your ~/Library/Services folder and open your custom HEIF Quick Action in the Automator application. Simple as that. Happy converting!

If you're interested, I also have some other Automator scripts available: Batch Converting Images to webp with macOS Automator, Convert Files to HTML with macOS Automator Quick Actions
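A quick note on the "Making Edits" section above: heif-convert picks its output encoder from the output file's extension, so the PNG variant mentioned there should only need the extension swapped. A minimal sketch, assuming the same Homebrew install path as the original script:

for f in "$@"
do
  # Same loop as before, only writing .png instead of .jpg
  /opt/homebrew/bin/heif-convert "$f" "${f%.*}.png"
done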

a year ago 24 votes
Blogging for 7 Years

Blogging for 7 Years 2023-06-24

My first public article was posted on June 28th 2016. That was seven years ago. In that time, quite a lot has changed in my life both personally and professionally. So, I figured it would be interesting to reflect on these years and document it for my own personal records. My hope is that this is something I could start doing every 5 or 10 years (if I can keep going that long!). This way, my blog also serves as a "time capsule" or museum of the past...

Fun Facts

This Blog: I originally started blogging on bradleytaunt.com using WordPress, but since then I have changed both my main domain and blog infrastructure multiple times. At a glance I have used:

Jekyll
Hugo
Blot
Static HTML/CSS
PHPetite
Shinobi
pblog
barf (currently using!)

Personal: As with anyone over time, the personal side of my life has seen the biggest updates:

Married the love of my life (after knowing each other for ~14 years!)
Moved out into rural Ontario for some peace and quiet
Had three wonderful kids with said wife (two boys and a girl)
Started noticing grey sprinkles in my stubble (I guess I can officially call myself a "grey beard"?)

Professionally:

Pivoted heavily into UX research and design for a handful of years (after working mostly with web front-ends)
Recently switched back into a more fullstack development role to challenge myself and learn more

Nothing Special

This post isn't anything ground-breaking but for me it's nice to reflect on the time passed and remember how much can change in such little time. Hopefully I'll be right back here in another 7 years and maybe you'll still be reading along with me!

a year ago 47 votes

More in programming

We'll always need junior programmers

We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks, we'll be able to invite a group of finalists to the next phase.

This highlights the folly of thinking that what it'll take to land a job like this is some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration, like a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied! It sounds self-evident, maybe, but I see questions time and again about it, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance.

I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply retarded. However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.

12 hours ago 4 votes
Requirements change until they don't

Recently I got a question on formal methods1: how does it help to mathematically model systems when the system requirements are constantly changing? It doesn't make sense to spend a lot of time proving a design works, and then deliver the product and find out it's not at all what the client needs. As the saying goes, the hard part is "building the right thing", not "building the thing right".

One possible response: "why write tests"? You shouldn't write tests, especially lots of unit tests ahead of time, if you might just throw them all away when the requirements change. This is a bad response because we all know the difference between writing tests and formal methods: testing is easy and FM is hard. Testing requires low cost for moderate correctness, FM requires high(ish) cost for high correctness. And when requirements are constantly changing, "high(ish) cost" isn't affordable and "high correctness" isn't worthwhile, because a kinda-okay solution that solves a customer's problem is infinitely better than a solid solution that doesn't.

But eventually you get something that solves the problem, and what then? Most of us don't work for Google, we can't axe features and products on a whim. If the client is happy with your solution, you are expected to support it. It should work when your customers run into new edge cases, or migrate all their computers to the next OS version, or expand into a market with shoddy internet. It should work when 10x as many customers are using 10x as many features. It should work when you add new features that come into conflict. And just as importantly, it should never stop solving their problem.

Canonical example: your feature involves processing requested tasks synchronously. At scale, this doesn't work, so to improve latency you make it asynchronous. Now it's eventually consistent, but your customers were depending on it being always consistent. Now it no longer does what they need, and has stopped solving their problems. Every successful requirement met spawns a new requirement: "keep this working". That requirement is permanent, or close enough to decide our long-term strategy. It takes active investment to keep a feature behaving the same as the world around it changes. (Is this all a pretentious way of saying "software maintenance is hard"? Maybe!)

Phase changes

In physics there's a concept of a phase transition. To raise the temperature of a gram of liquid water by 1°C, you have to add 4.184 joules of energy.2 This continues until you raise it to 100°C, then it stops. After you've added about two thousand more joules to that gram, it suddenly turns into steam. The energy of the system changes continuously but the form, or phase, changes discretely.

Software isn't physics but the idea works as a metaphor. A certain architecture handles a certain level of load, and past that you need a new architecture. Or a bunch of similar features are independently hardcoded until the system becomes too messy to understand, and you remodel the internals into something unified and extendable. Etc etc etc. It doesn't have to be a totally discrete phase transition, but there's definitely a "before" and "after" in the system form. Phase changes tend to lead to more intricacy/complexity in the system, meaning it's likely that a phase change will introduce new bugs into existing behaviors. Take the synchronous vs asynchronous case.
A very simple toy model of synchronous updates would be Set(key, val), which updates data[key] to val.3 A model of asynchronous updates would be AsyncSet(key, val, priority), which adds a (key, val, priority, server_time()) tuple to a tasks set; another process then asynchronously pulls a tuple (ordered by highest priority, then earliest time) and calls Set(key, val).

Here are some properties the client may need preserved as a requirement:

If AsyncSet(key, val, _, _) is called, then eventually db[key] = val (possibly violated if higher-priority tasks keep coming in)
If someone calls AsyncSet(key1, val1, low) and then AsyncSet(key2, val2, low), they should see the first update and then the second (linearizability, possibly violated if the requests go to different servers with different clock times)
If someone calls AsyncSet(key, val, _) and immediately reads db[key] they should get val (obviously violated, though the client may accept a slightly weaker property)

If the new system doesn't satisfy an existing customer requirement, it's prudent to fix the bug before releasing the new system. The customer doesn't notice or care that your system underwent a phase change. They'll just see that one day your product solves their problems, and the next day it suddenly doesn't.

This is one of the most common applications of formal methods. Both of those systems, and every one of those properties, is formally specifiable in a specification language. We can then automatically check that the new system satisfies the existing properties, and from there do things like automatically generate test suites. This does take a lot of work, so if your requirements are constantly changing, FM may not be worth the investment. But eventually requirements stop changing, and then you're stuck with them forever. That's where models shine.

1. As always, I'm using formal methods to mean the subdiscipline of formal specification of designs, leaving out the formal verification of code. Mostly because "formal specification" is really awkward to say. ↩
2. Also called a "calorie". The US "dietary Calorie" is actually a kilocalorie. ↩
3. This is all directly translatable to a TLA+ specification, I'm just describing it in English to avoid paying the syntax tax. ↩

9 hours ago 3 votes
How should Stripe deprecate APIs? (~2016)

While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe's most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn't just a technical design quirk, it's a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe's business success.

This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

Design for long API lifetime. APIs are not inherently durable. Instead we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn't use this API, then migrate to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you've developed? At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussions to ensure API Review members have sufficient context before making a decision.

We never deprecate APIs without an unavoidable requirement to do so. Even if it's technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by the API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support those workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe's Strong Customer Authentication requirements.
Even in that case the charge API continued to work as it did previously, albeit only for non-European Union payments.

We manage this policy's implied technical debt via an API translation layer. We release changed APIs into versions, tracked in our API version changelog. However, we only maintain one implementation internally, which is the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn't eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions.

In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future, it may be possible for us to make more backwards incompatible changes because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers—or a larger enterprise involving numerous stakeholders—handling external API changes can be particularly challenging. Even if this is only marginally true, we've modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn reduction work.

While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. Without an API change forcing them to change their integration, we believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.

We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can't assume that companies that make API changes are ill-informed. Rather it appears that they experience a meaningful technical debt tradeoff between the API provider and API consumers, and aren't willing to consistently absorb that technical debt internally.

Future compliance or security requirements—along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI—may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.

7 hours ago 2 votes
Brian Regan Helped Me Understand My Aversion to Job Titles

I like the job title “Design Engineer”. When required to label myself, I feel partial to that term (I should, I’ve written about it enough). Lately I’ve felt like the term is becoming more mainstream which, don’t get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle between two binaries. But — and I admit this is a me issue — once a title starts becoming mainstream, I want to use it less and less.

I was never totally sure why I felt this way. Shouldn’t I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way? These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan where he talks about his own penchant for not wanting to be easily defined:

I’ve tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying “this is what you do” then I would be like “Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.” [For example] I used to crouch around on stage all the time and people would go “Oh, he’s the guy who crouches around back and forth.” And I’m like, “I’ll show them, I will stand erect! Now what are you going to say?” And then they would go “You’re the guy who always feels stupid.” So I started [doing other things].

He continues, wondering aloud whether this aversion to being easily defined has actually hurt his career in terms of commercial growth:

I never wanted to be something you could easily define. I think, in some ways, that it’s held me back. I have a nice following, but I’m not huge. There are people who are huge, who are great, and deserve to be huge. I’ve never had that and sometimes I wonder, ”Well maybe it’s because I purposely don’t want to be a particular thing you can advertise or push.”

That struck a chord with me. It puts into words my current feelings towards the job title “Design Engineer” — or any job title for that matter. Seven or so years ago, I would’ve enthusiastically said, “I’m a Design Engineer!” To which many folks would’ve said, “What’s that?” But today I hesitate. If I say “I’m a Design Engineer” there are fewer follow-up questions. Nowadays that title elicits fewer questions and more (presumed) certainty.

I think I enjoy a title that elicits a “What’s that?” response, which allows me to explain myself in more than two or three words, without being put in a box. But once a title becomes mainstream, once people begin to assume they know what it means, I don’t like it anymore (speaking for myself, personally). As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn’t provide a presumed sense of certainty about who I am and what I do. And I get it, that runs counter to the very purpose of a job title which is why I don’t think it’s good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

Them: “Oh you’re a Designer? So you make mock-ups in Photoshop and somebody else implements them.”
Me: “I’ll show them, I’ll implement them myself! Now what are you gonna do?”
Them: “Oh, so you’re a Design Engineer? You design and build user interfaces on the front-end.”
Me: “I’ll show them, I’ll write a Node server and setup a database that powers my designs and interactions on the front-end. Now what are they gonna do?”
Them: “Oh, well, I’m not sure we have a term for that yet, maybe Full-stack Design Engineer?”
Me: “Oh yeah? I’ll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?”

[As you can see, I have some personal issues I need to work through…]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that’s not so easily definable, something you can’t sum up in two or three words. I’ve felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams where boundaries are much more permeable and crossing them is encouraged rather than discouraged.

All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you’re trying to take something as complicated and nuanced as an individual human being and reduce them to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels. “I’ve never wanted to be something you could easily define” stands at odds with the corporate attitude of, “Here’s the job req. for the role (i.e. cog) we’re looking for.”

Email · Mastodon · Bluesky

yesterday 4 votes
Bike Brooklyn! zine

I've been biking in Brooklyn for a few years now! It's hard for me to believe it, but I'm now one of the people other bicyclists ask questions to. I decided to make a zine that answers the most common of those questions: Bike Brooklyn! is a zine that touches on everything I wish I knew when I started biking in Brooklyn. A lot of this information can be found in other resources, but I wanted to collect it in one place.

I hope to update this zine when we get significantly more safe bike infrastructure in Brooklyn and laws change to make streets safer for bicyclists (and everyone) over time, but it's still important to note that each release will reflect a specific snapshot in time of bicycling in Brooklyn.

All text and illustrations in the zine are my own. Thank you to Matt Denys, Geoffrey Thomas, Alex Morano, Saskia Haegens, Vishnu Reddy, Ben Turndorf, Thomas Nayem-Huzij, and Ryan Christman for suggestions for content and help with proofreading.

This zine is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, so you can copy and distribute this zine for noncommercial purposes in unadapted form as long as you give credit to me.

Check out the Bike Brooklyn! zine on the web or download PDFs to read digitally or print here!

yesterday 5 votes