More from Made of Bugs
Earlier this month, I used Claude to port (parts of) an Emacs package into Rust, shrinking the execution time by a factor of 1000 or more (in one concrete case: from 90s to about 15ms). This is a variety of yak-shave I do somewhat routinely, both professionally and in service of my personal computing environment. This time, however, Claude was able to execute substantially the entire project under my supervision, with me writing almost no code myself, which made the project considerably faster than doing it by hand.
I worked at Stripe for about seven years, from 2012 to 2019. Over that time, I used and contributed to many generations of Stripe’s developer environment – the tools that engineers used daily to write and test code. I think Stripe did a pretty good job designing and building that developer experience, and since leaving, I’ve found myself repeatedly describing features of that environment to friends and colleagues. This post is an attempt to record the salient features of that environment as I remember it.
I was recently introduced to the paper “Seeing the Invisible: Perceptual-Cognitive Aspects of Expertise” by Gary Klein and Robert Hoffman. It’s excellent and I recommend you read it when you have a chance. Klein and Hoffman discuss the ability of experts to “see what is not there”: in addition to observing data and cues that are present in the environment, experts perceive implications of these cues, such as the absence of expected or “typical” information, the typicality or atypicality of observed data, and likely/possible past and future time trajectories of a system based on a point-in-time snapshot or limited duration of observation.
This December, the imp of the perverse struck me, and I decided to see how many days of Advent of Code I could do purely in compile-time C++ metaprogramming. As of this writing, I’ve done two days, and I’m not sure I’ll make it any further. However, that’s one more day than I planned to do as of yesterday, which is in turn further than I thought I’d make it after my first attempt.
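For a taste of what "purely in compile-time metaprogramming" means (this is a generic sketch, not the post's actual solutions, and the puzzle input here is made up), the whole computation can live in the type system, with compilation success or failure as the only output:

```cpp
// Minimal sketch of the constraint: all work happens in the type system,
// so the compiler itself produces the answer and nothing runs at runtime.

template <int... Ns> struct List {};          // a compile-time list of ints

template <typename L> struct Sum;             // sum it via template recursion
template <> struct Sum<List<>> { static constexpr int value = 0; };
template <int N, int... Rest> struct Sum<List<N, Rest...>> {
  static constexpr int value = N + Sum<List<Rest...>>::value;
};

using Input = List<1, 2, 3, 4>;               // hypothetical puzzle input

// The only "output" channel: the build fails iff the answer is wrong.
static_assert(Sum<Input>::value == 10, "wrong answer");

int main() {}                                  // nothing left to do at runtime
```

Build it with any C++11-or-later compiler; if the asserted answer were wrong, compilation itself would fail.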
More in technology
A quick intro to interfacing common OLED displays to bare-metal microcontrollers.
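The post covers the specifics; as a rough illustration of the general shape of such an interface, here is what bringing up an SSD1306-style panel over I2C typically looks like, assuming a platform-provided i2c_write routine (a placeholder name for whatever register-level transmit function the target MCU offers, not any particular vendor API):

```cpp
#include <stdint.h>
#include <stddef.h>

// Assumed platform hook: on bare metal this would poke the MCU's I2C
// peripheral registers directly. Name and signature are placeholders.
extern void i2c_write(uint8_t addr, const uint8_t *buf, size_t len);

static const uint8_t OLED_ADDR = 0x3C;  // common SSD1306 I2C address

// SSD1306 commands are prefixed with a 0x00 control byte.
static void oled_cmd(uint8_t cmd) {
  uint8_t buf[2] = {0x00, cmd};
  i2c_write(OLED_ADDR, buf, sizeof buf);
}

void oled_init(void) {
  oled_cmd(0xAE);  // display off while configuring
  oled_cmd(0x8D);  // charge pump setting...
  oled_cmd(0x14);  // ...enabled, so the panel generates its drive voltage
  oled_cmd(0xAF);  // display on
  oled_cmd(0xA5);  // all pixels lit: a quick "is it alive?" smoke test
}
```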
If you are a pet owner, you know how important it is to keep your furry companions fed and happy – even when life gets busy! With the Arduino Plug and Make Kit, you can now build a customizable, smart pet feeder that dispenses food on schedule and can be controlled remotely. It’s the perfect blend […] From “Build your own smart pet feeder with the Arduino Plug and Make Kit” on the Arduino Blog.
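The kit project uses its own modules and cloud integration for the remote-control half; purely as an illustrative sketch of the on-schedule half, a plain Arduino servo on a hypothetical pin 9 could gate the food like this:

```cpp
#include <Servo.h>

// Hypothetical wiring: a hobby servo on pin 9 rotates a feed gate.
// This sketch shows only scheduled dispensing, not remote control.
const int SERVO_PIN = 9;
const unsigned long FEED_INTERVAL_MS = 8UL * 60 * 60 * 1000;  // every 8 hours

Servo gate;
unsigned long lastFeed = 0;  // first feed lands one interval after power-on

void dispense() {
  gate.write(90);   // open the gate
  delay(1500);      // let food fall for 1.5 s (tune for portion size)
  gate.write(0);    // close it again
}

void setup() {
  gate.attach(SERVO_PIN);
  gate.write(0);    // start closed
}

void loop() {
  if (millis() - lastFeed >= FEED_INTERVAL_MS) {
    lastFeed = millis();
    dispense();
  }
}
```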
Disruptive technologies call for rethinking product design. We must question assumptions about underlying infrastructure and mental models while acknowledging that neither changes overnight.

For example, self-driving cars don’t need steering wheels. Users direct AI-driven vehicles by giving them a destination address. Keyboards and microphones are better controls for this use case than steering wheels and pedals. But people expect cars to have steering wheels and pedals. Without them, they feel a loss of control – especially if they don’t fully trust the new technology.

It’s not just control. The entire experience can – and perhaps must – change as a result. In a self-driving car, passengers needn’t all face forward. Freed from road duties, they can focus on work or leisure during the drive. As a result, designers can rethink the cabin experience from scratch.

Such changes don’t happen overnight. People are used to having agency. They expect to actively sit behind the wheel with everyone facing forward. It’ll take time for people to cede control and relax. Moreover, current infrastructure is designed around these assumptions. For example, road signs point toward oncoming traffic because that’s where drivers can see them. Roads transited by robots don’t need signals at all. But it’s going to be a while before roads are used exclusively by AI-driven vehicles. Human drivers will share roads with them for some time, and humans need signs. The presence of robots might even call for new signaling.

It’s a liminal situation: one that doesn’t yet accommodate the full potential of the new reality while still trying to accommodate previous ways of being. The result is awkward “neither fish nor fowl” experiments. My favorite example is a late-19th-century product called Horsey Horseless.

Patent diagram of Horsey Horseless (1899), via Wikimedia

Yes, it’s a vehicle with a wooden horse head grafted onto the front. When I first saw this abomination (in a presentation by my friend Andrew Hinton), I assumed it was meant to appeal to early adopters who couldn’t let go of the idea of driving behind a horse. But there was a deeper logic here. At the time, cars shared roads with horse-drawn vehicles. Horsey Horseless was meant to keep motorcars from freaking out the horses. Whether it worked or not doesn’t matter. The important thing to note is that people were grappling with the implications of the new technology for the product typology given the existing context.

We’re in that situation now. Horsey Horseless is a metaphor for an approach to product evolution after the introduction of a disruptive new technology. To wit, designers seek to align the new technology with existing infrastructure and mental models by “grafting a horse.” Consider how many current products are “adding AI” by including a button that opens a chatbox alongside the familiar UI. Here’s Gmail:

Gmail’s Gemini AI panel.

In this case, the email client UI is a sort of horse’s head that lets us use the new technology without disrupting our workflows. It’s a temporary hack. New products will appear that rethink use cases around the new technology’s unique capabilities. Why have a chat panel on an email client when AI can obviate the need for email altogether? Today, email is assumed infrastructure: other products expect users to have an email address and a client app to access it. That might not always stand. Eventually, such awkward compromises will go away. But it takes time. We’re entering that liminal period now.
It’s exciting – even if it produces weird chimeras for a while.
Many modern video games may put your character inside of a virtual 3D environment, but you aren’t seeing that in three dimensions — your TV’s screen is only a 2D display, after all. 3D displays/glasses and VR goggles make it feel more like you’re in the 3D world, but it isn’t quite the same as […] From “Displaying games on a 9x9x9 LED cube” on the Arduino Blog.
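As a rough sketch of how such a display might be addressed (assumptions only; the actual project’s wiring, multiplexing, and refresh code will differ), a 9x9x9 cube’s voxels typically flatten into a one-dimensional frame buffer:

```cpp
#include <cstdint>

// Sketch only: the usual trick of flattening 3D voxel coordinates
// into a flat array of per-LED brightness values.
constexpr int N = 9;            // cube edge length
uint8_t frame[N * N * N];       // one brightness byte per voxel

constexpr int voxel(int x, int y, int z) {
  return x + N * (y + N * z);   // x varies fastest, z selects a layer
}

int main() {
  frame[voxel(4, 4, 4)] = 255;  // light the cube's center voxel
}
```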
A year ago I tried to understand how much power ChatGPT was using and whether I should be outraged by it. Today I try it again.