This book’s introduction started by defining strategy as “making decisions.” Then we dug into exploration, diagnosis, and refinement: three chapters where you could argue that we didn’t decide anything at all. Clarifying the problem to be solved is the prerequisite of effective decision making, but eventually decisions do have to be made. Here in this chapter on policy, and in the following chapter on operations, we finally start to make some decisions. In this chapter, we’ll dig into:

- How we define policy, and how setting policy differs from operating policy, as discussed in the next chapter
- The structured steps for setting policy
- How many policies you should set: is it preferable to have one policy, many policies, or does it not matter much either way?
- Recurring kinds of policies that appear frequently in strategies
- Why it’s valuable to be intentional about your strategy’s altitude, and how engineers and executives generally maintain different altitudes in their...
2 months ago

More from Irrational Exuberance

systems-mcp: generate systems models via LLM

Back in 2018, I wrote lethain/systems as a domain-specific language for writing runnable systems models, and introduced it with this blog post modeling a hiring funnel. While it’s far from a perfect system, I’ve gotten a lot of value out of it over the last seven years, because it allows me to maintain systems models in version control. As I’ve been playing with writing Model Context Protocol (MCP) servers, one I’ve been thinking about frequently is a server to help with writing systems syntax, and I finally put that together in the lethain/systems-mcp repository.

More detailed installation and usage instructions are in the GitHub repository, so I’ll just share a couple of screenshots and comments here.

The first tool is load_systems_documentation, which loads a copy of lethain/systems/README.md and a file with example systems into the context window. The biggest challenge of properly writing DSLs with an LLM is providing enough in-context learning (ICL) examples, and tools specifically designed to provide that context strike me as a very interesting idea. Eventually I imagine there will be generalized tools for this, e.g. a search index of the best ICL examples for a wide variety of DSLs. Until then, my guess is that this sort of tool is particularly valuable.

The second tool is run_systems_model, which runs the provided DSL (with an optional parameter for the number of rounds) and returns the result. I experimented with the interface design here, initially trying to return a rendered chart of the results, but ultimately even multi-modal models are just much better at working with text than with images. This meant I had the best results returning JSON of the results and then having the LLM build a tool for interacting with them.

Altogether, a fun little experiment, and another confirmation in my mind that the most interesting part of designing MCPs today is deciding where to introduce and eliminate complexity from the LLM. Introduce too little and the tool lacks power; eliminate too little and the combination rarely works.
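For a concrete sense of what such a tool can look like, here is a minimal sketch of an MCP server exposing a documentation-loading tool, written with the FastMCP helper from the MCP Python SDK. The file paths and tool body are illustrative assumptions, not the actual systems-mcp implementation; see the repository for the real thing.

```python
# Minimal sketch of an MCP server in the spirit of systems-mcp. The paths and
# tool body are illustrative; see lethain/systems-mcp for the real server.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("systems-sketch")

# Hypothetical local copies of the DSL documentation and ICL examples.
DOC_PATHS = [Path("docs/README.md"), Path("docs/examples.md")]

@mcp.tool()
def load_systems_documentation() -> str:
    """Return the systems DSL docs and example models.

    Returning the documents as the tool result places them in the model's
    context window, which is what supplies the in-context learning examples
    the DSL needs. (The real server also exposes run_systems_model, which
    executes a model and returns JSON results; omitted here.)
    """
    return "\n\n".join(path.read_text() for path in DOC_PATHS)

if __name__ == "__main__":
    mcp.run()  # serve over stdio for a local MCP client
```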

6 days ago 5 votes
How to provide feedback on documents.

At Carta, we recently ran a reading group for Facilitating Software Architecture by Andrew Harmel-Law. We already loosely followed the ideas of an architectural advice process (from this 2021 article by the same Andrew Harmel-Law), but in practice we found that internal tech spec and architecture decision record (ADR) authors tended to share their documents only within their team rather than more widely.

As we asked authors why they preferred sharing locally, the most common answer was that they got enough feedback from their team that they didn’t want to pay the time overhead of sharing widely. The wider feedback wasn’t necessarily bad or combative; it just wasn’t good enough to compensate for the additional time it cost to process. This made sense from the authors’ perspectives, but it didn’t work from my executive perspective, because I was seeing teams make misaligned decisions due to a lack of cross-team communication.

As one step in reducing the overhead of sharing documents widely, I wrote up and shared this recommended process for providing feedback on documents:

1. Before starting, remember that the goal of providing feedback on a document is to help the author. Optimizing for anything else, even if it’s a worthy cause, discourages authors from sharing their future writing.
2. Start by skimming the document to understand its structure and where various kinds of topics are addressed. Why? This helps avoid giving feedback on ways the document’s actual structure diverges from how you imagined it would be structured, and it reduces questions about topics that are answered later in the document. Both of these sorts of feedback are a distraction during a discussion of a tech spec, and it’s generally better to avoid them. (If you notice an author making the same significant structural mistake over several ADRs, it’s worth delivering that feedback separately.)
3. After skimming, reread the document, leaving comments with concerns. Each comment should include: what your suggested change or concern is; why you believe it’s meaningful to address; and how important it seems (from ignorable nitpick to critical).
4. If you find yourself leaving more than three or four issues, then either raise your threshold for commenting or schedule time with the individual to talk over the feedback. If the document is unreasonably weak, it’s appropriate to nudge their leadership to dig into what’s happening on that team.

The most important idea behind these steps is that your goal as a feedback giver is to help the document’s author. It is not to protect your team’s strategy or platform, and it is not to optimize for your goals: it’s to help the author. This might feel wrong, but ultimately optimizing for anything else will lead to an environment where sharing widely is irrational behavior.

As a final aside, I think the user experience around commenting on documents is fundamentally wrong in most document editors. For example, Google Docs treats individual comments as first-order objects, similarly to how old version control systems like CVS tracked changes to individual files without tracking an overall state of the project. Ultimately, you want to collect all your comments into a bundle, review that bundle for consistency and duplicates, and then submit the bundle as commentary, but editors don’t support that flow particularly well.

6 days ago 9 votes
Public company comparables.

A few years ago I wrote about reading a Profit & Loss statement, which is a foundational executive skill. I also subsequently wrote about ways to measure your engineering organization. Despite having written those, I still spend a lot of time wondering about effective ways to represent an engineering organization to your board of directors.

Over the past few years, one of the most useful charts I’ve found for explaining an R&D organization is a scatterplot of R&D spend as a percentage of margin versus year-over-year (YoY) growth of last-twelve-months (LTM) revenue. Unlike so many other measures, this is an explicit measure of your R&D organization’s value as an investment relative to peer organizations.

Until recently, I assumed building this dataset required reading financial filings, but my strategic finance partner at Carta, Tyler Braslow, pointed out that you can get all of this data for the tech sector from Meritech Analytics, for free.

When you log in to Meritech, you’re dropped into a table of public company comparables for tech companies. This is the exact dataset I’d been looking for to build this chart. After logging in, you can copy the contents of that table into Google Sheets, Excel, or whatever you’re most comfortable with. Within that sheet, the columns you care about are:

- % YoY Growth LTM Rev (column Q for me): how much “last twelve months revenue” has grown year over year, as a percentage
- % LTM Margins for R&D (column U for me): R&D spend as a percentage of last-twelve-months margin
- LTM Revenue (column O for me): although I don’t show this in the scatterplot, I find it useful for debugging outlier values

Hiding the other columns gives you a much simpler table, and from that table you can build the scatterplot. Note that being “higher” means your R&D spend as a percentage of LTM margin is higher, which is a bad thing. The best companies are toward the bottom right; the worst companies are toward the top left.

With this chart as a starting point, you can then plot your own company in and show where you stand. You could also show how your company’s position in the chart has evolved over time, hopefully improving. Finally, you might want to cull some of these data points to better determine your public company comparables. The Meritech dataset has 106 entries, but you might prefer a more representative thirty.
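If you’d rather script the chart than build it in a spreadsheet, here’s a minimal sketch using pandas and matplotlib. The CSV filename, column names, and the example company’s coordinates are hypothetical stand-ins for your own export of the Meritech table.

```python
# Sketch: R&D spend (% of LTM margin) vs. YoY growth of LTM revenue.
# The CSV file, column names, and "your company" coordinates below are
# hypothetical stand-ins for an export of the Meritech comparables table.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("meritech_comparables.csv")

fig, ax = plt.subplots()
ax.scatter(df["yoy_growth_ltm_rev_pct"], df["rd_pct_ltm_margin"], alpha=0.6)

# Overlay your own company as a labeled reference point.
ax.scatter([18.0], [42.0], color="red")
ax.annotate("Your company", (18.0, 42.0))

ax.set_xlabel("YoY growth of LTM revenue (%)")
ax.set_ylabel("R&D spend as % of LTM margin")
ax.set_title("R&D spend vs. growth, public company comparables")

# Remember: lower-right is best (high growth, relatively low R&D spend).
plt.show()
```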

a week ago 6 votes
How to filter out old email from inbox

Every few years I take a pass at reducing the chaos in my personal inboxes. There are simply too many emails to deal with, and that generally leads to me increasingly failing to follow up on important email. Up to this point, my strategy has largely been filtering out emails that I never want to read. But there’s another category of email: stuff I often want to read while it’s fresh, but never want to read afterward. For example, calendar reminders, some mailing lists, some newsletters, and so on.

I decided to figure out how I could set up a system where I could mark a number of things as “filter three days after receipt.” This is a nice compromise, because I do want to see those things, but I don’t want to have to remember to archive them after the fact.

You can write a search query for this in Gmail:

from:(calendar-notification@google.com) older_than:3d

However, if you try to create a Gmail filter using that query, it turns the older_than:3d into a fixed point in time rather than doing what you want. It seems that this is unsolvable within Gmail itself. However, some quick searching suggested it was possible to solve this with a Google Apps Script, so I asked Claude to write the script for me.

Following those instructions, I went to script.google.com, which I had not visited in many years. I edited the script Claude generated to use the tag “TempMsg”, to archive messages (originally it had those lines commented out), and to limit itself to the first fifty items matching that tag. You can find the full code in this gist.

I attempted to run this as-is, and got an error message that I needed to grant permissions. That requires three clicks within the Google Scripts UI, and also approving the somewhat scary message that I trust myself. From there I tried to run the script, and it failed because the TempMsg tag didn’t exist in my inbox. So I went ahead and created that tag, and set up some filters to assign it to certain email senders. After that, I was able to run the script and it worked properly.

Note that I briefly convinced myself it was failing, because it doesn’t remove messages from the past three days. That is exactly how it’s supposed to work, but I would run it, see messages with the tag still there, and think it was broken. Whoops.

After convincing myself it was working, I added a periodic trigger to run the script on a daily basis, and it’s given me a nice new tool for managing my email a bit better. I also used the tag manager to “hide” this tag in the inbox, so I don’t have to see the TempMsg tag everywhere. If I ever need to debug things, I can always make it visible again.
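If you’d rather run this logic outside of Apps Script, the same idea is a short Python program against the Gmail API. This is a rough sketch of an alternative approach, not the gist’s code; it assumes you’ve downloaded OAuth client credentials as credentials.json from the Google Cloud Console.

```python
# Rough Python alternative to the Apps Script: archive up to fifty messages
# labeled TempMsg that are older than three days. Assumes OAuth client
# credentials downloaded as credentials.json from the Google Cloud Console.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.modify"]

def main() -> None:
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    creds = flow.run_local_server(port=0)
    service = build("gmail", "v1", credentials=creds)

    # Same query idea as the filter: the label plus a relative age cutoff.
    resp = (
        service.users()
        .messages()
        .list(userId="me", q="label:TempMsg older_than:3d", maxResults=50)
        .execute()
    )
    ids = [m["id"] for m in resp.get("messages", [])]

    # Archiving in Gmail means removing the INBOX label.
    if ids:
        service.users().messages().batchModify(
            userId="me", body={"ids": ids, "removeLabelIds": ["INBOX"]}
        ).execute()

if __name__ == "__main__":
    main()
```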

a week ago 4 votes
How should Stripe deprecate APIs? (~2016)

While Stripe is a widely admired company for things like its creation of the Sorbet type checker, I personally think that Stripe’s most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn’t just a technical design quirk; it’s a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe’s business success.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

- Design for long API lifetime. APIs are not inherently durable; instead, we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn’t use the API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes we might want to implement on our end: how would those changes impact the API, and how would they impact the application you’ve developed? At this point, take your API to API Review for initial approval, as described below. Following that approval, identify a handful of early-adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.
- All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review; complex or controversial proposals will require live discussion to ensure API Review members have sufficient context before making a decision.
- We never deprecate APIs without an unavoidable requirement to do so. Even if it’s technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.
- When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support subscription workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe’s Strong Customer Authentication requirements. Even in that case, the charge API continued to work as it did previously, albeit only for non-European-Union payments.
- We manage this policy’s implied technical debt via an API translation layer. We release changed APIs as versions, tracked in our API version changelog. However, we only maintain one implementation internally: the implementation of the latest version of the API. On top of that implementation, we maintain a series of version transformations, which allow us to support prior versions without maintaining them directly. While this approach doesn’t eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions. (A sketch of this pattern follows the Diagnosis section below.)
- In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future it may be possible for us to make more backwards-incompatible changes, because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

- If you are a small startup composed mostly of engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers, or for a larger enterprise involving numerous stakeholders, handling external API changes can be particularly challenging. Even if this is only marginally true, we’ve modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn-reduction work.
- While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. We believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.
- We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can’t assume that companies that make API changes are ill-informed. Rather, it appears they face a meaningful technical-debt tradeoff between the API provider and API consumers, and aren’t willing to consistently absorb that technical debt internally.
- Future compliance or security requirements (along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI) may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.
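To make the translation layer concrete, here is a minimal sketch of the pattern: a single latest implementation, plus a chain of per-version downgrade transforms. The version names and field changes are invented for illustration; this is not Stripe’s actual implementation.

```python
# Sketch of an API version translation layer. Only the latest response shape
# is implemented directly; a chain of per-version transforms rewrites it for
# clients pinned to older versions. Version names and field changes here are
# invented for illustration.
from typing import Callable

def get_charge_latest(charge_id: str) -> dict:
    # The single maintained implementation, always returning the latest shape.
    return {"id": charge_id, "amount": 1000, "currency": "usd", "status": "succeeded"}

def downgrade_2024_to_2023(resp: dict) -> dict:
    # Hypothetical change: 2024 replaced the boolean "paid" field with a
    # "status" enum, so downgrading restores "paid".
    resp = dict(resp)
    resp["paid"] = resp.pop("status") == "succeeded"
    return resp

def downgrade_2023_to_2022(resp: dict) -> dict:
    # Hypothetical change: 2023 split a combined "amount" string into an
    # integer "amount" plus "currency", so downgrading re-joins them.
    resp = dict(resp)
    resp["amount"] = f'{resp.pop("amount")} {resp.pop("currency")}'
    return resp

# Ordered newest-to-oldest: serving version V applies every transform between
# the latest version and V.
TRANSFORMS: list[tuple[str, Callable[[dict], dict]]] = [
    ("2023", downgrade_2024_to_2023),
    ("2022", downgrade_2023_to_2022),
]

def get_charge(charge_id: str, api_version: str = "2024") -> dict:
    resp = get_charge_latest(charge_id)
    if api_version == "2024":
        return resp
    for version, transform in TRANSFORMS:
        resp = transform(resp)
        if version == api_version:
            return resp
    raise ValueError(f"unknown API version: {api_version}")
```

The property the policy relies on is that new behavior lands only in the latest implementation, while each historical version costs one small transform rather than a parallel implementation.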

3 weeks ago 21 votes

More in programming

Building (and Breaking) Your First x86 Assembly Program

We build a minimal x86 assembly program, run it… and hit a crash. But that crash is exactly what makes this program worth writing.

13 hours ago 4 votes
RubyKaigi 2025 Recap

In 2023 I attended RubyKaigi for the first time and also wrote my first recap, which I’m pleased to say was well-received! This was my third time attending RubyKaigi, and I was once again really impressed with the event. I’m eternally grateful to the conference organizers, local organizers (organizers recruited each year who live or have lived in the area where RubyKaigi is held), designers, NOC team, helpers, sponsors, speakers, and other attendees for helping create such a memorable experience, not just during the three-day conference, but over my entire time in Matsuyama.

What is RubyKaigi?

RubyKaigi is a three-day technology conference focused on the Ruby programming language, held annually “somewhere in Japan.” It attracts a global audience and this year welcomed over 1500 attendees to Matsuyama, Ehime. The traveling nature of the conference means that for the majority of the attendees (not just the international ones), it’s a chance to take a trip, and the days leading up to and following the event are full of fun encounters with other Rubyists as we wander around town. Checking social media a few days before the conference, I saw posts tagged with “RubyKaigi Day -3” and started getting FOMO!

Talks

RubyKaigi featured 3 keynotes, 51 talks, 11 lightning talks, the TRICK showcase, and Ruby Committers and the World. There were talks in the Main Hall, Sub Hall, and Pearls Room, so you frequently had three options to choose from at any given time. Despite being held in Japan, RubyKaigi is an international conference that welcomes English speakers; all talks in the Sub Hall are in English, for example, and all the Japanese talks have real-time translation and subtitles. Organizers put a great deal of thought into crafting the schedule to maximize everyone’s chances of seeing the talks they’re interested in. For example, every time slot has at least one English and one Japanese talk, and colleagues are scheduled to speak at different times so their work friends don’t have to split their support.

The power of pre-study

One great feature of RubyKaigi is its esoteric talks, delivered by speakers who are enthusiastic experts in their domains. I’m more of a Ruby user than a Ruby committer (the core team who have merge access to the Ruby repository), so every year there are talks during which I understand nothing, and I know I’m not alone in that. One of the topics I struggle with is parsers, so before the conference I created these sketch notes covering “How Do Computers Understand Ruby?”. Then, as I was listening to previously incomprehensible talks, I found myself thinking, “I know this concept! I can understand! Wow, that’s cool!”

[Image: Sketch notes on “How Do Computers Understand Ruby?”]

My plan for next year is to organize my schedule as soon as RubyKaigi’s talks are announced, and create a pre-conference study plan based on the talks I’m going to see. Just when I thought I’d leveled up, I attended Ryo Kajiwara’s talk “You Can Save Lives with End-to-End Encryption in Ruby,” where he talked about the importance of end-to-end encryption and told us all to stop using SMTP. It was a humbling experience because, after the first few slides, I couldn’t understand anything.

Ruby taught me about encoding under the hood

This year’s opening keynote was delivered by Mari Imaizumi, who took us on the journey of how converting the information you want to convey into symbols has been a thing since basically forever, as well as how she originally got interested in encoding. She talked about the competing standards for character encoding, and her experience with mojibake. It made me think about how lucky I am that the internet heavily favours English speakers. Even when I was exploring the Web in the 2000s, it was rare for me to come across content scrambled by encoding.

TRICK 2025: Episode I

There was a point at which it seemed like the awards were going to be a one-man show, as Pen-san took the fifth, fourth, and third places, but the first-place winner was Don Yang, who until then hadn’t received any awards. The moment that stood out for me, though, was when Pen-san was talking about his work that won “Best ASMR”: code in the shape of bubbles that produces the sound of ocean waves when run. Pen-san explained how the sound was made and said something like, “Once you know this, anyone can write this code.” To his right you could see Matz waving his arm like, “No, no!” which captured my own feelings perfectly.

[Image: Drawing of Pen-san and Matz]

ZJIT: building a next-generation Ruby JIT

Maxime Chevalier-Boisvert started her talk by apologising for YJIT not being fast enough. Because YJIT is hitting a plateau, she is now working on ZJIT. While YJIT uses a technique called Lazy Basic Block Versioning, ZJIT is a method-based JIT. It will be able to “see” bigger chunks of code and thus be able to optimize more than YJIT.

Ruby Committers and the World

There were humorous moments, like when the panel was asked, “What do you want to deprecate in Ruby 4.0?” Matz answered, “I don’t want to deprecate anything. If stuff breaks people will complain to me!” Also, when the question was, “If you didn’t have to think about compatibility, what would you change?” the committers started debating the compatibility of a suggested change, leading the moderator to note that committers are always thinking about compatibility. Matz ended this segment with the comment that it might seem like there’s a big gap between those on stage and those in the audience, but it’s not that big; it’s something that you can definitely cross.

[Image: Sketch notes for Ruby Committers and the World]

Eliminating unnecessary implicit allocations

Despite this being an unfamiliar topic for me, Jeremy Evans explained things so well that even I could follow along. I really liked how approachable this talk was! Jeremy shared how he’d made a bug fix that resulted in Ruby allocating an object where it hadn’t allocated one before. Turns out, even when you’re known for fixing bugs, you can still cause bugs. And as he fixed this case, more cases were added to the code through new commits. To prevent new code changes from adding unnecessary allocations, he wrote the “Allocation Test Suite,” which was included in the Ruby 3.4 release.

Optimizing JRuby 10

JRuby 10 is Ruby 3.4 compatible! What stood out to me the most was the slide showing a long list of CRuby committers, and then three committers on the JRuby side: Charles Nutter (the speaker), his friend, and his son. This was one of those talks where the pre-study really helped: I could better understand just what sort of work goes into JRuby.

Itandi’s sponsor lightning talk

Usually sponsors use their time to talk about their company, but the speaker for Itandi spent most of his time introducing his favorite manga, Shoujiki Fudousan. He encouraged us to come visit the Itandi booth, where they had set up a game in which you could win a copy of the manga.

Sponsors

This year there were a total of 102 sponsors, with so many gold- and platinum-level sponsors that the organizers held a lottery for the booths. To encourage attendees to visit each booth, there was once again a stamp rally, with spaces for all 46 booths, although you could reach the pin-badge goal with just 35 stamps. It also helped keep track of where you had and hadn’t been. Since sponsors are an invaluable part of the conference, and they put so much effort into their booths, I always want to show my appreciation and hear what each of them has to say. With 46 to visit, though, it was somewhat difficult! Each booth had plenty of novelties to hand out and also fun activities, such as lotteries, games, surveys, quizzes, and coding challenges. By day three, though, the warm weather had me apologetically skipping all coding challenges and quizzes, as my brain had melted.

For me, the most memorable novelty was SmartHR’s acrylic charm collection! Since they missed out on a booth this year, they instead created 24 different acrylic charms. To collect them, you had to talk to people wearing SmartHR hoodies. I felt that was a unique solution and a great icebreaker.

[Image: Collection of SmartHR acrylic charms]

I actually sent out a plea on X (Twitter) because I was missing just a few charms, and some SmartHR employees gave me charms from their own collections so I could complete the set! (Although it turns out I’m still missing the table-flipping charm, so if anyone wants to help out here . . . )

Hallway track (events)

Every year RubyKaigi has various official events scheduled before and after the conference. It’s not just one official after-party; this year there were lunches, meetups, drinkups, board games, karaoke, live acts until three a.m., morning group exercises (there’s photographic proof of people running), and even an 18-hour ferry ride. I need sleep to understand the talks, and I need to wake up early because the conference is starting, and I need to stay up late to connect with other attendees!

The official events I attended this year were SmartHR’s pre-event study session, the Women and Non-binary Dinner, the RubyKaigi Official Party, the STORES CAFE for Women, the Leaner Board Game Night, RubyKaraoke, RubyMusicMixin 2025, and the codeTakt Drinkup. I got to chat with so many people, including: Udzura-san, who inspired my Ruby study notes; Naoko-san, one of STORES’s founders; and Elle, a fellow Australian who traveled to Japan for RubyKaigi! The venues were also amazing. The official party was in a huge park next to Matsuyama Castle, and the board game event took place in what seemed to be a wedding reception hall. Compared to the conference, where you are usually racing to visit booths or heading to your next talk, it’s at these events that you can really get to know your fellow Rubyists.

But it’s not just about the official events; my time was also full of random, valuable catch-ups and meetings. For lunch, I went out to eat tai meshi (sea bream rice) with some of the ladies I met at the dinner. I was staying at emorihouse, so following the after-party we continued drinking and chatting in our rooms. After RubyMusicMixin, I didn’t want the night to end and bumped into a crew headed towards the river. And on day four, the cafe I wanted to go to was full, but I got in by joining Eririn-san, who was already seated. That night I ended up singing karaoke with a couple of international speakers after running into them near Dogo Onsen earlier that day. Part of the joy of RubyKaigi is the impromptu events, the ones you join because the town is so full of Rubyists you can’t help but run into them.

I organised an event!

This year I organised the Day 0 Women and Non-binary Dinner & DrinkUp. We were sponsored by TokyoDev, and we had a 100 percent turnout! I’m grateful to everyone who came, especially Emori-san, who helped me with taking orders and on-the-day Japanese communications. With this event, I wanted to create a space where women and non-binary people, whether from Japan or overseas, RubyKaigi veterans or first-timers, could connect with each other before the conference started, while enjoying Matsuyama’s local specialities. We’re still a minority among developers, so it was exciting for me to see so many of us together in one place!

[Image: Group photo from the Women and Non-binary Dinner & DrinkUp]

I’d love to host a similar event next year, so if you or your company is interested in sponsoring, please reach out!

Matz-yama (Matsuyama)

Last year’s RubyKaigi closed with the announcement that “We’ve taken Matz to the ocean [Okinawa], so now it’s time to take him to the mountains.” (Yama means “mountain” in Japanese.) Matsuyama city is located in Ehime, Shikoku, and its famous tourist attractions include Matsuyama Castle and Dogo Onsen, which is said to have inspired the bathhouse in Spirited Away.

[Image: RubyKaigi banner on display at Okaido Shopping Street]

Ehime is renowned for mikan (蜜柑, mandarin oranges), and everywhere you go there is mikan and Mikyan, Ehime’s adorable mascot character. Before arriving, everyone told me, “In Ehime, mikan juice comes out of the tap,” which I thought meant there were literally pipes with mikan juice flowing through them all over Ehime! However, reality is not so exciting: yes, mikan juice does flow from taps, but there’s clearly a container behind the tap. There’s no underground mikan juice pipe network. 😢

RubyKaigi also highlighted Ehime’s specialties. This year’s theme color was red-orange, break-time snacks were mikan and mikan jelly, the logo paid homage to the cut fruit, and one of the sponsors even had a mikan juice tap at their booth! Also, included among this year’s official novelties was a RubyKaigi Imabari towel, since Imabari city in Ehime is world-famous for its towels. I’m an absolute fan of how RubyKaigi highlights the local region and encourages attendees to explore the area. Not only does this introduce international attendees to parts of Japan they might otherwise not visit, it’s a chance for local attendees to play tourist as well.

Community

In Matz’s closing keynote, Programming Language for AI Age, he touched on how it’s odd to delegate the fun tasks to an AI. After all, if AI does all the fun things and we do all the hard things, where’s the joy for us in that? To me, creating software is a collaborative activity; through collaboration we produce better software. Writing code is fun! Being able to connect with others is fun! Talking to new people is fun! I’ve met so many amazing people through the Ruby community, and RubyKaigi has played an important role in that. Through the community I’ve gotten advice, learned new things, and shared resources. My sketch notes have been appreciated by others, and as I walk around there are #rubyfriends everywhere who all make me feel so welcome.

RubyKaigi attracts a variety of attendees: developers and non-developers, Ruby experts and Ruby beginners. It’s a fun conference with a wonderful community, and even though it’s a technical conference, non-technical people can enjoy participating as well. Community growth comes with its own issues, but I think attracting newcomers is an important aspect of RubyKaigi. As someone who first came as a developer completely new to Ruby, every year I learn more and am inspired to give back to the Ruby community. I hope that RubyKaigi continues to inspire participants to love Ruby, and encourages them to understand and improve it. By growing the Ruby community, we ensure Ruby will continue on as a programming language for the AI age. Looking forward to Hakodate next year, and to seeing you all there!

PS: Surprise, Detective Conan?

I really love the Detective Conan series. This year, RubyKaigi Day Three and the 2025 Detective Conan movie premiere were on the same day . . . so as soon as Matsuda-san said, “It’s over!” I ran out of the hall to go watch the movie at Cinema Sunshine Kinuyama. And next year’s RubyKaigi location, Hakodate, was the setting for the 2024 Detective Conan movie. What a deep connection RubyKaigi and Detective Conan have!

[Image: Detective Conan decorations set up at the cinema in Kinuyama]

20 hours ago 2 votes
Logo:

Cyrillic version of the Internet Explorer logo. Because it’s iconic.

yesterday 1 vote
Talking to Espressif’s Bootloader

In my article about Espressif’s Automatic Reset, I briefly showed UART output from the bootloader, but did not go into more detail. In this article, I want to go just a bit further by showing some two-way interactions. We’ll use the initial basic “real” UART setup. Note that I did not connect DTR/RTS to RST/IO0.

2 days ago 3 votes
Beware the Complexity Merchants

When smart people get their high from building complex systems to solve simple problems, you're not going to have a good time

2 days ago 2 votes