Previously in compiler basics:

1. lisp to assembly
3. LLVM
4. LLVM conditionals and compiling fibonacci
5. LLVM system calls
6. an x86 upgrade

In this post we'll extend the compiler to support defining functions and variables. Additionally, we'll require the program's entrypoint to be within a main function. The resulting code can be found here.

Function definition

The simplest function definition we need to support is our main function. It looks like this:

$ cat basic.lisp
(def main () (+ 1 2))

Compiling and running it should produce a return code of 3:

$ node ulisp.js basic.lisp
$ ./build/a.out
$ echo $?
3

Parsing function definitions

The entire language is defined in S-expressions, and we already parse S-expressions.

$ node
> const { parse } = require('./parser');
> JSON.stringify(parse('(def main () (+ 1 2))'));
'[[["def","main",[],["+",1,2]]],""]'

So we're done!

Code generation

There are two tricky parts to code generation once...
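As a rough illustration of the shape of that output (this is not the post's actual parser.js, just a hedged TypeScript sketch of the same recursive [ast, rest] idea):

```typescript
// Minimal S-expression parser sketch. Not ulisp's real parser; it only
// illustrates the nested-array-plus-remainder shape shown above, and it
// assumes well-formed input (the non-null assertion will throw otherwise).
type SExp = string | number | SExp[];

function parse(input: string): [SExp[], string] {
  const ast: SExp[] = [];
  let rest = input.trim();
  while (rest.length > 0 && rest[0] !== ')') {
    if (rest[0] === '(') {
      // Recurse into a nested list, resuming after its closing paren.
      const [child, next] = parse(rest.slice(1));
      ast.push(child);
      rest = next;
    } else {
      // Consume one atom; numbers become numbers, everything else a symbol.
      const atom = rest.match(/^[^\s()]+/)![0];
      const n = Number(atom);
      ast.push(Number.isNaN(n) ? atom : n);
      rest = rest.slice(atom.length);
    }
    rest = rest.trim();
  }
  // Drop the closing paren that ended this list, if any.
  return [ast, rest[0] === ')' ? rest.slice(1) : rest];
}

console.log(JSON.stringify(parse('(def main () (+ 1 2))')));
// -> [[["def","main",[],["+",1,2]]],""]
```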
over a year ago 17 votes

More from Notes on software development

Burn your title

I've been a developer, a manager, a cofounder, and now I'm a developer again. I ran away from each position until being a founder because I felt like I was limited by what I was allowed to do. But I reached an enlightenment of sorts during my career progression: everyone around me was dying for someone to pick things up, for employees to show engagement and agency.

We think of our titles as our limits. We're quick to say and believe, "that isn't my job". In reality, titles reflect the minimum expected of us, not the maximum that is open to us.

Moving your career forward

Trying to figure out what (new minimum) you must do to get promoted seems backwards to me, reinforcing our sense of our own limits. Instead, at every stage in your career, focus on doing the intersection of:

- what you see needs to be done (that isn't being done)
- what you are capable of doing
- what you have the desire/energy to do (or would find fulfilling)

This is the path to promotion and a successful and interesting career. Burn your title. Burn your job description. I mean, keep your boss happy for sure. Keep your teammates happy by supporting them and building them up and communicating well. But don't wait to be officially made a lead or given a new title to do what otherwise fits into that intersection above.

And if, after doing this for some time and demonstrating this level of agency, you are not promoted, it just means you're not at the right company or right organization within your company, and you should look elsewhere. What's more, this work you did (at a company that doesn't appreciate your agency, if that happens to be the case) merely makes the case stronger for your successful interview at the next company. There's no downside.

The cynical, and perhaps realistic, alternative to this is to do politics to get promoted. Or not to do politics but to do things that don't align with your long-term goals. I'm not personally interested in either path so I'm not covering them here. I'm interested in the intersection of things that move me in the direction I want, things that are useful to the company, and things that I am capable of doing (in addition to whatever minimum work I must actually do).

Examples

Here's a peek at what this looks like for me as an individual contributor, a programmer, at EnterpriseDB.

I started the EDB Engineering Newsletter because it seemed like we needed to do a better job telling the world the awesome things our engineering team is doing. (You know we're one of the biggest contributors to Postgres? Bruce Momjian, Robert Haas, Peter Eisentraut, etc. work here? The guy who implemented the WAL and MVCC in Postgres is my teammate?) Nobody asked me to do that.

I started publishing blog views for the entire company once a month internally. Nobody asked me to do that.

I wrote a number of internal docs and tutorials on the product because we were just obviously missing them. Nobody asked me to do that.

I started a fortnightly incident review meeting for my team because it seemed like we were missing chances to update docs and teach each other. Nobody asked me to do that.

I write a bunch of random posts for the company blog on what I've learned. Nobody asked me to do that.

These are just a few of the random things that seemed like a good idea for me to do on top of my Actual Work as a developer, which I think I do a decent job of on its own.

In closing

Don't burn out. Don't do things nobody asked for and that you don't find rewarding. Or that won't pave the way toward the career you want. I'm trying to be very careful not to advocate anything along those lines.

But also don't wait to be asked to do something. Do what is interesting and obvious and rewarding to you. Interesting opportunities seem to come most reliably when you make them for yourself.

2 months ago 27 votes
Transactions are a protocol

Transactions are not an intrinsic part of a storage system. Any storage system can be made transactional: Redis, S3, the filesystem, etc. Delta Lake and Orleans demonstrated techniques to make S3 (or cloud storage in general) transactional. Epoxy demonstrated techniques to make Redis (and any other system) transactional. And of course there's always good old Two-Phase Commit.

If you don't want to read those papers, I wrote about a simplified implementation of Delta Lake and also wrote about a simplified MVCC implementation over a generic key-value storage layer.

It is both the beauty and the burden of transactions that they are not intrinsic to a storage system. Postgres and MySQL and SQLite have transactions. But you don't need to use them. It isn't possible to require you to use transactions. Many developers, myself a few years ago included, do not know why you should use them. (Hint: read Designing Data-Intensive Applications.)

And you can take it even further by ignoring the transaction layer of an existing transactional database and implementing your own transaction layer, as Convex has done (the Epoxy paper above also does this). It isn't entirely clear that you have a lot to lose by implementing your own transaction layer, since the indexes you'd want on the version field of a value would only be as expensive or slow as any other secondary index in a transactional database. Though why you'd do this isn't entirely clear either (I would like to read about this from Convex some time).

It's useful to see transaction protocols as another tool in your system design tool chest when you care about consistency, atomicity, and isolation, especially as you build systems that span data systems. Maybe, as Ben Hindman hinted at the last NYC Systems, even proprietary APIs will eventually provide something like two-phase commit so physical systems outside our control can become transactional too.
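To make "transactions are a protocol" concrete, here's a minimal TypeScript sketch of the two-phase commit idea the post mentions. The Participant interface is hypothetical, not any real system's API, and a real implementation also needs durable logging and crash recovery, which this omits:

```typescript
// Two-phase commit coordinator sketch over hypothetical participants.
interface Participant {
  prepare(txId: string): Promise<boolean>; // vote yes/no after durably staging writes
  commit(txId: string): Promise<void>;
  abort(txId: string): Promise<void>;
}

async function twoPhaseCommit(
  txId: string,
  participants: Participant[],
): Promise<boolean> {
  // Phase 1: every participant must promise it can commit.
  const votes = await Promise.all(
    participants.map((p) => p.prepare(txId).catch(() => false)),
  );
  if (votes.every((v) => v)) {
    // Phase 2: all voted yes, so the transaction commits everywhere.
    await Promise.all(participants.map((p) => p.commit(txId)));
    return true;
  }
  // Any "no" vote or failure aborts the transaction everywhere.
  await Promise.all(participants.map((p) => p.abort(txId)));
  return false;
}
```

The point of the sketch is that nothing in it cares what the participants are: Redis, S3, the filesystem, or a Postgres table can all sit behind the same protocol.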

3 months ago 29 votes
Things that go wrong with disk IO

There are a few interesting scenarios to keep in mind when writing applications (not just databases!) that read and write files, particularly in transactional contexts where you actually care about the integrity of the data and when you are editing data in place (versus copy-on-write, for example). If I don't say otherwise, I'm talking about behavior on Linux.

The research version of this blog post is Parity Lost and Parity Regained and Characteristics, Impact, and Tolerance of Partial Disk Failures. These two papers also go into the frequency of some of the issues discussed here. These behaviors actually happen in real life!

Thank you to Alex Miller and George Xanthakis for reviewing a draft of this post.

Terminology

Some of these terms are reused in different contexts, and sometimes they are reused because they effectively mean the same thing in a certain configuration. But I'll try to be explicit to avoid confusion.

Sector: The smallest amount of data that can be read and written atomically by hardware. It used to be 512 bytes, but on modern disks it is often 4KiB. There doesn't seem to be any safe assumption you can make about sector size, despite file system defaults (see below). You must check your disks to know.

Block (filesystem/kernel view): Typically set to the sector size, since only this block size is atomic. The default in ext4 is 4KiB.

Page (kernel view): A disk block that is in memory. Any read or write smaller than a block will read the entire block into kernel memory, even if less than that amount is sent back to userland.

Page (database/application view): The smallest amount of data the system (database, application, etc.) chooses to act on when it's read or written or held in memory. The page size is some multiple of the filesystem/kernel block size (including the multiple being 1). SQLite's default page size is 4KiB. MySQL's default page size is 16KiB. Postgres's default page size is 8KiB.

Things that go wrong

The data didn't reach disk

By default, file writes succeed when the data is copied into kernel memory (buffered IO). The man page for write(2) says:

  A successful return from write() does not make any guarantee that data has been committed to disk. On some filesystems, including NFS, it does not even guarantee that space has successfully been reserved for the data. In this case, some errors might be delayed until a future write(), fsync(2), or even close(2). The only way to be sure is to call fsync(2) after you are done writing all your data.

If you don't call fsync on Linux, the data isn't necessarily durably on disk, and if the system crashes or restarts before the disk writes the data to non-volatile storage, you may lose data.

With O_DIRECT, file writes succeed when the data is copied to at least the disk cache. Alternatively you could open the file with O_DIRECT|O_SYNC (or O_DIRECT|O_DSYNC) and forgo fsync calls. fsync on macOS is a no-op.

If you're confused, read Userland Disk I/O.

Postgres, SQLite, MongoDB, and MySQL fsync data before considering a transaction successful by default. RocksDB does not.

The data was fsynced but fsync failed

fsync isn't guaranteed to succeed. And when it fails, you can't tell which write failed. It may not even be a failure of a write to a file that your process opened:

  Ideally, the kernel would report errors only on file descriptions on which writes were done that subsequently failed to be written back. The generic pagecache infrastructure does not track the file descriptions that have dirtied each individual page however, so determining which file descriptors should get back an error is not possible. Instead, the generic writeback error tracking infrastructure in the kernel settles for reporting errors to fsync on all file descriptions that were open at the time that the error occurred. In a situation with multiple writers, all of them will get back an error on a subsequent fsync, even if all of the writes done through that particular file descriptor succeeded (or even if there were no writes on that file descriptor at all).

Don't be 2018-era Postgres.

The only way to have known which exact write failed would be to open the file with O_DIRECT|O_SYNC (or O_DIRECT|O_DSYNC), though this is not the only way to handle fsync failures.

The data was corrupted

If you don't checksum your data on write and check the checksum on read (as well as doing periodic scrubbing a la ZFS), you will never know if and when the data gets corrupted, and you will have to restore (who knows how far back in time) from backups if and when you notice.

ZFS, MongoDB (WiredTiger), MySQL (InnoDB), and RocksDB checksum data by default. Postgres and SQLite do not (though databases created from Postgres 18+ will). You should probably turn on checksums on any system that supports them, regardless of the default.

The data was partially written

Only when the page size you write = block size of your filesystem = sector size of your disk is a write guaranteed to be atomic. If you need to write multiple sectors of data atomically, there is the risk that some sectors are written and then the system crashes or restarts. This is called torn writes or torn pages.

Postgres, SQLite, and MySQL (InnoDB) handle torn writes. Torn writes are by definition not relevant to immutable storage systems like RocksDB (and other LSM-tree or copy-on-write systems like MongoDB (WiredTiger)) unless writes (that update metadata) span sectors.

If your system duplicates all writes, as MySQL (InnoDB) does (or as you can configure with data=journal in ext4), you may also not have to worry about torn writes. On the other hand, this amplifies writes 2x.

The data didn't reach disk, part 2

Sometimes fsync succeeds but the data isn't actually on disk because the disk is lying. These are called lost writes or phantom writes. You can be resilient to phantom writes by always reading back what you wrote (expensive) or by versioning what you wrote. Databases and file systems generally do not seem to handle this situation.

The data was written to the wrong place, read from the wrong place

If you aren't including where the data is supposed to be on disk as part of the checksum or page itself, you risk being unaware that you wrote data to the wrong place or that you read from the wrong place. This is called misdirected writes/reads. Databases and file systems generally do not seem to handle this situation.

Further reading

In increasing levels of paranoia (laudatory), follow ZFS, Andrea and Remzi Arpaci-Dusseau, and TigerBeetle.
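As a small illustration of the first failure mode, here is a hedged Node.js/TypeScript sketch of the write-then-fsync pattern (the file name and helper are hypothetical; note the post's caveat that fsync on macOS is a no-op, which this sketch does not work around):

```typescript
// Sketch: a write() that returns successfully has only copied data into
// kernel memory (buffered IO); fsync is what asks the kernel to push it
// to the device.
import fs from "node:fs";

function writeDurably(path: string, data: Buffer): void {
  const fd = fs.openSync(path, "w");
  try {
    fs.writeSync(fd, data);
    // If this throws, you can't tell which write failed, and the kernel
    // may already have dropped the dirty pages: treat the file contents
    // as unknown rather than retrying fsync (the 2018 Postgres lesson).
    fs.fsyncSync(fd);
  } finally {
    fs.closeSync(fd);
  }
}
```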

3 months ago 32 votes
Phil Eaton on Technical Blogging

This is an external post of mine.

3 months ago 43 votes
Minimal downtime Postgres major version upgrades with EDB Postgres Distributed

This is an external post of mine.

4 months ago 35 votes

More in technology

PSA: part of your Kagi subscription fee goes to a Russian company (Yandex)

Today I learned that Kagi uses Yandex as part of its search infrastructure, making up about 2% of their costs, and their CEO has confirmed that they do not plan to change that. To quote:

  Yandex represents about 2% of our total costs and is only one of dozens of sources we use. To put this in perspective: removing any single source would degrade search quality for all users while having minimal economic impact on any particular region. The world doesn’t need another politicized search engine. It needs one that works exceptionally well, regardless of the political climate. That’s what we’re building.

That is unfortunate. I found Kagi to be a good product with an interesting and occasionally useful take on combining LLMs with search. But I cannot in good conscience continue to support it while they unapologetically finance a major company with ties to the Russian government, the same country that has been actively waging a war against Ukraine, a European country, for over 11 years, during which they’ve committed countless war crimes against civilians and military personnel.

Kagi has the freedom to decide how they build the best search engine, and I have the freedom to use something else. Please send all your whataboutisms to /dev/null.

2 days ago 3 votes
Alvik Fight Club: A creative twist on coding, competition, and collaboration

What happens when you hand an educational robot to a group of developers and ask them to build something fun? At Arduino, you get a multiplayer robot showdown that’s part battle, part programming lesson, and entirely Alvik. The idea for Alvik Fight Club first came to life during one of our internal Make Tanks, in […]

4 days ago 7 votes
Vote for the July 2025 + Post Topic

Past ads get a second chance.

5 days ago 10 votes
How a Hibernate deprecation log message made our Java backend service super slow

It was time to upgrade Hibernate on that one Java monolithic[1] backend service that my team was responsible for. We took great precautions with these types of changes due to the scale of the system, splitting changes into as many small parts as possible and releasing them as often as possible. With bigger changes we opted for running a few instances of the new version in parallel to the existing one.

Then came Hibernate 5.2. Hibernate 5.2 introduced a new warning log to indicate that the existing API for writing queries is deprecated:

  Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead

Every time you used the Criteria API it would print the line. Just one little issue there. Can you see it?

Every time you used the Criteria API it would print the line.

In a poorly written Java backend service, one HTTP request can make multiple queries to the database. With hundreds of millions of HTTP requests, this can easily balloon into billions of additional log lines a day. Well, that’s exactly what happened to our service, resulting in CPU usage jumping up considerably and the latency of the service being negatively impacted. We didn’t have the foresight to compare every metric against every instance of the service, and when the metrics were summarized across all instances, this increase was not that noticeable while both new and existing instances of the service were running.

Aside from the service itself, this had negative effects downstream as well. If you have a solution for collecting your service logs for analysis and retention, and it’s priced on the volume of logs that you print, then this can end up being a very costly issue for you.

We resolved the issue by making a configuration change to our logger that disabled these specific logs.

This does make me wonder who else may have been impacted by this change over the years and what that impact might have looked like in terms of resource usage on a worldwide scale. I’m not blaming the Hibernate developers; they had good intentions, but the impact of an innocent change like that was likely not taken into account for large-scale services. Last I heard, the people behind Hibernate are a very small team, and yet their software powers much of the world, including critical infrastructure like the banking system.

I’m well aware that we’re talking about Hibernate releases from around the time I was still a junior developer (2016-2018). Some call it technical debt, others call it over half a decade of neglect.

[1] Unmaintained monoliths suck, but so do unmaintained microservices.
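For reference, that kind of fix can be as small as raising the log level of the one deprecation category. A hypothetical sketch, assuming Logback as the logging backend; in Hibernate 5.x these messages are commonly emitted under the org.hibernate.orm.deprecation category, but verify the logger name against your own output before relying on it:

```xml
<!-- Hypothetical logback.xml fragment: silence the deprecation warnings
     by raising just that category's level. The category name is an
     assumption; check the logger name printed in your own logs. -->
<configuration>
  <logger name="org.hibernate.orm.deprecation" level="ERROR"/>
</configuration>
```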

5 days ago 12 votes
The History of Windows XP

NT Vincit Omnia

6 days ago 12 votes