Pure CSS Simple Dropdown Plugin
2018-09-20

I find myself blowing away default browser select styling and implementing my own custom dropdowns far more often than I’d like. So, I recently created a very simple and clean component using just pure CSS. Check out the CodePen below and feel free to morph it as you see fit for your own projects!

Live CodePen Example
over a year ago


More from bt RSS Feed

Installing OpenBSD on Linveo KVM VPS

Installing OpenBSD on Linveo KVM VPS
2024-10-21

I recently came across an amazing deal for a VPS on Linveo. For just $15 a year they provide:

AMD KVM 1GB
1024 MB RAM
1 CPU Core
25 GB NVMe SSD
2000 GB Bandwidth

It’s a pretty great deal and I suggest you look more into it if you’re interested! But this post is more focused on setting up OpenBSD via the custom ISO option in the KVM dashboard. Linveo already provides several Linux OS options, along with FreeBSD by default (which is great!). Since there is no OpenBSD template we need to do things manually.

Getting Started

Once you have your initial VPS up and running, log in to the main dashboard and navigate to the Media tab. Under CD/DVD-ROM you’ll want to click “Custom CD/DVD” and enter the direct link to the install76.iso:

    https://cdn.openbsd.org/pub/OpenBSD/7.6/amd64/install76.iso

[Image: The “Media” tab of the Linveo dashboard. Use the official ISO link and set the Boot Order to CD/DVD.]

Select “Insert”, then set your Boot Order to CD/DVD and click “Apply”. Once complete, restart your server.

Installing via VNC

With the server rebooting, jump over to Options and click on “Browser VNC” to launch the web-based VNC client. From here we will boot into the OpenBSD installer and get things going!

Follow the installer as you normally would when installing OpenBSD (if you’re unsure, I have a step-by-step walkthrough) until you reach the IPv4 selection. At this point you will want to input your server’s IPv4 and IPv6 addresses, found under the Network section of your dashboard. Next, set the IPv6 route to the first default listed option (not “none”). After that is complete, choose cd0 for your install media (don’t worry about http yet). Continue with the rest of the install (make users if desired, etc.) until it tells you to reboot the machine.

Go back to the Linveo dashboard, switch your Boot Order back to “Harddrive” and reboot the machine directly.

Booting into OpenBSD

Load into the VNC client again. If you did everything correctly you should be greeted with the OpenBSD login prompt. There are a few tweaks we still need to make, so log in as the root user.

Remember how we installed our sets directly from cd0? We’ll want to change that. Since we are running OpenBSD “virtually” through KVM, our target network interface will be vio0. Edit the /etc/hostname.vio0 file and add the following:

    dhcp
    !route add default <your_gateway_ip>

The <your_gateway_ip> can be found under the Network tab of your dashboard.

The next file we need to tweak is /etc/resolv.conf. Add the following to it:

    nameserver 8.8.8.8
    nameserver 1.1.1.1

These nameservers are based on your selected IPs under the Resolvers section of Network in the Linveo dashboard. Change these as you see fit, so long as they match what you place in the resolv.conf file.

Finally, the last file we need to edit is /etc/pf.conf. Like the others, add the following:

    pass out proto { tcp, udp } from any to any port 53

Final Stretch

Now just reboot the server. Log back in as your desired user and everything should be working as expected! You can perform a simple test to check:

    ping openbsd.org

This should work, meaning your network is up and running! Now you’re free to enjoy the beauty that is OpenBSD.
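
As an aside, a full reboot isn’t strictly required to pick up these changes. A minimal sketch using standard OpenBSD commands, assuming the vio0 interface and the files edited above (my own addition, the post simply reboots):

    # Re-run network configuration for vio0 (reads /etc/hostname.vio0)
    sh /etc/netstart vio0

    # Reload the pf ruleset from /etc/pf.conf
    pfctl -f /etc/pf.conf

    # Verify DNS resolution and outbound connectivity
    ping -c 3 openbsd.org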

10 months ago 89 votes
Vertical Tabs in Safari

Vertical Tabs in Safari
2024-09-26

I use Firefox as my main browser (specifically the Nightly build), which has vertical tabs built-in. There are instances where I need to use Safari, such as debugging or testing iOS devices, and in those instances I prefer to have a similar experience to that of Firefox. Luckily, Apple has finally made it fairly straightforward to do so.

1. Click the Sidebar icon in the top left of the Safari browser
2. Right click and group your current tab(s) (I normally name mine something uninspired like “My Tabs” or simply “Tabs”)
3. For an extra “clean look”, remove the horizontal tabs by right clicking the top bar, selecting Customize Toolbar and dragging the tabs out

When everything is set properly, you’ll have something that looks like this:

One minor drawback is not having access to a direct URL input, since we have removed the horizontal tab bar altogether. Using a set of curated bookmarks could help avoid the need for direct input, along with setting our new tab page to DuckDuckGo or any other search engine.

11 months ago 96 votes
Build and Deploy Websites Automatically with Git

Build and Deploy Websites Automatically with Git
2024-09-20

I recently began the process of setting up my self-hosted[1] cgit server as my main code forge. Updating repos via cgit on NearlyFreeSpeech on its own has been simple enough, but it lacked the “wow-factor” of having some sort of automated build process. I looked into a bunch of different tools that I could add to my workflow and automate deploying changes. The problem was they all seemed to be fairly bloated or overly complex for my needs.

Then I realized I could simply use post-receive hooks, which are already built into git! You can’t get more simple than that… So I thought it would be best to document my full process. These notes are more for my future self when I inevitably forget this, but hopefully others can benefit from it!

Before We Begin

This “tutorial” assumes that you already have a git server set up. It shouldn’t matter what kind of forge you’re using, so long as you have access to the hooks/ directory and have the ability to write a custom post-receive script. For my purposes I will be running standard git via the web through cgit, hosted on NearlyFreeSpeech (FreeBSD based).

Overview

Here is a quick rundown of what we plan to do:

1. Write a custom post-receive script in the repo of our choice
2. Build and deploy our project when a remote push to master is made

Nothing crazy. Once you get the hang of things it’s really simple.

Prepping Our Servers

Before we get into the nitty-gritty, there are a few items we need to take care of first (a quick sketch of this prep work is included at the end of this post):

1. Your main git repo needs ssh access to your web hosting (deploy) server. Make sure to add your public key and run a connection test first (before running the post-receive hook) in order to approve the “fingerprinting”.
2. You will need to git clone your main git repo in a private/admin area of your deploy server. In the examples below, mine is cloned under /home/private/_deploys

Once you do both of those tasks, continue with the rest of the article!

The post-receive Script

I will be using my own personal website as the main project for this example. My site is built with wruby, so the build instructions are specific to that generator. If you use Jekyll or something similar, you will need to tweak those commands for your own purposes.

Head into your main git repo (not the cloned one on your deploy server), navigate under the hooks/ directory and create a new file named post-receive containing the following:

    #!/bin/bash

    # Get the branch that was pushed
    while read oldrev newrev ref
    do
        branch=$(echo $ref | cut -d/ -f3)

        if [ "$branch" == "master" ]; then
            echo "Deploying..."

            # Build on the remote server
            ssh user@deployserver.net << EOF
    set -e  # Stop on any error
    cd /home/private/_deploys/btxx.org
    git pull origin master
    gem install 'kramdown:2.4.0' 'rss:0.3.0'
    make build
    rsync -a build/* ~/public/btxx.org/
    EOF

            echo "Build synced to the deployment server."
            echo "Deployment complete."
        fi
    done

Let’s break everything down.

First we check if the branch being pushed to the remote server is master. Only if this is true do we proceed. (Feel free to change this if you prefer something like production or deploy.)

    if [ "$branch" == "master" ]; then

Then we ssh into the server (i.e. deployserver.net) which will perform the build commands and also host these built files:

    ssh user@deployserver.net << EOF

Setting set -e ensures that the script stops if any errors are triggered:

    set -e  # Stop on any error

Next, we navigate into the previously mentioned “private” directory, pull the latest changes from master, and run the required build commands (in this case installing gems and running make build):

    cd /home/private/_deploys/btxx.org
    git pull origin master
    gem install 'kramdown:2.4.0' 'rss:0.3.0'
    make build

Finally, rsync is run to copy just the build directory to our public-facing site directory:

    rsync -a build/* ~/public/btxx.org/

With that saved and finished, be sure to give this file proper permissions:

    chmod +x post-receive

That’s all there is to it!

Time to Test!

Now make changes to your main git project and push those up into master. You should see the post-receive commands printing out into your terminal successfully. Now check out your website to see the changes. Good stuff.

Still Using sourcehut

My go-to code forge was previously handled through sourcehut, which will now be used for mirroring my repos and handling mailing lists (since I don’t feel like hosting something like that myself - yet!). This switch over was nothing against sourcehut itself but more of a “I want to control all aspects of my projects” mentality.

I hope this was helpful and please feel free to reach out with suggestions or improvements!

[1] By self-hosted I mean a NearlyFreeSpeech instance
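
For the prep work mentioned above, a minimal sketch of the ssh setup might look like the following. It reuses user@deployserver.net and the _deploys path from the script; the clone URL is hypothetical:

    # On the git server: create a key pair if you don't already have one
    ssh-keygen -t ed25519

    # Copy the public key to the deploy server
    ssh-copy-id user@deployserver.net

    # Connect once manually to approve the host fingerprint
    ssh user@deployserver.net echo ok

    # Clone the repo into the private deploy area (hypothetical URL)
    ssh user@deployserver.net "git clone https://git.example.net/btxx.org.git /home/private/_deploys/btxx.org"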

11 months ago 108 votes
Burning & Playing PS2 Games without a Modded Console

Burning & Playing PS2 Games without a Modded Console
2024-09-02

Important: I do not support pirating or obtaining illegal copies of video games. This process should only be used to copy your existing PS2 games for backup, in case of accidental damage to the original disc.

Requirements

Note: This tutorial is tailored towards macOS users, but most things should work similarly on Windows or Linux.

You will need:

An official PS2 game disc (the one you wish to copy)
A PS2 Slim console
An Apple device with an optical DVD drive (or a portable USB DVD drive)
Some time and a coffee! (or tea)

Create an ISO Image of Your PS2 Disc

1. Insert your PS2 disc into your optical drive.
2. Open Disk Utility (Applications > Utilities)
3. In Disk Utility, select your PS2 disc from the sidebar
4. Click on the File menu, then select New Image > Image from [Disc Name]
5. Choose a destination to save the ISO file and select the format as DVD/CD Master
6. Name your file and click Save. Disk Utility will create a .cdr file, which is essentially an ISO file

Before we move on, we will need to convert that newly created .cdr file into an ISO. Navigate to the directory where the .cdr file is located and use the hdiutil command to convert it:

    hdiutil convert yourfile.cdr -format UDTO -o yourfile.iso

You’ll end up with a file named yourfile.iso.cdr. Rename it by removing the .cdr extension to make it an .iso file:

    mv yourfile.iso.cdr yourfile.iso

Done and done.

Getting Started

For Mac and Linux users, you will need to install Wine in order to run the patcher:

    # macOS
    brew install wine-stable

    # Linux (Debian)
    apt install wine

Clone & Run the Patcher

Clone the FreeDVDBoot ESR Patcher:

    git clone https://git.sr.ht/~bt/fdvdb-esr

Navigate to the cloned project folder:

    cd /path/to/fdvdb-esr

Then run the executable:

    wine FDVDB_ESR_Patcher.exe

Now you need to select your previously cloned ISO file, use the default Payload setting and then click Patch!. After a few seconds your file should be patched.

Burning Our ISO to DVD

It’s time for the main event! Insert a blank DVD-R into your disc drive and mount it. Then right click on your patched ISO file and run “Burn Disk Image to Disc...”. From here, you want to make sure you select the slowest write speed and enable verification. Once the file is written to the disc and verified (verification might fail - it is safe to ignore) you can remove the disc from the drive.

Before Playing the Game

Make sure you change the PS2 disc speed from Standard to Fast in the main “Browser” setting before you put the game into your console. After that, enjoy playing your cloned PS2 game!
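
Before burning, it can be worth sanity-checking the image. A minimal sketch using macOS’s built-in tools, with yourfile.iso as above (my own addition, not a step from the original post):

    # Show image metadata to confirm the conversion looks sane
    hdiutil imageinfo yourfile.iso

    # Optionally mount the image read-only to inspect the disc contents
    hdiutil attach -readonly yourfile.iso
    # ...then eject it via Disk Utility or hdiutil detach when done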

a year ago 84 votes
"This Key is Useless Now. Discard?"

“This Key is Useless Now. Discard?”
2024-08-28

The title of this article probably triggers nostalgic memories for old school Resident Evil veterans like myself. My personal favourite in the series (not that anyone asked) was the original, 1998 version of Resident Evil 2 (RE2). I believe that game stands the test of time and is very close to a masterpiece. The recent remake lost a lot of the charm and nuance that made the original so great, which is why I consistently fire up the PS1 version on my PS2 Slim.

[Image: Resident Evil 2 (PS1) running on my PS2, hooked up to my Toshiba CRT TV.]

But the point of this post isn’t to gush over RE2. Instead I would like to discuss how well RE2 handled its interface and user experience across multiple in-game systems.

HUD? What HUD?

Just like the first Resident Evil that came before it, RE2 has no in-game HUD (heads-up display) to speak of. It’s just your playable character and the environment. No ammo-counters. No health bars. No “quest” markers. Nothing.

[Image: This is how the game looks while you play. Zero HUD elements.]

The game does provide you with an inventory system, which holds your core items, weapons and notes you find along your journey. Opening up this sub-menu allows you to heal, reload weapons, combine objects or puzzle items, or read through previously collected documents. Not only is this more immersive (HUDs don’t exist for us in the real world; we need to look through our packs as well…) it also gets out of the way.

[Image: The main inventory screen. Shows everything you need to know, only when you need it. (I can hear this screenshot...)]

I don’t need a visual element in the bottom corner showing me a list of “items” I can cycle through. I don’t want an ammo counter cluttering up my screen with information I only need to see in combat or while manually reloading. If those are pieces of information I need, I’ll explicitly open and look for it. Don’t make assumptions about what is important to me on screen.

Capcom took this concept of less visual clutter even further in regards to maps and the character health status.

Where We’re Going, We Don’t Need Roads

Mini-Maps

A great deal of newer games come pre-packaged with a mini-map on the main interface. In certain instances this works just fine, but most are 100% UI clutter. Something to add to the screen. I can only assume some devs believe it is “helpful”. Most times it’s simply a distraction. Thank goodness most games allow you to disable them.

As for RE2, you collect maps throughout your adventure and, just like most other systems in the game, you need to consciously open the map menu to view them. You know, just like in real life. This creates a higher tension as well, since you need to constantly reference your map (on initial playthroughs) to figure out where the heck to go. You feel the pressure of someone frantically pulling out a physical map and scanning their surroundings. It also helps the player build a mental model in their head, thus providing even more of that sweet, sweet immersion.

[Image: The map of the Raccoon City Police Station.]

No Pain, No Gain

The game doesn’t display any health bar or player status information. In order to view your current status (symbolized by “Fine”, “Caution” or “Danger”) you need to open your inventory screen. From here you can heal yourself (if needed) and see the status type change in real-time.

[Image: The "condition" health status. This is fine.]

But that isn’t the only way to visually see your current status.

Here’s a scenario: you’re traveling down a hallway, turn a corner and run right into the arms of a zombie. She takes a couple good bites out of your neck before you push her aside. You unload some handgun rounds into her and down she goes. As you run over her body she reaches out and chomps on your leg as a final “goodbye”. You break free and move along but notice something different in your character’s movement: they’re holding their stomach and limping.

[Image: Here we can see the character "Hunk" holding his stomach and limping, indicating an injury without the need for a custom HUD element.]

On a first playthrough, most players would instinctively open the inventory menu, where their character’s health is displayed, and (in this instance) be greeted with a “Caution” status. This is another example of subtle UX design. I don’t need to know the health status of my character until an action is required (in this example: healing). The health system is out of the way but not hidden. This keeps the focus on immersion, not baby-sitting the game’s interface.

A Key to Every Lock

Hey! This section is in reference to the title of the article. We made it!

…But yes, discarding keys in RE2 is a subtle example of fantastic user experience. As a player, I know for certain this key is no longer needed. I can safely discard it and free up precious space from my inventory. There is also a sense of accomplishment, a feeling of “I’ve completed a task” or an internal checkbox being ticked. Progress has been made!

Don’t overlook how powerful an interaction this little text prompt is. Ask any veteran of the series and they will tell you this prompt is almost euphoric.

[Image: The game's prompt asking if you'd like to discard a useless key. Perfection.]

Inspiring Greatness

RE2 is certainly not the first or last game to implement these “minimal” game systems. A more “modern” example is Dead Space (DS), along with its very faithful remake. In DS the character’s health is displayed directly on the character model itself, and a similar inventory screen is used to manage items. An ammo-counter is visible but only when the player aims their weapon. Pretty great stuff and another masterpiece of survival horror.

[Image: In Dead Space, the character's health bar is set as part of their spacesuit.]

The Point

I guess my main takeaway is that designers and developers should try their best to keep user experience intuitive. I know that sounds extremely generic but it is a lot more complex than one might think. Try to be as direct as possible while remaining subtle. It’s a delicate balance, but experiences like RE2 show us it is attainable.

I’d love to talk more, but I have another play-through of RE2 to complete…

a year ago 87 votes

More in programming

first-class merges and cover letters

Although it looks really good, I have not yet tried the Jujutsu (jj) version control system, mainly because it’s not yet clearly superior to Magit. But I have been following jj discussions with great interest.

One of the things that jj has not yet tackled is how to do better than git refs / branches / tags. As I understand it, jj currently has something like Mercurial bookmarks, which are more like raw git ref plumbing than a high-level porcelain feature. In particular, jj lacks signed or annotated tags, and it doesn’t have branch names that always automatically refer to the tip. This is clearly a temporary state of affairs because jj is still incomplete and under development and these gaps are going to be filled. But the discussions have led me to think about how git’s branches are unsatisfactory, and what could be done to improve them.

branch
merge
rebase
squash
fork
cover letters
previous branch
workflow
questions

branch

One of the huge improvements in git compared to Subversion was git’s support for merges. Subversion proudly advertised its support for lightweight branches, but a branch is not very useful if you can’t merge it: an un-mergeable branch is not a tool you can use to help with work-in-progress development. The point of this anecdote is to illustrate that rather than trying to make branches better, we should try to make merges better and branches will get better as a consequence.

Let’s consider a few common workflows and how git makes them all unsatisfactory in various ways. Skip to “cover letters” and “previous branch” below where I eventually get to the point.

merge

A basic merge workflow is:

1. create a feature branch
2. hack, hack, review, hack, approve
3. merge back to the trunk

The main problem is when it comes to the merge, there may be conflicts due to concurrent work on the trunk. Git encourages you to resolve conflicts while creating the merge commit, which tends to bypass the normal review process. Git also gives you an ugly useless canned commit message for merges, which hides what you did to resolve the conflicts.

If the feature branch is a linear record of the work then it can be cluttered with commits to address comments from reviewers and to fix mistakes. Some people like an accurate record of the history, but others prefer the repository to contain clean logical changes that will make sense in years to come, keeping the clutter in the code review system.

rebase

A rebase-oriented workflow deals with the problems of the merge workflow but introduces new problems. Primarily, rebasing is intended to produce a tidy logical commit history. And when a feature branch is rebased onto the trunk before it is merged, a simple fast-forward check makes it trivial to verify that the merge will be clean (whether it uses a separate merge commit or directly fast-forwards the trunk).

However, it’s hard to compare the state of the feature branch before and after the rebase. The current and previous tips of the branch (amongst other clutter) are recorded in the reflog of the person who did the rebase, but they can’t share their reflog. A force-push erases the previous branch from the server. Git forges sometimes make it possible to compare a branch before and after a rebase, but it’s usually very inconvenient, which makes it hard to see if review comments have been addressed. And a reviewer can’t fetch past versions of the branch from the server to review them locally.

You can mitigate these problems by adding commits in --autosquash format, and delay rebasing until just before merge.
However, that reintroduces the problem of merge conflicts: if the autosquash doesn’t apply cleanly, the branch should have another round of review to make sure the conflicts were resolved OK.

squash

When the trunk consists of a sequence of merge commits, the --first-parent log is very uninformative. A common way to make the history of the trunk more informative, and deal with the problems of cluttered feature branches and poor rebase support, is to squash the feature branch into a single commit on the trunk instead of merging. This encourages merge requests to be roughly the size of one commit, which is arguably a good thing. However, it can be uncomfortably confining for larger features, or cause extra busy-work co-ordinating changes across multiple merge requests. And squashed feature branches have the same merge conflict problem as rebase --autosquash.

fork

Feature branches can’t always be short-lived. In the past I have maintained local hacks that were used in production but were not (not yet?) suitable to submit upstream. I have tried keeping a stack of these local patches on a git branch that gets rebased onto each upstream release. With this setup the problem of reviewing successive versions of a merge request becomes the bigger problem of keeping track of how the stack of patches evolved over longer periods of time.

cover letters

Cover letters are common in the email patch workflow that predates git, and they are supported by git format-patch. GitHub and other forges have a webby version of the cover letter: the message that starts off a pull request or merge request. In git, cover letters are second-class citizens: they aren’t stored in the repository. But many of the problems I outlined above have neat solutions if cover letters become first-class citizens, with a Jujutsu twist.

A first-class cover letter starts off as a prototype for a merge request, and becomes the eventual merge commit. Instead of unhelpful auto-generated merge commits, you get helpful and informative messages. No extra work is needed since we’re already writing cover letters. Good merge commit messages make good --first-parent logs.

The cover letter subject line works as a branch name. No more need to invent filename-compatible branch names! Jujutsu doesn’t make you name branches, giving them random names instead. It shows the subject line of the topmost commit as a reminder of what the branch is for. If there’s an explicit cover letter, the subject line will be a better summary of the branch as a whole. I often find the last commit on a branch is some post-feature cleanup, and that kind of commit has a subject line that is never a good summary of its feature branch.

As a prototype for the merge commit, the cover letter can contain the resolution of all the merge conflicts in a way that can be shared and reviewed. In Jujutsu, where conflicts are first class, the cover letter commit can contain unresolved conflicts: you don’t have to clean them up when creating the merge, you can leave that job until later. If you can share a prototype of your merge commit, then it becomes possible for your collaborators to review any merge conflicts and how you resolved them.

To distinguish a cover letter from a merge commit object, a cover letter object has a “target” header, which is a special kind of parent header. A cover letter also has a normal parent commit header that refers to earlier commits in the feature branch. The target is what will become the first parent of the eventual merge commit.
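
As a concrete illustration, here is a purely hypothetical sketch of what such an object might look like (no “target” header exists in git today; the hashes and names are made up):

    $ git cat-file -p <cover-letter-commit>
    tree 7a3c92b0f1d4e5a6b7c8d9e0f1a2b3c4d5e6f7a8
    parent f00dfeed...   # tip of the feature branch
    target cafef00d...   # trunk commit; becomes the merge's first parent
    author Alice Example <alice@example.org> 1730000000 +0000
    committer Alice Example <alice@example.org> 1730000000 +0000

    Add a streaming importer

    Cover letter summarizing the branch. After merging, this becomes the
    merge commit message, including notes on how conflicts were resolved.
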
previous branch

The other ingredient is to add a “previous branch” header, another special kind of parent commit header. The previous branch header refers to an older version of the cover letter and, transitively, an older version of the whole feature branch. Typically the previous branch header will match the last shared version of the branch, i.e. the commit hash of the server’s copy of the feature branch.

The previous branch header isn’t changed during normal work on the feature branch. As the branch is revised and rebased, the commit hash of the cover letter will change fairly frequently. These changes are recorded in git’s reflog or jj’s oplog, but not in the “previous branch” chain.

You can use the previous branch chain to examine diffs between versions of the feature branch as a whole. If commits have Gerrit-style or jj-style change-IDs then it’s fairly easy to find and compare previous versions of an individual commit. The previous branch header supports interdiff code review, or allows you to retain past iterations of a patch series.

workflow

Here are some sketchy notes on how these features might work in practice.

One way to use cover letters is jj-style, where it’s convenient to edit commits that aren’t at the tip of a branch, and easy to reshuffle commits so that a branch has a deliberate narrative.

When you create a new feature branch, it starts off as an empty cover letter with both target and parent pointing at the same commit. Alternatively, you might start a branch ad hoc, and later cap it with a cover letter. If this is a small change and rebase + fast-forward is allowed, you can edit the “cover letter” to contain the whole change. Otherwise, you can hack on the branch any which way. Shuffle the commits that should be part of the merge request so that they occur before the cover letter, and edit the cover letter to summarize the preceding commits.

When you first push the branch, there’s (still) no need to give it a name: the server can see that this is (probably) going to be a new merge request because the top commit has a target branch and its change-ID doesn’t match an existing merge request. Also when you push, your client automatically creates a new instance of your cover letter, adding a “previous branch” header to indicate that the old version was shared. The commits on the branch that were pushed are now immutable; rebases and edits affect the new version of the branch.

During review there will typically be multiple iterations of the branch to address feedback. The chain of previous branch headers allows reviewers to see how commits were changed to address feedback, interdiff style.

The branch can be merged when the target header matches the current trunk and there are no conflicts left to resolve. When the time comes to merge the branch, there are several options:

For a merge workflow, the cover letter is used to make a new commit on the trunk, changing the target header into the first parent commit, and dropping the previous branch header. Or, if you like to preserve more history, the previous branch chain can be retained.

Or you can drop the cover letter and fast-forward the branch on to the trunk.

Or you can squash the branch on to the trunk, using the cover letter as the commit message.

questions

This is a fairly rough idea: I’m sure that some of the details won’t work in practice without a lot of careful work on compatibility and deployability.

Do the new commit headers (“target” and “previous branch”) need to be headers?

What are the compatibility issues with adding new headers that refer to other commits?

How would a server handle a push of an unnamed branch? How could someone else pull a copy of it?

How feasible is it to use cover letter subject lines instead of branch names?

The previous branch header is doing a similar job to a remote tracking branch. Is there an opportunity to simplify how we keep a local cache of the server state?

Despite all that, I think something along these lines could make branches / reviews / reworks / merges less awkward. How you merge should be a matter of your project’s preferred style, without interference from technical limitations that force you to trade off one annoyance against another.

There remains a non-technical limitation: I have assumed that contributors are comfortable enough with version control to use a history-editing workflow effectively. I’ve lost all perspective on how hard this is for a newbie to learn; I expect (or hope?) jj makes it much easier than git rebase.

10 hours ago 4 votes
ARM is great, ARM is terrible (and so is RISC-V)

I’ve long been interested in new and different platforms. I ran Debian on an Alpha back in the late 1990s and was part of the Alpha port team; then I helped bootstrap Debian on amd64. I’ve got somewhere around 8 Raspberry Pi devices in active use right now, and the free NNCPNET Internet email service … Continue reading ARM is great, ARM is terrible (and so is RISC-V) →

22 hours ago 3 votes
Many Hard Leetcode Problems are Easy Constraint Problems

In my first interview out of college I was asked the change counter problem:

    Given a set of coin denominations, find the minimum number of coins required to make change for a given number. I.e. for USA coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies).

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for “well-behaved” denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The “smart” answer is to use a dynamic programming algorithm, which I didn’t know how to do. So I failed the interview.

But you only need dynamic programming if you’re writing your own algorithm. It’s really easy if you throw it into a constraint solver like MiniZinc and call it a day.

    int: total;
    array[int] of int: values = [10, 9, 1];
    array[index_set(values)] of var 0..: coins;

    constraint sum (c in index_set(coins)) (coins[c] * values[c]) == total;

    solve minimize sum(coins);

You can try this online here. It’ll give you a prompt to put in total and then give you successively-better solutions:

    coins = [0, 0, 37];
    ----------
    coins = [0, 1, 28];
    ----------
    coins = [0, 2, 19];
    ----------
    coins = [0, 3, 10];
    ----------
    coins = [0, 4, 1];
    ----------
    coins = [1, 3, 0];
    ----------

Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function corresponding to constraints. They’re hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems.[1] Here I’m using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.

More examples

This was a question in a different interview (which I thankfully passed):

    Given a list of stock prices through the day, find the maximum profit you can get by buying one stock and selling one stock later.

It’s easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem:

    array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];

    var int: buy;
    var int: sell;
    var int: profit = prices[sell] - prices[buy];

    constraint sell > buy;
    constraint profit > 0;

    solve maximize profit;

Reminder, link to trying it online here.

While working at that job, one interview question we tested out was:

    Given a list, determine if three numbers in that list can be added or subtracted to give 0?

This is a satisfaction problem, not an optimization problem: we don’t need the “best answer”, any answer will do. We eventually decided against it for being too tricky for the engineers we were targeting. But it’s not tricky in a solver:

    include "globals.mzn";

    array[int] of int: numbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];
    array[index_set(numbers)] of var {0, -1, 1}: choices;

    constraint sum(n in index_set(numbers)) (numbers[n] * choices[n]) = 0;
    constraint count(choices, -1) + count(choices, 1) = 3;

    solve satisfy;

Okay, one last one, a problem I saw last year at Chipy AlgoSIG. Basically they pick some leetcode problems and we all do them. I failed to solve this one:

    Given an array of integers heights representing the histogram's bar height where the width of each bar is 1, return the area of the largest rectangle in the histogram.

The “proper” solution is a tricky thing involving tracking lots of bookkeeping states, which you can completely bypass by expressing it as constraints:

    array[int] of int: numbers = [2,1,5,6,2,3];

    var 1..length(numbers): x;
    var 1..length(numbers): dx;
    var 1..: y;

    constraint x + dx <= length(numbers);
    constraint forall (i in x..(x+dx)) (y <= numbers[i]);

    var int: area = (dx+1)*y;

    solve maximize area;

    output ["(\(x)->\(x+dx))*\(y) = \(area)"]

There’s even a way to automatically visualize the solution (using vis_geost_2d), but I didn’t feel like figuring it out in time for the newsletter.

Is this better?

Now if I actually brought these questions to an interview, the interviewee could ruin my day by asking “what’s the runtime complexity?” Constraint solver runtimes are unpredictable and almost always slower than an ideal bespoke algorithm, because solvers are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they’ll do way better than a bad bespoke algorithm, and I’m not experienced enough in handwriting algorithms to consistently beat a solver.

The real advantage of solvers, though, is how well they handle new constraints. Take the stock picking problem above. I can write an O(n²) algorithm in a few minutes and the O(n) algorithm if you give me some time to think. Now change the problem to:

    Maximize the profit by buying and selling up to max_sales stocks, but you can only buy or sell one stock at a given time and you can only hold up to max_hold stocks at a time.

That’s a way harder problem to write even an inefficient algorithm for! While the constraint problem is only a tiny bit more complicated:

    include "globals.mzn";

    int: max_sales = 3;
    int: max_hold = 2;
    array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];

    array [1..max_sales] of var int: buy;
    array [1..max_sales] of var int: sell;
    array [index_set(prices)] of var 0..max_hold: stocks_held;

    var int: profit = sum(s in 1..max_sales) (prices[sell[s]] - prices[buy[s]]);

    constraint forall (s in 1..max_sales) (sell[s] > buy[s]);
    constraint profit > 0;

    constraint forall(i in index_set(prices))
        (stocks_held[i] = (count(s in 1..max_sales) (buy[s] <= i)
                         - count(s in 1..max_sales) (sell[s] <= i)));

    constraint alldifferent(buy ++ sell);

    solve maximize profit;

    output ["buy at \(buy)\n", "sell at \(sell)\n", "for \(profit)"];

Most constraint solving examples online are puzzles, like Sudoku or “SEND + MORE = MONEY”. Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking.

[1] Because my dad will email me if I don’t explain this: “leetcode” is slang for “tricky algorithmic interview questions that have little-to-no relevance in the actual job you’re interviewing for.” It’s from leetcode.com.
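
As a usage note (my own sketch, not from the post): with the MiniZinc toolchain installed locally, the first model can be run from the command line. Assuming it is saved as coins.mzn (a filename I made up), the -a flag prints each successively better solution:

    minizinc -a coins.mzn -D "total=37;"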

22 hours ago 3 votes
If Apple cared about privacy

Defaults matter

yesterday 6 votes
btrfs on a Raspberry Pi

I’m something of a filesystem geek, I guess. I first wrote about ZFS on Linux 14 years ago, and even before I used ZFS, I had used ext2/3/4, jfs, reiserfs, xfs, and no doubt some others. I’ve also used btrfs. I last posted about it in 2014, when I noted it has some advantages over … Continue reading btrfs on a Raspberry Pi →

2 days ago 4 votes