When starting a new project, evaluating the budget needed for cloud hosting can be tricky. On one side, you will hear horror stories of people waking up to an unexpected 100k$ bill from their provider. On the other, you will see providers advertising virtual machines for as little as 4$ per month. In this article, I will perform load testing on one of those 4$ VPS (from an unnamed provider) to figure out whether the promise of running your production on such a low budget is realistic.

The test application

For this test, I designed a simple CRUD application in Go. It mimics a blogging application and lets users create posts, list the latest posts, and display a single post. In other words, it has the following three routes:

- a GET / route that renders an HTML template and shows the titles of the 10 latest posts
- a GET /<post_id> route that renders an HTML template and shows the title and body of the selected post
- a POST / route that accepts a...
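The teaser cuts off here, but the three routes describe a complete shape. As a minimal sketch of what such an application could look like with Go's standard net/http (the handler logic and the in-memory store are my own assumptions, not the article's code, which uses HTML templates):

// Sketch of the three routes described above. Illustration only;
// the article's actual code (templates, persistence) will differ.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"
	"sync"
)

type post struct {
	Title, Body string
}

var (
	mu    sync.Mutex
	posts []post
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch {
		case r.Method == http.MethodPost && r.URL.Path == "/":
			// POST /: create a post from form data.
			mu.Lock()
			posts = append(posts, post{r.FormValue("title"), r.FormValue("body")})
			mu.Unlock()
			http.Redirect(w, r, "/", http.StatusSeeOther)
		case r.URL.Path == "/":
			// GET /: list the titles of the 10 latest posts.
			mu.Lock()
			defer mu.Unlock()
			for i := len(posts) - 1; i >= 0 && i >= len(posts)-10; i-- {
				fmt.Fprintf(w, "<p><a href=\"/%d\">%s</a></p>\n", i, posts[i].Title)
			}
		default:
			// GET /<post_id>: show the title and body of one post.
			id, err := strconv.Atoi(strings.TrimPrefix(r.URL.Path, "/"))
			mu.Lock()
			defer mu.Unlock()
			if err != nil || id < 0 || id >= len(posts) {
				http.NotFound(w, r)
				return
			}
			fmt.Fprintf(w, "<h1>%s</h1><p>%s</p>", posts[id].Title, posts[id].Body)
		}
	})
	http.ListenAndServe(":8080", nil)
}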
over a year ago 29 votes


More from Alice GG

Playing with open source LLMs

Every 6 months or so, I decide to leave my cave and check out what the cool kids are doing with AI. Apparently the latest trend is to use fancy command-line tools to write code using LLMs. This is a very nice change, since it suddenly makes AI compatible with my allergy to getting out of the terminal. The most popular of these tools seems to be Claude Code. It promises to build in total autonomy, being able to search code, write code, run tests, lint, and commit the changes. While this sounds great on paper, I’m not keen on getting locked into vendor tools from an unprofitable company. At some point, they will either need to raise their prices, enshittify their product, or most likely do both. So I went looking for the free and open source alternatives.

Picking a model

There’s a large number of open source large language models on the market, with new ones getting released all the time. However, they are not all ready to be used locally for coding tasks, so I had to try a bunch of them before settling on one.

deepseek-r1:8b

Deepseek is the most popular open source model right now. It was created by the eponymous Chinese company. It made the news by beating numerous benchmarks while being trained on a budget that is probably lower than the compensation of some OpenAI workers. The 8b variant only weighs 5.2 GB and runs decently on limited hardware, like my three-year-old Mac. This model is famous for forgetting about world events from 1989, but it also seems to have a few issues when faced with concrete coding tasks. It is a reasoning model, meaning it “thinks” before acting, which should lead to improved accuracy. In practice, it regularly gets stuck indefinitely, searching for where it should start and jumping from one problem to another in a loop. This can happen even on simple problems, and made it unusable for me.

mistral:7b

Mistral is the French alternative to the American and Chinese models. I have already talked about their 7b model on this blog. It is worth noting that they have kept updating their models, and it should now be much more accurate than two years ago. Mistral is not a reasoning model, so it will jump straight to answering. This is very good if you’re working on tasks where speed and low compute use are a priority. Sadly, the accuracy doesn’t seem good enough for coding. Even on simple tasks, it will hallucinate functions or randomly delete parts of the code I didn’t want to touch.

qwen3:8b

Another model from China, qwen3 was created by the folks at Alibaba. It also claims impressive benchmark results, and can work as either a reasoning or a non-thinking model. It was made with modern AI tooling in mind, supporting MCPs and a framework for agentic development. This model actually seems to work as expected, providing somewhat accurate code output while not hanging in the reasoning part. Since it runs decently on my local setup, I decided to stick with that model for now.

Setting up a local API with Ollama

Ollama is now the default way to download and run local LLMs. It can be installed simply by downloading it from their website. Once installed, it works like Docker for models, giving us access to commands like pull, run, or rm. Ollama will expose an API on localhost which can be used by other programs. For example, you can use it from your Python programs through ollama-python.
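To illustrate, here is a small Go sketch of my own (not from the article) that talks to that local API; it assumes Ollama's default port 11434 and its /api/generate endpoint, so check the current Ollama docs before relying on the details:

// Minimal sketch of calling the local Ollama API from Go.
// Endpoint and JSON shape follow Ollama's /api/generate;
// treat the specifics as an assumption, not a reference.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	reqBody, _ := json.Marshal(map[string]any{
		"model":  "qwen3:8b",
		"prompt": "Write a haiku about terminals.",
		"stream": false, // single JSON response instead of a stream
	})
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response)
}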
Pair programming with aider

The next piece of software I installed is aider. I assume it’s pronounced like the French word, but I could not confirm that. Aider describes itself as a “pair programming” application. Its main job is to pass context to the model, let it write the output to files, run linters, and commit the changes.

Getting started

It can be installed using the official Python package, or via Homebrew if you use a Mac. Once it is installed, just navigate to your code repository and launch it:

export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama_chat/qwen3:8b

The CLI should automatically create some configuration files and add them to the repo’s .gitignore.

Usage

Aider isn’t meant to be left alone in complete autonomy. You’ll have to guide the AI through the process of making changes to your repository. To start, use the /add command to add the files you want to focus on. Those files will be passed entirely to the model’s context, and the model will be able to write to them. You can then ask questions using the /ask command. If you want to generate code, a good strategy can be to start by requesting a plan of action. When you want it to actually write to the files, you can prompt it using the /code command. This is also the default mode. There’s no absolute guarantee that it will follow a plan if you agreed on one previously, but it is still a good idea to have one. The /architect command seems to automatically ask for a plan, accept it, and write the code. The specificity of this command is that it lets you use different models to plan and to write the changes.

Refactoring

I tried coding with aider in a few situations to see how it performs in practice. First, I tried making it do a simple refactoring on Itako, which is a project of average complexity. When pointed to the exact part of the code where the issue happened, and told explicitly what to do, the model managed to change the target struct according to the instructions. It did unexpectedly change a function that was outside the scope of what I asked, but this was easy to spot. On paper, this looks like a success. In practice, the time spent crafting a prompt, waiting for the AI to run, and fixing the small issue that came up far exceeded the 10 minutes it would have taken me to edit the file myself. I don’t think coding that way would lead me to a massive performance improvement for now.

Greenfield project

For a second scenario, I wanted to see how it would perform on a brand-new project. I quickly set up a Python virtual environment and asked aider to work with me on building a simple project. We would be opening a file containing Japanese text, parsing it with fugashi, and counting the words. To my surprise, this was a disaster. All I got was a bunch of hallucination-riddled Python that wouldn’t run under any circumstances. It may be that the lack of context actually made it harder for the model to generate code.

Troubleshooting

Finally, I went back to Itako and decided to check how it would perform on common troubleshooting tasks. I introduced a few bugs into my code and gathered some error messages. I then gave aider the files mentioned by the error message and just used /ask to have it explain the errors to me, without requiring it to implement the fix. This part did work very well. If I compare it with Googling unknown error messages, I think this can cut the time spent on an issue by half. This is not just because Google is getting worse every day: the model having access to the actual code gives it a massive advantage.
I do think this setup is something I can use instead of the occasional frustration of scrolling through StackOverflow threads when something unexpected breaks.

What about the Qwen CLI?

With everyone jumping on the trend of CLI tools for LLMs, the Qwen team released its own Qwen Code. It can be installed using npm, and connects to a local model if configured like this:

export OPENAI_API_KEY="ollama"
export OPENAI_BASE_URL="http://localhost:11434/v1/"
export OPENAI_MODEL="qwen3:8b"

Compared to aider, it aims to be fully autonomous. For example, it will search your repository using grep. However, I didn’t manage to get it to successfully write any code. The tool seems optimized for larger, online models, with context sizes up to 1M tokens. Our local qwen3 only has a 40k-token context window, which can get overwhelmed very quickly when browsing entire code repositories. Even when I didn’t run out of context, the tool mysteriously failed when trying to write files. It insists it can only write to absolute paths, which the model doesn’t seem to agree to provide. I did not investigate the issue further.

a month ago 31 votes
Building Itako: an investment tracking application

Around 2 years ago, I had some time on my hands and got really interested in investing and playing with stock market data. After reading way too many books on the subject, I started building my own tools, which led me to creating Itako. It’s a portfolio tracking and data visualization software that lets users journal their stock market transactions and learn about their portfolio’s performance and diversification. Itako is now in a stable state, with enough features, so I felt like it was a good time to summarize this project.

A prototype

The very first thing I did was to create a script that could generate random walks to simulate investment returns over time, and render the graph in the CLI. This was fairly simple, and 250 lines of Python later, I could visualize those random walks going up and down. It allowed me to write my Shannon’s demon article, which explains the non-intuitive advantage of diversification and regular re-balancing. Of course, you can’t really go that far with random walks. Soon after this first step was finished, I decided to integrate real data from the stock market. Once my little script was able to fetch stock quotes from an API, I could do some fun things like compute the correlations between different tickers, or evaluate different strategies.

Edelfelt

As I started building more and more stuff, I got tired of playing with a bunch of Python scripts. I decided to consolidate my project in the form of a CLI that would act as a Swiss-army knife for my experiments. I called it Edelfelt. For this iteration, I also switched from Python to Go. Since the project was growing, Go allowed me to easily create robust and maintainable code that I can still painlessly work with years later. Edelfelt could read a list of transactions and turn them into a portfolio made up of different positions over time. From this portfolio, it generated a graph, but also output metrics such as the portfolio’s Sharpe ratio (basically return over risk), compounded returns, total returns, and standard deviation. It could quickly be used to get a portfolio correlation matrix (showing whether any 2 positions behave similarly) and a weights vector (the % each position takes in your portfolio). Lastly, you could use it to compare the portfolio to a benchmark (like the S&P 500 or the 10-year US treasuries) and determine the excess returns of our strategies.
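For reference, here is a minimal sketch of the Sharpe ratio metric mentioned above: mean excess return divided by the standard deviation of returns. This is my own illustration under the assumption of periodic returns and a constant risk-free rate, not Edelfelt's code:

// Sharpe ratio of a return series (a sketch, not Edelfelt's implementation).
package main

import (
	"fmt"
	"math"
)

func sharpe(returns []float64, riskFree float64) float64 {
	// Mean of excess returns over the risk-free rate.
	var sum float64
	for _, r := range returns {
		sum += r - riskFree
	}
	mean := sum / float64(len(returns))

	// Standard deviation of the excess returns.
	var variance float64
	for _, r := range returns {
		d := (r - riskFree) - mean
		variance += d * d
	}
	std := math.Sqrt(variance / float64(len(returns)))

	return mean / std
}

func main() {
	monthly := []float64{0.02, -0.01, 0.03, 0.01, -0.02, 0.04}
	fmt.Printf("Sharpe: %.2f\n", sharpe(monthly, 0.001))
}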
Alpha

To make this usable by other people, I decided to turn this into a web application, named Itako. I decided to re-use Mikochi’s stack, which is Go and Gin for the backend with a Preact frontend. For a database, I went with MySQL, because it is the only major database system that I have never seen catastrophically fail in production. The data visualization got a serious upgrade thanks to Echarts (which is really awesome). I also decided to use DaisyUI to get some shiny-looking CSS. The backend used the previous code from Edelfelt as a module. This saved me from having to re-develop everything from scratch. The main improvement is that we could now cache stock prices in our database. I didn’t expect that to be complicated; sadly, cache invalidation is a pain. When I had a decent bunch of features to work with, I pushed the application to production and got around a dozen people to try it. People liked the visualizations, but also found a lot of bugs. Most were pretty minor UI issues, but some were more tricky. I hadn’t planned for some situations, like stock splits and delistings.

Beta

Aside from fixing the bugs noticed during the Alpha testing, the next step was to handle currency conversions. The aim was to let people have stocks from different countries in their portfolios. Getting currency data is relatively simple, since there are a lot of APIs offering it. However, it completely changes how investment returns are calculated. This led me to refactoring Itako’s business logic and completely getting rid of the dependency on Edelfelt. I decided to go a bit more public about the project and made a Reddit post about it. Around 100 users decided to try Itako, and I received plenty of feedback. The main thing I learned from this was that I needed to focus on making the application a bit easier to use, and give more actionable data.

Release

To be honest, the project sat abandoned for a long while after this round of Beta testing. Nothing bad about it, I just really had to focus on shipping Dice’n Goblins in time. After the game’s release, I managed to find a bit of time to work on Itako again. I shipped quite a few improvements. Users can now use a simplified input form and abstract away handling cash. Those who do not like this can head to their settings and change it back. I also added a bit of polish, and a much-needed rebalancing feature. This feature shows you the output of different rebalancing strategies on your portfolio. Currently, it implements two very common strategies: equal weights and risk parity. Quite recently, Itako removed its “Beta” badge. The project has most of the features I wanted to build when I created it. It feels weird saying it because I spent way too much time on this, but it is finished. If you want to try it out, just head to itako.app. If you have questions, feedback, or encountered any issues while using Itako, don’t hesitate to email me about it.

a month ago 3 votes
Building a container orchestrator

Kubernetes is not exactly the most fun piece of technology around. Learning it isn’t easy, and learning the surrounding ecosystem is even harder. Even those who have managed to tame it are still afraid of getting paged by an ETCD cluster corruption, a Kubelet certificate expiration, or the DNS breaking down (and somehow, it’s always the DNS). If you’re like me, the thought of making your own orchestrator has crossed your mind a few times. The result would, of course, be a magical piece of technology that is both simple to learn and wouldn’t break down every weekend. Sadly, the task seems daunting. Kubernetes is a multi-million-line project that has been worked on for more than a decade. The good thing is that someone wrote a book that can serve as a good starting point for exploring the idea of building our own container orchestrator. This book is named “Build an Orchestrator in Go”, written by Tim Boring, published by Manning.

The tasks

The basic unit of our container orchestrator is called a “task”. A task represents a single container. It contains configuration data, like the container’s name, image, and exposed ports. Most importantly, it tracks the container’s state, and so acts as a state machine. The state of a task can be Pending, Scheduled, Running, Completed, or Failed. Each task will need to interact with a container runtime through a client. In the book, we use Docker (aka Moby). The client gets its configuration from the task and then proceeds to pull the image, create the container, and start it. When it is time to finish the task, it stops the container and removes it.
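A minimal sketch of what such a task-as-state-machine can look like in Go follows; the type names and the transition table are my own illustration, not the book's exact code:

// A task as a small state machine. Illustrative sketch only;
// the book's types and transition rules may differ.
package main

import "fmt"

type State int

const (
	Pending State = iota
	Scheduled
	Running
	Completed
	Failed
)

// validTransitions encodes which state changes are allowed.
var validTransitions = map[State][]State{
	Pending:   {Scheduled},
	Scheduled: {Running, Failed},
	Running:   {Completed, Failed},
}

type Task struct {
	Name  string
	Image string
	Ports []int
	State State
}

func (t *Task) Transition(next State) error {
	for _, s := range validTransitions[t.State] {
		if s == next {
			t.State = next
			return nil
		}
	}
	return fmt.Errorf("invalid transition: %v -> %v", t.State, next)
}

func main() {
	t := Task{Name: "web", Image: "nginx:latest", Ports: []int{80}}
	fmt.Println(t.Transition(Scheduled)) // <nil>: allowed
	fmt.Println(t.Transition(Completed)) // error: Scheduled cannot jump to Completed
}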
The workers

Above the task, we have workers. Each machine in the cluster runs a worker. Workers expose an API through which they receive commands. Those commands are added to a queue to be processed asynchronously. When the queue gets processed, the worker starts or stops tasks using the container client. In addition to exposing the ability to start and stop tasks, the worker must be able to list all the tasks running on it. This demands keeping a task database in the worker’s memory and updating it every time a task changes state. The worker also needs to be able to provide information about its resources, like the available CPU and memory. The book suggests reading the /proc Linux file system using goprocinfo, but since I use a Mac, I used gopsutil.

The manager

On top of our cluster of workers, we have the manager. The manager also exposes an API, which allows us to start, stop, and list tasks on the cluster. Every time we want to create a new task, the manager calls a scheduler component. The scheduler has to list the workers that can accept more tasks, assign them a score by suitability, and return the best one. When this is done, the manager sends the work to be done using the worker’s API. In the book, the author also suggests that the manager component should keep track of every task’s state by performing regular health checks. Health checks typically consist of querying an HTTP endpoint (i.e. /ready) and checking if it returns 200. In case a health check fails, the manager asks the worker to restart the task. I’m not sure I agree with this idea. It could lead to the manager and worker having differing opinions about a task’s state. It will also cause scaling issues: the manager’s workload will have to grow linearly as we add tasks, and not just when we add workers. As far as I know, in Kubernetes, the Kubelet (the equivalent of the worker here) is responsible for performing health checks.

The CLI

The last part of the project is to create a CLI, to make sure our new orchestrator can be used without having to resort to firing up curl. The CLI needs to implement the following features:

- start a worker
- start a manager
- run a task in the cluster
- stop a task
- get the task status
- get the worker node status

Using cobra makes this part fairly straightforward. It lets you create very modern-feeling command-line apps, with properly formatted help commands and easy argument parsing. A minimal sketch of what that command tree can look like follows below.
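This is my own illustration of the shape of such a CLI using cobra, not the book's code; the command names and placeholder behavior are assumptions:

// Sketch of the orchestrator CLI with github.com/spf13/cobra.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "orchestrator"}

	worker := &cobra.Command{
		Use:   "worker",
		Short: "Start a worker on this machine",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("starting worker...") // would start the worker API here
		},
	}
	manager := &cobra.Command{
		Use:   "manager",
		Short: "Start a manager",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("starting manager...") // would start the manager API here
		},
	}
	run := &cobra.Command{
		Use:   "run [image]",
		Short: "Run a task in the cluster",
		Args:  cobra.ExactArgs(1),
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Printf("submitting task for image %s\n", args[0])
		},
	}
	stop := &cobra.Command{
		Use:   "stop [task-id]",
		Short: "Stop a task",
		Args:  cobra.ExactArgs(1),
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Printf("stopping task %s\n", args[0])
		},
	}
	status := &cobra.Command{
		Use:   "status",
		Short: "Get task and worker node status",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("querying the manager API...")
		},
	}

	root.AddCommand(worker, manager, run, stop, status)
	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}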

Once this is done, we almost have a fully functional orchestrator. We just need to add authentication. And maybe some kind of DaemonSet implementation would be nice. And a way to handle mounting volumes…

2 months ago 40 votes
Discord considered harmful

In the past few years, social media use has gained a bad reputation. More or less everyone is now aware that TikTok is ruining your attention span, and Twitter is radicalizing you into extreme ideologies. But, despite its enormous popularity amongst technology enthusiasts, there’s not a lot of attention given to Discord. I personally have been using Discord so much for so long that the majority of my social circle is made of people I met through the platform. I even spent two years of my life helping run the infrastructure behind the most popular bot available on Discord. In this article, I will try to give my perspective on Discord, why I think it is harmful, and what we can do about it.

A tale of two book clubs

To explain my point of view about Discord, I will compare the experience of joining a real-life book club with that of joining one that communicates exclusively through Discord. This example is about books, but the same issues would apply if it were a community talking about investing, knitting, or collecting stamps. As Marshall McLuhan showed last century, examining media should be done independently of their content. In the first scenario, we have Bob. Bob enjoys reading books, which is generally a solitary hobby. To break this solitude, Bob decides to join a book club. This book club meets twice a month in a library, where they talk about a new book each time. In the second scenario, we have Alice. Alice also likes books. Alice also wants to meet fellow book lovers. Being a nerd, Alice decides to join a Discord server. This server does not have fixed meeting times. Most users simply use the text channels to talk about what they are reading at any time of the day.

Crumbs of Belongingness

In Bob’s book club, a session typically lasts an hour. First, the librarian takes some time to welcome everyone and introduce newcomers. After that, each club member talks about the book they were expected to read. They can talk about what they liked and disliked, how the book made them feel, and the chapters they found particularly noteworthy. Once each member has had the time to talk about the book, they vote on the book they are going to read for the next session. After the session is concluded, some members move to the nearest coffeehouse to keep talking. During this session of one hour, Bob spent around one hour socializing. The need for belongingness that drove Bob to join this book club is fully met. On Alice’s side, the server is running 24/7. When she opens the app, even if there are sometimes more than 4000 members of her virtual book club online, most of the time nobody is talking. If she were to spend an entire hour staring at the server, she might witness a dozen or so messages. Those messages may be part of small conversations in which Alice can take part. Sadly, most of the time they will be simple uploads of memes, conversations about books she hasn’t read, or messages that do not convey enough meaning to start a conversation. In one hour of constant Discord use, Alice’s need for socializing has not been met.

The shop is closed

Even if Bob’s library is open every day, the book club is only open for a total of two hours a month. It is enough for Bob. Since the book club fulfills his need, he doesn’t want it to be around for longer. He has not even entertained the thought of joining a second book club, because too many meetings would be overwhelming. For Alice, Discord is always available. No matter if she is at home or away, it is always somewhere on her phone or taskbar.
At any moment of the day, she might notice a red circle above the icon. It tells her there are unread messages on Discord. When she notices it, she instinctively stops her current task and opens the app to spend a few minutes checking her messages. Most of the time those messages do not lead to a meaningful conversation. Reading a few messages isn’t enough to meet her need for socialization. So, after having scrolled through the messages, she goes back to waiting for the next notification. Each time she interrupts her current task to check Discord, getting back into the flow can take several minutes or not happen at all. This can easily happen dozens of times a day and cost Alice hundreds of hours each month.

Book hopping

When Bob gets home, the club only requires him to read the next book. He may also choose to read two books at the same time, one for the book club and one from his personal backlog. But if he were to keep his efforts to a strict minimum, he would still have things to talk about at the next session. Alice wants to be able to talk with other users about the books they are reading. So she starts reading the books that are trending and get mentioned often. The issue is, Discord’s conversations are instantaneous, and instantaneity compresses time. A book isn’t going to stay popular and relevant for two whole weeks; if it manages to be the thing people talk about for two whole days, that’s already great. Alice might try to purchase and read two to three books a week to keep up with the server’s rhythm. Even if books are not terribly expensive, this can turn a 20 $/month hobby into a 200 $/month hobby. In addition to that, if reading a book takes Alice on average 10 hours, reading 3 books a week would be like adding a part-time job to her schedule. All this, while being constantly interrupted by the need to check if new conversations have been posted to the server.

Quitting Discord

If you are in Alice’s situation, the solution is quite simple: use Discord less, ideally not at all. On my side, I’ve left every server that is not relevant to my current work. I blocked discord.com from the DNS of my coding computer (using NextDNS) and uninstalled the app from my phone. This makes the platform only usable as a direct messaging app, exclusively from my gaming device, which I cannot carry with me. I think many people realize the addictive nature of Discord, yet keep using the application all the time. One common objection to quitting the platform is that there’s a need for an alternative: maybe we should go back to forums, or IRC, or use Matrix, etc… I don’t think any alternative internet chat platform can solve the problem. The real problem is that we want to be able to talk to people without leaving home, at any time, without any inconvenience. What we should do is exactly the opposite: leave home and join a real book club, one that is not open 24/7, and one where the members take the time to listen to each other. In the software community, we have also been convinced that every one of our projects needs to be on Discord. Every game needs a server, open-source projects offer support on Discord, and a bunch of AI startups even use it as their main user interface. I even made a server for Dice’n Goblins. I don’t think it’s really that useful. I’m not even sure it’s that convenient. Popular games are not popular because they have big servers; they have big servers because they are popular. Successful open-source projects often don’t even have a server.

2 months ago 38 votes
Thoughts on releasing our first indie game

Two weeks ago we released Dice’n Goblins, our first game on Steam. This project allowed me to discover and learn a lot of new things about game development and the industry. I will use this blog post to write down what I consider to be the most important lessons from the months spent working on this. The development started around 2 years ago, when Daphnée started prototyping a dungeon crawler featuring a goblin protagonist. After a few iterations, the game’s combat started featuring dice, and then those dice could be used to make combos. In May 2024, the game was baptized Dice’n Goblins, and a Steam page was created featuring some early gameplay screenshots and footage. I joined the project full-time around this period. Almost one year later, after amassing more than 8000 wishlists, the game finally released on Steam on April 4th, 2025. It was received positively by the gaming press, with great reviews from PCGamer and LadiesGamers. It now sits at 92% positive reviews from players on Steam.

Building RPGs isn’t easy

As you can see from the above timeline, building this game took almost two years and two programmers. This is actually not that long if you consider that other indie RPGs have taken more than 6 years to come out. The main issue with the genre is that you need to create a believable world. In practice, this requires programming many different systems that will interact together to give the impression of a cohesive universe. Every time you add a new system, you need to think about how it will fit with all the existing game features. For example, players typically expect an RPG to have a shop system. Of course, this means designing a shop non-player character (or building) and creating a UI that is displayed when you interact with it. But it also means thinking through a lot of other systems: combat needs to be changed to reward the player with gold, every item needs a price tag, chests should sometimes reward the player with gold, etc… Adding too many systems can quickly get into scope-creep territory and make the development exponentially longer. But you can only get away with removing so much before your game stops being an RPG. Making a game without a shop might be acceptable, but the experience still needs to have more features than “walking around and fighting monsters” to feel complete. RPGs are also, by definition, narrative experiences. While some games have managed to get away with procedurally generating 90% of the content, in general you’ll need to get your hands dirty, write a story, and design a bunch of maps. Creating enough content for a 12h+ game without having the player go through repetitive grind will by itself take a lot of time. Having said all that, I definitely wouldn’t make any other kind of game than RPGs, because this is what I enjoy playing. I don’t think I would be able to nail what makes other genres fun if I don’t play them enough to understand what separates the good from the mediocre.

Marketing isn’t that complicated

Everyone in the game dev community knows that there are way too many games releasing on Steam. To stand out amongst the 50+ games coming out every day, it’s important not only to have a finished product but also to plan a marketing campaign well in advance. For most people coming from a software engineering background, like me, this can feel extremely daunting. Our education and jobs do not prepare us well for this kind of task. In practice, it’s not that complicated.
If your brain is able to provision a Kubernetes cluster, then you are most definitely capable of running a marketing campaign. Like anything else, it’s a skill that you can learn over time by practicing it and iteratively improving your methods. During the 8 months following the Steam page release, we tried basically everything you can think of as a way to promote the game. Every time something had a positive impact, we would do more of it, and we quickly stopped the things with low impact. The most important thing to keep in mind is your target audience. If you know who wants the kind of game you’re making, it is very easy to find where they hang out and talk to them. This is, however, not an easy question to answer for every game. For a long while, we were not sure who would like Dice’n Goblins. Is it people who like Etrian Odyssey? Fans of Dicey Dungeons? Nostalgic players of Paper Mario? For us, the answer was mostly #1, with a bit of #3. Once we figured out what our target audience was, how to communicate with them, and, most importantly, had a game that was visually appealing enough, marketing became very straightforward. This is why we really struggled to get our first 1000 wishlists, but getting the last 5000 was actually not that complicated.

Publishers aren’t magic

At some point, balancing the workload of actually building the game and figuring out how to market it felt like too much for a two-person team. We therefore did what many indie studios do, and decided to work with a publisher. We worked with Rogue Duck Interactive, who previously published Dice & Fold, a fairly successful dice roguelike. Without getting too much into details, it didn’t work out as planned and we decided, by mutual agreement, to go back to self-publishing Dice’n Goblins. The issue simply came from the audience question mentioned earlier. Even though Dice & Fold and Dice’n Goblins share some similarities, they target different audiences, which requires a completely different approach to marketing. The lesson learned is that when picking a publisher, the most important thing you can do is check that their current game catalog really matches the idea you have of your own game. If you’re building a fast-paced FPS, a publisher that only has experience with cozy simulation games will not be able to help you efficiently. In our situation, a publisher with experience in roguelikes and casual strategy games wasn’t a good fit for an RPG. In addition to that, I don’t think the idea of using a publisher to remove marketing toil and focus on making the game is that good an idea in the long term. While it definitely helps to remove the pressure of handling social media accounts and ad campaigns, new effort will be required to communicate and negotiate with the publishing team. In the end, the difference between the work saved and the work gained might not be worth selling a chunk of your game.

Conclusion

After all is said and done, one big question I haven’t answered is: would I do it again? The answer is definitely yes. Not only was building this game an extremely satisfying endeavor, but so much has been learned and built while doing it that it would be a shame not to go ahead and do a second one.

4 months ago 51 votes

More in programming

If Apple cared about privacy

Defaults matter

9 hours ago 4 votes
ARM is great, ARM is terrible (and so is RISC-V)

I’ve long been interested in new and different platforms. I ran Debian on an Alpha back in the late 1990s and was part of the Alpha port team; then I helped bootstrap Debian on amd64. I’ve got somewhere around 8 Raspberry Pi devices in active use right now, and the free NNCPNET Internet email service …

7 hours ago 2 votes
Many Hard Leetcode Problems are Easy Constraint Problems

In my first interview out of college I was asked the change counter problem: given a set of coin denominations, find the minimum number of coins required to make change for a given number. E.g. for USA coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies). I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview. But you only need dynamic programming if you're writing your own algorithm. It's really easy if you throw it into a constraint solver like MiniZinc and call it a day:

int: total;
array[int] of int: values = [10, 9, 1];
array[index_set(values)] of var 0..: coins;

constraint sum (c in index_set(coins)) (coins[c] * values[c]) == total;

solve minimize sum(coins);

You can try this online here. It'll give you a prompt to put in total and then give you successively better solutions:

coins = [0, 0, 37];
----------
coins = [0, 1, 28];
----------
coins = [0, 2, 19];
----------
coins = [0, 3, 10];
----------
coins = [0, 4, 1];
----------
coins = [1, 3, 0];
----------

Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function subject to constraints. They're hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems.[1] Here I'm using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.

More examples

This was a question in a different interview (which I thankfully passed): given a list of stock prices through the day, find the maximum profit you can get by buying one stock and selling one stock later. It's easy to do in O(n^2) time, or, if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem:

array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];

var int: buy;
var int: sell;
var int: profit = prices[sell] - prices[buy];

constraint sell > buy;
constraint profit > 0;

solve maximize profit;

Reminder, link to trying it online here. While working at that job, one interview question we tested out was: given a list, determine if three numbers in that list can be added or subtracted to give 0. This is a satisfaction problem, not an optimization problem: we don't need the "best answer", any answer will do. We eventually decided against it for being too tricky for the engineers we were targeting. But it's not tricky in a solver:

include "globals.mzn";

array[int] of int: numbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];
array[index_set(numbers)] of var {0, -1, 1}: choices;

constraint sum(n in index_set(numbers)) (numbers[n] * choices[n]) = 0;
constraint count(choices, -1) + count(choices, 1) = 3;

solve satisfy;

Okay, one last one, a problem I saw last year at Chipy AlgoSIG. Basically they pick some leetcode problems and we all do them. I failed to solve this one: given an array of integers heights representing the histogram's bar heights, where the width of each bar is 1, return the area of the largest rectangle in the histogram.
The "proper" solution is a tricky thing involving tracking lots of bookkeeping states, which you can completely bypass by expressing it as constraints: array[int] of int: numbers = [2,1,5,6,2,3]; var 1..length(numbers): x; var 1..length(numbers): dx; var 1..: y; constraint x + dx <= length(numbers); constraint forall (i in x..(x+dx)) (y <= numbers[i]); var int: area = (dx+1)*y; solve maximize area; output ["(\(x)->\(x+dx))*\(y) = \(area)"] There's even a way to automatically visualize the solution (using vis_geost_2d), but I didn't feel like figuring it out in time for the newsletter. Is this better? Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?" Constraint solvers runtimes are unpredictable and almost always than an ideal bespoke algorithm because they are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they'll do way better than a bad bespoke algorithm, and I'm not experienced enough in handwriting algorithms to consistently beat a solver. The real advantage of solvers, though, is how well they handle new constraints. Take the stock picking problem above. I can write an O(n²) algorithm in a few minutes and the O(n) algorithm if you give me some time to think. Now change the problem to Maximize the profit by buying and selling up to max_sales stocks, but you can only buy or sell one stock at a given time and you can only hold up to max_hold stocks at a time? That's a way harder problem to write even an inefficient algorithm for! While the constraint problem is only a tiny bit more complicated: include "globals.mzn"; int: max_sales = 3; int: max_hold = 2; array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]; array [1..max_sales] of var int: buy; array [1..max_sales] of var int: sell; array [index_set(prices)] of var 0..max_hold: stocks_held; var int: profit = sum(s in 1..max_sales) (prices[sell[s]] - prices[buy[s]]); constraint forall (s in 1..max_sales) (sell[s] > buy[s]); constraint profit > 0; constraint forall(i in index_set(prices)) (stocks_held[i] = (count(s in 1..max_sales) (buy[s] <= i) - count(s in 1..max_sales) (sell[s] <= i))); constraint alldifferent(buy ++ sell); solve maximize profit; output ["buy at \(buy)\n", "sell at \(sell)\n", "for \(profit)"]; Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking. Because my dad will email me if I don't explain this: "leetcode" is slang for "tricky algorithmic interview questions that have little-to-no relevance in the actual job you're interviewing for." It's from leetcode.com. ↩

7 hours ago 2 votes
btrfs on a Raspberry Pi

I’m something of a filesystem geek, I guess. I first wrote about ZFS on Linux 14 years ago, and even before I used ZFS, I had used ext2/3/4, jfs, reiserfs, xfs, and no doubt some others. I’ve also used btrfs. I last posted about it in 2014, when I noted it has some advantages over …

yesterday 3 votes
Stumbling upon

Something like a channel changer, for the web. That's what the idea was at first. But it led to a whole new path of discovery that even the site's creators couldn't have predicted.

yesterday 7 votes