More from Jibran’s Perspective
I’ve completed a freelance project I was working on for a few months, and have started saying no to new opportunities. It’s time to work on one of my own ideas again. This is part of my plan to start failing more.

I’ve decided to build a business sending gift cards to Pakistan - and eventually other countries in that corner of the world.

Why?

A few years ago I sent a gift card to a colleague in the UK. I found a number of very good options. They all had websites that inspired confidence and used robust payment methods (Stripe, in my case) that I could trust with my credit card.

I recently had to send a gift card to a colleague in Pakistan. I was confident I would find a bunch of great options; instead, I found only one that I could bring myself to trust with my money. I ended up using their service and the card was delivered, but I ran into a number of problems:

- No trust building around card payments. There was no clear mention of which payment provider they used, so I did a bank transfer instead of using a credit card. This meant my payment was manually verified and the card was only sent after a few hours.
- There was no confirmation email about my order. I was worried enough to call their helpline to confirm that my order had gone through. Once they had sent the card (which I also had to confirm via phone), I only got a confirmation email the next day.
- To get an invoice so I could expense this, I had to send them an email. I’m still waiting on that invoice.
- Multiple colleagues chipped in on this gift card. I had to collect the money from them and then pay for the card myself. When I sent a gift card to the UK, I was able to include my colleagues in the process: they added their contributions directly to the gift card I selected, and a card for the total amount was sent to the recipient.
- Finally, there was no option for the receiver to choose which gift card they wanted; I had to choose for them. There is a “Universal Gift Card” they claim works at all merchants, which is the one I got, but redeeming it is slightly more complicated. Interestingly, my colleague didn’t open the email with the gift card because they thought it was a spam/scam/malicious email. They only opened it after I asked if they had received the card.

I know a better user experience exists. I want to bring the same to Pakistan and solve my own problem at the same time.

Is there a market for this?

I believe so, because:

- It’s a problem I’ve just faced.
- I’ve seen my wife deal with low-trust companies when sending gifts to Pakistan. Gift cards are different, but eventually I could also add the option to send physical gifts to the recipient.
- I’ve seen my employer deal with this. Recently a baby gift basket arrived 2 months after the baby was born. 🤯
- This is a recurring problem. People & companies need to send gift cards for birthdays, weddings, etc.
- With more companies starting to hire remotely in Pakistan, this could be a valuable service for businesses to subscribe to.

Validation?

I haven’t found an easy way to validate this idea. There is no community of “people sending gift cards to Pakistan” that I can tap into; that isn’t a cohort I can find in one place. I could make a list of B2B customers - companies that hire remotely in Pakistan - but I want to start with individual customers, because I’m starting from a place of solving my own problem. It should be possible to pivot to B2B if I don’t find any interest from individual customers.
Validation, then, involves me starting with a blog suggesting gift cards to send to Pakistan. I’ll use SEO to bring in traffic. If I see enough visitors, I can start building the business. This also means that if/when the actual product launches, I’ll already have a working distribution channel.

What if I’m wrong?

There’s a very strong possibility that I’m wrong about this idea - that I’ll spend a bunch of time for it to get nowhere, or that I’ve picked a problem that isn’t very valuable to solve. This is my unique brand of fear of failure. I used to think I didn’t fear failing, because I had already failed many times. Instead, my fear of failure manifests as a fear of picking the wrong thing and wasting time on it.

The way I am dealing with this is to realize that if I don’t pick anything - which I have frequently done in the past - I have exactly a 0% chance of succeeding. Just trying something makes that probability greater than 0%. You miss 100% of the shots you don’t take.

Another thing that’s helping me is to time-box this idea. I will spend 6 weeks building the blog and populating it with as much useful content as possible. After that, I can spend an hour or two every week adding a few more pieces of content. I can start researching and working on a different idea after the 6-week period, and wait for the SEO to have an impact before deciding to continue or abandon this.
As part of a contracting project, I’ve been building an analytics dashboard for a feedback collection SaaS. The app is built in Ruby on Rails, and given all the nice things I’ve heard about Kamal, I decided to use it for deploying the app. The experience has been phenomenal, outside of some frustration with the initial deployment.

The app is deployed on a pretty standard AWS setup: a couple of EC2 servers hosting the web app running inside Docker containers, with a load balancer in front. One of the problems I faced during the initial deployment was forwarding headers from the AWS application load balancer to the RoR server running in the Docker container.

The challenge with Kamal is that it relies heavily on Traefik, and while Traefik is a great tool, it takes some getting used to. Its configuration is not very intuitive, and there’s no easy way to see how things are configured other than looking at the text logs. The Traefik documentation is pretty thorough, though, so a bit of searching led me to this CLI argument which needs to be passed to the Traefik container:

entrypoints.http.forwardedheaders.insecure: true

However, no matter what I tried, when I added this, the app container would stop responding to web requests. Without the config the container would work, but it would throw an exception about the Origin header not matching the configured hosts. After a lot of experimentation, I stumbled upon the other config I needed to add by pure luck:

entrypoints.http.address: ":80"

As far as I can tell, when I added the forwardedheaders config, the entrypoint no longer got the correct address configuration. I’m not sure if this is related to Kamal or Traefik.

Kamal deploy.yml

If you’re looking to replicate a similar setup, here’s the Kamal deploy.yml file I am using with this project to deploy to AWS, with a load balancer terminating the SSL connection and forwarding traffic to web servers that are configured via Kamal. As a bonus, this config also deploys Sidekiq for background tasks.

```yaml
service: <SERVICE NAME>
image: <IMAGE NAME>

ssh:
  user: ubuntu
  proxy: "ubuntu@A.B.C.D"

servers:
  web:
    hosts:
      - "A.B.C.D"
      - "A.B.C.D"
    labels:
      traefik.http.routers.<SERVICE NAME>-web.rule: Host(`<YOUR HOST NAME>`)
  sidekiq:
    hosts:
      - "A.B.C.D"
      - "A.B.C.D"
    traefik: false
    cmd: bundle exec sidekiq

registry:
  server: <AWS ACCOUNT ID>.dkr.ecr.<AWS REGION>.amazonaws.com
  username: AWS
  password: <%= %x(aws ecr get-login-password --region <AWS REGION>) %>

builder:
  local:
    arch: amd64 # Because I develop on an Apple Silicon machine, I need to use a build target

env:
  clear:
    DATABASE_URL: <DATABASE URL>
  secret:
    - RAILS_MASTER_KEY
    - DB_PASSWORD

traefik:
  args:
    entrypoints.http.address: ":80"
    entrypoints.http.forwardedheaders.insecure: true
    log.level: DEBUG
    accesslog: true
    accesslog.format: json
```
I have failed, and that is exactly what I had hoped for a few months ago in this blog post. This is a good failure. It has taught me things - lessons I can use in the future to avoid failing this way again. But first, a bit of context.

What did I fail at?

In February of 2024 I decided to try my hand at my first “Indie Hacker” hustle: something that would make me money on the internet without having to trade my time for it. A product, instead of the consultancy services I usually provide. I had seen a number of people on Twitter (X) rave about how well their boilerplate templates were doing, and I had just gotten out of a consultancy project where I needed to connect a Next.js frontend to a Django backend. I thought it was the perfect project to start my indie hacking journey.

I put up a launch post and started working, updating a build log as I went along. I gave myself until 28th March 2024 to finish it. That, of course, did not happen. Let’s talk about why I failed and what I learned.

Episode 1: The one where I don’t understand the meaning of MVP

My initial plan was to build a Django+Next.js boilerplate template that provided all of these:

- the base template that provided a Django backend & Next.js frontend
- working authentication between the backend & frontend
- a Dockerfile that would create the backend & frontend containers for deployment
- Terraform scripts to set up the infrastructure on AWS
- Celery + Redis for background task processing
- TailwindCSS for the frontend (comes mostly for free with Next.js)
- social auth

This looks achievable in a week or two of work - but only if you’re working on it full time. I failed to consider that I have a day job and a life. I was barely able to tick off the first two of these deliverables by the time my 6-week deadline came up.

As a good friend told me later, I should have focused on the minimum amount of value I could deliver. Just having the first two things on my list done would have been enough. I couldn’t have charged the $20 I had planned for, but I could have charged $1-$5 for just that. And if no one was interested in spending the cost of a coffee on the MVP of the template, that would have been a good signal that this wasn’t going anywhere in its current shape. Instead, by focusing on building something much bigger, I robbed myself of the ability to validate the idea quickly. I spent all my available time coding the template instead of trying to talk to potential customers about it.

Lesson 1: Scope down aggressively.

Episode 2: Where I jumped on the hype-wagon

I settled on building a boilerplate template because that’s what I had seen a lot of people on Twitter/X doing lately; I’m chalking this down to recency bias. I had no personal interest in a boilerplate template. It’s also not a product that I would personally use. I have so far made one project that uses this tech stack; most of my other projects are Django or Ruby on Rails. The most successful boilerplate templates I’ve come across are from people who made a bunch of projects in one tech stack, realized they were doing the same things over and over again, and packaged them into a template they could reuse. Selling to others was a bonus at first, I guess.

I was very enthusiastic about the project at the start, but as time went on I had to force myself to work on it. My lack of interest in this type of project was a big factor. Another factor was that there was no way to see the fruits of my labor.
I am currently working on an analytics dashboard for another client (a RoR project), and every time I build a feature, I love to play around with it in my free time. I test how it works, make sure the UX is good, and just poke around and admire the app I’ve made. Without using my template to build new projects, I lacked that feedback loop. Without the loop, I quickly lost interest.

Lesson 2: Build something I can use myself. This isn’t a job I’m getting paid for, so until it starts generating money, the only motivation I have is to build something interesting for myself.

Episode 3: Where I had nothing for potential customers to play around with

This is related to the first lesson. Because I didn’t have a path to quickly get something out there, there was no way for me to get my “product” into the hands of people who could test it and provide feedback. I think the problem with a boilerplate template style of product is that you can’t give people a half-baked thing and ask them to test it. Unlike other SaaS apps, there’s no mid-way version of a template. Customers have to “buy in” to use your template for any project they are starting. With SaaS, users can sign up and test, and then leave if they don’t like it. There’s no easy way of testing a template.

Lesson 3: Build something that can be tested by potential customers easily. For now, I’m going to stick with SaaS-style web apps.

Conclusion

Moving forward:

- I’ll be working on web app products that users can sign up for and test very quickly.
- My next few experiments/products will be things that I can use myself as well.

I’ll post what I’m going to work on next when I decide, and when I have some time away from my job & the freelance projects that are currently in progress.
If you’re just looking for implementation instructions, skip my ramblings and go straight to the code here.

I’m currently working on my first project after deciding that I needed to fail more and practice finishing projects instead of abandoning them midway once they got “boring”. Anyways… this one is still in its interesting phase, so here’s a blog post with some things I learned yesterday while working on it. The project is a boilerplate template that should make it easy for developers to start a new project with a Django backend and a Next.js frontend, something I had to struggle with recently.

The problem

The first thing I’m looking to solve is authentication. That was my biggest challenge when working on the contracting project that inspired this template. While there are a number of good posts about how to set up authentication between Django & Next.js, nothing “definitive” came up, and I had to cobble together a weird mess of Django+DRF (Django Rest Framework) and Next.js+NextAuth, sharing a token from Django that was masquerading as a JWT token for Next.js. It wasn’t pretty, and I knew I could do better.

The options

I considered two options for authenticating the Next.js frontend with the Django backend:

- Token-based auth. On logging in, a user receives a token that is stored in local storage by the frontend and sent with every request to the backend.
- Session/cookie-based auth. This is how authentication works in Django by default and is very easy to get started with - it basically comes for free out of the box when you start a new Django project.

While token-based auth is what almost everyone suggests when using a Next.js frontend with any backend technology, I wanted to give session-based auth a try. I was curious what it would take to make it work - if it was even possible.

tl;dr: It was possible to use cookie/session auth between Django & Next.js - though with a few constraints which make it less appealing than the token-based solution.

What follows are my notes on how to set it up, the problems I faced, and why I’m going to go with token-based auth for the template instead.

Learning how CORS & Set-Cookie work

It took me a few hours to get my head around how cross-origin requests and cookies work together, but the actual implementation was surprisingly straightforward. This “mini-quest” gave me a chance to learn a lot about how CORS and cookies work, and I’m happy with the time I spent on this. These are the resources which helped me the most (all are from MDN):

- Cross-Origin Resource Sharing
- Same-origin policy
- Using HTTP cookies
- Set-Cookie

And finally, there was a surprise waiting for me! Browsers are almost universally making changes to restrict 3rd-party or cross-domain cookies because of their privacy implications. Here’s a nice article from MDN about it: Saying goodbye to third-party cookies in 2024. This is the reason why, while this approach works, I won’t be using it in the template. More on that later.

Implementation

Implementing session-based auth between Django & Next.js is pretty simple.

Django configuration

- Install the django-cors-headers Python package.
- Add "corsheaders", to your INSTALLED_APPS.
- Add the "corsheaders.middleware.CorsMiddleware", middleware, right above the existing CommonMiddleware.
- Set CORS_ALLOWED_ORIGINS = ["http://localhost:3000"], replacing the URL with your frontend URL.
- Set CORS_ALLOW_CREDENTIALS = True.
- Configure settings.py to allow cross-domain access for the session cookie:
  - Set SESSION_COOKIE_SAMESITE = "None"
  - Set SESSION_COOKIE_SECURE = True
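Put together, here’s a minimal settings.py sketch of the configuration above (a sketch only, assuming django-cors-headers is installed and the frontend runs at http://localhost:3000 - adjust the origin for your setup):

```python
# settings.py - minimal sketch of the CORS + cross-site session cookie setup described above.
# Assumes the django-cors-headers package is installed and the frontend origin is http://localhost:3000.

INSTALLED_APPS = [
    # ... your existing apps ...
    "corsheaders",
]

MIDDLEWARE = [
    # ...
    "corsheaders.middleware.CorsMiddleware",  # must sit above CommonMiddleware
    "django.middleware.common.CommonMiddleware",
    # ...
]

# Allow the frontend origin and let browsers send credentials (cookies) on cross-origin requests.
CORS_ALLOWED_ORIGINS = ["http://localhost:3000"]
CORS_ALLOW_CREDENTIALS = True

# Allow the session cookie to be sent on cross-site requests.
# Browsers require SameSite=None cookies to also be marked Secure.
SESSION_COOKIE_SAMESITE = "None"
SESSION_COOKIE_SECURE = True
```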
Next.js configuration

No configuration is needed on the frontend. However, you do need to use the credentials: "include" option when using the fetch() API to access your backend. Here’s a minimal example.

```jsx
"use client";
import { BACKEND_URL } from "@/constants";

async function signIn() {
  const loginData = new FormData();
  loginData.append("username", "admin");
  loginData.append("password", "admin");
  return await fetch(`${BACKEND_URL}/accounts/login/`, {
    method: "POST",
    body: loginData,
    credentials: "include",
  });
}

async function whoAmI() {
  console.log(
    await fetch(`${BACKEND_URL}/accounts/me/`, {
      method: "GET",
      credentials: "include",
    }),
  );
}

export default function Home() {
  return (
    <main className="flex min-h-dvh w-full flex-col justify-around">
      <h1 className="text-center">Home</h1>
      <button className="" onClick={signIn}>
        Sign In
      </button>
      <button onClick={whoAmI}>Who Am I</button>
    </main>
  );
}
```

That’s it. That simple piece of code & configuration took me hours to find. Hopefully you can use this example to skip all that time spent trying to figure things out.

Side quest log: Initially, I was not using the credentials: "include" option in the signIn() function above, thinking that I didn’t need to send any cookies with the login call, only with the second API call to the /accounts/me endpoint. That mistake cost me about 2 hours of debugging time. If I had RTFM correctly the first time, I would have seen this:

include: Tells browsers to include credentials in both same- and cross-origin requests, and always use any credentials sent back in responses.

The credentials: "include" option not only controls whether cookies are sent, but also whether they are saved when returned by the server.

Why I won’t use this solution in the template

Browsers are phasing out 3rd-party cookies (Saying goodbye to third-party cookies in 2024) and adding features to work around that restriction where needed. The simplest workaround, requiring the least change, is Cookies Having Independent Partitioned State (CHIPS). To enable CHIPS, you simply put a Partitioned flag on your Set-Cookie header, like so:

Set-Cookie: session_id=1234; SameSite=None; Secure; Path=/; Partitioned;

Unfortunately, there’s no straightforward way to do this in Django for now. There’s an open issue to resolve this but, looking at the comments, it likely won’t be solved anytime soon. Considering this, I opted to use the token-based auth method for my template. I’ll write a blog post on that once I get it working over the next few days.
More in programming
The first in a series of posts about doing things the right way
The best engineering teams take control of their tools. They help develop the frameworks and libraries they depend on, and they do this by running production code on edge - the unreleased next version. That's where progress is made, and that's where participation matters most.

This sounds scary at first. Edge? Isn't that just another word for danger? What if there's a bug?! Yes, what if? Do you think bugs just magically appear or disappear? No, they're put there by programmers and removed by the very same. If you want bug-free frameworks and libraries, you have to work for it, but if you do, the reward for your responsibility is increased engineering excellence.

Take Rails 8.1 as an example. We just released the first beta version at Rails World, but Shopify, GitHub, 37signals, and a handful of other frontier teams have already been running this code in production for almost a year. Of course, there were bugs along the way, but good automated testing and diligent programmers caught virtually all of them before they went to production.

It wasn't always this way. Once upon a time, I felt like I had one of the only teams running Rails on edge in production. But now two of the most important web apps in the world are doing the same! At an incredible scale and criticality. This has allowed both of them, and the few others with the same frontier ambition, to foster a truly elite engineering culture. One that isn't just a consumer of open source software, but a real-time co-creator.

This is a step function in competence and prowess for any team. It's also an incredible motivation boost. When your programmers are able to directly influence the tools they're working with, they're far more likely to do so, and thus they go deeper, learn more, and create connections to experts in the same situation elsewhere. But this requires being able to immediately use the improvements or bug fixes they help devise. It doesn't work if you sit around waiting patiently for the next release before you dare dive in.

Far more companies could do this. Far more companies should do this. Whether it's with Ruby, Rails, Omarchy, or whatever you're using, your team could level up by getting more involved, taking responsibility for finding issues on edge, and reaping the reward of excellence in the process. So what are you waiting on?
A deep dive into the Action Cache, the CAS, and the security issues that arise from using Bazel with a remote cache but without remote execution
(I present to you my stream of consciousness on the topic of casing as it applies to the web platform.)

I’m reading about the new command and commandfor attributes - which I’m super excited about, declarative behavior invocation in HTML? YES PLEASE!! - and one thing that strikes me is the casing in these APIs.

For example, the command attribute has a variety of values in HTML which correspond to APIs in JavaScript. The show-popover attribute value maps to .showPopover() in JavaScript. hide-popover maps to .hidePopover(), etc. So what we have is:

- lowercase in attribute names, e.g. commandfor="..."
- kebab-case in attribute values, e.g. show-popover
- camelCase for JS counterparts, e.g. showPopover()

After thinking about this a little more, I remember that HTML attribute names are case insensitive, so the browser will normalize them to lowercase during parsing. Given that, I suppose you could write commandFor="..." but it’s effectively the same. Ok, lowercase attribute names in HTML make sense.

The related popover attributes follow the same convention:

- popovertarget
- popovertargetaction

And there are many other attribute names in HTML that are lowercase, e.g.:

- maxlength
- novalidate
- contenteditable
- autocomplete
- formenctype

So that all makes sense. But wait, there are some attribute names with hyphens in them, like aria-label="..." and data-value="...". So why isn’t it command-for="..."? Well, upon further reflection, I suppose those attributes were named that way for extensibility’s sake: they are essentially wildcard attributes that represent a family of attributes all under the same namespace: aria-* and data-*.

But wait, isn’t that an argument for doing popover-target and popover-target-action? Or command and command-for?

But wait (I keep saying that), there are kebab-case attribute names in HTML - like http-equiv on the <meta> tag, or accept-charset on the form tag - but those seem more like legacy exceptions.

It seems like the only answer here is: there is no rule. Naming is driven by convention, and decisions are made on a case-by-case basis. But if I had to summarize, the default casing for new APIs tends to follow the rules I outlined at the start (and what’s reflected in the new command APIs):

- lowercase for HTML attribute names
- kebab-case for HTML attribute values
- camelCase for JS counterparts

Let’s not even get into SVG attribute names. We need one of those “bless this mess” signs that we can hang over the World Wide Web.
Greetings everyone! You might have noticed that it's September and I don't have the next version of Logic for Programmers ready. As penance, here's ten free copies of the book.

So a few months ago I wrote a newsletter about how we use nondeterminism in formal methods. The overarching idea: nondeterminism is when multiple paths are possible from a starting state. A system preserves a property if it holds on all possible paths. If even one path violates the property, then we have a bug. An intuitive model of this is that when faced with a nondeterministic choice, the system always makes the worst possible choice. This is sometimes called demonic nondeterminism and is favored in formal methods because we are paranoid to a fault.

The opposite would be angelic nondeterminism, where the system always makes the best possible choice. A property then holds if any possible path satisfies that property.[1] This is not as common in FM, but it still has its uses! "Players can access the secret level" or "We can always shut down the computer" are reachability properties: something is possible even if not actually done. In broader computer science research, I'd say that angelic nondeterminism is more popular, due to its widespread use in complexity analysis and programming languages.

Complexity Analysis

P is the set of all "decision problems" (basically, boolean functions) that can be solved in polynomial time: there's an algorithm that's worst-case in O(n), O(n²), O(n³), etc.[2] NP is the set of all problems that can be solved in polynomial time by an algorithm with angelic nondeterminism.[3] For example, the question "does list l contain x" can be solved in O(1) time by a nondeterministic algorithm:

```
fun is_member(l: List[T], x: T): bool {
  if l == [] {return false};
  guess i in 0..<len(l);
  return l[i] == x;
}
```

Say we call is_member([a, b, c, d], c). The best possible choice would be to guess i = 2, which would correctly return true. Now call is_member([a, b], d). No matter what we guess, the algorithm correctly returns false. Ergo, O(1). NP stands for "Nondeterministic Polynomial". (And I just now realized something pretty cool: you can say that P is the set of all problems solvable in polynomial time under demonic nondeterminism, which is a nice parallel between the two classes.)

Computer scientists have proven that angelic nondeterminism doesn't give us any more "power": there are no problems solvable with AN that aren't also solvable deterministically. The big question is whether AN is more efficient: it is widely believed, but not proven, that there are problems in NP but not in P. Most famously, "Is there any variable assignment that makes this boolean formula true?" A polynomial AN algorithm is again easy:

```
fun SAT(f(x1, x2, …: bool): bool): bool {
  N = num_params(f)
  for i in 1..=N {
    guess x_i in {true, false}
  }
  return f(x_1, x_2, …)
}
```

The best deterministic algorithms we have for the same problem are worst-case exponential in the number of boolean parameters. This is a really frustrating problem, because real computers don't have angelic nondeterminism, so problems like SAT remain hard. We can solve most "well-behaved" instances of the problem in reasonable time, but the worst-case instances get intractable real fast.

Means of Abstraction

We can directly turn an AN algorithm into a (possibly much slower) deterministic algorithm, such as by backtracking. This makes AN a pretty good abstraction over what an algorithm is doing.
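To make that concrete, here's a minimal sketch of my own (not from the newsletter) of removing the angelic "guess" from the SAT example above by brute-force search, the simplest form of the backtracking idea: every guess becomes a loop over all possible choices, which is exactly where the exponential cost comes from.

```python
# A deterministic stand-in for the angelic SAT sketch above: instead of
# "guessing" each boolean, we exhaustively try every assignment. One good
# path (a satisfying assignment) is enough to answer "yes" - but finding it
# deterministically costs O(2^n) in the worst case.
from itertools import product
from typing import Callable

def sat_brute_force(f: Callable[..., bool], n_params: int) -> bool:
    for assignment in product([False, True], repeat=n_params):
        # Each `assignment` is one complete sequence of guesses.
        if f(*assignment):
            return True
    return False

# Usage: is (x1 or x2) and (not x1) satisfiable? Yes: x1=False, x2=True.
print(sat_brute_force(lambda x1, x2: (x1 or x2) and not x1, 2))  # True
```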
Does the regex (a+b)\1+ match "abaabaabaab"? Yes, if the regex engine nondeterministically guesses that it needs to start at the third letter and make the group aab. How does my PL's regex implementation find that match? I dunno, backtracking or NFA construction or something; I don't need to know the deterministic specifics in order to use the nondeterministic abstraction.

Neel Krishnaswami has a great definition of 'declarative language': "any language with a semantics has some nontrivial existential quantifiers in it". I'm not sure if this is identical to saying "a language with an angelic nondeterministic abstraction", but they must be pretty close, and all of his examples match:

- SQL's selects and joins
- Parsing DSLs
- Logic programming's unification
- Constraint solving

On top of that I'd add CSS selectors and planners' actions; all nondeterministic abstractions over a deterministic implementation. He also says that the things programmers hate most in declarative languages are features that "expose the operational model": constraint solver search strategies, Prolog cuts, regex backreferences, etc. Which again matches my experiences with angelic nondeterminism: I dread features that force me to understand the deterministic implementation. But they're necessary, since P probably != NP and so we need to worry about operational optimizations.

Eldritch Nondeterminism

If you need to know the ratio of good/bad paths, the number of good paths, a probability, or anything more than "there is a good path" or "there is a bad path", you are beyond the reach of heaven or hell.

[1] Angelic and demonic nondeterminism are duals: angelic returns "yes" if some choice: correct, while demonic returns "no" if !all choice: correct, which is the same as some choice: !correct.

[2] Pet peeve about Big-O notation: O(n²) is the set of all algorithms that, for sufficiently large problem sizes, grow no faster than quadratically. "Bubblesort has O(n²) complexity" should be written Bubblesort in O(n²), not Bubblesort = O(n²).

[3] To be precise: solvable in polynomial time by a Nondeterministic Turing Machine, a very particular model of computation. We can broadly talk about P and NP without framing everything in terms of Turing machines, but some details of complexity classes (like the existence of "weak NP-hardness") kinda need Turing machines to make sense.