More from Jibran’s Perspective
I’ve completed a freelance project I was working on for a few months, and have started saying no to new opportunities. It’s time to work on one of my own ideas again. This is part of my plan to start failing more.

I’ve decided to build a business sending gift cards to Pakistan - and eventually other countries in that corner of the world.

Why?

A few years ago I sent a gift card to a colleague in the UK. I found a number of very good options. They all had websites that inspired confidence, and used robust payment methods (Stripe in my case) that I could trust with my credit card.

I recently had to send a gift card to a colleague in Pakistan. I was confident I would find a bunch of great options; instead, I found only one I could even consider trusting with my money. I ended up using their service and the card was delivered, but I saw a number of problems:

- No trust building around card payments. There was no clear mention of which payment provider they used, so I did a bank transfer instead of using a credit card. This meant my payment was manually verified and the card was only sent after a few hours.
- There was no confirmation email about my order. I was worried enough to call their helpline to confirm that my order had gone through. Once they had sent the card (which I also had to confirm via phone), I only got a confirmation email the next day.
- To get an invoice so I could expense this, I had to send them an email. I’m still waiting on that invoice.
- Multiple colleagues chipped in on this gift card, so I had to collect the money from them and pay for the card myself. When I sent a gift card to the UK, I was able to include my colleagues in the process: they added their contributions directly to the gift card I selected, and a card for the total amount was sent to the recipient.
- Finally, there was no option for the receiver to choose which gift card they wanted; I had to choose for them. There is a “Universal Gift Card” they claim works at all merchants, which is the one I got, but redeeming it is slightly more complicated. Interestingly, my colleague didn’t open the email with the gift card because they thought it was a spam/scam/malicious email. Only after I asked if they had received the card did they open it.

I know a better user experience exists. I want to bring the same to Pakistan and solve my own problem at the same time.

Is there a market for this? I believe so, because:

- It’s a problem I’ve just faced.
- I’ve seen my wife deal with low-trust companies when sending gifts to Pakistan. Gift cards are different, but eventually I could also add the option to send physical gifts to the recipient.
- I’ve seen my employer deal with this. Recently a baby gift basket arrived 2 months after the baby was born. 🤯
- This is a recurring problem. People and companies need to send gift cards on birthdays, weddings, etc. With more companies starting to hire remotely in Pakistan, this could be a valuable service for businesses to subscribe to.

Validation?

I haven’t found an easy way to validate this idea. There is no community of “people sending gift cards to Pakistan” that I can tap into; that isn’t a cohort I can find in one place. I could make a list of B2B customers - companies that hire remotely in Pakistan - but I want to start with individual customers, because I’m starting from a place of solving my own problem. It should be possible to pivot to B2B if I don’t find any interest from individual customers.
Validation, then, involves me starting with a blog suggesting gift cards to send to Pakistan. I’ll use SEO to bring in traffic. If I see enough visitors, I can start building the business. This also means that if/when the actual product launches, I’ll already have a working distribution channel.

What if I’m wrong?

There’s a very strong possibility that I’m wrong about this idea - that I’ll spend a bunch of time for it to get nowhere, or that I’ve picked a problem that isn’t very valuable to solve. This is my unique brand of fear of failure. I used to think I didn’t fear failing, because I had already failed many times. Instead, my fear of failure manifests as a fear of picking the wrong thing and wasting time on it.

The way I’m dealing with this is to realize that if I don’t pick anything - which I have frequently done in the past - I have exactly a 0% chance of succeeding. Just trying something makes that probability greater than 0%. You miss 100% of the shots you don’t take.

Another thing that’s helping me is to time-box this idea. I will spend 6 weeks building the blog and populating it with as much useful content as possible. After that, I can spend an hour or two every week adding a few more pieces of content. I can start researching and working on a different idea after the 6-week period, and wait for the SEO to have an impact before deciding to continue or abandon this.
I have failed, and that is exactly what I had hoped for a few months ago in this blog post. This is a good failure. It has taught me things - lessons I can use in the future to avoid failing this way again. But first, a bit of context.

What did I fail at?

In February of 2024 I decided to try my hand at my first “Indie Hacker” hustle: something that would make me money on the internet without having to trade my time for it. A product, instead of the consultancy services I usually provide. I had seen a number of people on Twitter (X) rave about how well their bootstrap templates were doing, and I had just gotten out of a consultancy project where I needed to connect a Next.js frontend to a Django backend. I thought it was the perfect project to start my indie hacking journey. I put up a launch post and started working, updating a build log as I went along. I gave myself until 28th March 2024 to finish it.

That, of course, did not happen. Let’s talk about why I failed and what I learned.

Episode 1: The one where I don’t understand the meaning of MVP

My initial plan was to build a Django+Next.js boilerplate template that provided all of these:

- the base template with a Django backend & Next.js frontend
- working authentication between the backend & frontend
- a Dockerfile that would create the backend & frontend containers for deployment
- Terraform scripts to set up the infrastructure on AWS
- Celery + Redis for background task processing
- TailwindCSS for the frontend (comes mostly for free with Next.js)
- social auth

This looks achievable in a week or two of work - but only if you’re working on it full time. I failed to consider that I have a day job and a life. I was barely able to tick off the first two of these deliverables by the time my 6-week deadline came up.

As a good friend told me later, I should have focused on the minimum amount of value I could deliver. Just having the first two things on my list done would have been enough. I couldn’t have charged the $20 I had planned for, but I could have charged $1-$5 for just that. And if no one was interested in spending the cost of a coffee on the MVP of the template, that would have been a good signal that this wasn’t going anywhere in its current shape. Instead, by focusing on building something much bigger, I robbed myself of the ability to validate the idea quickly. I spent all my available time coding the template instead of trying to talk to potential customers about it.

Lesson 1: Scope down aggressively.

Episode 2: Where I jumped on the hype-wagon

I settled on building a boilerplate template because that’s what I had seen a lot of people on Twitter/X doing lately; I’m chalking this down to recency bias. I had no personal interest in a boilerplate template. It’s also not a product that I would personally use: I have so far made exactly one project that uses this tech stack. Most of my other projects are Django or Ruby on Rails. The most successful boilerplate templates I’ve come across are from people who made a bunch of projects in one tech stack, realized they were doing the same things over and over again, and packaged that work into a template they could reuse. Selling to others was a bonus at first, I guess.

I was very enthusiastic about the project at the start, but as time went on I had to force myself to work on it. My lack of interest in this type of project was a big factor. Another factor was that there was no way to see the fruits of my labor.
I am currently working on an analytics dashboard for another client (a RoR project), and every time I build a feature, I love to play around with it in my free time. I test how it works, make sure the UX is good, and just admire the app I’ve made. Without using my template to build new projects, I lacked that feedback loop. Without the loop, I quickly lost interest.

Lesson 2: Build something I can use myself. This isn’t a job I’m getting paid for, so until it starts generating money, the only motivation I have is to build something interesting for myself.

Episode 3: Where I had nothing for potential customers to play around with

This is related to the 1st lesson. Because I didn’t have a path to quickly get something out there, there was no way for me to get my “product” into the hands of people who could test it and provide feedback. I think the problem with a boilerplate-template style of product is that you can’t give people a half-baked thing and ask them to test it. Unlike other SaaS apps, there’s no mid-way version of a template. Customers have to “buy in” to using your template for any project they start. With SaaS, users can sign up and test, and then leave if they don’t like it. There’s no easy way of testing a template.

Lesson 3: Build something that can be tested by potential customers easily. For now, I’m going to stick with SaaS-style web apps.

Conclusion

Moving forward:

- I’ll be working on web app products that users can sign up for and test very quickly.
- My next few experiments/products will be things that I can use myself as well.
- I’ll post what I’m going to work on next when I decide, and when I have some time away from my job & the freelance projects currently in progress.
If you’re just looking for implementation instructions, skip my ramblings and go straight to the code here.

I’m currently working on my first project after deciding that I needed to fail more and practice finishing projects instead of abandoning them midway once they get “boring”. Anyways... this one is still in its interesting phase, so here’s a blog post with some things I learned yesterday while working on it. The project is a boilerplate template that should make it easy for developers to start a new project with a Django backend and a Next.js frontend, something I had to struggle with recently.

The problem

The first thing I’m looking to solve is authentication. That was my biggest challenge when working on the contracting project that inspired this template. While there are a number of good posts on how to set up authentication between Django & Next.js, nothing “definitive” came up, and I had to cobble together a weird mess of Django+DRF (Django Rest Framework) and Next.js+NextAuth, sharing a token from Django that was masquerading as a JWT token for Next.js. It wasn’t pretty and I knew I could do better.

The options

I considered 2 options for authenticating the Next.js frontend with the Django backend:

- Token based auth. On logging in, a user receives a token that is stored in local storage by the frontend and sent with every request to the backend.
- Session/cookie based auth. This is how authentication works in Django by default and is very easy to get started with - it basically comes for free out of the box when you start a new Django project.

While token based auth is what almost everyone suggests when using a Next.js frontend with any backend technology, I wanted to give session based auth a try. I was curious what it would take to make it work - if it was even possible.

tl;dr: It is possible to use cookie/session auth between Django & Next.js - though with a few constraints that make it less appealing than the token based solution. What follows are my notes on how to set it up, the problems I faced, and why I’m going to go with token based auth for the template instead.

Learning how CORS & Set-Cookie work

It took me a few hours to get my head around how cross-origin requests and cookies work together, but the actual implementation was surprisingly straightforward. This “mini-quest” gave me a chance to learn a lot about how CORS and cookies work, and I’m happy with the time I spent on it. These are the resources that helped me the most (all from MDN):

- Cross-Origin Resource Sharing
- Same-origin policy
- Using HTTP cookies
- Set-Cookie

And finally, there was a surprise waiting for me! Browsers are almost universally making changes to restrict 3rd party (cross-domain) cookies because of their privacy implications. Here’s a nice article from MDN about it: Saying goodbye to third-party cookies in 2024. This is the reason why, while this approach works, I won’t be using it in the template. More on that later.

Implementation

Implementing session based auth between Django & Next.js is pretty simple.

Django configuration

1. Install the django-cors-headers Python package.
2. Add "corsheaders" to your INSTALLED_APPS.
3. Add the "corsheaders.middleware.CorsMiddleware" middleware, right above the existing CommonMiddleware.
4. Set CORS_ALLOWED_ORIGINS = ["http://localhost:3000"], replacing the URL with your frontend URL.
5. Set CORS_ALLOW_CREDENTIALS = True.
6. Configure settings.py to allow cross-domain access for the session cookie:
   - Set SESSION_COOKIE_SAMESITE = "None"
   - Set SESSION_COOKIE_SECURE = True

(A consolidated settings.py sketch for the configuration above is included at the end of this post.)

Next.js configuration

No configuration is needed on the frontend. However, you do need to use the credentials: "include" option when using the fetch() API to access your backend. Here’s a minimal example:

```jsx
"use client";

import { BACKEND_URL } from "@/constants";

async function signIn() {
  const loginData = new FormData();
  loginData.append("username", "admin");
  loginData.append("password", "admin");

  return await fetch(`${BACKEND_URL}/accounts/login/`, {
    method: "POST",
    body: loginData,
    credentials: "include",
  });
}

async function whoAmI() {
  console.log(
    await fetch(`${BACKEND_URL}/accounts/me/`, {
      method: "GET",
      credentials: "include",
    }),
  );
}

export default function Home() {
  return (
    <main className="flex min-h-dvh w-full flex-col justify-around">
      <h1 className="text-center">Home</h1>
      <button className="" onClick={signIn}>
        Sign In
      </button>
      <button onClick={whoAmI}>Who Am I</button>
    </main>
  );
}
```

That’s it. That simple piece of code & configuration took me hours to find. Hopefully you can use this example to skip all that time spent trying to figure things out.

Side quest log: Initially, I was not using the credentials: "include" option in the signIn() function above, thinking that I didn’t need to send any cookies with the login call - only with the second API call to the /accounts/me endpoint. That mistake cost me about 2 hours of debugging time. If I had RTFM correctly the first time, I would have seen this:

include: Tells browsers to include credentials in both same- and cross-origin requests, and always use any credentials sent back in responses.

The credentials: "include" option not only controls whether cookies are sent, but also whether they are saved when returned by the server.

Why I won’t use this solution in the template

Browsers are phasing out 3rd party cookies (Saying goodbye to third-party cookies in 2024) and adding features to work around that restriction where needed. The simplest workaround, requiring the least change, is Cookies Having Independent Partitioned State (CHIPS). To enable CHIPS, you simply put a Partitioned flag on your Set-Cookie header, like so:

Set-Cookie: session_id=1234; SameSite=None; Secure; Path=/; Partitioned;

Unfortunately, there’s no straightforward way to do this in Django for now. There’s an open issue to resolve this but, looking at the comments, it’s unlikely to be solved anytime soon. Considering this, I opted to use the token based auth method for my template. I’ll write a blog post on that once I get it working over the next few days.
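For reference, here is roughly what the Django settings described in the Implementation section look like when put together. This is a minimal sketch based only on the steps above, not a drop-in file: the placeholder comments stand in for whatever apps and middleware your project already has.

```python
# settings.py (excerpt) - a minimal sketch of the configuration described above.
# Only the CORS and session-cookie pieces are shown.

INSTALLED_APPS = [
    # ... the usual django.contrib apps ...
    "corsheaders",
]

MIDDLEWARE = [
    # CorsMiddleware goes above CommonMiddleware so CORS headers are added
    # to responses before other middleware gets a chance to short-circuit.
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
    # ... the rest of the default middleware ...
]

# Allow the Next.js frontend to make cross-origin requests
# and to send/receive cookies with them.
CORS_ALLOWED_ORIGINS = ["http://localhost:3000"]  # replace with your frontend URL
CORS_ALLOW_CREDENTIALS = True

# Let the session cookie be sent on cross-site requests.
# SameSite=None requires the Secure flag, which is why both are set.
SESSION_COOKIE_SAMESITE = "None"
SESSION_COOKIE_SECURE = True
```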
Links: Gumroad page, Build Log

My accidental new year’s resolution was to work on the one problem that has plagued me my entire adult life: failure to commit and focus. I decided to work in 6 week “sprints” (inspired by Shape Up) and complete the projects I start - for some known definition of complete. This is the first project I have decided to work on. I’ll work on it from today (15th Feb 2024) to 28th Mar 2024, and follow up then with another post about how it went.

The project

The goal is to make & sell a Django + NextJS boilerplate template. What’s a boilerplate template? It’s the source code for a project that’s already set up with many things a new project needs, for example:

- Stripe subscriptions functionality
- background jobs
- a CSS framework
- user/team management

A great example is SaaS Pegasus, which seems like an amazing boilerplate loved by many people. My boilerplate is going to be much simpler - and also much cheaper. SaaS Pegasus comes with so many features that it’s worth the $249 starting price. I’m aiming for $5-$10.

Goals

My goal is to sell this boilerplate to at least 10 people - and have them be happy using it. This means:

- talking to prospective customers and seeing if this can be useful to them. People will have the option of scheduling a 15 minute pre-purchase call with me for $5 to see if this would be useful to them. The payment is purely to make sure that I only spend time talking to people who are somewhat serious about purchasing.
- providing excellent after-sales support. I’ll include a 60 minute setup call with me for any purchase. While a 60 minute call for a $10 sale isn’t scalable, it’s a great way for me to talk to customers at the start.
- having a no-questions-asked refund policy. My experience running an e-commerce store in the past tells me this is an amazing way to build trust.
- providing ongoing support, updates, and fixes over email.
- building a mailing list of people interested in my work whom I can email when I launch future projects.

The deliverable

The boilerplate will allow developers to quickly start a project that uses Django for the backend and NextJS for the frontend. My recent experience with another project in this tech stack required me to spend significant time on:

- figuring out how to set up authentication between Django & NextJS (this took the most time & effort)
- setting up Django Rest Framework so I could write APIs to be used by the frontend
- writing Docker files that would build 2 containers - backend & frontend
- writing Terraform scripts to deploy those containers to AWS ECS
- writing config & scripts to run the project on Gitpod so my team members could easily work on it

My plan is to build a boilerplate that already has most of those features built in, plus a few extras:

- Celery with Redis for background task processing
- Tailwind CSS for the frontend (in my project I used ChakraUI, but Tailwind is a better option for a boilerplate)
- if there’s demand for it, a stretch goal is to include social auth (sign in with Google/Apple/etc.)

Once complete, I’ll put this on Gumroad and create a landing page there. From then on, it’s all about marketing it; that’s the part I have no experience with and hope to learn the most from.

The marketing plan

This is the area where I lack any experience, so I’m not sure how I’m going to market this. Some ideas I have:

- build it in public on Twitter. I have a tiny Twitter following (312 followers) so I’m not sure how useful this will be, but I have to try something.
- share it with people asking how to set up Django & NextJS on forums like Reddit, Stack Overflow, and others.
- maybe write a blog post on how to set up Django & NextJS and link to the boilerplate from there. The blog post would provide all the steps necessary for the basic setup, and the boilerplate would go beyond that with something that’s ready to use.

The build log

I’d also like to keep a build log for this project: a daily note of what I did. I’ll keep it in my notes app Reflect and periodically publish it here in this blog post. These daily notes might also serve as content for my build-in-public marketing strategy.
More in programming
I recently wrote about How to Use SSH with FIDO2/U2F Security Keys, which I now use on almost all of my machines. The last one that needed this was my Raspberry Pi hooked up to my DEC vt510 terminal and IBM mechanical keyboard. Yes, I do still use that setup! To my surprise, generating a … Continue reading I Learned We All Have Linux Seats, and I’m Not Entirely Pleased →
Back in 2017–2020, while I was on the Blaze team at Google, I took on a 20% project that turned into a bit of an obsession: sandboxfs. Born out of my work supporting iOS development, it was my attempt to solve a persistent pain point that frustrated both internal teams and external users alike: Bazel’s
Dan Abramov in “Static as a Server”:

Static is a server that runs ahead of time.

“Static” and “dynamic” don’t have to be binaries that describe an entire application architecture. As Dan describes in his post, “static” or “dynamic”, it’s all just computers doing stuff. Computer A requests something (an HTML document, a PDF, some JSON, who knows) from computer B. That request happens via a URL and the response can be computed “ahead of time” or “at request time”. In this paradigm:

- “Static” is a server responding ahead of time to anticipated requests with identical responses.
- “Dynamic” is a server responding at request time to anticipated requests with varying responses.

But these definitions aren’t binaries; rather, they represent two ends of a spectrum. Ultimately, however you define “static” or “dynamic”, what you’re dealing with is a response generated by a server - i.e. a computer - so the question is really a matter of when you want to respond and with what.

Answering the question of when previously had a really big impact on what kind of architecture you inherited. But I think we’re realizing we need more nimble architectures that can flex and grow in response to changing when a request/response cycle happens and what you respond with.

Perhaps a poor analogy, but imagine you’re preparing holiday cards for your friends and family:

- “Static” is the same card sent to everyone.
- “Dynamic” is a hand-written card to each individual.

But between these two are infinite possibilities, such as:

- a hand-written card that’s photocopied and sent to everyone
- a printed template with the same hand-written note to everyone
- a printed template with a different hand-written note for just some people
- etc.

Are those examples “static” or “dynamic”? [Cue endless debate.]

The beauty is that in probing the space between the binaries - between what “static” means and what “dynamic” means - I think we develop a firmer grasp of what we mean by those words, as well as what we’re trying to accomplish with our code. I love tools that help you think of the request/response cycle across your entire application as an endlessly-changing set of computations that happen either “ahead of time”, “just in time”, or somewhere in between.
In my note taking application Edna I’ve implemented an unorthodox UI feature: in the editor, a top-left navigation element is only visible when you’re moving the mouse or when the mouse is over the element.

Here’s the UI hidden:

Here’s the UI visible:

The thinking is: when writing, you want max window space dedicated to the editor. When you move the mouse, you’re not writing, so I can show additional UI. In my case it’s a way to launch the note opener or open a starred or recently opened note.

Implementation details

Here’s how to implement this:

- The element we show/hide has CSS visibility set to hidden. That way the element is not shown, but it still takes part in layout, so we can test if the mouse is over it even when it’s not visible. To make the element visible we change the visibility to visible.
- We can register multiple HTML elements for tracking whether the mouse is over them; in typical usage we only register one.
- We install a mousemove handler. In the handler we set an isMouseMoving variable and clear it after a second of inactivity using setTimeout.
- For every registered HTML element we check if the mouse is over the element.

Svelte 5 implementation details

This can be implemented in any web framework. Here’s how to do it in Svelte 5. We want to use Svelte 5 reactivity, so we have:

```js
class MouseOverElement {
  element;
  isMoving = $state(false);
  isOver = $state(false);
}
```

An element is shown if (isMoving || isOver) == true.

To start tracking an element we use the registerMuseOverElement(el: HTMLElement): MouseOverElement function, typically in onMount. Here’s typical usage in a component:

```svelte
<script>
  import { onMount } from "svelte";
  import { registerMuseOverElement } from "./mouse-track.svelte.js";

  let element;
  let mouseOverElement;

  onMount(() => {
    mouseOverElement = registerMuseOverElement(element);
  });

  $effect(() => {
    if (mouseOverElement) {
      let shouldShow = mouseOverElement.isMoving || mouseOverElement.isOver;
      let style = shouldShow ? "visible" : "hidden";
      element.style.visibility = style;
    }
  });
</script>

<div bind:this={element}>...</div>
```

Here’s a full implementation of mouse-track.svelte.js:

```js
import { len } from "./util";

class MouseOverElement {
  /** @type {HTMLElement} */
  element;
  isMoving = $state(false);
  isOver = $state(false);

  /**
   * @param {HTMLElement} el
   */
  constructor(el) {
    this.element = el;
  }
}

/**
 * Returns true if the mouse event happened inside the element's bounding box.
 * @param {MouseEvent} e
 * @param {HTMLElement} el
 * @returns {boolean}
 */
function isMouseOverElement(e, el) {
  if (!el) {
    return false;
  }
  const rect = el.getBoundingClientRect();
  let x = e.clientX;
  let y = e.clientY;
  return x >= rect.left && x <= rect.right && y >= rect.top && y <= rect.bottom;
}

/** @type {MouseOverElement[]} */
let registered = [];

let timeoutId;

/**
 * @param {MouseEvent} e
 */
function onMouseMove(e) {
  // mark all registered elements as "mouse is moving" and schedule
  // clearing that flag after a second of inactivity
  clearTimeout(timeoutId);
  timeoutId = setTimeout(() => {
    for (let moe of registered) {
      moe.isMoving = false;
    }
  }, 1000);
  for (let moe of registered) {
    let el = moe.element;
    moe.isMoving = true;
    moe.isOver = isMouseOverElement(e, el);
  }
}

let didRegister;

/**
 * @param {HTMLElement} el
 * @returns {MouseOverElement}
 */
export function registerMuseOverElement(el) {
  if (!didRegister) {
    document.addEventListener("mousemove", onMouseMove);
    didRegister = true;
  }
  let res = new MouseOverElement(el);
  registered.push(res);
  return res;
}

/**
 * @param {HTMLElement} el
 */
export function unregisterMouseOverElement(el) {
  let n = registered.length;
  for (let i = 0; i < n; i++) {
    if (registered[i].element != el) {
      continue;
    }
    registered.splice(i, 1);
    if (len(registered) == 0) {
      document.removeEventListener("mousemove", onMouseMove);
      didRegister = false;
    }
    return;
  }
}
```
Dan Abramov on his blog (emphasis mine):

The division between the frontend and the backend is physical. We can’t escape from the fact that we’re writing client/server applications. Some logic is naturally more suited to either side. But one side should not dominate the other. And we shouldn’t have to change the approach whenever we need to move the boundary. What we need are the tools that let us compose across the stack.

What are these tools that allow us to easily change where an application’s computation happens between two computers? I think Dan is arguing that RSC is one of these tools. I tend to think of Remix (v1) as one of these tools. Let me try and articulate why by looking at the difference between how we thought of websites in a “JAMstack” architecture vs. how tools (like Remix) are changing that perspective.

- JAMstack: a website is a collection of static documents created by a static site generator and put on a CDN. If you want dynamism, you “opt out” of a static document for some host-specific solution whose architecture is starkly different from the rest of your site.
- Remix: a website is a collection of URLs that follow a request/response cycle handled by a server. Dynamism is “built in” to the architecture and handled on a URL-by-URL basis. You choose how dynamic you want any particular response to be: from a static document on a CDN for everyone, to a custom response on a request-by-request basis for each user.

As your needs grow beyond the basic “static files on disk”, a JAMstack architecture often ends up looking like a microservices architecture where you have disparate pieces that work together to create the final whole: your static site generator here, your lambda functions there, your redirect engine over yonder, each with its own requirements and lifecycles once deployed. Remix, in contrast, looks more like a monolith: your origin server handles the request/response lifecycle of all URLs at the time and in the manner of your choosing.

Instead of a build tool that generates static documents along with a number of distinct “escape hatches” to handle varying dynamic needs, your entire stack is “just a server” (that can be hosted anywhere you host a server) and you decide how and when to respond to each request: beforehand (at build), or just in time (upon request). No architectural escape hatches necessary.

You no longer have to choose upfront whether your site as a whole is “static” or “dynamic”, but rather how much dynamism (if any) is present on a URL-by-URL basis. It’s a sliding scale, a continuum of dynamism, from “completely static, the same for everyone” to “not one line of markup is the same from one request to another”, all of it modeled under the same architecture. And, crucially, that URL-by-URL decision can change as needs change. As Dan Abramov noted in a tweet:

[your] build doesn’t have to be modeled as server. but modeling it as a server (which runs once early) lets you later move stuff around.

Instead of opting into a single architecture up front, with escape hatches for every need that breaks the mold, you’re opting in to the request/response cycle of the web’s natural grain and deciding how to respond on a case-by-case basis. The web is not a collection of static documents. It’s a collection of URLs, of requests and responses, and tools that align themselves to this grain make composing sites with granular levels of dynamism so much easier.