More from Daniel Marino
I’ve always been fascinated to see what apps or workflows others are using in their day-to-day lives. Every now and then I learn about a new app or some cool trick I didn’t previously know. I doubt anyone seriously cares about what I’m using, but I figured I’d list them out anyway—if for no other reason than to keep a historical record at this point in time.

## Applications

- Alfred — I have a lifetime license, and I like it. No point in fixing something that isn’t broken. I primarily use it for app switching, but I also use it for math and to search for GIFs.
- Aseprite — Sometimes I do pixel art! Even if the UI is clunky and some keyboard shortcuts aren’t always convenient, it has some unique features that help facilitate creating pixel art.
- Audacity — I rarely use it, but sometimes it’s easier to make quick audio edits in Audacity than in a full-blown DAW.
- Bear — This is the note-taking, task-tracking app I’ve landed on. The UI is beautiful and it feels snappy. It syncs, so I can use it on my iPhone too.
- Chrome — I used Arc for the better part of 2024, but after they announced they were done working on it to focus on a new AI-powered browser, I peaced out. There are a couple of features I really missed, but I was able to find some extensions to fill those gaps: Copy Current Tab URL, Meetings Page Auto Closer for Zoom, Open Figma app, and JSON Formatter.
- Figma — I use it because it’s what we use at work. I’m happy enough with Figma.
- iTerm2 — It has a few features that make me choose it over Mac’s native Terminal app.
- Pixelmator Pro — I haven’t paid the Adobe tax for a long time, and it feels good. I started using Pixelmator because at the time it was the best alternative available, and I’m comfortable with it at this point. I don’t really use image editors often these days, so I probably won’t switch anytime soon.
- Reaper — My DAW of choice when composing music. It’s very customizable, easy-ish to learn, and the price is right. It also has a die-hard community, so I’m always able to find help when I need it.
- VS Code — I’ve tried a lot of code editors. I prefer Sublime’s UI over VS Code’s, but VS Code does a lot of things more easily than Sublime, so I put up with the UI.
- YouTube Music — I still miss Rdio. YouTube Music works well enough, I guess. Paying for it has the benefit of not seeing ads on YouTube.

## Command-line Tools

These aren’t apps per se, but they’re tools I use to help manage packages or that I use regularly when developing:

- Deno
- Eleventy
- Homebrew
- pure
- statikk
- Vite
- Volta
- yt-dlp

## Equipment

I have one computer and I use it for everything, and I’m okay with that. It’s more than powerful enough for work, composing music, making games, and occasionally playing games. Although I have a dedicated home office, lately I tend to work more on the go, often with just my laptop—whether that’s at a cafe, a coworking space, or even just moving around the house.

- 2021 M1 MacBook Pro
- AKG K240 Studio Headphones
- Arturia MiniLab MkII Controller
- Behringer UMC202HD USB Audio Interface
- Fender Squier Strat Guitar
- Fender Squier Bass Guitar
- Shure SM57

## Virtual Instruments

This is quite specific to composing music, so if that doesn’t interest you, feel free to stop reading here. This list is not exhaustive, as I’m regularly trying out new VSTs, but these are some staples that I use:

- 🎹 Arturia Analog Lab V (Intro) — My Arturia controller came with this software. It has over 500 presets, and I love exploring the variety of sounds.
- 🎸 Bass Grinder (Free) — I recently came across this VST, and it has a great crunchy overdrive sound for bass guitar.
- 🥁 Manda Audio Power Drum Kit — Even though you can use this for free, I paid the $9 because it is fantastic. The drums sound real and are great for all styles of music.
- 🎸 ML Amped Roots (Free) — What I like about this is that I get a great metal guitar sound out of the box without having to add pedals or chain other effects.
- 🥁 ML Drums (Free) — I just started experimenting with this, and the drum tones are amazing. The free setup is pretty limited, but I like that I can add on to the base drum kit to meet my needs rather than having to buy one big extensive drum VST.
- 🎹 Spitfire LABS — More variety of eclectic sounds.

I also use several of Reaper’s built-in VSTs for delay, EQ, reverb, pitch-shifting, and other effects. Reaper’s VSTs are plenty powerful for my needs and are much less CPU-intensive.
Over the past couple of years I’ve gotten into journaling. Recently I’ve been using a method where you’re given a single inspirational word as a prompt, and go from there. Unfortunately, the process of finding, saving, and accessing inspirational words was a bit of a chore:

1. Google a list of “366 inspirational words”.
2. Get taken to a blog bloated with ads and useless content, all in an effort to generate SEO cred.
3. Copy/paste the words into Notion.
4. Fix how the words get formatted, because Notion is weird and I have OCD about formatting text.

While this gets the job done, I felt like there was room to make it a more pleasant experience. All I really wanted was a small website that serves a single inspirational word on a daily basis, without cruft or ads. That would let me get the content I want without having to scroll through a long list. I also don't want to manage or store the list of words. Once I've curated a list of words, I want to be done with it.

## Creating a microsite

I love a good microsite, so I decided to create one that takes all the chore out of obtaining a [daily inspirational word](https://starzonmyarmz.github.io/daily-inspirational-word/). The website is built with all vanilla tech and doesn’t use any frameworks! It’s nice and lean, and its footprint is only 6.5 KB.

### Inspirational words

While I’m not a huge fan of AI, I did leverage ChatGPT to obtain 366 inspirational words. The benefit of ChatGPT was being able to get it to return the words as an array, cutting down on the tedium of converting the words I already had into one by hand. The words are stored in their own JSON file, and I use an async/await function to pull in the words and then process the data once they return (a rough sketch of that approach follows this post).

## Worth the effort

I find these little projects fun and exciting because the scope is super tight, which makes for a great opportunity to learn new things. It’s definitely an overengineered solution to my problem, but it’s a much more pleasant experience. And perhaps it will serve other people as well.
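The post doesn’t include the fetching code itself, but the approach it describes (a JSON word list pulled in with async/await) could look something like this minimal sketch. The `words.json` filename, the `#word` element, and the day-of-year indexing are my assumptions for illustration, not necessarily how the actual site works:

```js
// Minimal sketch: fetch a JSON array of 366 words and display one per day.
// Assumes a words.json file like ["ambition", "balance", ...] next to the page,
// and an element <p id="word"></p> in the HTML.
async function showDailyWord() {
  const response = await fetch("./words.json");
  const words = await response.json();

  // Index by day of the year so everyone sees the same word on a given date.
  const now = new Date();
  const startOfYear = new Date(now.getFullYear(), 0, 1);
  const msPerDay = 86400000;
  const dayOfYear = Math.floor((now - startOfYear) / msPerDay);

  document.querySelector("#word").textContent = words[dayOfYear % words.length];
}

showDailyWord();
```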
The other day I shared why I prefer coding prototypes rather than using design apps to create them. My prototyping environment has evolved over the years. I love to hear how others build prototypes, so I thought I’d share where I’m at now. Maybe you’ll find it useful.

## A single repository

I currently have a single GitHub repo housing all of my prototypes. I do this primarily so I don’t have to remember where any given prototype lives. They all live in the same place! Another benefit is that if I pull in a library or some CSS component, I can reuse it in other prototypes without having to go out and grab it from the source again.

## My old setup

In the past I used Sinatra hosted on Heroku. Having Ruby and a basic server (such as WEBrick) in the backend was pretty nice:

- I could code up some fairly complex prototypes with realistic URL schemes.
- Gems! If I needed fake data, I could use the Faker gem.
- If I wanted a table with 100 rows, I could easily generate it with a super simple loop.

But it got clunky. Spinning up a new prototype wasn’t very efficient. Setting up the URLs took time. Deploying to Heroku wasn’t always straightforward. Heroku also got rid of their free plan, and I didn’t want to go looking for a new service. Maybe I’m making excuses.

## A dumb server

Now I just use HTML, vanilla CSS, and JavaScript with no special backend. I don’t have Node.js running, and I don’t use a package manager like npm or Yarn. To start a server I navigate to the prototype directory in iTerm and run statikk. Easy peasy. This setup requires no upkeep, so I can focus on actually prototyping! I have a basic file structure for keeping prototypes separate. I typically use Preact for scaffolding. To import Preact or other npm packages I use esm.sh (see the sketch after this post). When I push changes to the GitHub repo, it’s deployed to GitHub Pages. I can then share a public URL with folks who need to review the prototype.

## One glaring problem

There’s only one issue I’ve run into with this setup, and it’s not even related to the setup! The Porchlight design system (which we use at Harvest) doesn’t have its styles or components available to consume publicly via CDN. Womp womp. I can get around the CSS issue: I end up copying the compiled CSS from the design system and pasting it into a new file in my prototype environment. But I’m kind of out of luck with the JavaScript: I have to code those components up from scratch. Although, I suppose I could copy the compiled components and paste them into my prototype environment too. The easiest fix is probably to introduce a package manager, but I’d rather not. We have talked about making the design system’s CSS and components available via CDN–it’s just a matter of getting around to it.

## Prototype!

So that’s my current prototyping setup. Maybe it will help inspire you to set up your own prototyping environment. Whether you use code or a design app–you should prototype!
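For anyone curious what that no-build setup looks like in practice, here’s a minimal sketch of a prototype page that loads Preact straight from esm.sh. The file name and the `htm` helper are my choices for illustration; the author’s actual scaffolding may differ:

```html
<!-- index.html: a self-contained prototype, no npm, Node.js, or build step -->
<!DOCTYPE html>
<html>
  <body>
    <div id="app"></div>
    <script type="module">
      // Preact and htm load as ES modules directly from esm.sh
      import { h, render } from "https://esm.sh/preact";
      import htm from "https://esm.sh/htm";

      const html = htm.bind(h);

      function App() {
        return html`<h1>Hello, prototype!</h1>`;
      }

      render(html`<${App} />`, document.querySelector("#app"));
    </script>
  </body>
</html>
```

Serve the directory with any static file server (the post uses statikk) and the page runs as-is.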
More in programming
In a fit of frustration, I wrote the first version of Kamal in six weeks at the start of 2023. Our plan to get out of the cloud was getting bogged down in enterprisey pricing and Kubernetes complexity. And I refused to accept that running our own hardware had to be that expensive or that convoluted. So I got busy building a cheap and simple alternative. Now, just two years later, Kamal is deploying every single application in our entire heritage fleet, and everything in active development. Finalizing a perfectly uniform mode of deployment for every web app we've built over the past two decades and still maintain. See, we have this obsession at 37signals: That the modern build-boost-discard cycle of internet applications is a scourge. That users ought to be able to trust that when they adopt a system like Basecamp or HEY, they don't have to fear eviction from the next executive re-org. We call this obsession Until The End Of The Internet. That obsession isn't free, but it's worth it. It means we're still operating the very first version of Basecamp for thousands of paying customers. That's the OG code base from 2003! Which hasn't seen any updates since 2010, beyond security patches, bug fixes, and performance improvements. But we're still operating it, and, along with every other app in our heritage collection, deploying it with Kamal. That just makes me smile, knowing that we have customers who adopted Basecamp in 2004, and are still able to use the same system some twenty years later. In the meantime, we've relaunched and dramatically improved Basecamp many times since. But for customers happy with what they have, there's no forced migration to the latest version. I very much had all of this in mind when designing Kamal. That's one of the reasons I really love Docker. It allows you to encapsulate an entire system, with all of its dependencies, and run it until the end of time. Kind of how modern gaming emulators can run the original ROM of Pac-Man or Pong to perfection and eternity. Kamal seeks to be but a simple wrapper and workflow around this wondrous simplicity. Complexity is but a bridge — and a fragile one at that. To build something durable, you have to make it simple.
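For readers who haven’t seen Kamal, a deployment is driven by a single YAML file. This is a minimal sketch based on Kamal’s documented `config/deploy.yml` format; the service name, image, and server address are placeholders:

```yaml
# config/deploy.yml: the entire deployment description for a Kamal app
service: myapp                  # name of the container/service
image: myuser/myapp             # Docker image to build and push
servers:
  - 192.0.2.10                  # host(s) that will run the app
registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD   # read from the environment, not committed
```

With that in place, `kamal setup` prepares a fresh host (installing Docker if needed) and `kamal deploy` builds the image, pushes it, and boots the container.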
Denmark has been reaping lots of delayed accolades from its relatively strict immigration policy lately. The Swedes and the Germans in particular are now eager to take inspiration from The Danish Model, given their predicaments. The very same countries that until recently condemned the lack of the open-arms/open-border policies they would champion as Moral Superpowers. But even in Denmark, thirty years after public opposition to mass immigration started getting real political representation, the consequences of culturally-incompatible descendants from MENAPT continue to stress the high-trust societal model. Here are just three major cases that have been covered in the Danish media in 2025 alone:

- Danish public schools are increasingly struggling with violence and threats against students and teachers, primarily from descendants of MENAPT immigrants. In schools with 30% or more immigrants, violence is twice as prevalent. This is causing a flight to private schools among parents who can afford it (including some Syrians!). Some teachers are quitting the profession as a result, saying "the Quran runs the classroom".
- Danish women are increasingly feeling unsafe in the nightlife. The mayor of the country's third-largest city, Odense, says he knows why: "It's groups of young men with an immigrant background that's causing it. We might as well be honest about that." But unfortunately, the only suggestion he had for dealing with the problem was that "when [the women] meet these groups... they should take a big detour around them".
- A soccer club from the infamous ghetto area of Vollsmose got national attention because every other team in their league refused to play them, due to the team's long history of violent assaults and death threats against opposing teams and referees. Bizarrely, this led to the situation where the team got to the top of its division because they'd "win" every forfeited match.

Problems of this sort have existed in Denmark for well over thirty years. So in a way, none of this should be surprising. But it actually is, because it shows that long-term assimilation just isn't happening at a scale that tackles these problems. In fact, the data shows the opposite: descendants of MENAPT immigrants are more likely to be violent and troublesome than their parents. That's an explosive point, because it blows up the thesis that time will solve these problems. It shows instead that time actually makes them worse. And then what? This is particularly pertinent in the analysis of Sweden. After the "far right" party of the Sweden Democrats got into government, new immigrant arrivals have plummeted. But unfortunately, the net share of immigrants is still increasing, in part because of family reunifications, and thus the problems continue. Meaning that even if European countries "close the borders", they're still condemned to deal with the damning effects of maladjusted MENAPT immigrant descendants for decades to come, if the intervention stops there. There are no easy answers here. Obviously, if you're in a hole, you should stop digging. And Sweden has done just that. But just because you aren't compounding the problem doesn't mean you've found a way out. Denmark proves to be both a positive example of minimizing the digging and a cautionary tale that the hole is still there.
One rabbit hole I can never resist going down is finding the original creator of a piece of art. This sounds simple, but it’s often quite difficult. The Internet is a maze of social media accounts that only exist to repost other people’s art, usually with minimal or non-existent attribution. A popular image spawns a thousand copies, each a little further from the original. Signatures get cropped, creators’ names vanish, and we’re left with meaningless phrases like “no copyright intended”, as if that magically absolves someone of artistic theft.

Why do I do this? I’ve always been a bit obsessive, a bit completionist. I’ve worked in cultural heritage for eight years, which has made me more aware of copyright and more curious about provenance. And it’s satisfying to know I’ve found the original source, that I can’t dig any further.

This takes time. It’s digital detective work, using tools like Google Lens and TinEye, and it’s not always easy or possible. Sometimes the original pops straight to the top, but other times it takes a lot of digging to find the source of an image.

So many of us have become accustomed to art as an endless, anonymous stream of “content”. A beautiful image appears in our feed, we give it a quick heart, and scroll on, with no thought for the human who sweated blood and tears to create it. That original artist feels distant, disconnected. Whatever benefit they might get from the “exposure” of their work going viral, they don’t get any of it if their name has been removed first.

I came across two examples recently that remind me it’s not just artists who miss out – it’s everyone who enjoys art.

I saw a photo of some traffic lights on Tumblr. I loved its misty, nighttime aesthetic, the way the bright colours of the lights cut through the fog, the totality of the surrounding darkness. But there was no name – somebody had just uploaded the image to their Tumblr page, it was reblogged a bunch of times, and then it appeared on my dashboard. Who took it? I used Google Lens to find the original photographer: Lucas Zimmerman. Then I discovered it was part of a series. And there was a sequel. I found interviews. Context. Related work. I found all this cool stuff, but only because I knew Lucas’s name.

Traffic Lights, by Lucas Zimmerman. Published on Behance.net under a CC BY‑NC 4.0 license, and reposted here in accordance with that license.

The second example was a silent video of somebody making tiny chess pieces, just captioned “wow”. It was clearly an edit of another video, with fast-paced cuts to suit a short attention span – and again with no attribution. This one was a little harder to find – I had to search several frames in Google Lens before I found a summary on a Russian website, which had a link to a YouTube video by metalworker and woodworker Левша (Levsha). This video is four times longer than the cut-up version I found, in higher resolution, and with commentary from the original creator. I don’t speak Russian, but YouTube has auto-translated subtitles. Now I know how this amazing set was made, and I have a much better understanding of the materials and techniques involved. (This includes the delightful name wenge wood, which I’d never heard before.)

https://youtube.com/watch?v=QoKdDK3y-mQ

A piece of art is more than just a single image or video. It’s a process, a human story. When art is detached from its context and creator, we lose something fundamental. Creators lose the chance to benefit from their work, and we lose the opportunity to engage with it in a deeper way. We can’t learn how it was made, find their other work, or discover how to make similar art for ourselves.

The Internet has done many wonderful things for art, but it’s also a machine for endless copyright infringement. It’s not just about generative AI and content scraping – those are serious issues, but this problem existed long before any of us had heard of ChatGPT. It’s a thousand tiny paper cuts. How many of us have used an image from the Internet because it showed up in a search, without a second thought for its creator? When Google Images says “images may be subject to copyright”, how many of us have really thought about what that means?

Next time you want to use an image from the web, check whether it’s shared under a license that allows reuse, and make sure you include the appropriate attribution – and if not, look for a different image.

Finding the original creator is hard, sometimes impossible. The Internet is full of shadows: copies of things that went offline years ago. But when I succeed, it feels worth the effort – both for the original artist and for myself. When I read a book or watch a TV show, the credits guide me to the artists, and I can appreciate both them and the rest of their work. I wish the Internet was more like that. I wish the platforms we rely on put more emphasis on credit and attribution, and on the people behind art.

The next time an image catches your eye, take a moment. Who made this? What does it mean? What’s their story?