I've been wanting hardware-accelerated video encoding on my Linux machine for quite a while now, but ask anybody who's used a Linux machine and they'll tell you of the horrors of Nvidia or AMD drivers. Intel, on the other hand, seems to be taking things in a much different, much more positive direction with their Arc graphics cards. I've heard positive things from people who use them about the relatively painless driver experience, at least when compared to Nvidia. So I went out and grabbed a used Intel Arc A750 locally and installed it in my server running Rocky 9.4.

This post documents what I did to install the required drivers and support libraries to be able to encode videos with ffmpeg using the GPU's hardware. It is specific to Red Hat Enterprise Linux and Red Hat-compatible distros (Rocky, Oracle, etc.). If you're using any other distro, this post likely won't help you.

Driver Setup

Driver support for Intel Arc cards was added to the Linux kernel in version 6.2, but RHEL 9 ships a 5.14 kernel. Thankfully, Intel provides a repo specific to RHEL 9 where they offer precompiled backports of the drivers against RHEL 9's stable kernel ABI.

Add the Intel Repo

Add the following repo file to /etc/yum.repos.d. Note that I'm using RHEL 9.4 here; you will likely need to change 9.4 to whatever version you are using (check /etc/redhat-release), then update the baseurl value and ensure that URL exists.

```
[intel-graphics-9.4-unified]
name=Intel graphics 9.4 unified
enabled=1
gpgcheck=1
baseurl=https://repositories.intel.com/gpu/rhel/9.4/unified/
gpgkey=https://repositories.intel.com/gpu/intel-graphics.key
```

Run dnf clean all for good measure.

Install the Software

```
dnf install intel-opencl \
    intel-media \
    intel-mediasdk \
    libmfxgen1 \
    libvpl2 \
    level-zero \
    intel-level-zero-gpu \
    mesa-dri-drivers \
    mesa-vulkan-drivers \
    mesa-vdpau-drivers \
    libdrm \
    mesa-libEGL \
    mesa-lib
```

Reboot your machine for good measure.

Verify Device Availability

You can verify that your GPU is seen using the following:

```
clinfo | grep "Device Name"
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics
```

ffmpeg Setup

Install ffmpeg

ffmpeg needs to be compiled with libvpl support. For simplicity's sake, you can use this pre-compiled static build of ffmpeg. Download the ffmpeg-master-latest-linux64-gpl.tar.xz binary.

Verify ffmpeg support

If you're going to use a different copy of ffmpeg, or compile it yourself, you'll want to verify that it has the required support using the following:

```
./ffmpeg -hide_banner -encoders | grep "qsv"
 V..... av1_qsv     AV1 (Intel Quick Sync Video acceleration) (codec av1)
 V..... h264_qsv    H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (Intel Quick Sync Video acceleration) (codec h264)
 V..... hevc_qsv    HEVC (Intel Quick Sync Video acceleration) (codec hevc)
 V..... mjpeg_qsv   MJPEG (Intel Quick Sync Video acceleration) (codec mjpeg)
 V..... mpeg2_qsv   MPEG-2 video (Intel Quick Sync Video acceleration) (codec mpeg2video)
 V..... vp9_qsv     VP9 video (Intel Quick Sync Video acceleration) (codec vp9)
```

If you don't have any qsv encoders, your copy of ffmpeg isn't built correctly.

Converting a Video

I'll be using this video from the Wikimedia Commons for reference, if you want to play along at home. This is a VP9-encoded video in a webm container. Let's re-encode it to H.264 in an MP4 container.
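As an aside not in the original walkthrough: if you want to confirm what you're starting from, the static build should also include an ffprobe binary alongside ffmpeg, and a quick query like this reports the source video stream's codec (it should print vp9 for this file):

```
# Ask ffprobe for just the first video stream's codec name.
./ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name -of default=noprint_wrappers=1 \
    Sea_to_Sky_Highlights.webm
```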
I'll skip doing any other transformations to the video for now, just to keep things simple.

```
./ffmpeg -i Sea_to_Sky_Highlights.webm -c:v h264_qsv Sea_to_Sky_Highlights.mp4
```

The key parameter here is telling ffmpeg to use the h264_qsv encoder for the video, which is the hardware-accelerated codec. Let's see what kind of difference using hardware acceleration makes:

| Encoder | Time | Average FPS |
| --- | --- | --- |
| h264_qsv (Hardware) | 3.02s | 296 |
| libx264 (Software) | 1m2.996s | 162 |

Using hardware acceleration cut the wall-clock time from roughly 63 seconds to about 3 seconds, a reduction of about 95% (roughly a 21x speedup). Naturally your numbers won't be the same as mine, as there are lots of variables at play here, but I feel this is a good demonstration that this was a worthwhile investment.
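One closing aside that goes beyond the original walkthrough: in practice you'd usually give the encoder a quality or bitrate target rather than rely on its defaults. A minimal sketch, assuming the same libvpl-enabled static build, might look like the following; the quality value of 25, the slow preset, and the AAC audio settings are illustrative choices of mine, not settings from the post:

```
# Hardware H.264 encode with a constant-quality target via -global_quality
# (lower values mean higher quality for the QSV encoders), plus AAC audio
# so the result plays cleanly from an MP4 container.
./ffmpeg -i Sea_to_Sky_Highlights.webm \
    -c:v h264_qsv -global_quality 25 -preset slow \
    -c:a aac -b:a 128k \
    Sea_to_Sky_Highlights.mp4
```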
According to the Wayback Machine, I launched my website over a decade ago, in 2013. Just that thought alone makes me feel old, but going back through the old snapshots of my websites left me with a profound feeling of longing for a time when having a website used to be a novel and enjoyable experience.

The Start

Although I no longer have the original invoice, I believe I purchased the ianspence.com domain in 2012 when I was studying web design at The Art Institute of Vancouver. Yes, you read that right, I went to art school. Not just any, mind you, but one that was so catastrophically corrupt and profit-focused that it imploded as I was studying there. But that's a story for another time.

I obviously didn't have any real plan for what I wanted my website to be, or what I would use it for. I simply wanted a web property where I could play around with what I was learning in school and expand my knowledge of PHP. I hosted my earliest sites from a very used Dell Optiplex GX280 tower in my house on my residential internet. Although the early 2010s were a rough period of my life, having my very own website was something that I was proud of and deeply enjoyed.

Somewhere along the way, all that enjoyment was lost. And you don't have to take my word for it. Just compare my site from 2013 to the site from when I wrote this post in 2024.

Graphic Design was Never My Passion

2013's website has an animated carousel of my photography, bright colours, and way too many accordions. Everything was at 100% because I didn't care so much about the final product as I did about screwing around and having fun. 2024's website is, save for the single sample of my photography work, a resume. Boring, professional, sterile of anything resembling personality.

Even though the style of my site became more and more muted over the years, I continued to work and build on my site, but what I worked on changed as my interests grew. In the very early 2010s I thought I wanted to go into web design, but after having suffered 4 months of the most dysfunctional art school there is, I realized that maybe it was web development that interested me. That, eventually, grew into the networking and infrastructure training I went through, and the career I've built for myself.

Changing Interests

As my interests changed, my focus on the site became less about the design and appearance and more about what was serving the site itself. My site literally moved out from the basement suite I was living in to a VPS, running custom builds of PHP and Apache HTTPD. At one point, I even played around with load-balancing between my VPS and my home server, which had graduated into something much better than that Dell. Or at least it did until my internet provider blocked inbound port 80. This is right around when things stopped being fun anymore.

When Things Stopped Being Fun

Upon reflection, there were two big mistakes I made that stole all of the enjoyment out of tinkering with my websites: Cloudflare and scaling.

Cloudflare

A dear friend of mine (they know who they are) introduced me to Cloudflare sometime in 2014, which I believe was the poison pill that started taking the fun out of things for me. You see, Cloudflare is designed for big businesses or people with very high traffic websites. Neither of which applied to me. I was just some dweeb who liked PHP.

Cloudflare works by sitting between your website's hosting server and your visitors.
All traffic from your visitors flows through Cloudflare, where they can do their magic. When configured properly, the original web host is effectively hidden from the web, as people have to go through Cloudflare first. This, at least as far as this blog post is concerned, was the crux of my issues with Cloudflare.

For example, before Let's Encrypt existed, Cloudflare did not allow for TLS at all on their free plans. This was right at the time when I was just starting to learn about TLS, but because I had convinced myself that I had to use Cloudflare, I could never actually use it on my website. This specific problem of TLS never went away, either, as even though Cloudflare now offers free TLS, they have total control over everything and they have the final say in what is and isn't allowed. The problems weren't just TLS, of course, because if you wanted to handle non-HTTP traffic then you couldn't use Cloudflare's proxy. Once again, I encountered the same misguided fear that exposing my web server to the internet was a recipe for disaster.

Scaling (or lack thereof)

The other, larger, mistake I made was an obsession with scaling and security. Thanks to my education I had learned a lot about enterprise networking and systems administration, and I decided to apply those skills to my own site. Aggressive caching, global CDNs, monitoring, automated deployments, telemetry, obsession over response times, and probably more I'm not remembering. All for a mostly static website for a dork that had maybe 5 page views a month.

This is a mistake that I see people make all the time, and not just with websites: people think that they need to be concerned about scaling problems without first asking if those problems even apply, or will ever apply, to them. If I was able to host my website from an abused desktop tower with blown caps on cable internet just fine, then why in the world would I need to be obsessing over telemetry and response times?

Sadly, things only got worse when I got into cybersecurity, because now on top of all the concerns with performance and scale, I was trying to protect myself from exceedingly unlikely threats. Up until embarrassingly recently I was using complicated file integrity checks with hardware-backed cryptographic keys to ensure that only I could make changes to the content of my site. For some reason, I had built this threat model in my head where I needed to protect against malicious changes on an already well-protected server. All of this created an environment where making any change at all was a cumbersome and time-consuming task. Is it any wonder that I ended up making my site look like a resume when making any change to it was so much work?

The Lesson I Learned

All of this retrospective started because of the death of Cohost, as many of the folks there (myself included) decided to move to posting on their own blogs. This gave me a great chance to see what some of my friends were doing with their sites, and it unlocked fond memories of back in 2012 and 2013 when I too was just having fun with my silly little website.

All of this led me to the realization of the true root cause of my misery: in an attempt to mimic what enterprises do with their websites in terms of stability and security, I made it difficult to make any changes to my site. Realizing this, I began to unravel the design and decisions I had made, and came up with a much better, simpler, and most of all enjoyable design.
How my Website Used to Work

The last design of my website that existed before I came to the conclusions I discussed above was as follows:

My website was a packaged Docker container image which ran nginx. All of the configuration and static files were included in the image, so theoretically it could run anywhere. That image was run on Azure Container Instances, a managed container platform, with the image hosted in Azure Container Registry. Cloudflare sat in front of my website and proxied all connections to it. They took care of the domain registration, DNS, and TLS. Whenever I wanted to make changes to my site, I would create a container image, sign it using a private key on my YubiKey, and push that to GitHub. A GitHub Action would then deploy that image to Azure Container Registry and trigger the container to restart, pulling the new image.

How my Website Works Now

Now that you've seen the most ridiculous design for some dork's personal website, let me show you how it works now.

Yes, it's really that simple. My websites are now powered by a virtual machine with a public IP address. When users visit my website, they talk directly to that virtual machine. When I want to make changes to my website, I just push them directly to that machine. TLS certificates are provided by Let's Encrypt. I still use Cloudflare as my domain registrar and DNS provider, but - crucially - nothing is proxied through Cloudflare. When you loaded this blog post, your browser talked directly to my server.

It's virtually identical to the way I was doing things in 2013, and it's been invigorating to shed this excess weight and unnecessary complication. But there's actually a whole lot going on behind the scenes that I'm omitting from this diagram.

Static Websites Are Boring

A huge problem with my previous design is that I had locked myself into only having a pure static website, and static websites are boring! Websites are more fun when they can, you know, do things, and doing things with static websites usually depends on JavaScript running in the user's browser. That's lame.

Let's take a really simple example: sometimes I want to show a banner on the top of my website. On a static-only site, unless you hard-code that banner into the HTML, doing this requires JavaScript that checks some backend to see if a banner should be displayed and, if so, renders it. But on a dynamic page, where server-side rendering is used? This is trivially easy.

A key change with my new website is switching away from containers, which are ephemeral and immutable, over to a virtual machine, which isn't, and going back to server-side rendering. I realized that a lot of the fun I was having back then was because PHP is server-side, and you can do a lot of wacky things when you have total control over the server! I'm really burying the lede here, but my new site is powered not by nginx, or httpd, but instead by my own web server. Entirely custom and designed for tinkering and hacking. Combined with no longer having to deal with Cloudflare, this has given me total control to do whatever the hell I want. Maybe I'll do another post some day talking about this - but I've written enough about websites for one day.

Wrapping Up

That feeling of wanting to do whatever the hell I want really does sum up the sentiment I've come to over the past few weeks. This is my website. I should be able to do whatever I want with it, free from my self-imposed judgement or concern over non-issues.
Kill the cop in your head that tells you to share the concerns that Google has. You're not Google. Websites should be fun. Enterprise tools like Cloudflare and managed containers aren't.
The staff running Cohost have announced (archived) that at the end of 2024 Cohost will be shutting down, with the site going read-only on October 1st 2024. This news was deeply upsetting to receive, as Cohost filled a space left by other social media websites when they stopped being fun and became nothing but tools of corporations.

Looking Back

I joined Cohost in October of 2022 when it was still unclear if elon musk would go through with his disastrous plan to buy twitter. The moment that deal was confirmed, I abandoned my Twitter account, switched to using Cohost full-time, and never looked back once. I signed up for Cohost Plus! a week later after seeing the potential for what Cohost could become, and what it could mean for a less commercial future. I believed in Cohost. I believed that I could be witness to the birth of a better web, built not on advertising, corporate greed, and privacy invasion, but instead on a focus on people, the content they share, and the communities they build.

The loss of Cohost is greater than just the loss of the community of friends I've made; it's the dark cloud that has now formed over the people who shared that vision, the rise of the question of "why bother".

When I look back on my time on twitter, I'm faced with mixed feelings. I dearly miss the community that I had built there - some of us scattered to various places, while others decided to take their leave entirely from social media. I am sad knowing that, despite my best efforts to maintain the connections I've made, I will lose touch with my friends because there is no replacement for Cohost. Although I miss my friends from twitter, I've come to realize how awful twitter was for me and my well-being. It's funny to me that back when I was using twitter, I and seemingly everybody else knew that twitter was a bad place, and yet we all continued to use it. Even now, well into its nazi bar era, people continue to use it.

Cohost showed what a social media site could be if the focus was not on engagement, but on the community. The lack of hard statistics like follower or like counts meant that Cohost could never be a popularity contest - and that is something that seemingly no other social media site has come to grips with.

The Alternatives

Many people are moving, or have already moved, to services like Mastodon and Bluesky. While I am on Mastodon, these services have severe flaws that make them awful as a replacement for Cohost.

Mastodon is a difficult-to-understand web of protocols, services, terminology, and dogma. It suffers from a critical case of the "open source brain worm", where libertarian ideals take priority over important safety and usability aspects, and I do not see any viable way to resolve these issues. Imagine trying to convince somebody who isn't technical to "join mastodon". How are they ever going to understand the concept of decentralization, or what an instance is, or who runs the instance, or what client to use, or even what "fediverse" means? This insurmountable barrier ensures that only specific people are taking part in the community, and excludes so many others.

Bluesky is the worst of both worlds: technically decentralized, yet also very corporate. It's a desperate attempt to clone twitter as much as possible without stopping for even a moment to evaluate whether that includes copying some of twitter's worst mistakes.

Looking Forward

When it became clear that I was going to walk away from my twitter account there was fear and anxiety.
I knew that I would lose connections, some of which I had worked hard to build, but I had to stick to my morals. In the end, things turned out alright. I found a tight-knit community of friends in a private Discord server, found more time to focus on my hobbies and less on doomscrolling, and established this very blog. I know that even though I am very sad and very angry about Cohost, things will turn out alright.

So long, Cohost, and thank you, Jae, Colin, Aiden, and Kara for trying. You are all an inspiration to not just desire a better world, but to go out and make an effort to build it.

I'll miss you, eggbug.
Recently I have received some questions from users of both TLS Inspector and DNS Inspector enquiring why the apps aren't available on Apple's new Vision Pro headset. While I have briefly discussed this over on my Mastodon, I figured it might be worthwhile putting things down with a bit more permanence. There are two main reasons why I am not making these apps available on the platform: exclusivity and support.

A Future of Computing, If You Can Afford It

Apple loudly and proudly calls their new headset the future of computing, marshalling us into the age of spatial computing. However, at US$3,500, some (myself included) would say that this future is only available to those who can afford it. Three thousand five hundred dollars is, in no small terms, a lot of money for those on a budget, and firmly places this item in the category of being an expensive toy.

TLS and DNS Inspector are free apps that I spend a great deal of effort making available to as many people as possible. I work to ensure that the apps function on the oldest supported iOS versions so that those who can't afford a new smartphone every year can still use my apps. Ultimately it comes down to a choice: do I work to support legacy devices and versions, and so help more people who can't afford expensive upgrades, or do I support a grossly expensive toy-like device for people who have more money than sense? To me, the answer is Pretty Fucking Obvious.

I personally do not currently care that the app is unavailable on these platforms, but if you already have one and are bothered by this, you can afford to donate to me to convince me it's a worthwhile investment. If you have enough money to buy an expensive toy, you have enough money to support a free & open source tool.

Supporting a Device I Don't Own

When Apple launched the Vision Pro they only made it available in the US market. Apple has a tendency to do this, and it's always felt like a weird decision. Why launch a "great" new product if the majority of the world can't actually buy it? If you ask me, it sure does have a strong whiff of American elitism to it.

This is what US$3,500 gets you. Image: Apple.

Despite this, I saw other Canadians race down across the line to buy a new headset. A year later, Apple made the Vision Pro available in a few more markets, including Canada, at CA$5,000. Five thousand dollars.

If you didn't already know, supporting TLS and DNS Inspector does, in fact, cost me money. At a minimum I have to spend CA$100 a year on an Apple developer membership, plus the cost of a suitable macOS device to build the software. I am happy to spend this money to support the community of users of my apps, but CA$5,000 is so far beyond what I am capable of spending on something I already know I won't use outside of development.

The Vision Pro, like all other VR devices, does not interest me. I've played around with a friend's VR headset and found it to be novel, but not groundbreaking enough to be worth the cost. Also like other VR devices, there is a major lack of content available for it. Most VR games are either also playable outside of VR, or glorified tech demos desperately trying to distract you from your buyer's remorse. Apple is struggling with this same issue on the Vision Pro: in the first year of its existence, all they've really been able to muster is spatial video. Something to entertain you for a few minutes, maybe an hour, before the headset goes in a box to sit unused.
What Will Change My Mind

As it stands, the way I see it there are only two things that will change my mind on this:

1. The price of the Vision Pro is cut by at least 50%
2. Somebody gives me a Vision Pro

I think we all know that #1 is never going to happen, so if you feel strongly enough that TLS and DNS Inspector need to be on the Vision Pro, you can email me to discuss your donation of a device ;)