More from Ian's Blog
Securing My Web Infrastructure

A few months ago, I very briefly mentioned that I've migrated all my web infrastructure off Cloudflare and built a custom web service to host it all. I call this new web service WebCentral, and I'd like to talk about some of the steps I've taken and lessons I've learned about how I secure my infrastructure.

Building a Threat Model

Before you can work to secure any service, you need to understand what your threat model is. This sounds more complicated than it really is; all you must do is consider what your risks are, how likely those risks are to be realized, and what the potential damage or impact of those risks could be. My websites don't store or process any user data, so I'm not terribly concerned about exfiltration; instead, my primary risks are unauthorized access to the server, exploitation of my code, and denial of service.

Although the risks of denial of service are self-explanatory, the primary risk I see needing to protect against is malicious code running on the machine. Malicious actors are always looking for places to run their cryptocurrency miners or spam botnets, and falling victim to that is simply out of the question. While I can do my best to try and ensure I'm writing secure code, there's always going to be the possibility that I or someone else makes a mistake that turns into an exploitable weakness. Therefore, my focus is on minimizing the potential impact should this occur.

VPS Security

The server that powers the very blog you're reading is a VPS (virtual private server) hosted by Azure. A VPS is just a fancy way to say a virtual machine over which you have mostly total control. A secure web service must start with a secure server hosting it, so let's go into detail about all the steps I take to keep the server safe.

Network Security

Minimizing internet-facing exposure is critical for any VPS and can be one of the most effective ways to keep a machine safe. My rule is simple: no open ports other than what is required for user traffic. In effect this means only one thing: I cannot expose SSH to the internet. Doing so protects me against a wide range of threats and also reduces the impact from scanners (more on them later). While Azure itself offers several ways to interact with a running VPS, I've chosen to disable most of those features and instead rely on my own.

I personally still need to be able to access the machine over SSH, however, so how do I do that if SSH is blocked? I use a VPN. On my home network is a WireGuard VPN server, along with a Dynamic DNS setup to work around my rotating residential IP address. The VM connects to the WireGuard VPN on my home network and establishes a private tunnel between them. Since the VM is the one initiating the connection (acting as a client), no port needs to be exposed. With this configuration I can effortlessly access and manage the machine without exposing SSH to the internet.

I'm also experimenting with, but have not yet fully rolled out, an outbound firewall. Outbound firewalls are far, far more difficult to set up than inbound ones because you must first have a very good understanding of what and where your machine talks to.
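For readers who want something more concrete, the inbound side of that rule only takes a few commands. Here's a sketch using firewalld, the default firewall frontend on RHEL-family distros (which is what this VM runs, going by the dnf-based update script below); treat it as an illustration rather than a copy of my actual rules:

# Allow only web traffic in; SSH stays closed to the internet
sudo firewall-cmd --permanent --remove-service=ssh
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload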
OS-Level Security

Although the internet footprint of my VPS is restricted to only HTTP and HTTPS, I still must face the risk of someone exploiting a vulnerability in my code. I've taken a few steps to help minimize the impact of a compromise of my web application's security.

Automatic Updates

First are some of the most basic things everyone should be doing: automatic updates and reboots. Every day I download and install any updates and restart the VPS if needed. All of this is trivially easy with a cron job and built-in tooling. I use this script, run from a cron job:

#!/bin/bash

# Check for updates
dnf check-update > /dev/null
if [[ $? == 0 ]]; then
    # Nothing to update
    exit 0
fi

# Install updates
dnf -y update

# Check if we need to reboot
dnf needs-restarting -r
if [[ $? == 1 ]]; then
    reboot
fi

Low-Privileged Accounts

Second, the actual process serving traffic does not run as root; instead it runs as a dedicated service user without a shell and without sudo permission. This limits what an attacker would be able to do should they somehow gain the ability to execute shell code on the machine.

A challenge with using non-root users for web services is a specific security restriction enforced by Linux: only the root user can bind to ports below 1024. Thankfully, systemd services can be granted additional capabilities, one of which is the capability to bind to privileged ports. A single line in the service file is all it takes to overcome this challenge.

Filesystem Isolation

Lastly, the process also uses a virtualized root filesystem, a technique known as chrooting. Chrooting is a method where the Linux kernel effectively lies to the process about where the root of the filesystem is by prepending a path to every filesystem access. To chroot a process, you provide a directory that will act as the filesystem root for that process, meaning that if the process were to try to list the contents of the root (/), it would instead be listing the contents of the directory you specified. When configured properly, this has the effect of a filesystem allowlist - the process is only allowed to access data in the filesystem that you have specifically granted to it, and all of this without complicated permissions.

It's important to note, however, that chrooting is often misunderstood as a more involved security control, because it's often incorrectly called a "jail" - referring to BSD's jails. Chrooting only isolates the filesystem from the process, nothing else. In my specific use case it serves as an added layer of protection to guard against simple path traversal bugs. If an attacker were somehow able to trick the server into serving a sensitive file like /etc/passwd, it would fail because that file doesn't exist as far as the process knows.
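For completeness, provisioning the two pieces described above - the no-shell service account and the chroot directory - boils down to a handful of commands. The names and paths mirror the unit file below, but this is a sketch rather than my exact setup:

# Dedicated service account with no login shell
sudo useradd --system --shell /usr/sbin/nologin webcentral

# Skeleton of the chroot; the working directory is repeated underneath the
# virtual root, because inside the chroot the process still sees /opt/webcentral
sudo mkdir -p /opt/webcentral/root/opt/webcentral
sudo chown -R webcentral:webcentral /opt/webcentral/root/opt/webcentral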
For those wondering, my systemd service file looks like this:

[Unit]
Description=webcentral
After=syslog.target
After=network.target

[Service]
# I utilize systemd's heartbeat feature, sd-notify
Type=notify
NotifyAccess=main
WatchdogSec=5

# This is the directory that serves as the virtual root for the process
RootDirectory=/opt/webcentral/root

# The working directory for the process, this is automatically mapped to the
# virtual root so while the process sees this path, in actuality it would be
# /opt/webcentral/root/opt/webcentral
WorkingDirectory=/opt/webcentral

# Additional directories to pass through to the process
BindReadOnlyPaths=/etc/letsencrypt

# Remember all of the paths here are being mapped to the virtual root
ExecStart=/opt/webcentral/live/webcentral -d /opt/webcentral/data --production
ExecReload=/bin/kill -USR2 "$MAINPID"
TimeoutSec=5000
Restart=on-failure

# The low-privilege service user to run the process as
User=webcentral
Group=webcentral

# The additional capability to allow this process to bind to privileged ports
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

[Install]
WantedBy=default.target

To quickly summarize: remote access (SSH) is blocked from the internet and a VPN must be used to access the VM, updates are automatically installed on the VM, the web process itself runs as a low-privileged service account, and that same process is chroot()-ed to shield the VM's filesystem.

Service Availability

Now it's time to shift focus away from the VPS to the application itself. One of, if not the, biggest benefits of running my own entire web server is that I can deeply integrate security controls however I see fit. For this, I focus on detection and rejection of malicious clients.

Being on the internet means you will be constantly exposed to malicious traffic - it's just a fact of life. The overwhelming majority of this traffic is just scanners: people going over every available IP address and looking for widely known and exploitable vulnerabilities, things like credentials left out in the open or web shells. Generally, these scanners are one-and-done - you'll see a small handful of requests from a single address and then never again. I find that trying to block or prevent these scanners is a bit of a fool's errand; however, by tracking these scanners over time I can begin to identify patterns and proactively block them early, saving resources.

Why this matters is not because of the one-and-done scanners, but because of the truly malicious ones, the ones that don't just send a handful of requests - they send hundreds, if not thousands, all at once. These scanners risk degrading the service for others by occupying server resources that would be better used for legitimate visitors. To detect malicious hosts, I employ some basic heuristics, focusing on the headers sent by the client and the paths they're trying to access.

Banned Paths

Having collected months of data from the traffic I served, I was able to identify some of the most common paths these scanners are looking for. One of the more common trends I see is scanning for weak and vulnerable WordPress configurations. WordPress is an incredibly common content management platform, which also makes it a prime target for attackers. Since I don't use WordPress (and perhaps you shouldn't either...), this made it a good candidate for scanner tracking. Therefore, any request where the path contains any of "wp-admin", "wp-content", "wp-includes", or "xmlrpc.php" is flagged as malicious and recorded.
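Even if you aren't running a custom server, you can get a feel for this kind of traffic with nothing more than grep over whatever access log you already have. The log path here is illustrative - substitute your own:

# Count requests probing for WordPress paths
grep -Ec 'wp-admin|wp-content|wp-includes|xmlrpc\.php' /var/log/nginx/access.log

# See which client addresses are doing the probing (assumes the IP is the first field)
grep -E 'wp-admin|wp-content|wp-includes|xmlrpc\.php' /var/log/nginx/access.log \
  | awk '{print $1}' | sort | uniq -c | sort -rn | head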
User Agents

The User-Agent header is data sent by your web browser to the server that provides a vague description of the browser and the device it's running on. For example, my user agent when I wrote this post is:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0

All this really tells the server is that I'm on a Mac running macOS 15 and using Firefox 128. One of the most effective measures I've found to block malicious traffic early is to do some very basic filtering by user agent. The simplest and most effective measure thus far has been to block requests that have no User-Agent header at all. I also have a growing list of bogus user agent values, where the header looks valid - but if you check the version numbers of the system or browser, nothing lines up.

IP Firewall

When clients start getting a bit too rowdy, they get put into the naughty corner: temporarily blocked from connecting. Blocked connections are rejected during the TCP handshake, saving resources as we skip the TLS negotiation. Addresses are blocked for 24 hours, and I've found this time to be perfectly adequate, as most clients quickly give up and move on.

ASN Blocks

In some extreme situations, it's necessary to block entire services and all of their addresses from accessing my server. This happens when a network provider, such as an ISP, VPN, or cloud provider, fails to do their job in preventing abuse of their services, and malicious actors find a home there. Cloud providers have a responsibility to ensure that if a malicious customer is using their service, they terminate their accounts and stop providing them service. For the most part, these cloud providers do a decent enough job at that. Some providers, however, don't care - at all - and quickly become popular amongst malicious actors. Cloudflare and Alibaba are two great examples. Because of the sheer volume of malicious traffic and total lack of valid user traffic, I block all of Cloudflare's and Alibaba's address space. Specifically, I block AS13335 and AS45102.

Putting It All Together

Summarized, this is the path a request takes when connecting to my server: upon receiving a TCP connection, the client's IP address is checked against the blocked ASNs and the individually blocked addresses. If it matches, the connection is quickly rejected. Otherwise, TLS is negotiated, allowing the server to see the details of the actual HTTP request. We then check whether the request is for a banned path or has a banned user agent; if so, the IP is blocked for 24 hours and the request is rejected. Otherwise, the request is served as normal.

The Result

I feel this graph speaks for itself: it shows the number of requests that were blocked per minute. These bursts are the malicious scanners that I'm working to block, and all of these were successful defences against them. This will be a never-ending fight, but that's part of the fun, innit?
I've been wanting hardware-accelerated video encoding on my Linux machine for quite a while now, but ask anybody who's used a Linux machine and they'll tell you of the horrors of Nvidia or AMD drivers. Intel, on the other hand, seems to be taking things in a much different, much more positive direction when it comes to their Arc graphics cards. I've heard positive things about the relatively painless driver experience from people who use them, at least when compared to Nvidia. So I went out and grabbed a used Intel Arc A750 locally and installed it in my server running Rocky 9.4.

This post documents what I did to install the required drivers and support libraries to be able to encode videos with ffmpeg utilizing the hardware of the GPU. This post is specific to Red Hat Linux and all Red Hat-compatible distros (Rocky, Oracle, etc). If you're using any other distro, this post likely won't help you.

Driver Setup

Drivers for Intel Arc cards were added to the Linux kernel in version 6.2, but RHEL 9 uses a 5.14-based kernel. Thankfully, Intel provides a repo specific to RHEL 9 where they offer precompiled backports of the drivers against the stable kernel ABI of RHEL 9.

Add the Intel Repo

Add the following repo file to /etc/yum.repos.d. Note that I'm using RHEL 9.4 here. You will likely need to change 9.4 to whatever version you are using (check /etc/redhat-release). Update the baseurl value and ensure that URL exists.

[intel-graphics-9.4-unified]
name=Intel graphics 9.4 unified
enabled=1
gpgcheck=1
baseurl=https://repositories.intel.com/gpu/rhel/9.4/unified/
gpgkey=https://repositories.intel.com/gpu/intel-graphics.key

Run dnf clean all for good measure.

Install the Software

dnf install intel-opencl \
    intel-media \
    intel-mediasdk \
    libmfxgen1 \
    libvpl2 \
    level-zero \
    intel-level-zero-gpu \
    mesa-dri-drivers \
    mesa-vulkan-drivers \
    mesa-vdpau-drivers \
    libdrm \
    mesa-libEGL \
    mesa-libGL

Reboot your machine for good measure.

Verify Device Availability

You can verify that your GPU is seen using the following:

clinfo | grep "Device Name"
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics

ffmpeg Setup

Install ffmpeg

ffmpeg needs to be compiled with libvpl support. For simplicity's sake, you can use this pre-compiled static build of ffmpeg. Download the ffmpeg-master-latest-linux64-gpl.tar.xz binary.

Verify ffmpeg support

If you're going to use a different copy of ffmpeg, or compile it yourself, you'll want to verify that it has the required support using the following:

./ffmpeg -hide_banner -encoders | grep "qsv"
 V..... av1_qsv      AV1 (Intel Quick Sync Video acceleration) (codec av1)
 V..... h264_qsv     H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (Intel Quick Sync Video acceleration) (codec h264)
 V..... hevc_qsv     HEVC (Intel Quick Sync Video acceleration) (codec hevc)
 V..... mjpeg_qsv    MJPEG (Intel Quick Sync Video acceleration) (codec mjpeg)
 V..... mpeg2_qsv    MPEG-2 video (Intel Quick Sync Video acceleration) (codec mpeg2video)
 V..... vp9_qsv      VP9 video (Intel Quick Sync Video acceleration) (codec vp9)

If you don't have any qsv encoders, your copy of ffmpeg isn't built correctly.
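One more optional sanity check before encoding anything: confirm that the kernel exposed a DRM render node for the card, since that's what the media stack ultimately talks to. The exact numbering may differ on your machine:

ls -l /dev/dri/
# Expect to see entries along the lines of card0 and renderD128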
Converting a Video

I'll be using this video from the Wikimedia Commons for reference, if you want to play along at home. This is a VP9-encoded video in a webm container; let's re-encode it to H.264 in an MP4 container. I'll skip doing any other transformations to the video for now, just to keep things simple.

./ffmpeg -i Sea_to_Sky_Highlights.webm -c:v h264_qsv Sea_to_Sky_Highlights.mp4

The key parameter here is telling ffmpeg to use the h264_qsv encoder for the video, which is the hardware-accelerated codec. Let's see what kind of difference using hardware acceleration makes:

Encoder               Time       Average FPS
h264_qsv (Hardware)   3.02s      296
libx264 (Software)    1m2.996s   162

Using hardware acceleration sped up this operation by 95%. Naturally your numbers won't be the same as mine, as there are lots of variables at play here, but I feel this is a good demonstration that this was a worthwhile investment.
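If you want to squeeze a bit more out of the card, you can also do the decode on the GPU so the whole pipeline stays in hardware, assuming your build also includes the matching QSV decoders (./ffmpeg -decoders | grep qsv will tell you). I haven't benchmarked this variant, so treat it as a starting point rather than a recommendation:

# Decode the VP9 source and encode the H.264 output both using Quick Sync
./ffmpeg -hwaccel qsv -c:v vp9_qsv -i Sea_to_Sky_Highlights.webm -c:v h264_qsv Sea_to_Sky_Highlights.mp4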
According to the Wayback Machine, I launched my website over a decade ago, in 2013. Just that thought alone makes me feel old, but going back through the old snapshots of my websites left me with a profound longing for a time when having a website was a novel and enjoyable experience.

The Start

Although I no longer have the original invoice, I believe I purchased the ianspence.com domain in 2012 when I was studying web design at The Art Institute of Vancouver. Yes, you read that right, I went to art school. Not just any, mind you, but one that was so catastrophically corrupt and profit-focused that it imploded as I was studying there. But that's a story for another time.

I obviously didn't have any real plan for what I wanted my website to be, or what I would use it for. I simply wanted a web property where I could play around with what I was learning in school and expand my knowledge of PHP. I hosted my earliest sites from a very used Dell Optiplex GX280 tower in my house on my residential internet. Although the early 2010s were a rough period of my life, having my very own website was something that I was proud of and deeply enjoyed. Somewhere along the way, all that enjoyment was lost. And you don't have to take my word for it. Just compare my site from 2013 to the site from when I wrote this post in 2024.

Graphic Design was Never My Passion

2013's website has an animated carousel of my photography, bright colours, and way too many accordions. Everything was at 100% because I didn't care so much about the final product as I did about screwing around and having fun. 2024's website is, save for the single sample of my photography work, a resume. Boring, professional, sterile of anything resembling personality.

Even though the style of my site became more and more muted over the years, I continued to work and build on my site, but what I worked on changed as my interests grew. In the very early 2010s I thought I wanted to go into web design, but after having suffered 4 months of the most dysfunctional art school there is, I realized that maybe it was web development that interested me. That, eventually, grew into the networking and infrastructure training I went through, and the career I've built for myself.

Changing Interests

As my interests changed, my focus on the site became less about the design and appearance and more about what was serving the site itself. My site literally moved out of the basement suite I was living in and onto a VPS, running custom builds of PHP and Apache HTTPD. At one point, I even played around with load-balancing between my VPS and my home server, which had graduated into something much better than that Dell. Or, at least until my internet provider blocked inbound port 80. This is right around when things stopped being fun anymore.

When Things Stopped Being Fun

Upon reflection, there were two big mistakes I made that stole all of the enjoyment out of tinkering with my websites: Cloudflare and scaling.

Cloudflare

A dear friend of mine (who knows I'm referring to them) introduced me to Cloudflare sometime in 2014, which I believe was the poison pill that started my journey of taking the fun out of things for myself. You see, Cloudflare is designed for big businesses or people with very high traffic websites. Neither of which apply to me. I was just some dweeb who liked PHP. Cloudflare works by sitting between your website's hosting server and your visitors.
All traffic from your visitors flows through Cloudflare, where they can do their magic. When configured properly, the original web host is effectively hidden from the web, as people have to go through Cloudflare first. This was the crux of my issues with Cloudflare, at least as far as this blog post is concerned.

For example, before Let's Encrypt existed, Cloudflare did not allow TLS at all on their free plans. This was right at the time when I was just starting to learn about TLS, but because I had convinced myself that I had to use Cloudflare, I could never actually use it on my website. This specific problem of TLS never went away, either: even though Cloudflare now offers free TLS, they have total control over everything and the final say in what is and isn't allowed. The problems weren't just TLS, of course, because if you wanted to handle non-HTTP traffic then you couldn't use Cloudflare's proxy. Once again, I ran into the same misguided fear that exposing my web server to the internet was a recipe for disaster.

Scaling (or lack thereof)

The other, larger, mistake I made was an obsession with scaling and security. Thanks to my education I had learned a lot about enterprise networking and systems administration, and I was determined to apply those skills to my own site. Aggressive caching, global CDNs, monitoring, automated deployments, telemetry, an obsession over response times, and probably more I'm not remembering. All for a mostly static website for a dork that had maybe 5 page views a month.

This is a mistake that I see people make all the time, and not just with websites - people think that they need to be concerned about scaling problems without first asking if those problems even apply, or will ever apply, to them. If I was able to host my website just fine from an abused desktop tower with blown caps on cable internet, then why in the world would I need to be obsessing over telemetry and response times?

Sadly, things only got worse when I got into cybersecurity, because now, on top of all the concerns with performance and scale, I was trying to protect myself from exceedingly unlikely threats. Up until embarrassingly recently, I was using complicated file integrity checks with hardware-backed cryptographic keys to ensure that only I could make changes to the content of my site. For some reason, I had built this threat model in my head where I needed to protect against malicious changes on an already well-protected server. All of this created an environment where making any change at all was a cumbersome and time-consuming task. Is it any wonder why I ended up making my site look like a resume, when doing any changes to it was so much work?

The Lesson I Learned

All of this retrospective started because of the death of Cohost, as many of the folks there (myself included) decided to move to posting on their own blogs. This gave me a great chance to see what some of my friends were doing with their sites, unlocking fond memories of back in 2012 and 2013 when I too was just having fun with my silly little website. All of this led me to the realization of the true root cause of my misery: in an attempt to mimic what enterprises do with their websites in terms of stability and security, I made it difficult to make any changes to my site. Realizing this, I began to unravel the design and decisions I had made, and came up with a much better, simpler, and most of all enjoyable design.
How my Website Used to Work

The last design of my website, the one that existed before I came to the conclusions I discussed above, was as follows: my website was a packaged Docker container image which ran nginx. All of the configuration and static files were included in the image, so theoretically it could run anywhere. That image was run on Azure Container Instances, a managed container platform, with the image hosted on Azure Container Registry. Cloudflare sat in front of my website and proxied all connections to it. They took care of the domain registration, DNS, and TLS. Whenever I wanted to make changes to my site, I would create a container image, sign it using a private key on my YubiKey, and push that to GitHub. A GitHub action would then deploy that image to Azure Container Registry and trigger the container to restart, pulling the new image.

How my Website Works Now

Now that you've seen the most ridiculous design for some dork's personal website, let me show you how it works now. Yes, it's really that simple: my websites are now powered by a virtual machine with a public IP address. When users visit my website, they talk directly to that virtual machine. When I want to make changes to my website, I just push them directly to that machine. TLS certificates are provided by Let's Encrypt. I still use Cloudflare as my domain registrar and DNS provider, but - crucially - nothing is proxied through Cloudflare. When you loaded this blog post, your browser talked directly to my server.

It's virtually identical to the way I was doing things in 2013, and it's been invigorating to shed all that excess weight and unnecessary complication. But there's actually a whole lot going on behind the scenes that I'm omitting from this diagram.

Static Websites Are Boring

A huge problem with my previous design is that I had locked myself into only having a purely static website, and static websites are boring! Websites are more fun when they can, you know, do things, and doing things with static websites usually depends on JavaScript running in the user's browser. That's lame. Let's take a really simple example: sometimes I want to show a banner at the top of my website. On a static-only site, unless you hard-code that banner into the HTML, you'd have to use JavaScript to ask some backend whether a banner should be displayed and, if so, render it. But on a dynamic page, where server-side rendering is used? This is trivially easy.

A key change with my new website is switching away from containers, which are ephemeral and immutable, to a virtual machine, which isn't, and going back to server-side rendering. I realized that a lot of the fun I was having back then was because PHP is server-side, and you can do a lot of wacky things when you have total control over the server! I'm really burying the lede here, but my new site is powered not by nginx, or httpd, but by my own web server - entirely custom and designed for tinkering and hacking. Combined with no longer having to deal with Cloudflare, this has given me total control to do whatever the hell I want. Maybe I'll do another post some day talking about this - but I've written enough about websites for one day.

Wrapping Up

That feeling of wanting to do whatever the hell I want really does sum up the sentiment I've come to over the past few weeks. This is my website. I should be able to do whatever I want with it, free from my self-imposed judgement or concern over non-issues.
Kill the cop in your head that tells you to share the concerns that Google has. You're not Google. Websites should be fun. Enterprise tools like Cloudflare and managed containers aren't.
The staff running Cohost have announced (archived) that at the end of 2024 Cohost will be shutting down, with the site going read-only on October 1st, 2024. This news was deeply upsetting to receive, as Cohost filled a space left by other social media websites when they stopped being fun and became nothing but tools of corporations.

Looking Back

I joined Cohost in October of 2022, when it was still unclear if elon musk would go through with his disastrous plan to buy twitter. The moment that deal was confirmed, I abandoned my Twitter account, switched to using Cohost only, and never once looked back. I signed up for Cohost Plus! a week later after seeing the potential for what Cohost could become, and what it could mean for a less commercial future. I believed in Cohost. I believed that I could be witness to the birth of a better web, built not on advertising, corporate greed, and privacy invasion, but instead on a focus on people, the content they share, and the communities they build. The loss of Cohost is greater than just the loss of the community of friends I've made; it's the dark cloud that has now formed over the people who shared that vision, and the rise of the question of "why bother".

When I look back on my time on twitter, I'm faced with mixed feelings. I dearly miss the community that I had built there - some of us scattered to various places, while others decided to take their leave entirely from social media. I am sad knowing that, despite my best efforts to maintain the connections I've made, I will lose touch with my friends because there is no replacement for Cohost. Although I miss my friends from twitter, I've come to realize how awful twitter was for me and my well-being. It's funny to me that back when I was using twitter, I and seemingly everybody else knew that twitter was a bad place, and yet we all continued to use it. Even now, well into its nazi bar era, people continue to use it.

Cohost showed what a social media site could be if the focus was not on engagement, but on the community. The lack of hard statistics like follower or like counts meant that Cohost could never be a popularity contest - and that is something that seemingly no other social media site has come to grips with.

The Alternatives

Many people are moving, or have already moved, to services like Mastodon and Bluesky. While I am on Mastodon, these services have severe flaws that make them awful as a replacement for Cohost. Mastodon is a difficult-to-understand web of protocols, services, terminology, and dogma. It suffers from a critical "open source brain worm", where libertarian ideals take priority over important safety and usability aspects, and I do not see any viable way to resolve these issues. Imagine trying to convince somebody who isn't technical to "join mastodon". How are they ever going to understand the concept of decentralization, or what an instance is, or who runs the instance, or what client to use, or even what "fediverse" means? This insurmountable barrier ensures that only specific people take part in the community, and excludes so many others. Bluesky is the worst of both worlds, being both technically decentralized and very corporate. It's a desperate attempt to clone twitter as much as possible without stopping for even a moment to evaluate whether that includes copying some of twitter's worst mistakes.

Looking Forward

When it became clear that I was going to walk away from my twitter account, there was fear and anxiety.
I knew that I would lose connections, some of which I had worked hard to build, but I had to stick to my morals. In the end, things turned out alright. I found a tight-knit community of friends in a private Discord server, found more time to focus on my hobbies and less on doomscrolling, and established this very blog. I know that even though I am very sad and very angry about Cohost, things will turn out alright.

So long, Cohost, and thank you, Jae, Colin, Aiden, and Kara for trying. You are all an inspiration to not just desire a better world, but to go out and make an effort to build it. I'll miss you, eggbug.
More in technology
I upload YouTube videos from time to time, and a fun comment I often get is "Whoa, this is in 8K!". Even better, I've had comments from the like, seven people with 8K TVs that the video looks awesome on their TV. And you guessed it, I don't record my videos in 8K! I record them in 4K and upscale them to 8K after the fact.

There's no shortage of AI video upscaling tools today, but they're of varying quality, and some are great but quite expensive. The legendary Finn Voorhees created a really cool tool, though, called fx-upscale, that smartly leverages Apple's built-in MetalFX framework. For the unfamiliar, this library is an extension of Apple's Metal graphics library that adds functionality similar to NVIDIA's DLSS, where it intelligently upscales video using machine learning (AI): rather than just stretching an image, it uses a model to try to infer what the frame would look like at a higher resolution. It's primarily geared toward video game use, but Finn's library shows it does an excellent job for video too.

I think this is a really killer utility, and I use it for all my videos. I even have a license for Topaz Video AI, which arguably works better, but takes an order of magnitude longer. For instance, my recent 38-minute 4K video took about an hour to render to 8K via fx-upscale on my M1 Pro MacBook Pro, but would take over 24 hours with Topaz Video AI.

# Install with homebrew
brew install finnvoor/tools/fx-upscale

# Outputs a file named my-video Upscaled.mov
fx-upscale my-video.mov --width 7680 --codec h265

Anyway, just wanted to give a tip toward a really cool tool! Finn's even got a [version in the Mac App Store called Unsqueeze](https://apps.apple.com/ca/app/unsqueeze/id6475134617) with an actual GUI that's even easier to use, but I really like the command line version because you get a bit more control over the output.

8K is kinda overkill for most use cases, so to be clear you can go from like, 1080p to 4K as well if you're so inclined. I just really like 8K for the future-proofing of it all; in however many years, when 8K TVs are more common, I'll be able to have some of my videos already able to take advantage of that. And it takes long enough to upscale that I'd be surprised to see TVs or YouTube offering that upscaling natively in a way that looks as good, given the amount of compute required currently.

Obviously very zoomed in to show the difference easier

If you ask me, for indie creators, even when 8K displays are more common, the future of recording still probably won't be in native 8K. 4K recording still gives more than enough detail to allow AI to do a compelling upscale to 8K. I think for my next camera I'm going to aim for recording in 6K (so I can still reframe in post), and then continue to output the final result in 4K to be AI upscaled. I'm coming for you, Lumix S1ii.
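For anyone who wants to try the more modest route mentioned above, the command is the same - only the target width changes. For example, 1080p to 4K:

# Upscale to a 3840-pixel-wide (4K) output instead of 8K
fx-upscale my-video.mov --width 3840 --codec h265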
totally unreasonable price for a completely untested item, as-was, no returns, with no power supply, no wiring harness and no auxiliary daughterboards. At the end of this article, we'll have it fully playable and wired up to a standard ATX power supply, a composite monitor and off-the-shelf Atari joysticks, and because this board was used for other related games from that era, the process should work with only minor changes on other contemporary Gremlin arcade classics like Blockade, Hustle and Comotion [sic]. It's time for a Refurb Weekend.

a July 1982 San Diego Reader article, the locally famous alternative paper I always snitched a copy of when I was downtown, and of which I found a marginally better copy to make these scans. There's also an exceptional multipart history of Gremlin you can read, but for now we'll just hit the highlights as they pertain to today's project.

ported to V1 Unix and has a simpler three-digit variant Bagels which was even ported to the KIM-1. Unfortunately his friends didn't have minicomputers of their own, so Hauck painstakingly put together a complete re-creation from discrete logic so they could play too, later licensed to Milton Bradley as their COMP IV handheld. Hauck had also been experimenting with processor-controlled video games, developing a simple homebrew unit based around the then-new Intel 8080 CPU that could connect to his television set and play blackjack. Fogleman met Hauck by chance at a component vendor's office and hired him on to enhance the wall game line, but Hauck persisted in his experiments, and additionally presented Fogleman with a new and different machine: a two-player game played with buttons on a video TV display, where each player left a boxy solid trail in an attempt to crowd out the other. To run the fast action on its relatively slow ~2MHz CPU and small amount of RAM, a character generator circuit made from logic chips painted a 256x224 display from 32 8x8 tiles in ROM specified by a 32x28 screen matrix, allowing for more sophisticated shapes and relieving the processor of having to draw the screen itself. (Does this sound like an early 8-bit computer? Hold that thought.)

patent application was too late and too slow to stop the ripoffs. (For the record, Atari programmer Dennis Koble was adamant he didn't steal the idea from Gremlin, saying he had seen similar "snake" games on CompuServe and ARPANET, but Nolan Bushnell nevertheless later offered Gremlin $100,000 in "consolation", which the company refused.) Meanwhile, Blockade orders evaporated and Gremlin's attempts to ramp up production couldn't save it, leaving the company with thousands of unused circuit boards, game cabinets and video monitors. While lawsuits against the copycats slowly lumbered forward, Hauck decided to reprogram the existing Blockade hardware to play new games, starting with converting the Comotion board into Hustle in 1977, where players could also nab targets for additional points. The company ensured they had a thousand units ready to ship before even announcing it, and sales were enough to recoup at least some of the lost investment. Hauck subsequently created a reworked version of the board with the same CPU for the more advanced game Depthcharge, which initially tested poorly with players until the controls were simplified. This game was licensed to Taito as Sub Hunter, and the board was reworked again for the target shooter Safari, also in 1977, and also licensed by Taito. For 1978, Gremlin made one last release using the Hustle-Comotion board.
This game was Blasto.

present world record is 8,730), but in two-player mode the players can also shoot each other for an even bigger point award. This means two-player games rapidly turn into active hunts, with a smaller bonus awarded to a player as well if the other gets nailed by a mine.

shown above with a screenshot of the interactive on-board assembler. Noval also produced an education-targeted system called the Telemath, based on the 760 hardware, which was briefly deployed in a few San Diego Unified elementary schools. Alas, they were long gone before we arrived. Industry observers were impressed by the specs and baffled by the desk. Although the base price of $2995 [about $16,300] was quite reasonable considering its capabilities, you couldn't buy it without its hulking enclosure, which made it a home computer only to the sort of people who would buy a home PDP-8. (Raises hand.) Later upgrades with a Z80 and a full 32K didn't make it any more attractive to buyers, and Noval barely sold about a dozen. Some of the rest remained at Gremlin as development systems (since they practically were already), and an intact upgraded unit with aftermarket floppy drives lives at the Computer History Museum.

The failure of Noval didn't kill Gremlin outright, but Fogleman was concerned the company lacked sufficient capital to compete more strongly in the rapidly expanding video game market, and Noval didn't provide it. With wall game sales fading fast and cash flow crunched, the company was slowly approaching bankruptcy by the time Blasto hit arcades. At the same time, Sega Enterprises, Inc., then owned by conglomerate Gulf + Western (who also then owned Paramount Pictures), was looking for a quick way to revive its failing North American division, which was only surviving on the strength of its aggressively promoted mall arcades. Sega needed development resources to bring out new games States-side, and Gremlin needed money. In September 1978 Fogleman agreed to make Gremlin a Sega subsidiary in return for an undisclosed number of shares, and became a vice chairman.

Sega was willing to do just about anything to achieve supremacy on this side of the Pacific. In addition to infusing cash into Gremlin to make new games (as Gremlin/Sega) and distribute others from their Japanese peers and partners (as Sega/Gremlin), Sega also perceived a market opportunity in licensing arcade ports to the growing home computer segment. Texas Instruments' 99/4 had just hit the market in 1979 to howls that there was hardly any software, and their close partner Milton Bradley was looking for marketable concepts for cartridge games. Blasto had simple fast action and a good name in the arcades, required only character graphics (well within the 9918 video chip's capabilities) and worked for both one or two players, and Sega had no problem blessing a home port of an older property for cheap. Milton Bradley picked up the license to Hustle as well.

Bob Harris for completion, and TI house programmer Kevin Kenney wrote some additional features.

1 to 40 (obviously some thought was given to using the same PCB as much as possible). The power header is also a 10-pin block and the audio and video headers are 4-pin. Oddly, the manual doesn't say anywhere what the measurements are, so I checked them with calipers and got a pitch of around 0.15", which sounds very much like a common 0.156" header. I ordered a small pack of those as an experiment.

0002 because of the control changes: if you have an 814-0001, then you have a prototype.
The MAME driver makes reference to an Amutech Mine Sweeper which is a direct and compatible ripoff of this board — despite the game type, it's not based on Depthcharge.)

listed with the part numbers for the cocktail, but the ROM contents expected in the hashes actually correspond to the upright.

Bipolar ROMs and PROMs are, as the name suggests, built with NPN bipolar junction transistors instead of today's far more common MOSFETs ("MOS transistors"). This makes them lower density but also faster: these particular bipolar PROMs have access times of 55-60ns, as opposed to EPROMs or flash ROMs of similar capacity which may be multiple times slower depending on the chip and process. For many applications this doesn't matter much, but in some tightly-timed systems the speed difference can make it difficult to replace bipolar PROMs with more convenient EPROMs, and most modern-day chip programmers can't generate the higher voltage needed to program them (you're basically blowing a whole bunch of microscopic Nichrome metal fuses). Although modern CMOS PROMs are available at comparable speeds, bipolars were once very common, including in military environments where they could be manufactured to tolerate unusually harsh operating conditions. The incomparable Ken Shirriff has a die photo and article on the MMI 5300, an open-collector chip which is one of the military-spec parts from this line.

Model 745 KSR and bubble memory Model 763 ASR, use AMD 8080s! The Intel 8080A is a refined version of the original Intel 8080 that works properly with more standard TTL devices (the original could only handle low-power TTL); the "NL" tag is TI's designation for a plastic regular-duty DIP. Its clock source is a 20.79MHz crystal at Y1 which is divided down by ten to yield the nominal clock rate of 2.079MHz, slightly above its maximum rating of 2MHz but stable enough at that speed. The later Intel 8080A-1 could be clocked up to 3.125MHz, and of course the successor Intel 8085 and Zilog Z80 processors could run faster still. An interesting absence on this board is an Intel 8224 or equivalent to generate the 8080A's two-phase clock: that's done directly off the crystal oscillator with discrete logic, an elegant (and likely cheaper) design by Hauck. The video output also uses the same crystal.

Next to the CPU are pads for the RAM chips. You saw six of them in the last picture under the second character ROM (316-0100M), all 2102 (1Kbit) static RAM. These were the chips I was most expecting to fail, having seen bad SRAM in other systems like my KIM-1. The ones here are 450ns Fairchild 21021 SRAMs in the 21021PC plastic case and "commercial" temperature range, and six of them adds up to 768 bytes of memory. NOS examples and equivalents are fortunately not difficult to find.

Closer to the CPU in this picture, however, are two more RAM chip pads that are empty except for tiny factory-installed jumpers. On the Hustle and Blasto boards (both), they remain otherwise unpopulated, and there is an additional jumper between E4 and E5, also visible in the last picture. The Comotion board, however, has an additional 256 bytes of RAM here (as two more 1024x1 SRAMs). On that board these pads have RAM, there are no jumpers on the pads, and the jumper is now between E3 (ground) and E5. This jumper is also on Blockade, even though it has only five 2102s and three dummy jumpers on the other pads. That said, the games don't seem to care how much RAM is present as long as the minimum is there: the current MAME driver gives all of them the full 1K.
this 8080 system which uses a regulator). Tracing the schematic out further, the -12V line is also used with the +5V and +12V lines to run the video circuit. These are all part of the 10-pin power header.

almost this exact sequence of voltages? An AT power supply connector! If we're clever about how we put the two halves on, we can get nearly the right lines in the right places. The six-pin AT P9 connector reversed is +5V, +5V, +5V, -5V, ground, ground, so we can cut the -5V to be the key. The six-pin AT P8 connector not reversed is power-good, +5V (or NC), +12V, -12V, ground, ground, so we cut the +5V to be the key, and cut the power-good line and one of the dangling grounds and wire ground to the power-good pin. Fortunately I had a couple of spare AT-to-ATX converter cables from when we redid the AT power supply on the Alpha Micro Eagle 300.

connectors since we're going to modify them anyway. A quick couple drops of light-cured cyanoacrylate into the key hole ... Something's alive! An LED glows! Time now for the video connector to see if we can get a picture!

a nice 6502 reset circuit). The board does have its own reset circuit, of a sort. You'll notice here that the coin start is wired to the same line, and the manual even makes reference to this ("The circuitry in this game has been arranged so that the insertion of a quarter through the coin mechanism will reset the restart [sic] in the system. This clears up temporary problems caused by power line disturbances, static, etc."). We'll of course be dealing with the coin mechanism a little later, but that doesn't solve the problem of bringing the machine into the attract mode when powered on. I also have doubts that people would have blithely put coins into a machine that was obviously on the fritz.

pair is up and down, or left and right, but not which one is exactly which, because that depends on the joystick construction. We'll come back to this.

Enterprises) to emphasize the brand name more strongly. The company entered a rapid decline with the video game crash of 1983 and the manufacturing assets were sold to Bally Midway with certain publishing rights, but the original Gremlin IP and game development teams stayed with Sega Electronics and remained part of Gulf+Western until they were disbanded. The brand is still retained as part of CBS Media Ventures today, though modern Paramount Global doesn't currently use the label for its original purpose. In 1987 the old wall game line was briefly reincarnated under license, also called Gremlin Industries and with some former Gremlin employees, but only released a small number of new machines before folding. Meanwhile, Sega Enterprises separated from Gulf+Western in a 1984 management buyout by original founder David Rosen, Japanese executive Hayao Nakayama and their backers. This Sega is what people consider Sega today, now part of Sega Sammy Holdings, and the rights to the original Gremlin games — including Blasto — are under it.

Lane Hauck's last recorded game at Gremlin/Sega was the classic Carnival in 1980 (I played this first on the Intellivision). After leaving the company, he held positions at various companies including San Diego-based projector manufacturer Proxima (notoriously later merging with InFocus), Cypress Semiconductor and its AgigA Tech subsidiary (both now part of Infineon), and Maxim Integrated Products (now part of Analog Devices), and works as a consultant today.

I'm not done with Blasto. While I still enjoy playing the TI-99/4A port, there are ...
improvements to be made, particularly the fact it's single fire, and it was never ported to anything else. I have ideas, I've been working on it off and on for a year or so and all the main gameplay code is written, so I just have to finish the graphics and music. You'll get to play it. And the arcade board? Well, we have a working game and a working harness that I can build off. I need a better sound amplifier, the "boom" circuit deserves a proper subwoofer, and I should fake up a little circuit using the power-good line from the ATX power supply to substitute for the power interrupt board. Most of all, though, we really need to get it a proper display and cabinet. That's naturally going to need a budget rather larger than my typical projects and I'm already saving up for it. Suggestions for a nice upright cab with display, buttons and joysticks that I can rewire — and afford! — are solicited. On both those counts, to be continued.
Hard data is hard to find, but roughly 100 million books were published prior to the 21st century. Of those, a significant portion were never available in a digital format and haven’t yet been digitized, which means their content is effectively inaccessible to most people today. To bring that content into the digital world, Redditor […] The post This machine automatically scans books from cover to cover appeared first on Arduino Blog.