More from ./techtipsy
I use Uptime Kuma to check the availability of a few services that I run, with the most important one being my blog. It’s really nice.

Today I wanted to set it up on a different machine to help troubleshoot and confirm some latency issues that I’ve observed, and for that purpose I picked the cheapest ARM-based Hetzner Cloud VM hosted in Helsinki, Finland. Hetzner provides a public IPv6 address for free, but you have to pay extra for an IPv4 address. I didn’t want to do that on principle, so I went ahead and copied my Docker Compose definition over to the new server.

For some reason, Uptime Kuma would start up on the new IPv6-only VM, but it failed to make requests to my services, which support both IPv4 and IPv6. The requests would time out and show up as “Pending” in the UI, and the service logs complained about not being able to deliver e-mails about the failures. I confirmed IPv6 connectivity within the container by running docker exec -it uptime-kuma bash and then a few curl and ping commands with IPv6 flags; those had no issues (a rough sketch of those checks is at the end of this post). When I added a public IPv4 address to the container, everything started working again.

I fixed the issue by explicitly disabling the IPv4 network in the Docker Compose service definition, and that did the trick: Uptime Kuma made successful requests towards my services. It seems that the service defaults to IPv4 because the internal Docker network gives it an IPv4 network to work with, and that causes issues when your machine doesn’t have any IPv4 network or public IPv4 address associated with it.

Here’s an example Docker Compose file:

```yaml
name: uptime-kuma

services:
  uptime-kuma:
    container_name: uptime-kuma
    networks:
      - uptime-kuma
    ports:
      - "3001:3001"
    volumes:
      - /path/to/your/storage:/app/data
    image: docker.io/louislam/uptime-kuma
    restart: always

networks:
  uptime-kuma:
    enable_ipv6: true
    enable_ipv4: false
```

That’s it! If you’re interested in different ways to set up IPv6 networking in Docker, check out this overview that I wrote a while ago.
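As for the connectivity checks mentioned earlier, here’s roughly what they looked like. This is a sketch rather than the exact commands I ran; example.com stands in for my actual services, and it assumes the curl and ping binaries inside the container support the -6 flag to force IPv6:

```shell
# Open a shell inside the running container.
docker exec -it uptime-kuma bash

# Inside the container: force IPv6 with the -6 flag.
# example.com is a placeholder for the actual services being monitored.
curl -6 -v https://example.com
ping -6 -c 3 example.com
```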
I love 3D printing. Out of all the tech hype cycles and trends over the last decade, this one is genuinely useful. There’s simply something magical about being able to design or download a model from the internet, send it to a machine, and after a few hours you get an actual physical object in return!

I don’t own a 3D printer myself, but I’ve had access to people who are happy to help out by printing something for me. So far I’ve printed the following useful things:

- a Makita vacuum cleaner holder
- a dual vertical laptop stand (it’s such a simple and cheap design, and yet it works incredibly well if you add some rubberized material to the bottom and inside the laptop holder)
- a dual HDD adapter for a Zimaboard
- a stand for the Steam Deck
- a carrying case insert for the Steam Deck
- a case for the Orange Pi Zero

There’s so much more that I’d want to print, like various battery holders, controller stands, and IKEA SKÅDIS mounts. There’s also the option of downloading and printing a whole PC case, which is incredibly tempting. Will I finally be able to build the perfect home server according to my very specific requirements? Probably not, given how often my preferences change, but it would be incredibly cool!

And yet I don’t own a 3D printer. The main obstacle for me is time: I feel like in order to be successful with a 3D printer, I’ll need to at the very least learn the basics of filaments, their properties, what parameters to configure and how, how to maintain a 3D printer, how to fix one when it breaks, how to diagnose misalignment issues, etc. I’ll also need space for one; extruding hot, melted plastic seems like something I’d want to do in a proper workshop with actual ventilation. It’s a whole-ass hobby, not a half-ass one.

Durability can be problematic with 3D prints, even in my limited experience. For example, I tried positioning the Makita vacuum cleaner holder differently, but ended up putting too much strain on the design, which eventually led to it failing completely. In other cases, filaments like PLA aren’t suitable for designs that are attached to warm or hot computer parts; they will warp like crazy.

I appreciate the hell out of anyone who shares their designs with the world, and especially those who allow remixing or customizing their designs. There are fantastic designs and ideas out there on sites like Printables, and the creativity on display warms my heart.
Today I learned that Kagi uses Yandex as part of its search infrastructure, making up about 2% of their costs, and their CEO has confirmed that they do not plan to change that. To quote:

Yandex represents about 2% of our total costs and is only one of dozens of sources we use. To put this in perspective: removing any single source would degrade search quality for all users while having minimal economic impact on any particular region. The world doesn’t need another politicized search engine. It needs one that works exceptionally well, regardless of the political climate. That’s what we’re building.

That is unfortunate, as I found Kagi to be a good product with an interesting take on combining LLMs with search that is kind of useful, but I cannot in good conscience continue to support it while they unapologetically finance a major company that has ties to the Russian government, the same country that has been actively waging a war against Ukraine, a European country, for over 11 years, during which they’ve committed countless war crimes against civilians and military personnel.

Kagi has the freedom to decide how they build the best search engine, and I have the freedom to use something else.

Please send all your whataboutisms to /dev/null.
It was time to upgrade Hibernate on that one Java monolithic¹ backend service that my team was responsible for. We took great precautions with these types of changes due to the scale of the system, splitting changes into as many small parts as possible and releasing them as often as possible. With bigger changes we opted for running a few instances of the new version in parallel to the existing one.

Then came Hibernate 5.2. Hibernate 5.2 introduced a new warning log to indicate that the existing API for writing queries is deprecated:

Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead

Every time you used the Criteria API it would print that line. Just one little issue there. Can you see it? Every time you used the Criteria API it would print that line.

In a poorly written Java backend service, one HTTP request can make multiple queries to the database. With hundreds of millions of HTTP requests, this can easily balloon into billions of additional log lines a day. Well, that’s exactly what happened to our service, resulting in CPU usage jumping up considerably and the latency of the service being negatively impacted. We didn’t have the foresight to compare every metric on every instance of the service, and when the metrics were summarized across all instances, the increase was not that noticeable while both new and existing instances of the service were running.

Aside from the service itself, this had negative effects downstream as well. If you have a solution for collecting your service logs for analysis and retention, and it’s priced by the volume of logs that you print out, then this can end up being a very costly issue for you.

We resolved the issue by making a configuration change to our logger that disabled these specific logs (see the sketch at the end of this post for what that kind of override can look like).

This does make me wonder who else may have been impacted by this change over the years and what that impact might’ve looked like in terms of resource usage on a worldwide scale. I’m not blaming the Hibernate developers; they had good intentions, but the impact of an innocent change like that was likely not taken into account for large-scale services. Last I heard, the people behind Hibernate are a very small team, and yet their software powers much of the world, including critical infrastructure like banking systems.

I’m well aware that we’re talking about Hibernate versions that were released around the time I was still a junior developer (2016-2018). Some call it technical debt, others call it over half a decade of neglect.

¹ Unmaintained monoliths suck, but so do unmaintained microservices.
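The sketch mentioned above: a minimal, hypothetical example of silencing the deprecation warnings, assuming an SLF4J/Logback stack and assuming that Hibernate emits them under the "org.hibernate.orm.deprecation" logger category. It is not the exact change we made.

```java
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical helper: raises the log threshold for Hibernate's deprecation
// logger so the per-query WARN messages are dropped. Assumes Logback is the
// SLF4J backend and that the category name below is the one emitting the
// warning; adjust both to match your actual setup.
public final class HibernateDeprecationLogSilencer {

    private static final String DEPRECATION_LOGGER = "org.hibernate.orm.deprecation";

    private HibernateDeprecationLogSilencer() {
    }

    public static void silence() {
        Logger logger = (Logger) LoggerFactory.getLogger(DEPRECATION_LOGGER);
        logger.setLevel(Level.ERROR);
    }
}
```

In practice the same effect is usually achieved declaratively, with a logger entry for that category set to ERROR in the logging configuration file.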
More in technology
Even if we ignore intelligence, humans are able to speak when other animals — even other great apes — can’t, because of our specialized and complex vocal anatomy. Similarly, ASL (American Sign Language) wouldn’t be possible without our incredible hand and finger dexterity. Like any other complex physiological system, that is difficult to recreate artificially. […] The post A robotic hand with the dexterity to sign the whole ASL alphabet appeared first on Arduino Blog.
A Quick Look Behind the Scenes at Amstrad.
Last December we released our beta Arduino cores based on Zephyr. Today, we are excited to make another step in this beta program for Arduino cores based on Zephyr! ZephyrOS is an open-source, state-of-the-art, real-time operating system (RTOS) designed for low-power, resource-constrained devices. We are transitioning Arduino cores to ZephyrOS to ensure continued support and […] The post Updated Arduino cores with ZephyrOS (beta) appeared first on Arduino Blog.
In the distant past of about two decades ago, one would need to use a KVM (Keyboard, Video, Mouse) switch to control multiple computers with the same mouse and keyboard — and even then, it would take a button press to move from one to the other. Today, Apple’s Universal Control feature lets users seamlessly […] The post This inexpensive adapter brings Apple Universal Control to vintage Macs appeared first on Arduino Blog.