I was just feeling pretty good—I published my article about RSS and it's being pretty well received. I decided a fitting way to celebrate was to head on over to Feedly and catch up on some reading! I clicked on an engineer's blog feed to check out her latest couple posts. I noticed an ad in the middle of the feed. That's fair: I am using a free version of Feedly, so it only makes sense they'd show me an ad. I clicked "X" in the top corner of the ad. This is where things went sideways. Instead of dismissing the ad, Feedly showed me a popup/tooltip informing me that the only way to remove "this module" (read: advertisement) is to "Back Feedly directly via Feedly Pro." Again, I have no problem with Feedly advertising on its free tier, but providing a dismiss button that doesn't work is dark UX. I feel deceived. I'm not going to pay for your service, and I'm not going to click on the ad.

Dark UX warning signs #

I'm going to assume the best here: Feedly didn't know they had implemented dark UX and...
a year ago


More from pcloadletter

Generative AI will probably make blogs better

Have you ever searched for something on Google and found the first one, two, or three blog posts to be utter nonsense? That's because these blog posts have been optimized not for human consumption, but rather to entertain the search engine ranking algorithms. People have figured out the right buzzwords to include in headings, how to game backlinks, and how to research keywords so they can write up blog posts about things they know nothing about. Pleasing these bots means raking in the views—and ad revenue (or product referrals, sales leads, etc.).

Search Engine Optimization (SEO) may have been the single worst thing that happened to the web. Every year, it seems like search results get worse than the year before. The streets of the internet are littered with SEO junk.

But now, we may have an escape from this SEO hellscape: generative AI! Think about it: if AI-generated search results (or even direct use of AI chat interfaces) subsume web search as a primary way to look up information, there will be no more motivation to crank out SEO-driven content. These kinds of articles will fade into obscurity as the only purpose for their existence (monetization) is gone. Perhaps we will be left with the blogosphere of old, with webrings and RSS (not that these things went away, but they're certainly not mainstream anymore).

This, anyways, is my hope. No more blogging to entertain the robots. Just writing stuff you want to write and share with other like-minded folks online.

a month ago 7 votes
My articles don't belong on certain social networks

I write this blog because I enjoy writing. Some people enjoy reading what I write, which makes me feel really great! Recently, I took down a post and stopped writing for a few months because I didn't love the reaction I was getting on social media sites like Reddit and Hacker News.

On these social networks, there seems to be an epidemic of "gotcha" commenters, contrarians, and know-it-alls. No matter what you post, you can be sure that folks will come with their sharpest pitchforks to try to skewer you. I'm not sure exactly what it is about those two websites in particular. I suspect it's the gamification of the comment system (more upvotes = more points = dopamine hit). Unfortunately, it seems the easiest way to win points on these sites is to tear down the original content.

At any rate, I really don't enjoy bad-faith Internet comments, and I have a decent-enough following outside of these social networks that I don't really have to endure them. Some might argue I need thicker skin. I don't think that's really true: your experience on the Internet is what you make of it. You don't have to participate in parts of it if you don't want to. Also, I know many of you reading this post (likely RSS subscribers at this point) came from Reddit or Hacker News in the first place. I don't mean to insult you or suggest by any means that everyone, or even the majority of users, on these sites is acting in bad faith.

Still, I have taken a page from Tom MacWright's playbook and decided to add a bit of javascript to my website that helpfully redirects users from these two sites elsewhere:

try {
  const bannedReferrers = [/news\.ycombinator\.com/i, /reddit\.com/i];

  if (document.referrer) {
    const ref = new URL(document.referrer);

    if (bannedReferrers.some((r) => r.test(ref.host))) {
      window.location.href = "https://google.com/";
    }
  }
} catch (e) {}

After implementing this redirect, I feel a lot more energized to write! I'm no longer worried about having to endlessly caveat my work for fear of getting bludgeoned on social media. I'm writing what I want to write and, for those of you here to join me, I say thank you!

a year ago 103 votes
Write code that you can understand when you get paged at 2am

The older I get, the more I dislike clever code. This is not a controversial take; it is pretty well agreed upon that clever code is bad. But I particularly like the on-call responsibility framing: write code that you can understand when you get paged at 2am.

If you have never been lucky enough to get paged at 2am, I'll paint the picture for you: A critical part of the app is down. Your phone starts dinging on your nightstand next to you. You wake up with a start, not quite sure who you are or where you are. You put on your glasses and squint at the way-too-bright screen of your phone. It's PagerDuty. "Oh shit," you think. You pop open your laptop, open the PagerDuty web app, and read the alert. You go to your telemetry and logging systems and figure out the approximate whereabouts of the issue in the codebase. You open your IDE and start sweating: "I have no idea what the hell any of this code means." The git blame shows you wrote the code 2 years ago. You thought that abstraction was pretty clever at the time, but now you're paying a price: your code is inscrutable to an exhausted, stressed version of yourself who just wants to get the app back online.

Reasons for clever code #

There are a few reasons for clever code that I have seen over my career.

Thinking clever code is inherently good #

I think at some point a lot of engineers end up in a place where they become very skilled in a language before they understand the importance of writing clean, readable code. Consider the following two javascript snippets:

snippet 1

const sum = items.reduce(
  (acc, el) => (typeof el === "number" ? acc + el : acc),
  0
);

snippet 2

let sum = 0;
for (const item of items) {
  if (typeof item === "number") {
    sum = sum + item;
  }
}

At one point in my career, I would have assumed the first snippet was superior: fewer lines and uses the reduce method! But I promise far more engineers can very quickly and easily understand what's going on in the second snippet. I would much rather have the second snippet in my codebase any day.

Premature abstraction #

Premature abstractions tend to be pretty common in object-oriented languages. This stackexchange answer made me laugh quite a bit, so I'll use it as an example. Let's say you have a system with employee information. Well, perhaps you decide employees are types of humans, so we'd better have a human class, and humans are a type of mammal, so we'd better have a mammal class, and so on. All of a sudden, you might have to navigate several layers up to the animal class to see an employee's properties and methods. As the stackexchange answer succinctly put it:

As a result, we ended up with code that really only needed to deal with, say, records of employees, but were carefully written to be ready if you ever hired an arachnid or maybe a crustacean.

DRY dogma #

Don't Repeat Yourself (DRY) is a coding philosophy where you try to minimize the amount of code repeated in your software. In theory, repeating code even once increases the chance that you'll miss updating the code in both places or end up with inconsistent behavior when you have to implement the code somewhere else. In practice, DRYing up code can sometimes be complex. Perhaps there is a little repeated code shared between client and server. Do we need to create a way to share this logic? If it's only one small instance, it simply may not be worth the complexity of sharing logic. If this is going to be a common issue in the codebase, then perhaps centralizing the logic is worth it.
But importantly, we can't just assume that one instance of repeated code means we must eliminate the redundancy.

What should we aim for instead? #

There's definitely a balance to be struck. We can't have purely dumb code with no abstractions: that ends up being pretty error-prone. Imagine you're working with an API that has some set of required headers. Forcing all engineers to remember to include those headers with every API call is error-prone.

file1

fetch("/api/users", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

fetch(`/api/users/${userId}`, {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

file2

fetch("/api/transactions", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

file3

fetch("/api/settings", {
  headers: {
    Authorization: `Bearer ${token}`,
    AppVersion: version,
    XsrfToken: xsrfToken,
  },
});

Furthermore, having to track down every instance of that API call to update the headers (or any other required info) could be challenging. In this instance, it makes a lot of sense to create some kind of API service that encapsulates the header logic:

service

// Merge the required headers into whatever options the caller passes.
function apiRequest(url, options = {}) {
  return fetch(url, {
    ...options,
    headers: {
      ...options.headers,
      Authorization: `Bearer ${token}`,
      AppVersion: version,
      XsrfToken: xsrfToken,
    },
  });
}

file1

apiRequest("/api/users");
apiRequest(`/api/users/${userId}`);

file2

apiRequest("/api/transactions");

file3

apiRequest("/api/settings");

The apiRequest function is a pretty helpful abstraction. It helps that it is a very minimal abstraction: just enough to prevent future engineers from making mistakes but not so much that it's confusing. These kinds of abstractions, however, can get out of hand. I have seen code where making a request looks something like this:

const API_PATH = "api";
const USER_PATH = "user";
const TRANSACTIONS_PATH = "transactions";
const SETTINGS_PATH = "settings";

createRequest(
  endpointGenerationFn,
  [API_PATH, USER_PATH],
  getHeaderOverrides("authenticated")
);
createRequest(
  endpointGenerationFn,
  [API_PATH, USER_PATH, userId],
  getHeaderOverrides("authenticated")
);

There's really no need for this. You're not saving all that much by making variables instead of using strings for paths. In fact, this ends up making it really hard for someone debugging the code to search! Typically, I'd look for the string "api/user" in my IDE to try to find the location of the request. Would I be able to find it with this abstraction? Would I be able to find it at 2am? Furthermore, passing an endpoint-generation function that consumes the path parts seems like overkill and may be inscrutable to more junior engineers (or, again, 2am you).

Keep it as simple as possible #

So I think in the end my message is to keep your code as simple as possible. Don't create some abstraction that may or may not be needed eventually. Weigh the maintenance value of DRYing up parts of your codebase against readability.

a year ago 103 votes
The ChatGPT wrapper product boom is an uncanny valley hellscape

Here we go again: I'm so tired of crypto web3 LLMs. I'm positive there are wonderful applications for LLMs. The ChatGPT web UI seems great for summarizing information from various online sources (as long as you're willing to verify the things that you learn). But a lot of the "AI businesses" coming out right now are just lightweight wrappers around ChatGPT. It's lazy and unhelpful.

Probably the worst offenders are in the content marketing space. We didn't know how lucky we were back in the "This one weird trick for saving money" days. Now, rather than a human writing that junk, we have every article sounding like the writing voice equivalent of the dad from Cocomelon.

Here's an approximate technical diagram of how these businesses work: Part 1 is what I like to call the "bilking process." Basically, you put up a flashy landing page promising content generation in exchange for a monthly subscription fee (or discounted annual fee, of course!). No more paying pesky writers! Once the husk of a company has secured the bag, part 2, the "bullshit process," kicks in. Customers provide their niches, and the service happily passes queries over to the ChatGPT (or similar) API. In return, customers are rewarded with stinky garbage articles that sound like they're being narrated by HAL on Prozac. Success!

I suppose we should have expected as much. With every new tech trend comes a deluge of tech investors trying to find the next great thing. And when this happens, it's a gold rush every time. I will say I'm more optimistic about "AI" (aka machine learning, aka statistics). There are going to be some pretty cool applications of this tech eventually—but your ChatGPT wrapper ain't it.

a year ago 123 votes
Quality is a hard sell in big tech

I have noticed a trend in a handful of products I've worked on at big tech companies. I have friends at other big tech companies who have noticed a similar trend: the products are kind of crummy. Here are some experiences that I have often encountered:

- the UI is flaky and/or unintuitive
- there is a lot of cruft in the codebase that has never been cleaned up
- bugs have "acceptable" workarounds that never get fixed
- packages/dependencies are badly out of date
- the developer experience is crummy (bad build times, easily breakable processes)

One of the reasons I have found for these issues is that we simply aren't investing enough time to increase product quality: we have poor or nonexistent quality metrics, invest minimally in testing infrastructure (and actually writing tests), and don't invest in improving the inner loop. But why is this? My experience has been that quality is simply a hard sell in big tech.

Let's first talk about something that's an easy sell right now: AI everything. Why is this an easy sell? Well, Microsoft could announce they put ChatGPT in a toaster and their stock price would jump $5/share. The sad truth is that big tech is hyper-focused on doing the things that make their stock prices go up in the short term. It's hard to make this connection with quality initiatives. If your software is slightly less shitty, the stock price won't jump next week. So instead of being able to sell the obvious benefit of shiny new features, you need to have an Engineering Manager willing to risk having lower impact for the sake of having a better product. Even if there is broad consensus in your team, group, or org that these quality improvements are necessary, there's a point up the corporate hierarchy where it simply doesn't matter to them. Certainly not as much as shipping some feature to great fanfare.

Part of a bigger strategy? #

Cory Doctorow has said some interesting things about enshittification in big tech: "enshittification is a three-stage process: first, surpluses are allocated to users until they are locked in. Then they are withdrawn and given to business-customers until they are locked in. Then all the value is harvested for the company's shareholders, leaving just enough residual value in the service to keep both end-users and business-customers glued to the platform."

At a macro level, it's possible this is the strategy: hook users initially, make them dependent on your product, then cram in superficial features that make the stock go up but don't offer real value, and keep the customers simply because they really have no choice but to use your product (an enterprise Office 365 customer probably isn't switching anytime soon). This does seem to have been a good strategy in the short term: look at Microsoft's stock ever since they started cranking out AI everything. But how can the quality corner-cutting work long-term?

I hope the hubris will backfire #

Something will have to give. Big tech products can't just keep getting shittier—can they? I'd like to think some smaller competitors will come eat their lunch, but I'm not sure. Hopefully we're not all too entrenched in the big tech ecosystem for this to happen.

a year ago 42 votes

More in science

The Hidden Engineering of Liquid Dampers in Skyscrapers

[Note that this article is a transcript of the video embedded above.]

There's a new trend in high-rise building design. Maybe you've seen this in your city. The best lots are all taken, so developers are stretching the limits to make use of space that isn't always ideal for skyscrapers. They're not necessarily taller than buildings of the past, but they are a lot more slender. "Pencil tower" is the term generally used to describe buildings that have a slenderness ratio of more than around 10 to 1, height to width.

A lot of popular discussion around skyscrapers is about how tall we can build them. Eventually, you can get so tall that there are no materials strong enough to support the weight. But pencil towers are the perfect case study in why strength isn't the only design criterion used in structural engineering. Of course, we don't want our buildings to fall down, but there's other stuff we don't want them to do, too, including flex and sway in the wind. In engineering, this concept is called the serviceability limit state, and it's an entirely separate consideration from strength. Even if moderate loads don't cause a structure to fail, the movement they cause can lead to windows breaking, tiles cracking, accelerated fatigue of the structure, and, of course, people on the top floors losing their lunch from disorientation and discomfort. So, limiting wind-induced motions is a major part of high-rise design and, in fact, can be such a driving factor in the engineering of the building that strength is a secondary consideration.

Making a building stiffer is the obvious solution. But adding stiffness requires larger columns and beams, and those subtract valuable space within the building itself. Another option is to augment a building's aerodynamic performance, reducing the loads that winds impose. But that too can compromise the expensive floorspace within. So many engineers are relying on another creative way to limit the vibrations of tall buildings. And of course, I built a model in the garage to show you how this works. I'm Grady, and this is Practical Engineering.

One of the very first topics I ever covered on this channel was tuned mass dampers. These are mechanisms that use a large, solid mass to counteract motion in all kinds of structures, dissipating the energy through friction or hydraulics, like the shock absorbers in vehicles. Probably the most famous of these is in the Taipei 101 building. At the top of the tower is a massive steel pendulum, and instead of hiding it away in a mechanical floor, they opened it to visitors, even giving the damper its own mascot. But mass dampers have a major limitation because of those mechanical parts. The complex springs, dampers, and bearings need regular maintenance, and they are custom-built. That gets pretty expensive. So, what if we could simplify the device?

This is my garage-built high-rise. It's not going to hold many conference room meetings, but it does do a good job swaying from side to side, just like an actual skyscraper. And I built a little tank to go on top here. The technical name for this tank is a tuned liquid column damper, and I can show you how it works. Let's try it with no water first. Using my digitally calibrated finger, I push the tower over by a prescribed distance, and you can see this would not be a very fun ride. There is some natural damping, but the oscillation goes on for quite a while before the motion stops. Now, let's put some water in the tank.
With the power of movie magic, I can put these side by side so you can really get a sense of the difference. By the way, nearly all of the parts for this demonstration were provided by my friends at Send-Cut-Send. I don't have a milling machine or laser cutter, so this is a really nice option for getting customized parts made from basically any material - aluminum, steel, acrylic - that are ready to assemble.

Instead of complex mechanical devices, liquid column dampers dissipate energy through the movement of water. The liquid in the tank is both the mass and the damper. This works like a pendulum where the fluid oscillates between two columns. Normally, there's an orifice between the two columns that creates the damping through friction loss as water flows from one side to the other. To make this demo a little simpler, I just put lids on the columns with small holes. I actually bought a fancy air valve to make this adjustable, but it didn't allow quite enough airflow. So instead, I simplified with a piece of tape. Very technical. Energy transferred to the water through the building is dissipated by the friction of the air as it moves in and out of the columns. And you can even hear this as it happens.

Any supplemental damping system starts with a design criterion. This varies around the world, but in the US, it is probability-based. We generally require that peak accelerations with a 1-in-10 chance of being exceeded in a given year be limited to 15-18 milli-gs in residential buildings and 20-25 milli-gs in offices. For reference, the lateral acceleration for highway curve design is usually capped at 100 milli-gs, so the design criteria for buildings are between a fourth and a sixth of that. I think that makes intuitive sense. You don't want to feel like you're navigating a highway curve while you sit at your desk at work.

It's helpful to think of these systems in a simplified way. This is the most basic representation: a spring, a damper, and a mass on a cart. We know the mass of the building. We can estimate its stiffness. And the building itself has some intrinsic damping, but usually not much. If we add the damping system onto the cart, it's basically just the same thing at a smaller scale, and the design process is really just choosing the mass and damping systems for the remaining pieces of this puzzle to achieve the design goal. The mass of liquid dampers is usually somewhere between half a percent and two percent of the building's total weight. The damping is related to the water's ability to dissipate energy. And the spring needs to be tuned to the building.

All buildings vibrate at a natural frequency related to their height and stiffness. Think of it like a big tuning fork full of offices or condos. I can estimate my model's natural frequency by timing the number of oscillations in a given time interval. It's about 1.3 hertz, or cycles per second. In an ideal tuned damper, the oscillation of the damping system matches that of the building. So tuning the frequency of the damper is an important piece of the puzzle. For a tuned liquid column damper, the tuning mostly comes from the length of the liquid flow path. A longer path results in a lower frequency. The compression of the air above the column in my demo affects this too, and some types of dampers actually take advantage of that phenomenon. I got the best tuning when the liquid level was about halfway up the columns.
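To put a rough number on that tuning rule, here's a back-of-the-envelope sketch (my own, not from the video) using the classic idealized result for a U-shaped liquid column: the liquid oscillates like a pendulum at f = (1/2*pi) * sqrt(2g/L), where L is the total length of the liquid path.

// Idealized tuned liquid column damper tuning (a sketch, not a design tool).
const g = 9.81; // gravitational acceleration, m/s^2

// Natural frequency (Hz) of a liquid column with total path length L (m)
function columnFrequency(L) {
  return Math.sqrt((2 * g) / L) / (2 * Math.PI);
}

// Liquid path length (m) needed to tune the column to a target frequency f (Hz)
function columnLengthFor(f) {
  return (2 * g) / (2 * Math.PI * f) ** 2;
}

// The model sways at about 1.3 Hz, so the ideal liquid path is roughly:
console.log(columnLengthFor(1.3).toFixed(3)); // ~0.294 m
console.log(columnFrequency(0.294).toFixed(2)); // ~1.30 Hz, confirming the tuning

A real damper deviates from this ideal (the trapped air above the sealed columns acts as an extra spring, the effect mentioned above), but the formula captures the key relationship: a longer path means a lower frequency.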
The orifice has less of an effect on frequency and is used mostly to balance the amount of damping versus the volume of liquid that flows through each cycle. In my model, with one of the holes completely closed off, you can see the water doesn't move, and you get minimal damping. With the tape mostly covering the hole, you get the most frictional loss, but not all the fluid flows from one side to the other each cycle. When I covered about half of one hole, I got the full fluid flow and the best damping performance.

The benefit of a tuned column damper is that it doesn't take up a lot of space. And because the fluid movement is confined, they're fairly predictable in behavior. So these are used in quite a few skyscrapers, including the Random House Tower in Manhattan, One Wall Center in Vancouver (which actually has many walls), and Comcast Center in Philadelphia. But tuned liquid column dampers have a few downsides. One is that they really only work for flexible structures, like my demo. Just like in a pendulum, the longer the flow path in a column damper, the lower the frequency of the oscillation. For stiffer buildings with higher natural frequencies, tuning requires a very short liquid column, which limits the mass and damping capability to a point where you don't get much benefit. The other thing is that this is still kind of a complex device with intricate shapes and a custom orifice between the two columns. So, we can get even simpler.

This is my model tuned sloshing damper, and it's about as simple as a damper can get. I put a weight inside the empty tank to make a fair comparison, and we can put it side by side with water in the tank to see how it works. As you can see, sloshing dampers dissipate energy by… sloshing. Again, the water is both the mass and the damper. If you tune it just right, the sloshing happens perfectly out of phase with the motion of the building, reducing the magnitude of the movement and acceleration. And you can see why this might be a little cheaper to build - it's basically just a swimming pool - four concrete walls, a floor, and some water. There's just not that much to it. But the simplicity of construction hides the complexity of design.

Like a column damper, the frequency of a sloshing damper can be tuned, first by the length of the tank. Just like fretting a guitar string further down the neck makes the note lower, a tank works the same way: as the tank gets longer, its sloshing frequency goes down. That makes sense - it takes longer for the wave to get from one side to the other. But you can also adjust the depth. Waves move slower in shallower water and faster in deeper water.

Watch what happens when I overfill the tank. The initial wave starts on the left as the building goes right. It reaches the right side just as the building starts moving left. That's what we want; it's counteracting the motion. But then it makes it back to the left before the building starts moving right. It's actually kind of amplifying the motion, like pushing a kid on a swing. Pretty soon after that, the wave and the building start moving in phase, so there's pretty much no damping at all. Compare it to the more properly tuned example, where most of the wave motion is counteracting the building motion as it sways back and forth. You can see in my demo that a lot of the energy dissipation comes from the breaking waves as they crash against the sides of the tank. That is a pretty complicated phenomenon to predict, and it's highly dependent on how big the waves are.
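Those two tuning knobs, tank length and water depth, can be made concrete with linear wave theory. As a sketch (my own, and only valid for small, non-breaking waves; the tank dimensions below are made-up demo-scale numbers), the fundamental sloshing frequency of a rectangular tank is f = (1/2*pi) * sqrt((pi*g/L) * tanh(pi*h/L)), where L is the tank length in the sloshing direction and h is the water depth.

// Fundamental sloshing frequency of a rectangular tank (linear wave theory).
const g = 9.81; // m/s^2

// L = tank length in the sloshing direction (m), h = water depth (m)
function sloshingFrequency(L, h) {
  const k = Math.PI / L; // wavenumber of the fundamental sloshing mode
  return Math.sqrt(g * k * Math.tanh(k * h)) / (2 * Math.PI);
}

// A longer tank lowers the frequency (the wave takes longer to cross):
console.log(sloshingFrequency(0.3, 0.05).toFixed(2)); // ~1.12 Hz
console.log(sloshingFrequency(0.6, 0.05).toFixed(2)); // ~0.58 Hz

// Deeper water means faster waves and a higher frequency,
// which is why overfilling the tank de-tunes the damper:
console.log(sloshingFrequency(0.3, 0.10).toFixed(2)); // ~1.43 Hz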
And even with the level pretty well tuned to the frequency of the building, you can see there's a lot of complexity in the motion, with multiple modes of waves, and not all of them acting against the motion of the building. So, instead of relying on breaking waves, most sloshing dampers use flow obstructions like screens, columns, or baffles. I got a few different options cut out of acrylic so we can try this out. These baffles add drag, increasing the energy dissipation within the water, usually without changing the sloshing frequency. Here's a side-by-side comparison of the performance without a baffle and with one. You can see that the improvement is pretty dramatic. The motion is more controlled and the behavior is more linear, making this much simpler to predict during the design phase. It's kind of the best of both worlds, since you get damping from the sloshing and the drag of the water passing through the screen. Almost all the motion is stopped in this demo after only three oscillations. I was pretty impressed with this.

Here's all three of the baffle runs side by side. Actually, the one with the smallest holes worked the best in my demo, but deciding the configuration of these baffles is a big challenge in the engineering of these systems because you can't really just test out a bunch of options at full scale.

Devices like this are in service in quite a few high-rise buildings, including Princess Tower in Dubai and the Museum Tower in Dallas. With no moving parts and very little maintenance except occasionally topping it off to keep the water at the correct level, you can see how it would be easy to choose a sloshing damper for a new high-rise project. But there are some disadvantages. One is volumetric efficiency. You can see that not all the water in the tank is mobilized, especially for smaller movements, which means not all the water is contributing to the damping. The other is non-linearity. The amount of damping changes depending on the magnitude of the movement, since drag is related to velocity squared. And even the frequency of the damper isn't constant; it can change with the wave amplitude as well because of the breaking waves. So you might get good performance at the design level, but not so much for slower winds.

Dampers aren't just used in buildings. Bridges also take advantage of these clever devices, especially on the decks of pedestrian bridges and the towers of long-span bridges. This also happens at a grand scale between the Earth and moon. Tidal bulges in the oceans created by the moon's tug on Earth dissipate energy through friction and turbulence, which is a big part of why our planet's rotation is slowing over time. Days used to be a lot shorter when the Earth was young, but we have a planet-scale liquid damper constantly dissipating our rotational energy.

But whether it's bridges or buildings, these dampers usually don't work perfectly right at the start. Vibrations are complicated. They're very hard to predict, even with modern tools like simulation software and scale physical models. So, all dampers have to go through a commissioning process. Usually this involves installing accelerometers once construction is nearing completion to measure the structure's actual natural frequency. The tuning of tuned dampers doesn't just happen during the design phase; you want some adjustability after construction to make sure they match the structure's natural frequency exactly so you get the most damping possible.
For liquid dampers, that means adjusting the levels in the tanks. And in many cases, buildings might use multiple dampers tuned to slightly different frequencies to improve the performance over a range of conditions. Even in these two basic categories, there is a huge amount of variability and a lot of ongoing research to minimize the tradeoffs these systems come with. The truth is that, relatively speaking, there aren't that many of these systems in use around the world. Each one is highly customized, and even putting them into categories can get a little tricky.

There are even actively controlled liquid dampers. My tuning for the column damper works best for a single magnitude of motion, but you can see that once the swaying gets smaller, the damper isn't doing a lot to curb it. You can imagine that if I constantly adjusted the size of the orifice, I could get better performance over a broader range of unwanted motion. You can do this electronically by having sensors feed into a control system that adjusts a valve position in real time. Active systems, and just the flexibility to tune a damper in general, also help deal with changes over time. If a building's use changes, if new skyscrapers nearby change the wind conditions, or if it gets retrofits that change its natural frequency, the damping system can easily accommodate those changes.

In the end, a lot of engineering decisions come down to economics. In most cases, damping is less about safety and more about comfort, which is often harder to pin down. Engineers and building owners face a balancing act between the cost of supplemental damping and the value of the space those systems take up. Tuned mass dampers are kind of household names when it comes to damping. A few buildings like Shanghai Center and Taipei 101 have made them famous. They're usually the most space-efficient (since steel and concrete are more dense than water), but they're often more costly to install and maintain. Liquid dampers are the unsung heroes. They take up more space, but they're simple and cost-effective, especially if the fire codes already require you to have a big tank of water at the top of your building anyway. Maybe someday, an architect will build one out of glass or acrylic, add some blue dye and mica powder, and put it on display as a public showcase. Until then, we'll just have to know it's there by feel.
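As a footnote on that commissioning step: the first task is usually pulling the structure's actual natural frequency out of the accelerometer records. Here's a minimal sketch of that idea (my own illustration, not any firm's actual tooling), which scans candidate frequencies with a naive discrete Fourier transform and reports the one carrying the most energy.

// Estimate the dominant sway frequency from accelerometer samples (a sketch).
function dominantFrequency(samples, sampleRateHz, fMin = 0.1, fMax = 5, step = 0.01) {
  let best = { freq: fMin, power: -Infinity };
  for (let f = fMin; f <= fMax; f += step) {
    // Project the signal onto a sinusoid at frequency f (one DFT bin)
    let re = 0;
    let im = 0;
    for (let n = 0; n < samples.length; n++) {
      const angle = (2 * Math.PI * f * n) / sampleRateHz;
      re += samples[n] * Math.cos(angle);
      im -= samples[n] * Math.sin(angle);
    }
    const power = re * re + im * im;
    if (power > best.power) best = { freq: f, power };
  }
  return best.freq; // Hz; the damper's water levels get tuned against this
}

// Synthetic test: a decaying 1.3 Hz sway sampled at 50 Hz for 20 seconds
const sampleRate = 50;
const sway = Array.from({ length: sampleRate * 20 }, (_, n) => {
  const t = n / sampleRate;
  return Math.exp(-0.05 * t) * Math.sin(2 * Math.PI * 1.3 * t);
});
console.log(dominantFrequency(sway, sampleRate).toFixed(2)); // ~1.30

In practice you'd use proper spectral estimation on hours of ambient vibration data, but even this toy version recovers the sway frequency of a synthetic decaying signal, and that measured frequency is what the water levels are adjusted to match.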

2 hours ago 1 vote
London Inches Closer to Running Transit System Entirely on Renewable Power

Under a new agreement, London will source enough solar power to run its light railway and tram networks entirely on renewable energy. Read more on E360 →

10 hours ago 1 vote
Science slow down - not a simple question

I participated in a program about 15 years ago that looked at science and technology challenges faced by a subset of the US government. I came away thinking that such problems fall into three broad categories:

- Actual science and engineering challenges, which require foundational research and creativity to solve.
- Technology that may be fervently desired but is incompatible with the laws of nature, economic reality, or both.
- Alleged science and engineering problems that are really human/sociology issues.

Part of science and engineering education and training is giving people the skills to recognize which problems belong to which categories. Confusing these can strongly shape the perception of whether science and engineering research is making progress.

There has been a lot of discussion in the last few years about whether scientific progress (however that is measured) has slowed down or stagnated. For example, see here:

https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/
https://news.uchicago.edu/scientific-progress-slowing-james-evans
https://www.forbes.com/sites/roberthart/2023/01/04/where-are-all-the-scientific-breakthroughs-forget-ai-nuclear-fusion-and-mrna-vaccines-advances-in-science-and-tech-have-slowed-major-study-says/
https://theweek.com/science/world-losing-scientific-innovation-research

A lot of the recent talk is prompted by this 2023 study, which argues that despite the world having many more researchers than ever before (behold population growth) and more global investment in research, somehow "disruptive" innovations are coming less often, or are fewer and farther between these days. (Whether this is an accurate assessment is not a simple matter to resolve; more on this below.) There is a whole tech bro culture that buys into this, however. For example, see this interview from last week in the New York Times with Peter Thiel, which points out that Thiel has been complaining about this for a decade and a half.

On some level, I get it emotionally. The unbounded future spun in a lot of science fiction seems very far away. Where is my flying car? Where is my jet pack? Where is my moon base? Where are my fusion power plants, my antigravity machine, my tractor beams, my faster-than-light drive? Why does the world today somehow not seem that different than the world of 1985, while the world of 1985 seems very different than that of 1945?

Some of the folks who buy into this think that science is deeply broken somehow - that we've screwed something up, because we are not getting the future they think we were "promised". Some of these people have this as an internal justification underpinning the dismantling of the NSF, the NIH, basically a huge swath of the research ecosystem in the US. These same people would likely say that I am part of the problem, and that I can't be objective about this because the whole research ecosystem as it currently exists is a groupthink self-reinforcing spiral of mediocrity.

Science and engineering are inherently human ventures, and I think a lot of these concerns have an emotional component. My take at the moment is this:

- Genuinely transformational breakthroughs are rare. They often require a combination of novel insights, previously unavailable technological capabilities, and luck. They don't come on a schedule.
- There is no hard and fast rule that guarantees continuous exponential technological progress. Indeed, in real life, exponential growth regimes never last. The 19th and 20th centuries were special.
- If we think of research as a quest for understanding, it's inherently hierarchical. Civilizational collapses aside, you can only discover how electricity works once. You can only discover the germ theory of disease, the nature of the immune system, and vaccination once (though in the US we appear to be trying really hard to test that by forgetting everything). You can only discover quantum mechanics once, and doing so doesn't imply that there will be an ongoing (infinite?) chain of discoveries of similar magnitude.
- People are bad at accurately perceiving rare events and their consequences, just like people have a serious problem evaluating risk or telling the difference between correlation and causation.
- We can't always recognize breakthroughs when they happen. Sure, I don't have a flying car. I do have a device in my pocket that weighs only a few ounces, gives me near-instantaneous access to the sum total of human knowledge, lets me video call people around the world, can monitor aspects of my fitness, and makes it possible for me to watch sweet videos about dogs. The argument that we don't have transformative, enormously disruptive breakthroughs as often as we used to or as often as we "should" is in my view based quite a bit on perception.

Personally, I think we still have a lot more to learn about the natural world. AI tools will undoubtedly be helpful in making progress in many areas, but I think it is definitely premature to argue that the vast majority of future advances will come from artificial superintelligences and thus we can go ahead and abandon the strategies that got us the remarkable achievements of the last few decades.

I think some of the loudest complainers (Thiel, for example) about perceived slowing advancement are software people. People who come from the software development world don't always appreciate that physical infrastructure and understanding are hard, and that there are not always clever or even brute-force ways to get to an end goal. Solving foundational problems in molecular biology or quantum information hardware or photonics or materials is not the same as software development. (The tech folks generally know this on an intellectual level, but I don't think all of them really understand it in their guts. That's why so many of them seem to ignore real-world physical constraints when talking about AI.) Trying to apply software-development-inspired approaches to science and engineering research isn't bad as a component of a many-pronged strategy, but alone it may not give the desired results - as warned in part by this piece in Science this week.

More frequent breakthroughs in our understanding and capabilities would be wonderful. I don't think dynamiting the US research ecosystem is the way to get us there, and hoping that we can dismantle everything because AI will somehow herald a new golden age seems premature at best.

yesterday 2 votes
Researchers Uncover Hidden Ingredients Behind AI Creativity

Image generators are designed to mimic their training data, so where does their apparent creativity come from? A recent study suggests that it’s an inevitable by-product of their architecture. The post Researchers Uncover Hidden Ingredients Behind AI Creativity first appeared on Quanta Magazine

yesterday 2 votes
Animals Adapting to Cities

Humans are dramatically changing the environment of the Earth in many ways. Only about 23% of the land surface (excluding Antarctica) is considered to be “wilderness”, and this is rapidly decreasing. What wilderness is left is also mostly managed conservation areas. Meanwhile, about 3% of the surface is considered urban. I could not find a […] The post Animals Adapting to Cities first appeared on NeuroLogica Blog.

yesterday 2 votes