Night is falling on Cerro Pachón. Stray clouds reflect the last few rays of golden light as the sun dips below the horizon. I focus my camera across the summit to the westernmost peak of the mountain. Silhouetted within a dying blaze of red and orange light looms the sphinxlike shape of the Vera C. Rubin Observatory. “Not bad,” says William O’Mullane, the observatory’s deputy project manager, amateur photographer, and master of understatement. We watch as the sky fades through reds and purples to a deep, velvety black. It’s my first night in Chile. For O’Mullane, and hundreds of other astronomers and engineers, it’s the culmination of years of work, as the Rubin Observatory is finally ready to go “on sky.” Rubin is unlike any telescope ever built. Its exceptionally wide field of view, extreme speed, and massive digital camera will soon begin the 10-year Legacy Survey of Space and Time (LSST) across the entire southern sky. The result will be a high-resolution movie of how our solar...
a week ago 10 votes


More from IEEE Spectrum

Why JPEGs Still Rule the Web

A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail.

For roughly three decades, the JPEG has been the World Wide Web’s primary image format. But it wasn’t the one the Web started with. In fact, the first mainstream graphical browser, NCSA Mosaic, didn’t initially support inline JPEG files—just inline GIFs, along with a couple of other formats forgotten to history. However, the JPEG had many advantages over the format it quickly usurped.

Despite the two not appearing together right away—inline JPEG support first arrived in Netscape in 1995, three years after the image standard was officially published—the JPEG and the web browser fit together naturally. JPEG files degraded more gracefully than GIFs, retaining more of the picture’s initial form—and that allowed the format to scale to greater levels of success. While it wasn’t capable of animation, it progressively expanded from something a modem could pokily render to a format that was good enough for high-end professional photography.

For the internet’s purposes, the degradation was the important part. But it wasn’t the only thing that made the JPEG immensely valuable to the digital world. An essential part was that it was a documented standard built by numerous stakeholders.

The GIF was a de facto standard. The JPEG was an actual one

How important is it that JPEG was a standard? Let me tell you a story. During a 2013 New York Times interview conducted just before he received an award honoring his creation, GIF creator Steve Wilhite stepped into a debate he unwittingly created. Simply put, nobody knew how to pronounce the acronym for the image format he had fostered, the Graphics Interchange Format. He used the moment to attempt to set the record straight—it was pronounced like the peanut butter brand: “It is a soft ‘G,’ pronounced ‘jif.’ End of story,” he said.

I posted a quote from Wilhite on my popular Tumblr around that time, a period when the social media site was the center of the GIF universe. And soon afterward, my post got thousands of reblogs—nearly all of them disagreeing with Wilhite. Soon, Wilhite’s quote became a meme.

The episode illustrates how Wilhite, who died in 2022, did not develop his format by committee. He could say it sounded like “JIF” because he built it himself. He was handed the project as a CompuServe employee in 1987; he produced the object, and that was that. The initial document describing how it works? Dead simple. Thirty-eight years later, we’re still using the GIF—but it never rose to the same prevalence as the JPEG.

The JPEG, which formally emerged about five years later, was very much not that situation. Far from it, in fact—it’s the difference between a de facto standard and an actual one. And that proved essential to its eventual ubiquity.

We’re going to degrade the quality of this image throughout this article. At its full image size, it’s 13.7 megabytes. Irina Iriser

How the JPEG format came to life

Built with input from dozens of stakeholders, the Joint Photographic Experts Group ultimately aimed to create a format that fit everyone’s needs. (Reflecting its committee-led roots, there would be no confusion about the format’s name—an acronym of the organization that designed it.) And when the format was finally unleashed on the world, it was the subject of a more than 600-page book.
JPEG: Still Image Data Compression Standard, written by IBM employees and JPEG organization stakeholders William B. Pennebaker and Joan L. Mitchell, describes a landscape of multimedia imagery held back by the lack of a way to balance the need for photorealistic images with the need for immediacy. Standardization, they believed, could fix this. “The problem was not so much the lack of algorithms for image compression (as there is a long history of technical work in this area),” the authors wrote, “but, rather, the lack of a standard algorithm—one which would allow an interchange of images between diverse applications.” And they were absolutely right. For more than 30 years, JPEG has made high-quality, high-resolution photography accessible in operating systems far and wide. Although we no longer need to compress JPEGs to within an inch of their life, having that capability helped enable the modern internet.

As the book notes, Mitchell and Pennebaker were given IBM’s support to follow through on this research and work with the JPEG committee, and that support led them to develop many of the JPEG format’s foundational patents. As described in patents filed by Mitchell and Pennebaker in 1988, IBM and other members of the JPEG standards committee, such as AT&T and Canon, were developing ways to use compression to make high-quality images easier to deliver in confined settings. Each member brought their own needs to the process. Canon, obviously, was more focused on printers and photography, while AT&T’s interests were tied to data transmission. Together, the companies left behind a standard that has stood the test of time.

All this means, funnily enough, that the first place a program capable of using JPEG compression appeared was not MacOS or Windows, but OS/2—a fascinating-but-failed graphical operating system created by Pennebaker and Mitchell’s employer, IBM. As early as 1990, OS/2 supported the format through the OS/2 Image Support application.

At 50 percent of its initial quality, the image is down to about 2.6 MB. By dropping half of the image’s quality, we brought it down to one-fifth of the original file size. Original image: Irina Iriser

What a JPEG does when you heavily compress it

The thing that differentiates a JPEG file from a PNG or a GIF is how the data degrades as you compress it. The goal for a JPEG image is to still look like a photo when all is said and done, even if some compression is necessary to make it all work at a reasonable size. That way, you can display something that looks close to the original image in fewer bytes. Or, as Pennebaker and Mitchell put it, “the most effective compression is achieved by approximating the original image (rather than reproducing it exactly).”

Central to this is a compression process called discrete cosine transform (DCT), a lossy form of compression encoding heavily used in all sorts of compressed formats, most notably in digital audio and signal processing. Essentially, it delivers a lower-quality product by removing details, while still keeping the heart of the original product through approximation. The stronger the cosine transformation, the more compressed the final result. The algorithm, developed by researchers in the 1970s, essentially takes a grid of data and treats it as if you’re controlling its frequency with a knob. The data rate is controlled like water from a faucet: The more data you want, the higher the setting. DCT allows a trickle of data to still come out in highly compressed situations, even if it means a slightly compromised result. In other words, you may not keep all the data when you compress it, but DCT allows you to keep the heart of it.
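
The faucet analogy maps directly onto how DCT-based compression works in practice: transform a small block of pixels into frequency coefficients, then throw away the high-frequency detail the eye misses most. Below is a minimal Python sketch of that idea (my illustration, not the actual JPEG pipeline, which also quantizes the coefficients against a perceptual table and entropy-codes them); it assumes NumPy and SciPy are installed.

# A toy sketch (not a JPEG encoder) of DCT-based compression on one 8x8 block.
import numpy as np
from scipy.fft import dctn, idctn

x, y = np.meshgrid(np.arange(8), np.arange(8))
block = (60 + 10 * x + 5 * y).astype(float)      # a smooth 8x8 patch, like a bit of sky

coeffs = dctn(block - 128, norm="ortho")         # center on zero, then take the 2-D DCT
kept = np.zeros_like(coeffs)
kept[:4, :4] = coeffs[:4, :4]                    # "turn the knob down": keep the 16 lowest frequencies

restored = idctn(kept, norm="ortho") + 128
print("mean pixel error:", np.abs(block - restored).mean())   # small relative to the 0-255 range

Even after discarding three-quarters of the coefficients, the reconstruction stays close to the original patch, which is the whole trick: the low-frequency terms carry the heart of the image, and the discarded detail is what you see leaking away in the degraded forest photos throughout this article.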
(See this video for a more technical but still somewhat easy-to-follow description of DCT.) DCT is everywhere. If you have ever seen a streaming video or an online radio stream that degraded in quality because your bandwidth suddenly declined, you’ve witnessed DCT being utilized in real time.

A JPEG file doesn’t have to leverage the DCT with just one method, as JPEG: Still Image Data Compression Standard explains: “The JPEG standard describes a family of large image compression techniques, rather than a single compression technique. It provides a ‘tool kit’ of compression techniques from which applications can select elements that satisfy their particular requirements.”

The toolkit has four modes:

- Sequential DCT, which displays the compressed image in order, like a window shade slowly being rolled down
- Progressive DCT, which displays the full image in the lowest-resolution format, then adds detail as more information rolls in
- Sequential lossless, which uses the window shade format but doesn’t compress the image
- Hierarchical mode, which combines the prior three modes—so maybe it starts with a progressive mode, then loads DCT compression slowly, but then reaches a lossless final result

At the time the JPEG was being created, modems were extremely common. That meant images loaded slowly, making Progressive DCT the most fitting format for the early internet. Over time, the progressive DCT mode has become less common, as many computers can simply load the sequential DCT in one fell swoop.

That same forest, saved at 5 percent quality. Down to about 419 kilobytes. Original image: Irina Iriser

When an image is compressed with DCT, the change tends to be less noticeable in busier, more textured areas of the picture, like hair or foliage. Those areas are harder to compress, which means they keep their integrity longer. It tends to be more noticeable, however, with solid colors or in areas where the image sharply changes from one color to another—like text on a page. Ever screenshot a social media post, only for it to look noisy? Congratulations, you just made a JPEG file. Other formats, like PNG, do better with text, because their compression format is intended to be non-lossy. (Side note: PNG’s compression format, DEFLATE, was designed by Phil Katz, who also created the ZIP format. The PNG format uses it in part because it was a license-free compression format. So it turns out the brilliant coder with the sad life story improved the internet in multiple ways before his untimely passing.)

In many ways, the JPEG is one tool in our image-making toolkit. Despite its age and maturity, it remains one of our best options for sharing photos on the internet. But it is not a tool for every setting—despite the fact that, like a wrench sometimes used as a hammer, we often leverage it that way.

Forgent Networks claimed to own the JPEG’s defining algorithm

The JPEG format gained popularity in the ’90s for reasons beyond the quality of the format. Patents also played a role: Starting in 1994, the tech company Unisys attempted to bill individual users who relied on GIF files, which depended on a compression patent the company owned. This made the free-to-use JPEG more popular. (This situation also led to the creation of the patent-free PNG format.) While the JPEG was standards-based, it could still have faced the same fate as the GIF, thanks to the quirks of the patent system.
A few years before the file format came to life, a pair of Compression Labs employees filed a patent application that dealt with the compression of motion graphics. By the time anyone noticed its similarity to JPEG compression, the format was ubiquitous.

Our forest, saved at 1 percent quality. This image is only about 239 KB in size, yet it’s still easily recognizable as the same photo. That’s the power of the JPEG. Original image: Irina Iriser

Then in 1997, a company named Forgent Networks acquired Compression Labs. The company eventually spotted the patent and began filing lawsuits over it, a series of events it saw as a stroke of good luck. “The patent, in some respects, is a lottery ticket,” Forgent Chief Financial Officer Jay Peterson told CNET in 2005. “If you told me five years ago that ‘You have the patent for JPEG,’ I wouldn’t have believed it.”

While Forgent’s claim of ownership of the JPEG compression algorithm was tenuous, it ultimately saw more success with its legal battles than Unisys did. The company earned more than $100 million from digital camera makers before the patent finally ran out of steam around 2007. The company also attempted to extract licensing fees from the PC industry. Eventually, Forgent agreed to a modest $8 million settlement. As the company took an increasingly aggressive approach to its acquired patent, it began to lose battles both in the court of public opinion and in actual courtrooms. Critics pounced on examples of prior art, while courts limited the patent’s use to motion-based uses like video. By 2007, Forgent’s compression patent expired—and its litigation-heavy approach to business went away. That year, the company became Asure Software, which now specializes in payroll and HR solutions. Talk about a reboot.

Why the JPEG won’t die

The JPEG file format has served us well. It’s been difficult to remove the format from its perch. The JPEG 2000 format, for example, was intended to supplant it by offering more lossless options and better performance. JPEG 2000 is widely used by the Library of Congress and specialized sites like the Internet Archive; however, it is less popular as an end-user format.

See the forest JPEG degrade from its full resolution to 1 percent quality in this GIF. Original image: Irina Iriser

Other image technologies have had somewhat more luck getting past the JPEG format. The Google-supported WebP is popular with website developers (and controversial with end users). Meanwhile, the formats AVIF and HEIC, each developed by standards bodies, have largely outpaced both JPEG and JPEG 2000. Still, the JPEG will be difficult to kill at this juncture. These days, the format is similar to MP3 or ZIP files—two legacy formats too popular and widely used to kill. Other formats that compress the files better and do the same things more efficiently are out there, but it’s difficult to topple a format with a 30-year head start. Shaking off the JPEG is easier said than done. I think most people will be fine to keep it around.

Ernie Smith is the editor of Tedium, a long-running newsletter that hunts for the end of the long tail.

2 weeks ago 10 votes
The Birth of the University as Innovation Incubator

This article is excerpted from Every American an Innovator: How Innovation Became a Way of Life, by Matthew Wisnioski (The MIT Press, 2025).

Imagine a point-to-point transportation service in which two parties communicate at a distance. A passenger in need of a ride contacts the service via phone. A complex algorithm based on time, distance, and volume informs both passenger and driver of the journey’s cost before it begins. This novel business plan promises efficient service and lower costs. It has the potential to disrupt an overregulated taxi monopoly in cities across the country. Its enhanced transparency may even reduce racial discrimination by preestablishing pickups regardless of race.

Every American an Innovator: How Innovation Became a Way of Life, by Matthew Wisnioski (The MIT Press, 2025). The MIT Press

The service was a project of the Center for Entrepreneurial Development (CED) at Carnegie Mellon University. The dial-a-ride service was designed to resurrect a defunct cab company that had once served Pittsburgh’s African American neighborhoods. Backed by the National Science Foundation, the CED was envisioned as an innovation “hatchery,” intended to challenge the norms of research science and higher education, foster risk-taking, birth campus startups focused on market-based technological solutions to social problems, and remake American science to serve national needs.

Are innovators born or made?

During the Cold War, the model for training scientists and engineers in the United States was one of manpower in service to a linear model of innovation: Scientists pursued “basic” discovery in universities and federal laboratories; engineer–scientists conducted “applied” research elsewhere on campus; engineers developed those ideas in giant teams for companies such as Lockheed and Boeing; and research managers oversaw the whole process. This model dictated national science policy, elevated the scientist as a national hero in pursuit of truth beyond politics, and pumped hundreds of millions of dollars into higher education. In practice, the lines between basic and applied research were blurred, but the perceived hierarchy was integral to the NSF and the university research culture that it helped to foster.

RELATED: Innovation Magazine and the Birth of a Buzzword

The question was, how? And would the universities be willing to remake themselves to support innovation?

The NSF experiments with innovation

At the Utah Innovation Center, engineering students John DeJong and Douglas Kihm worked on a programmable electronics breadboard. Special Collections, J. Willard Marriott Library, The University of Utah

In 1972, NSF director H. Guyford Stever established the Office of Experimental R&D Incentives to “incentivize” innovation for national needs by supporting research on “how the government [could] most effectively accelerate the transfer of new technology into productive enterprise.” Stever stressed the experimental nature of the program because many in the NSF and the scientific community resisted the idea of goal-directed research. Innovation, with its connotations of profit and social change, was even more suspect. To lead the initiative, Stever appointed C.B. Smith, a research manager at United Aircraft Corp., who in turn brought in engineers with industrial experience, including Robert Colton, an automotive engineer. Colton led the university Innovation Center experiment that gave rise to Carnegie Mellon’s CED. The NSF chose four universities that captured a range of approaches to innovation incubation.
MIT targeted undergrads through formal coursework and an innovation “co-op” that assisted in turning ideas into products. The University of Oregon evaluated the ideas of garage inventors from across the country. The University of Utah emphasized an ecosystem of biotech and computer graphics startups coming out of its research labs. And Carnegie Mellon established a nonprofit corporation to support graduate student ventures, including the dial-a-ride service.

Grad student Fritz Faulhaber holds one of the radio-coupled taxi meters that Carnegie Mellon students installed in Pittsburgh cabs in the 1970s. Ralph Guggenheim; Jerome McCavitt/Carnegie-Mellon Alumni News

Carnegie Mellon got one of the first university incubators

Carnegie Mellon had all the components that experts believed were necessary for innovation: strong engineering, a world-class business school, novel approaches to urban planning with a focus on community needs, and a tradition of industrial design and the practical arts. CMU leaders claimed that the school was smaller, younger, more interdisciplinary, and more agile than MIT.

The CED’s director was Dwight Baumann, who exemplified a new kind of educator-entrepreneur. The son of North Dakota farmers, he had graduated from North Dakota State University, then headed to MIT for a Ph.D. in mechanical engineering, where he discovered a love of teaching. He also garnered a reputation as an unusually creative engineer with an interest in solving problems that addressed human needs. In the 1950s and 1960s, first as a student and then as an MIT professor, Baumann helped develop one of the first computer-aided-design programs, as well as computer interfaces for the blind and the nation’s first dial-a-ride paratransit system.

Dwight Baumann, director of Carnegie Mellon’s Center for Entrepreneurial Development, believed that a modern university should provide entrepreneurial education. Carnegie Mellon University Archives

The CED’s mission was to support entrepreneurs in the earliest stages of the innovation process, when they needed space and seed funding. It created an environment for students to make a “sequence of nonfatal mistakes,” so they could fail and develop self-confidence for navigating the risks and uncertainties of entrepreneurial life. It targeted graduate students who already had advanced scientific and engineering training and a viable idea for a business.

Carnegie Mellon’s dial-a-ride service replicated the Peoples Cab Co., which had provided taxi service to Black communities in Pittsburgh. Charles “Teenie” Harris/Carnegie Museum of Art/Getty Images

A few CED students did create successful startups. The breakout hit was Compuguard, founded by electrical engineering Ph.D. students Romesh Wadhwani and Krishnahadi Pribad, who hailed from India and Indonesia, respectively. The pair spent 18 months developing a security bracelet that used wireless signals to protect vulnerable people in dangerous work environments. But after failing to convert their prototype into a working design, they pivoted to a security- and energy-monitoring system for schools, prisons, and warehouses. Today, Wadhwani’s Wadhwani Foundation supports innovation and entrepreneurship education worldwide, particularly in emerging economies.

Entrepreneurial education eventually took root at the Wharton School and elsewhere. In 1983, Baumann’s onetime partner Jack Thorne took the lead of the new Enterprise Corp., which aimed to help Pittsburgh’s entrepreneurs raise venture capital. Baumann was kicked out of his garage to make room for the initiative.

Was the NSF’s experiment in innovation a success?
As the university Innovation Center experiment wrapped up in the late 1970s, the NSF patted itself on the back in a series of reports, conferences, and articles. “The ultimate effect of the Innovation Centers,” it stated, would be “the regrowth of invention, innovation, and entrepreneurship in the American economic system.” The NSF claimed that the experiment produced dozens of new ventures with US $20 million in gross revenue, employed nearly 800 people, and yielded $4 million in tax revenue. Yet, by 1979, license returns from intellectual property had generated only $100,000.

“Today, the legacies of the NSF experiment are visible on nearly every college campus.”

Critics included Senator William Proxmire of Wisconsin, who pointed to the banana peelers, video games, and sports equipment pursued in the centers to lambast them as “wasteful federal spending” of “questionable benefit to the American taxpayer.” And so the impacts of the NSF’s Innovation Center experiment weren’t immediately obvious. Many faculty and administrators of that era were still apt to view such programs as frivolous, nonacademic, or not worth the investment.

3 weeks ago 13 votes
The Data Reveals Top Patent Portfolios

Eight years is a long time in the world of patents. When we last published what we then called the Patent Power Scorecard, in 2017, it was a different technological and social landscape—Google had just filed a patent application on the transformer architecture, a momentous advance that spawned the generative AI revolution. China was just beginning to produce quality, affordable electric vehicles at scale. And the COVID pandemic wasn’t on anyone’s dance card. Eight years is also a long time in the world of magazines, where we regularly play around with formats for articles and infographics. We now have more readers online than we do in print, so our art team is leveraging advances in interactive design software to make complex datasets grokkable at a glance, whether you’re on your phone or flipping through the pages of the magazine. The scorecard’s return in this issue follows the return last month of The Data, which ran as our back page for several years; it’s curated by a different editor every month and edited by Editorial Director for Content Development Glenn Zorpette. As we set out to recast the scorecard for this decade, we sought to strike the right balance between comprehensiveness and clarity, especially on a mobile-phone screen. As our Digital Product Designer Erik Vrielink, Assistant Editor Gwendolyn Rak, and Community Manager Kohava Mendelsohn explained to me, they wanted something that would be eye-catching while avoiding information overload. The solution they arrived at—a dynamic sunburst visualization—lets readers grasp the essential takeaways at a glance in print, while the digital version allows readers to dive as deep as they want into the data. Working with sci-tech-focused data-mining company 1790 Analytics, which we partnered with on the original Patent Power Scorecard, the team prioritized three key metrics or characteristics: patent Pipeline Power (which goes beyond mere quantity to assess quality and impact), number of patents, and the country where companies are based. This last characteristic has become increasingly significant as geopolitical tensions reshape the global technology landscape. As 1790 Analytics cofounders Anthony Breitzman and Patrick Thomas note, the next few years could be particularly interesting as organizations adjust their patenting strategies in response to changing market access. Some trends leap out immediately. In consumer electronics, Apple dominates Pipeline Power despite having a patent portfolio one-third the size of Samsung’s—a testament to the Cupertino company’s focus on high-impact innovations. The aerospace sector has seen dramatic consolidation, with RTX (formerly Raytheon Technologies) now encompassing multiple subsidiaries that appear separately on our scorecard. And in the university rankings, Harvard has seized the top spot from traditional tech powerhouses like MIT and Stanford, driven by patents that are more often cited as prior art in other recent patents. And then there are the subtle shifts that become apparent only when you dig deeper into the data. The rise of SEL (Semiconductor Energy Laboratory) over TSMC (Taiwan Semiconductor Manufacturing Co.) in semiconductor design, despite having far fewer patents, suggests again that true innovation isn’t just about filing patents—it’s about creating technologies that others build upon. Looking ahead, the real test will be how these patent portfolios translate into actual products and services.
Patents are promises of innovation; the scorecard helps us see what companies are making those promises and the R&D investments to realize them. As we enter an era when technological leadership increasingly determines economic and strategic power, understanding these patterns is more crucial than ever.

a month ago 6 votes

More in science

The Hidden Engineering of Liquid Dampers in Skyscrapers

[Note that this article is a transcript of the video embedded above.] There’s a new trend in high-rise building design. Maybe you’ve seen this in your city. The best lots are all taken, so developers are stretching the limits to make use of space that isn’t always ideal for skyscrapers. They’re not necessarily taller than buildings of the past, but they are a lot more slender. “Pencil tower” is the term generally used to describe buildings that have a slenderness ratio of more than around 10 to 1, height to width. A lot of popular discussion around skyscrapers is about how tall we can build them. Eventually, you can get so tall that there are no materials strong enough to support the weight. But, pencil towers are the perfect case study in why strength isn’t the only design criterion used in structural engineering. Of course, we don’t want our buildings to fall down, but there’s other stuff we don’t want them to do, too, including flex and sway in the wind. In engineering, this concept is called the serviceability limit state, and it’s an entirely separate consideration from strength. Even if moderate loads don’t cause a structure to fail, the movement they cause can lead to windows breaking, tiles cracking, accelerated fatigue of the structure, and, of course, people on the top floors losing their lunch from disorientation and discomfort. So, limiting wind-induced motions is a major part of high-rise design and, in fact, can be such a driving factor in the engineering of the building that strength is a secondary consideration. Making a building stiffer is the obvious solution. But adding stiffness requires larger columns and beams, and those subtract valuable space within the building itself. Another option is to augment a building’s aerodynamic performance, reducing the loads that winds impose. But that too can compromise the expensive floorspace within. So many engineers are relying on another creative way to limit the vibrations of tall buildings. And of course, I built a model in the garage to show you how this works. I’m Grady, and this is Practical Engineering. One of the very first topics I ever covered on this channel was tuned mass dampers. These are mechanisms that use a large, solid mass to counteract motion in all kinds of structures, dissipating the energy through friction or hydraulics, like the shock absorbers in vehicles. Probably the most famous of these is in the Taipei 101 building. At the top of the tower is a massive steel pendulum, and instead of hiding it away in a mechanical floor, they opened it to visitors, even giving the damper its own mascot. But, mass dampers have a major limitation because of those mechanical parts. The complex springs, dampers, and bearings need regular maintenance, and they are custom-built. That gets pretty expensive. So, what if we could simplify the device? This is my garage-built high-rise. It’s not going to hold many conference room meetings, but it does do a good job swaying from side to side, just like an actual skyscraper. And I built a little tank to go on top here. The technical name for this tank is a tuned liquid column damper, and I can show you how it works. Let’s try it with no water first. Using my digitally calibrated finger, I push the tower over by a prescribed distance, and you can see this would not be a very fun ride. There is some natural damping, but the oscillation goes on for quite a while before the motion stops. Now, let’s put some water in the tank. 
With the power of movie magic, I can put these side by side so you can really get a sense of the difference. By the way, nearly all of the parts for this demonstration were provided by my friends at Send-Cut-Send. I don’t have a milling machine or laser cutter, so this is a really nice option for getting customized parts made from basically any material - aluminum, steel, acrylic - that are ready to assemble. Instead of complex mechanical devices, liquid column dampers dissipate energy through the movement of water. The liquid in the tank is both the mass and the damper. This works like a pendulum where the fluid oscillates between two columns. Normally, there’s an orifice between the two columns that creates the damping through friction loss as water flows from one side to the other. To make this demo a little simpler, I just put lids on the columns with small holes. I actually bought a fancy air valve to make this adjustable, but it didn’t allow quite enough airflow. So instead, I simplified with a piece of tape. Very technical. Energy transferred to the water through the building is dissipated by the friction of the air as it moves in and out of the columns. And you can even hear this as it happens. Any supplemental damping system starts with a design criterion. This varies around the world, but in the US, this is probability-based. We generally require that peak accelerations with a 1-in-10 chance of being exceeded in a given year be limited to 15-18 milli-gs in residential buildings and 20-25 milli-gs in offices. For reference, the lateral acceleration for highway curve design is usually capped at 100 milli-gs, so the design criteria for buildings is between a fourth and a sixth of that. I think that makes intuitive sense. You don’t want to feel like you’re navigating a highway curve while you sit at your desk at work. It’s helpful to think of these systems in a simplified way. This is the most basic representation: a spring, a damper, and mass on a cart. We know the mass of the building. We can estimate its stiffness. And the building itself has some intrinsic damping, but usually not much. If we add the damping system onto the cart, it’s basically just the same thing at a smaller scale, and the design process is really just choosing the mass and damping systems for the remaining pieces of this puzzle to achieve the design goal. The mass of liquid dampers is usually somewhere between half a percent to two percent of the building’s total weight. The damping is related to the water’s ability to dissipate energy. And the spring needs to be tuned to the building. All buildings vibrate at a natural frequency related to their height and stiffness. Think of it like a big tuning fork full of offices or condos. I can estimate my model’s natural frequency by timing the number of oscillations in a given time interval. It’s about 1.3 hertz or cycles per second. In an ideal tuned damper, the oscillation of the damping system matches that of the building. So tuning the frequency of the damper is an important piece of the puzzle. For a tuned liquid column damper, the tuning mostly comes from the length of the liquid flow path. A longer path results in a lower frequency. The compression of the air above the column in my demo affects this too, and some types of dampers actually take advantage of that phenomenon. I got the best tuning when the liquid level was about halfway up the columns. 
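
To put numbers on why the flow-path length does the tuning: for a simple U-tube of liquid, a standard textbook approximation puts the natural frequency at f = (1/2π)·√(2g/L), where L is the total length of the liquid path (both columns plus the horizontal run). The short Python sketch below uses that formula with my own illustrative numbers, including a building frequency of roughly 1.3 hertz like the garage model.

# Rough tuning estimate for a liquid column damper, using the U-tube
# approximation f = (1/(2*pi)) * sqrt(2*g/L). Illustrative numbers only.
import math

g = 9.81  # m/s^2

def liquid_path_length(f_hz: float) -> float:
    """Liquid-path length L (meters) that tunes the damper to frequency f_hz."""
    return 2 * g / (2 * math.pi * f_hz) ** 2

for f in (0.2, 1.3, 3.0):   # a slender tower, the garage model, a stiff structure
    print(f"{f:.1f} Hz -> liquid path of about {liquid_path_length(f):.2f} m")

The trend is the point: a flexible tower swaying at a fraction of a hertz wants many meters of liquid path, while a stiff structure would need a column so short that it can't hold much water, which is exactly the limitation discussed below.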
The orifice has less of an effect on frequency and is used mostly to balance the amount of damping versus the volume of liquid that flows through each cycle. In my model, with one of the holes completely closed off, you can see the water doesn’t move, and you get minimal damping. With the tape mostly covering the hole, you get the most frictional loss, but not all the fluid flows from one side to the other each cycle. When I covered about half of one hole, I got the full fluid flow and the best damping performance. The benefit of a tuned column damper is that it doesn’t take up a lot of space. And because the fluid movement is confined, they’re fairly predictable in behavior. So, these are used in quite a few skyscrapers, including the Random House Tower in Manhattan, One Wall Center in Vancouver (which actually has many walls), and Comcast Center in Philadelphia. But, tuned column liquid dampers have a few downsides. One is that they really only work for flexible structures, like my demo. Just like in a pendulum, the longer the flow path in a column damper, the lower the frequency of the oscillation. For stiffer buildings with higher natural frequencies, tuning requires a very short liquid column, which limits the mass and damping capability to a point where you don’t get much benefit. The other thing is that this is still kind of a complex device with intricate shapes and a custom orifice between the two columns. So, we can get even simpler. This is my model tuned sloshing damper, and it’s about as simple as a damper can get. I put a weight inside the empty tank to make a fair comparison, and we can put it side by side with water in the tank to see how it works. As you can see, sloshing dampers dissipate energy by… sloshing. Again, the water is both the mass and the damper. If you tune it just right, the sloshing happens perfectly out of phase of the motion of the building, reducing the magnitude of the movement and acceleration. And you can see why this might be a little cheaper to build - it’s basically just a swimming pool - four concrete walls, a floor, and some water. There’s just not that much to it. But the simplicity of construction hides the complexity of design. Like a column damper, the frequency of a sloshing damper can be tuned, first by the length of the tank. Just like fretting a guitar string further down the neck makes the note lower, a tank works the same way. As the tank gets longer, its sloshing frequency goes down. That makes sense - it takes longer for the wave to get from one side to the other. But you can also adjust the depth. Waves move slower in shallower water and faster in deeper water. Watch what happens when I overfill the tank. The initial wave starts on the left as the building goes right. It reaches the right side just as the building starts moving left. That’s what we want; it’s counteracting the motion. But then it makes it back to the left before the building starts moving right. It’s actually kind of amplifying the motion, like pushing a kid on a swing. Pretty soon after that, the wave and the building start moving in phase, so there’s pretty much no damping at all. Compare it to the more properly tuned example where most of the wave motion is counteracting the building motion as it sways back and forth. You can see in my demo that a lot of the energy dissipation comes from the breaking waves as they crash against the sides of the tank. That is a pretty complicated phenomenon to predict, and it’s highly dependent on how big the waves are. 
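
Both tuning handles, tank length and water depth, appear in the standard linear estimate for the first sloshing mode of a rectangular tank: f = (1/2π)·√((πg/L)·tanh(πh/L)), with L the tank length along the sway direction and h the water depth. A quick sketch with made-up dimensions, ignoring the breaking-wave effects just described:

# Linear small-wave estimate of a rectangular sloshing damper's fundamental
# frequency. Tank dimensions here are illustrative, not from any real building.
import math

g = 9.81  # m/s^2

def sloshing_frequency(L: float, h: float) -> float:
    """First sloshing mode (Hz) for tank length L and water depth h, in meters."""
    return math.sqrt((math.pi * g / L) * math.tanh(math.pi * h / L)) / (2 * math.pi)

for L, h in [(4.0, 0.5), (4.0, 1.0), (8.0, 1.0)]:
    print(f"L = {L} m, depth = {h} m -> about {sloshing_frequency(L, h):.2f} Hz")

Lengthening the tank lowers the frequency, and so does running it shallower, which is why simply topping off or draining a bit of water is often enough to re-tune one of these dampers.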
And even with the level pretty well tuned to the frequency of the building, you can see there’s a lot of complexity in the motion with multiple modes of waves, and not all of them acting against the motion of the building. So, instead of relying on breaking waves, most sloshing dampers use flow obstructions like screens, columns, or baffles. I got a few different options cut out of acrylic so we can try this out. These baffles add drag, increasing the energy dissipation with the water, usually without changing the sloshing frequency. Here’s a side-by-side comparison of the performance without a baffle and with one. You can see that the improvement is pretty dramatic. The motion is more controlled and the behavior is more linear, making this much simpler to predict during the design phase. It’s kind of the best of both worlds since you get damping from the sloshing and the drag of the water passing through the screen. Almost all the motion is stopped in this demo after only three oscillations. I was pretty impressed with this. Here’s all three of the baffle runs side by side. Actually, the one with the smallest holes worked the best in my demo, but deciding the configuration of these baffles is a big challenge in the engineering of these systems because you can’t really just test out a bunch of options at full scale. Devices like this are in service in quite a few high-rise buildings, including Princess Tower in Dubai, and the Museum Tower in Dallas. With no moving parts and very little maintenance except occasionally topping it off to keep the water at the correct level, you can see how it would be easy to choose a sloshing damper for a new high-rise project. But there are some disadvantages. One is volumetric efficiency. You can see that not all the water in the tank is mobilized, especially for smaller movements, which means not all the water is contributing to the damping. The other is non-linearity. The amount of damping changes depending on the magnitude of the movement since drag is related to velocity squared. And even the frequency of the damper isn’t constant; it can change with the wave amplitude as well because of the breaking waves. So you might get good performance at the design level, but not so much for slower winds. Dampers aren’t just used in buildings. Bridges also take advantage of these clever devices, especially on the decks of pedestrian bridges and the towers of long-span bridges. This also happens at a grand scale between the Earth and moon. Tidal bulges in the oceans created by the moon’s tug on Earth dissipate energy through friction and turbulence, which is a big part of why our planet’s rotation is slowing over time. Days used to be a lot shorter when the Earth was young, but we have a planet-scale liquid damper constantly dissipating our rotational energy. But whether it’s bridges or buildings, these dampers usually don’t work perfectly right at the start. Vibrations are complicated. They’re very hard to predict, even with modern tools like simulation software and scale physical models. So, all dampers have to go through a commissioning process. Usually this involves installing accelerometers once construction is nearing completion to measure the structure’s actual natural frequency. The tuning of tuned dampers doesn’t just happen during the design phase; you want some adjustability after construction to make sure they match the structure’s natural frequency exactly so you get the most damping possible. 
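
That commissioning measurement is conceptually simple: record the building's sway and read the natural frequency off the spectrum. Here's a minimal sketch of the workflow in Python, using a synthetic, noisy 0.25-hertz decaying sway signal in place of real accelerometer data.

# Estimate a structure's natural frequency from an acceleration record by
# locating the peak of its spectrum. The record below is synthetic.
import numpy as np

fs = 50.0                                   # sample rate, Hz
t = np.arange(0, 600, 1 / fs)               # ten minutes of data
sway = np.exp(-0.005 * t) * np.sin(2 * np.pi * 0.25 * t)
record = sway + 0.2 * np.random.default_rng(1).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(record))
freqs = np.fft.rfftfreq(record.size, d=1 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin
print(f"estimated natural frequency: {peak_hz:.3f} Hz")

With that number in hand, the damper is adjusted, for a liquid damper by changing the water level, until its own frequency matches the measured one.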
For liquid dampers, that means adjusting the levels in the tanks. And in many cases, buildings might use multiple dampers tuned to slightly different frequencies to improve the performance over a range of conditions. Even in these two basic categories, there is a huge amount of variability and a lot of ongoing research to minimize the tradeoffs these systems come with. The truth is that, relatively speaking, there aren’t that many of these systems in use around the world. Each one is highly customized, and even putting them into categories can get a little tricky. There are even actively controlled liquid dampers. My tuning for the column damper works best for a single magnitude of motion, but you can see that once the swaying gets smaller, the damper isn’t doing a lot to curb it. You can imagine if I constantly adjusted the size of the orifice, I could get better performance over a broader range of unwanted motion. You can do this electronically by having sensors feed into a control system that adjusts a valve position in real-time. Active systems and just the flexibility to tune a damper in general also help deal with changes over time. If a building’s use changes, if new skyscrapers nearby change the wind conditions, or if it gets retrofits that change its natural frequency, the damping system can easily accommodate those changes. In the end, a lot of engineering decisions come down to economics. In most cases, damping is less about safety and more about comfort, which is often harder to pin down. Engineers and building owners face a balancing act between the cost of supplemental damping and the value of the space those systems take up. Tuned mass dampers are kind of household names when it comes to damping. A few buildings like Shanghai Center and Taipei 101 have made them famous. They’re usually the most space-efficient (since steel and concrete are more dense than water). But they’re often more costly to install and maintain. Liquid dampers are the unsung heroes. They take up more space, but they’re simple and cost-effective, especially if the fire codes already require you to have a big tank of water at the top of your building anyway. Maybe someday, an architect will build one out of glass or acrylic, add some blue dye and mica powder, and put it on display as a public showcase. Until then, we’ll just have to know it’s there by feel.

2 hours ago 1 votes
London Inches Closer to Running Transit System Entirely on Renewable Power

Under a new agreement, London will source enough solar power to run its light railway and tram networks entirely on renewable energy. Read more on E360 →

10 hours ago 1 votes
Science slow down - not a simple question

I participated in a program about 15 years ago that looked at science and technology challenges faced by a subset of the US government. I came away thinking that such problems fall into three broad categories:

- Actual science and engineering challenges, which require foundational research and creativity to solve.
- Technology that may be fervently desired but is incompatible with the laws of nature, economic reality, or both.
- Alleged science and engineering problems that are really human/sociology issues.

Part of science and engineering education and training is giving people the skills to recognize which problems belong to which categories. Confusing these can strongly shape the perception of whether science and engineering research is making progress.

There has been a lot of discussion in the last few years about whether scientific progress (however that is measured) has slowed down or stagnated. For example, see here:

https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/
https://news.uchicago.edu/scientific-progress-slowing-james-evans
https://www.forbes.com/sites/roberthart/2023/01/04/where-are-all-the-scientific-breakthroughs-forget-ai-nuclear-fusion-and-mrna-vaccines-advances-in-science-and-tech-have-slowed-major-study-says/
https://theweek.com/science/world-losing-scientific-innovation-research

A lot of the recent talk is prompted by this 2023 study, which argues that despite the world having many more researchers than ever before (behold population growth) and more global investment in research, somehow "disruptive" innovations are coming less often, or are fewer and farther between these days. (Whether this is an accurate assessment is not a simple matter to resolve; more on this below.) There is a whole tech bro culture that buys into this, however. For example, see this interview from last week in the New York Times with Peter Thiel, which points out that Thiel has been complaining about this for a decade and a half.

On some level, I get it emotionally. The unbounded future spun in a lot of science fiction seems very far away. Where is my flying car? Where is my jet pack? Where is my moon base? Where are my fusion power plants, my antigravity machine, my tractor beams, my faster-than-light drive? Why does the world today somehow not seem that different than the world of 1985, while the world of 1985 seems very different than that of 1945?

Some of the folks that buy into this think that science is deeply broken somehow - that we've screwed something up, because we are not getting the future they think we were "promised". Some of these people have this as an internal justification underpinning the dismantling of the NSF, the NIH, basically a huge swath of the research ecosystem in the US. These same people would likely say that I am part of the problem, and that I can't be objective about this because the whole research ecosystem as it currently exists is a groupthink self-reinforcing spiral of mediocrity.

Science and engineering are inherently human ventures, and I think a lot of these concerns have an emotional component. My take at the moment is this: Genuinely transformational breakthroughs are rare. They often require a combination of novel insights, previously unavailable technological capabilities, and luck. They don't come on a schedule. There is no hard and fast rule that guarantees continuous exponential technological progress. Indeed, in real life, exponential growth regimes never last.
The 19th and 20th centuries were special. If we think of research as a quest for understanding, it's inherently hierarchical. Civilizational collapses aside, you can only discover how electricity works once. You can only discover the germ theory of disease, the nature of the immune system, and vaccination once (though in the US we appear to be trying really hard to test that by forgetting everything). You can only discover quantum mechanics once, and doing so doesn't imply that there will be an ongoing (infinite?) chain of discoveries of similar magnitude.

People are bad at accurately perceiving rare events and their consequences, just like people have a serious problem evaluating risk or telling the difference between correlation and causation. We can't always recognize breakthroughs when they happen. Sure, I don't have a flying car. I do have a device in my pocket that weighs only a few ounces, gives me near-instantaneous access to the sum total of human knowledge, lets me video call people around the world, can monitor aspects of my fitness, and makes it possible for me to watch sweet videos about dogs. The argument that we don't have transformative, enormously disruptive breakthroughs as often as we used to or as often as we "should" is in my view based quite a bit on perception.

Personally, I think we still have a lot more to learn about the natural world. AI tools will undoubtedly be helpful in making progress in many areas, but I think it is definitely premature to argue that the vast majority of future advances will come from artificial superintelligences and thus we can go ahead and abandon the strategies that got us the remarkable achievements of the last few decades.

I think some of the loudest complainers (Thiel, for example) about perceived slowing advancement are software people. People who come from the software development world don't always appreciate that physical infrastructure and understanding are hard, and that there are not always clever or even brute-force ways to get to an end goal. Solving foundational problems in molecular biology or quantum information hardware or photonics or materials is not the same as software development. (The tech folks generally know this on an intellectual level, but I don't think all of them really understand it in their guts. That's why so many of them seem to ignore real world physical constraints when talking about AI.) Trying to apply software-development-inspired approaches to science and engineering research isn't bad as a component of a many-pronged strategy, but alone it may not give the desired results - as warned in part by this piece in Science this week.

More frequent breakthroughs in our understanding and capabilities would be wonderful. I don't think dynamiting the US research ecosystem is the way to get us there, and hoping that we can dismantle everything because AI will somehow herald a new golden age seems premature at best.

yesterday 2 votes
Researchers Uncover Hidden Ingredients Behind AI Creativity

Image generators are designed to mimic their training data, so where does their apparent creativity come from? A recent study suggests that it’s an inevitable by-product of their architecture. The post Researchers Uncover Hidden Ingredients Behind AI Creativity first appeared on Quanta Magazine

yesterday 2 votes
Animals Adapting to Cities

Humans are dramatically changing the environment of the Earth in many ways. Only about 23% of the land surface (excluding Antarctica) is considered to be “wilderness”, and this is rapidly decreasing. What wilderness is left is also mostly managed conservation areas. Meanwhile, about 3% of the surface is considered urban. I could not find a […] The post Animals Adapting to Cities first appeared on NeuroLogica Blog.

yesterday 2 votes