Many things have been happening in and around US science.  This is a non-exhaustive list of recent developments and links:
- There have been very large scale personnel cuts across HHS, FDA, CDC, and NIH - see here.  This includes groups like the people who monitor lead in drinking water.
- There is reporting about the upcoming presidential budget requests for NASA and NOAA.  The requested cuts are very deep.  To quote Eric Berger's article linked above, for the science part of NASA, "Among the proposals were: A two-thirds cut to astrophysics, down to $487 million; a greater than two-thirds cut to heliophysics, down to $455 million; a greater than 50 percent cut to Earth science, down to $1.033 billion; and a 30 percent cut to Planetary science, down to $1.929 billion."  The proposed cuts to NOAA are similarly deep, seeking to end climate study in the agency, as Science puts it.
- The full presidential budget request, including NSF, DOE, NIST, etc., is still to come.  Remember, Congress in...
2 months ago


More from nanoscale views

So you want to build a science/engineering laboratory building

A very quick summary of some non-negative news developments:
- The NSF awarded 500 more graduate fellowships this week, bringing the total for this year up to 1500.  (Apologies for the X link.)  This is still 25% lower than last year's number, and of course far below the original CHIPS and Science Act target of 3000, but it's better than the alternative.  I think we can now all agree that the supposed large-scale bipartisan support for the CHIPS and Science Act was illusory.
- There seem to be some initial signs of pushback on the Senate side regarding the proposed massive science funding cuts.  Again, now is the time to make views known to legislators - I am told by multiple people with experience in this arena that it really can matter.
- There was a statement earlier this week that apparently the US won't be going after Chinese student visas.  This would carry more weight if it didn't look like US leadership was wandering ergodically through all possible things to say with no actual plan or memory.
On to the main topic of this post.  Thanks to my professional age (older than dirt) and my experience (overseeing shared research infrastructure; being involved in a couple of building design and construction projects; and working on PI lab designs and build-outs), I have some key advice and lessons learned for anyone designing a new big science/engineering research building.  This list is by no means complete, and I invite readers to add their insights in the comments.  While it seems likely that many universities will be curtailing big capital construction projects in the near term because of financial uncertainty, I hope this may still come in handy to someone.
- Any big laboratory building should have a dedicated loading dock with central receiving.  If you're spending $100M-200M on a building, this is not something that you should "value engineer" away.  The long term goal is a building that operates well for the PIs and is easy to maintain, and you're going to need to be able to bring in big crates for lab and service equipment.  You should have a freight elevator adjacent to the dock.  You should also think hard about what kind of equipment will have to be moved in and out of the building when designing hallways, floor layouts, and door widths.  You don't want to have to take out walls, doorframes, or windows, or to need a crane to hoist equipment into upper floors because it can't get around corners.
- Think hard about process gases and storage tanks at the beginning.  Will PIs need to have gas cylinders and liquid nitrogen and argon tanks brought in and out in high volumes all the time, with all the attendant safety concerns?  Would you be better off getting LN2 or LAr tanks even though campus architects will say they are unsightly?  Likewise, consider whether you should have building-wide service for "lab vacuum", N2 gas, compressed air, DI water, etc.  If not, and PIs have those needs, you should plan ahead to deal with this.
- Gas cylinder and chemical storage - do you have enough on-site storage space for empty cylinders and back-up supply cylinders?  If this is a very chemistry-heavy building, think hard about safety and storing solvents.
- Make sure you design for adequate exhaust capacity for fume hoods.  Someone will always want to add more hoods.  While all things are possible with huge expenditures, it's better to make sure you have capacity to spare, because adding hoods beyond the initial capacity would likely require a huge redo of the building HVAC systems.
- Speaking of HVAC, think really hard about controls and monitoring.  Are you going to have labs that need tight requirements on temperature and humidity?  When you set these up, put enough sensors of the right types in the right places, and make sure that your system is designed to work even when the outside air conditions are at their seasonal extremes (hot and humid in the summer, cold and dry in the winter).  Also, consider having a vestibule (air lock) for the main building entrance - you'd rather not scoop a bunch of hot, humid air (or freezing, super-dry air) into the building every time a student opens the door.
- Still on HVAC, make sure that power outages and restarts don't lead to weird situations like having the whole building at negative pressure relative to the outside, or ductwork bulging or collapsing.
- Still on HVAC, actually think about where the condensate drains for the fan units will overflow if they get plugged up or overwhelmed.  You really don't want water spilling all over a rack of networking equipment in an IT closet.  Trust me.
- Chilled water:  Whether it's the process chilled water for the air conditioning or the secondary chilled water for lab equipment, make sure that the loop is built correctly.  Incompatible metals (e.g., some genius throws in a cast iron fitting somewhere, or joints between dissimilar metals) can lead to years and years of problems down the line.  Make sure lines are flushed and monitored for cleanliness, and have filters in each lab that can be checked and maintained easily.
- Electrical - design with future needs in mind.  If possible, it's a good idea to have PI labs with their own isolation transformers, to try to mitigate inter-lab electrical noise issues.  Make sure your electrical contractors understand the idea of having "clean" vs. "dirty" power and can set up the grounding accordingly while still meeting code.
- Still on electrical, consider building-wide surge protection, and think about emergency power capacity.  For those who don't know, emergency power is usually a motor-generator that kicks in after a few seconds to make sure that emergency lighting and critical systems (including lab exhaust) keep going.
- Ceiling heights, ductwork, etc. - It's not unusual for some PIs to have tall pieces of equipment.  Think about how you will accommodate these.  Pits in the floors of basement labs?  5 meter slab-to-slab spacing?  Think also about how ductwork and conduits are routed.  You don't want someone to tell you that installation of a new apparatus is going to cost a bonus $100K because shifting a duct sideways by half a meter will require a complete HVAC redesign.
- Think about the balance between lab space and office space/student seating.  No one likes giant cubicle farm student seating, but it does have capacity.  In these days of Zoom and remote access to experiments, the way students and postdocs use offices is evolving, which makes planning difficult.  Health and safety folks would definitely prefer not to have personnel effectively headquartered directly in lab spaces.  Seriously, though, when programming a building, you need to think about how many people per PI lab space will need places to sit.  I have yet to see a building initially designed with enough seating to handle all the personnel needs if every PI lab were fully occupied and at a high level of research activity.
- Think about maintenance down the line.  Every major building system has some lifespan.  If a big air handler fails, is it accessible and serviceable, or would that require taking out walls or cutting equipment into pieces and disrupting the entire building?  Do you want to set up a situation where you may have to do this every decade?  (Asking for a friend.)
- Entering the realm of fantasy, use your vast power and influence to get your organization to emphasize preventative maintenance at an appropriate level, consistently over the years.  Universities (and national labs and industrial labs) love "deferred maintenance" because kicking the can down the road can make a possible cost issue now into someone else's problem later.  Saving money in the short term can be very tempting.  It's also often easier and more glamorous to raise money for the new J. Smith Laboratory for Physical Sciences than it is to raise money to replace the HVAC system in the old D. Jones Engineering Building.  Avoid this temptation, or one day (inevitably when times are tight) your university will notice that it has $300M in deferred maintenance needs.
I may update this list as more items occur to me, but please feel free to add input/ideas.

3 days ago 7 votes
A precision measurement science mystery - new physics or incomplete calculations?

Again, as a distraction from persistently concerning news, here is a science mystery of which I was previously unaware.

The role of approximations in physics is something that very often comes as a shock to new students.  There is this cultural expectation out there that, because physics is all about quantitative understanding of physical phenomena (and because of the typical way we teach math and science in K12 education), we should be able to get exact solutions to many of our attempts to model nature mathematically.  In practice, though, constructing physics theories is almost always about approximations, either in the formulation of the model itself (e.g., let's consider the motion of an electron about the proton in the hydrogen atom by treating the proton as infinitely massive and of negligible size) or in solving the mathematics (e.g., we can't write an exact analytical solution of the problem when including relativity, but we can do an order-by-order expansion in powers of \(p/mc\)).  Theorists have a very clear understanding of what it means to say that an approximation is "well controlled" - you know on both physical and mathematical grounds that a series expansion actually converges, for example.

Some problems are simpler than others, just by virtue of having a very limited number of particles and degrees of freedom, and some problems also lend themselves to high precision measurements.  The hydrogen atom problem is an example of both features: just two spin-1/2 particles (if we approximate the proton as a lumped object), readily accessible to optical spectroscopy to measure the energy levels for comparison with theory.  We can do perturbative treatments to account for other effects of relativity, spin-orbit coupling, interactions with nuclear spin, and quantum electrodynamic corrections (here and here).  A hallmark of atomic physics is the remarkable precision and accuracy of these calculations when compared with experiment.  (The \(g\)-factor of the electron is experimentally known to a part in \(10^{10}\) and matches calculations out to fifth order in \(\alpha = e^2/(4 \pi \epsilon_{0}\hbar c)\).)  The helium atom is a bit more complicated, having two electrons and a more complicated nucleus, but over the last hundred years we've learned a lot about how to do both calculations and spectroscopy.

As explained here, there is a problem.  It is possible to put helium into an excited metastable triplet state with one electron in the \(1s\) orbital, the other electron in the \(2s\) orbital, and their spins in a triplet configuration.  Then one can measure the ionization energy of that system - the minimum energy required to kick an electron out of the atom and off to infinity.  This energy can be calculated to seventh order in \(\alpha\), and the theorists think that they're accounting for everything, including the finite (but tiny) size of the nucleus.  The issue:  The calculation and the experiment differ by about 2 nano-eV.  That may not sound like a big deal, but the experimental uncertainty is supposed to be a little over 0.08 nano-eV, and the uncertainty in the calculation is estimated to be 0.4 nano-eV.  This works out to something like a 9\(\sigma\) discrepancy.  Most recently, a quantitatively very similar discrepancy shows up in measurements performed in 3He rather than 4He.  This is pretty weird.
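(An aside on what "well controlled" means in practice: the \(p/mc\) expansion mentioned above is the classic textbook example.  Expanding the relativistic kinetic energy of a particle,

\[
E_{\mathrm{kin}} = \sqrt{p^{2}c^{2}+m^{2}c^{4}} - mc^{2} = \frac{p^{2}}{2m}\left[1 - \frac{1}{4}\left(\frac{p}{mc}\right)^{2} + \cdots\right],
\]

each successive term is smaller by another factor of order \((p/mc)^{2}\), so the series is well controlled whenever \(p \ll mc\) - comfortably true for the electrons in hydrogen and helium, where \(p/mc \sim Z\alpha\).)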
Historically, it would seem that the most likely answer is a problem with either the measurements (though that seems doubtful, since precision spectroscopy is such a well-developed set of techniques), the calculation (though that also seems weird, since the relevant physics seems well known), or both.  The exciting possibility is that somehow there is new physics at work that we don't understand, but that's a long shot.  Still, something fun to consider (as my colleagues and I try to push back on the dismantling of US scientific research).

a week ago 9 votes
Pushing back on US science cuts: Now is a critical time

Every week has brought more news about actions that, either as a collateral effect or a deliberate goal, will deeply damage science and engineering research in the US.  Put aside for a moment the tremendously important issue of student visas (where there seems to be a policy of strategic vagueness, to maximize the implicit threat that there may be selective actions).  Put aside the statement from a Justice Department official that the general plan is to "bring these universities to their knees", on the pretext that this is somehow about civil rights.

The detailed version of the presidential budget request for FY26 is now out (pdf here for the NSF portion).  If enacted, it would be deeply damaging to science and engineering research in the US and the pipeline of trained students who support the technology sector.

Taking NSF first:  The topline NSF budget would be cut from $8.34B to $3.28B.  Engineering would be cut by 75%, Math and Physical Science by 66.8%.  The anticipated agency-wide success rate for grants would nominally drop below 7%, though that is misleading (basically taking the present average success rate and cutting it by two-thirds, while some programs are already more competitive than others).  In practice, many programs already have future-year obligations, and any remaining funds will have to go there, meaning that many programs would likely have no awards at all in the coming fiscal year.  The NSF's CAREER program (that agency's flagship young investigator program) would go away.  This plan would also close one of the LIGO observatories (see previous link).  (This would be an extra bonus level of stupid, since LIGO's ability to do science relies on having two facilities, to avoid false positives and to identify event locations in the sky.  You might as well say that you'll keep an accelerator running but not the detector.)  Here is the table that I think hits hardest, dollars aside: the number of people involved in NSF activities would drop by 240,000.  The graduate research fellowship program would be cut by more than half.  The NSF research training grant program (another vector for grad fellowships) would be eliminated.

The situation at NIH and NASA is at least as bleak.  See here for a discussion from Joshua Weitz at Maryland, which includes this plot.  This proposed dismantling of US research, and especially of the pipeline of students who support the technology sector (including medical research, computer science, AI, the semiconductor industry, chemistry and chemical engineering, the energy industry), is astonishing in absolute terms.  It also does not square with the claim of some of our elected officials and high tech CEOs to worry about US competitiveness in science and engineering.  (These proposed cuts are not about fiscal responsibility; just the amount added in the proposed DOD budget dwarfs these cuts by more than a factor of 3.)

If you are a US citizen and think this is the wrong direction, now is the time to talk to your representatives in Congress.  In the past, Congress has ignored presidential budget requests for big cuts.  The American Physical Society, for example, has tools to help with this.  Contacting legislators by phone is also made easy these days.  From the standpoint of public outreach, Cornell has an effort backing large-scale writing of editorials and letters to the editor.

2 weeks ago 7 votes
Quick survey - machine shops and maker spaces

Recent events are very dire for research at US universities, and I will write further about those, but first a quick unrelated survey for those at such institutions.  Back in the day, it was common for physics and some other (mechanical engineering?) departments to have machine shops with professional staff.  In the last 15-20 years, there has been a huge growth in maker-spaces on campuses to modernize and augment those capabilities, though often maker-spaces are aimed at undergraduate design courses rather than doing work to support sponsored research projects (and grad students, postdocs, etc.).  At the same time, it is now easier than ever (modulo tariffs) to upload CAD drawings to a website and get a shop in another country to ship finished parts to you. Quick questions:   Does your university have a traditional or maker-space-augmented machine shop available to support sponsored research?  If so, who administers this - a department, a college/school, the office of research?  Does the shop charge competitive rates relative to outside vendors?  Are grad students trained to do work themselves, and are there professional machinists - how does that mix work? Thanks for your responses.  Feel free to email me if you'd prefer to discuss offline.

2 weeks ago 12 votes
How badly has NSF funding already been effectively cut?

This NY Times feature lets you see how each piece of NSF's funding has been reduced this year relative to the normalized average over the last decade.  Note: this fiscal year, thanks to the continuing resolution, the agency budget has not actually been cut like this; the agency is just not spending congressionally appropriated funds.  Fearing/assuming that its budget will get hammered next fiscal year, the agency does not want to start awards that it won't be able to fund in out-years.  The result is that this is effectively obeying in advance the presidential budget request for FY26.  (And it's highly likely that some will point to unspent funds later in the year and use that as a justification for cuts, when in fact it's anticipation of possible cuts that has led to unspent funds.  I'm sure the Germans have a polysyllabic word for this.  In English, "Catch-22" is close.)

I encourage you to click the link and go to the article where the graphic is interactive (if it works in your location - I'm not sure whether the link works internationally).  The different colored regions are approximately each of the NSF directorates (in their old organizational structure).  Each subsection is a particular program.  Seems like whoever designed the graphic was a fan of Tufte, and the scaling of the shaded areas does quantitatively reflect funding changes.  However, most people have a tough time estimating relative areas of irregular polygons.  Award funding in physics (the left-most section of the middle region) is down 85% relative to past years.  Math is down 72%.  Chemistry is down 57%.  Materials is down 63%.  Earth sciences is down 80%.  Polar programs (you know, those folks who run all the amazing experiments in Antarctica) is down 88%.

I know my readers are likely tired of me harping on NSF, but it's both important and a comparatively transparent example of what is also happening at other agencies.  If you are a US citizen and think that this is the wrong path, then push on your congressional delegation about the upcoming budget.

3 weeks ago 16 votes

More in science

How Sewage Recycling Works

[Note that this article is a transcript of the video embedded above.] Wichita Falls, Texas, went through the worst drought in its history in 2011 and 2012. For two years in a row, the area saw its average annual rainfall roughly cut in half, decimating the levels in the three reservoirs used for the city’s water supply. Looking ahead, the city realized that if the hot, dry weather continued, they would be completely out of water by 2015. Three years sounds like a long runway, but when it comes to major public infrastructure projects, it might as well be overnight. Between permitting, funding, design, and construction, three years barely gets you to the starting line. So the city started looking for other options. And they realized there was one source of water nearby that was just being wasted - millions of gallons per day just being flushed down the Wichita River. I’m sure you can guess where I’m going with this. It was the effluent from their sewage treatment plant. The city asked the state regulators if they could try something that had never been done before at such a scale: take the discharge pipe from the wastewater treatment plant and run it directly into the purification plant that produces most of the city’s drinking water. And the state said no. So they did some more research and testing and asked again. By then, the situation had become an emergency. This time, the state said yes. And what happened next would completely change the way cities think about water. I’m Grady and this is Practical Engineering. You know what they say, wastewater happens. It wasn’t that long ago that raw sewage was simply routed into rivers, streams, or the ocean to be carried away. Thankfully, environmental regulations put a stop to that, or at least significantly curbed the amount of wastewater being set loose without treatment. Wastewater plants across the world do a pretty good job of removing pollutants these days. In fact, I have a series of videos that go through some of the major processes if you want to dive deeper after this. In most places, the permits that allow these plants to discharge set strict limits on contaminants like organics, suspended solids, nutrients, and bacteria. And in most cases, they’re individualized. The permit limits are based on where the effluent will go, how that water body is used, and how well it can tolerate added nutrients or pollutants. And here’s where you start to see the issue with reusing that water: “clean enough” is a sliding scale. Depending on how water is going to be used or what or who it’s going to interact with, our standards for cleanliness vary. If you have a dog, you probably know this. They should drink clean water, but a few sips of a mud puddle in a dirty street, and they’re usually just fine. For you, that might be a trip to the hospital. Natural systems can tolerate a pretty wide range of water quality, but when it comes to drinking water for humans, it should be VERY clean. So the easiest way to recycle treated wastewater is to use it in ways that don’t involve people. That idea’s been around for a while. A lot of wastewater treatment plants apply effluent to land as a disposal method, avoiding the need for discharge to a natural water body. Water soaks into the ground, kind of like a giant septic system. But that comes with some challenges. It only works if you’ve got a lot of land with no public access, and a way to keep the spray from drifting into neighboring properties. 
Easy at a small scale, but for larger plants, it just isn’t practical engineering. Plus, the only benefits a utility gets from the effluent are some groundwater recharge and maybe a few hay harvests per season. So, why not send the effluent to someone else who can actually put it to beneficial use? If only it were that simple. As soon as a utility starts supplying water to someone else, things get complicated because you lose a lot of control over how the effluent is used. Once it's out of your hands, so to speak, it’s a lot harder to make sure it doesn’t end up somewhere it shouldn’t, like someone’s mouth. So, naturally, the permitting requirements become stricter. Treatment processes get more complicated and expensive. You need regular monitoring, sampling, and laboratory testing. In many places in the world, reclaimed water runs in purple pipes so that someone doesn’t inadvertently connect to the lines thinking they’re potable water. In many cases, you need an agreement in place with the end user, making sure they’re putting up signs, fences, and other means of keeping people from drinking the water. And then you need to plan for emergencies - what to do if a pipe breaks, if the effluent quality falls below the standards, or if a cross-connection is made accidentally. It’s a lot of work - time, effort, and cost - to do it safely and follow the rules. And those costs have to be weighed against the savings that reusing water creates. In places that get a lot of rain or snow, it’s usually not worth it. But in many US states, particularly those in the southwest, this is a major strategy to reduce the demand on fresh water supplies. Think about all the things we use water for where its cleanliness isn’t that important. Irrigation is a big one - crops, pastures, parks, highway landscaping, cemeteries - but that’s not all. Power plants use huge amounts of water for cooling. Street sweeping, dust control. In nearly the entire developed world, we use drinking-quality water to flush toilets! You can see where there might be cases where it makes good sense to reclaim wastewater, and despite all the extra challenges, its use is fairly widespread. One of the first plants was built in 1926 at Grand Canyon Village which supplied reclaimed water to a power plant and for use in steam locomotives. Today, these systems can be massive, with miles and miles of purple pipes run entirely separate from the freshwater piping. I’ve talked about this a bit on the channel before. I used to live near a pair of water towers in San Antonio that were at two different heights above ground. That just didn’t make any sense until I realized they weren’t connected; one of them was for the reclaimed water system that didn’t need as much pressure in the lines. Places like Phoenix, Austin, San Antonio, Orange County, Irvine, and Tampa all have major water reclamation programs. And it’s not just a US thing. Abu Dhabi, Beijing, and Tel Aviv all have infrastructure to make beneficial use of treated municipal wastewater, just to name a few. Because of the extra treatment and requirements, many places put reclaimed water in categories based on how it gets used. The higher the risk of human contact, the tighter the pollutant limits get. For example, if a utility is just selling effluent to farmers, ranchers, or for use in construction, exposure to the public is minimal. Disinfecting the effluent with UV or chlorine may be enough to meet requirements. And often that’s something that can be added pretty simply to an existing plant. 
But many reclaimed water users are things like golf courses, schoolyards, sports fields, and industrial cooling towers, where people are more likely to be exposed. In those cases, you often need a sewage plant specifically designed for the purpose or at least major upgrades to include what the pros call tertiary treatment processes - ways to target pollutants we usually don’t worry about and improve the removal rates of the ones we do. These can include filters to remove suspended solids, chemicals that bind to nutrients, and stronger disinfection to more effectively kill pathogens. This creates a conundrum, though. In many cases, we treat wastewater effluent to higher standards than we normally would in order to reclaim it, but only for nonpotable uses, with strict regulations about human contact. But if it’s not being reclaimed, the quality standards are lower, and we send it downstream. If you know how rivers work, you probably see the inconsistency here. Because in many places, down the river, is the next city with its water purification plant whose intakes, in effect, reclaim that treated sewage from the people upstream. This isn’t theoretical - it’s just the reality of how humans interact with the water cycle. We’ve struggled with the problems it causes for ages. In 1906, Missouri sued Illinois in the Supreme Court when Chicago reversed their river, redirecting its water (and all the city’s sewage) toward the Mississippi River. If you live in Houston, I hate to break it to you, but a big portion of your drinking water comes from the flushes and showers in Dallas. There have been times when wastewater effluent makes up half of the flow in the Trinity River. But the question is: if they can do it, why can’t we? If our wastewater effluent is already being reused by the city downstream to purify into drinking water, why can’t we just keep the effluent for ourselves and do the same thing? And the answer again is complicated. It starts with what’s called an environmental buffer. Natural systems offer time to detect failures, dilute contaminants, and even clean the water a bit—sunlight disinfects, bacteria consume organic matter. That’s the big difference in one city, in effect, reclaiming water from another upstream. There’s nature in between. So a lot of water reclamation systems, called indirect potable reuse, do the same thing: you discharge the effluent into a river, lake, or aquifer, then pull it out again later for purification into drinking water. By then, it’s been diluted and treated somewhat by the natural systems. Direct potable reuse projects skip the buffer and pipe straight from one treatment plant to the next. There’s no margin for error provided by the environmental buffer. So, you have to engineer those same protections into the system: real-time monitoring, alarms, automatic shutdowns, and redundant treatment processes. Then there’s the issue of contaminants of emerging concern: pharmaceuticals, PFAS [P-FAS], personal care products - things that pass through people or households and end up in wastewater in tiny amounts. Individually, they’re in parts per billion or trillion. But when you close the loop and reuse water over and over, those trace compounds can accumulate. Many of these aren’t regulated because they’ve never reached concentrations high enough to cause concern, or there just isn’t enough knowledge about their effects yet. That’s slowly changing, and it presents a big challenge for reuse projects. 
They can be dealt with at the source by regulating consumer products, encouraging proper disposal of pharmaceuticals (instead of flushing them), and imposing pretreatment requirements for industries. It can also happen at the treatment plant with advanced technologies like reverse osmosis, activated carbon, advanced oxidation, and bio-reactors that break down micro-contaminants. Either way, it adds cost and complexity to a reuse program. But really, the biggest problem with wastewater reuse isn’t technical - it’s psychological. The so-called “yuck factor” is real. People don’t want to drink sewage. Indirect reuse projects have a big benefit here. With some nature in between, it’s not just treated wastewater; it’s a natural source of water with treated wastewater in it. It’s kind of a story we tell ourselves, but we lose the benefit of that with direct reuse: Knowing your water came from a toilet—even if it’s been purified beyond drinking water standards—makes people uneasy. You might not think about it, but turning the tap on, putting that water in a glass, and taking a drink is an enormous act of trust. Most of us don’t understand water treatment and how it happens at a city scale. So that trust that it’s safe to drink largely comes from seeing other people do it and past experience of doing it over and over and not getting sick. The issue is that, when you add one bit of knowledge to that relative void of understanding - this water came directly from sewage - it throws that trust off balance. It forces you to rely not on past experience but on the people and processes in place, most of which you don’t understand deeply, and generally none of which you can actually see. It’s not as simple as just revulsion. It shakes up your entire belief system. And there’s no engineering fix for that. Especially for direct potable reuse, public trust is critical. So on top of the infrastructure, these programs also involve major public awareness campaigns. Utilities have to put themselves out there, gather feedback, respond to questions, be empathetic to a community’s values, and try to help people understand how we ensure water quality, no matter what the source is. But also, like I said, a lot of that trust comes from past experience. Not everyone can be an environmental engineer or licensed treatment plant operator. And let’s be honest - utilities can’t reach everyone. How many public meetings about water treatment have you ever attended? So, in many places, that trust is just going to have to be built by doing it right, doing it well, and doing it for a long time. But, someone has to be first. In the U.S., at least on the city scale, that drinking water guinea pig was Wichita Falls. They launched a massive outreach campaign, invited experts for tours, and worked to build public support. But at the end of the day, they didn’t really have a choice. The drought really was that severe. They spent nearly four years under intense water restrictions. Usage dropped to a third of normal demand, but it still wasn’t enough. So, in collaboration with state regulators, they designed an emergency direct potable reuse system. They literally helped write the rules as they went, since no one had ever done it before. After two months of testing and verification, they turned on the system in July 2014. It made national headlines. The project ran for exactly one year. Then, in 2015, a massive flood ended the drought and filled the reservoirs in just three weeks. 
The emergency system was always meant to be temporary. Water essentially went through three treatment plants: the wastewater plant, a reverse osmosis plant, and then the regular water purification plant. That’s a lot of treatment, which is a lot of expense, but they needed to have the failsafe and redundancy to get the state on board with the project. The pipe connecting the two plants was above ground and later repurposed for the city’s indirect potable reuse system, which is still in use today. In the end, they reclaimed nearly two billion gallons of wastewater as drinking water. And they did it with 100% compliance with the standards. But more importantly, they showed that it could be done, essentially unlocking a new branch on the skill tree of engineering that other cities can emulate and build on.

23 hours ago 4 votes
Why JPEGs Still Rule the Web

A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail.

For roughly three decades, the JPEG has been the World Wide Web’s primary image format. But it wasn’t the one the Web started with. In fact, the first mainstream graphical browser, NCSA Mosaic, didn’t initially support inline JPEG files—just inline GIFs, along with a couple of other formats forgotten to history. However, the JPEG had many advantages over the format it quickly usurped. Despite not appearing together right away—it first appeared in Netscape in 1995, three years after the image standard was officially published—the JPEG and web browser fit together naturally. JPEG files degraded more gracefully than GIFs, retaining more of the picture’s initial form—and that allowed the format to scale to greater levels of success. While it wasn’t capable of animation, it progressively expanded from something a modem could pokily render to a format that was good enough for high-end professional photography. For the internet’s purposes, the degradation was the important part. But it wasn’t the only thing that made the JPEG immensely valuable to the digital world. An essential part was that it was a documented standard built by numerous stakeholders.

The GIF was a de facto standard. The JPEG was an actual one

How important is it that JPEG was a standard? Let me tell you a story. During a 2013 New York Times interview conducted just before he received an award honoring his creation, GIF creator Steve Wilhite stepped into a debate he unwittingly created. Simply put, nobody knew how to pronounce the acronym for the image format he had fostered, the Graphics Interchange Format. He used the moment to attempt to set the record straight—it was pronounced like the peanut butter brand: “It is a soft ‘G,’ pronounced ‘jif.’ End of story,” he said. I posted a quote from Wilhite on my popular Tumblr around that time, a period when the social media site was the center of the GIF universe. And soon afterward, my post got thousands of reblogs—nearly all of them disagreeing with Wilhite. Soon, Wilhite’s quote became a meme. The situation paints a picture of how Wilhite, who died in 2022, did not develop his format by committee. He could say it sounded like “JIF” because he built it himself. He was handed the project as a CompuServe employee in 1987; he produced the object, and that was that. The initial document describing how it works? Dead simple. 38 years later, we’re still using the GIF—but it never rose to the same prevalence as the JPEG. The JPEG, which formally emerged about five years later, was very much not that situation. Far from it, in fact—it’s the difference between a de facto standard and an actual one. And that proved essential to its eventual ubiquity.

We’re going to degrade the quality of this image throughout this article. At its full image size, it’s 13.7 megabytes. Image: Irina Iriser

How the JPEG format came to life

Built with input from dozens of stakeholders, the Joint Photographic Experts Group ultimately aimed to create a format that fit everyone’s needs. (Reflecting its committee-led roots, there would be no confusion about the format’s name—an acronym of the organization that designed it.) And when the format was finally unleashed on the world, it was the subject of a more than 600-page book. JPEG: Still Image Data Compression Standard, written by IBM employees and JPEG organization stakeholders William B. Pennebaker and Joan L.
Mitchell, describes a landscape of multimedia imagery, held back without a way to balance the need for photorealistic images and immediacy. Standardization, they believed, could fix this. “The problem was not so much the lack of algorithms for image compression (as there is a long history of technical work in this area),” the authors wrote, “but, rather, the lack of a standard algorithm—one which would allow an interchange of images between diverse applications.” And they were absolutely right. For more than 30 years, JPEG has made high-quality, high-resolution photography accessible in operating systems far and wide. Although we no longer need to compress JPEGs to within an inch of their life, having that capability helped enable the modern internet. As the book notes, Mitchell and Pennebaker were given IBM’s support to follow through on this research and work with the JPEG committee, and that support led them to develop many of the JPEG format’s foundational patents. Described in patents filed by Mitchell and Pennebaker in 1988, IBM and other members of the JPEG standards committee, such as AT&T and Canon, were developing ways to use compression to make high-quality images easier to deliver in confined settings. Each member brought their own needs to the process. Canon, obviously, was more focused on printers and photography, while AT&T’s interests were tied to data transmission. Together, the companies left behind a standard that has stood the test of time. All this means, funnily enough, that the first place that a program capable of using JPEG compression appeared was not MacOS or Windows, but OS/2—a fascinating-but-failed graphical operating system created by Pennebaker and Mitchell’s employer, IBM. As early as 1990, OS/2 supported the format through the OS/2 Image Support application.

At 50 percent of its initial quality, the image is down to about 2.6 MB. By dropping half of the image’s quality, we brought it down to one-fifth of the original file size. Original image: Irina Iriser

What a JPEG does when you heavily compress it

The thing that differentiates a JPEG file from a PNG or a GIF is how the data degrades as you compress it. The goal for a JPEG image is to still look like a photo when all is said and done, even if some compression is necessary to make it all work at a reasonable size. That way, you can display something that looks close to the original image in fewer bytes. Or, as Pennebaker and Mitchell put it, “the most effective compression is achieved by approximating the original image (rather than reproducing it exactly).” Central to this is a compression process called discrete cosine transform (DCT), a lossy form of compression encoding heavily used in all sorts of compressed formats, most notably in digital audio and signal processing. Essentially, it delivers a lower-quality product by removing details, while still keeping the heart of the original product through approximation. The stronger the cosine transformation, the more compressed the final result. The algorithm, developed by researchers in the 1970s, essentially takes a grid of data and treats it as if you’re controlling its frequency with a knob. The data rate is controlled like water from a faucet: The more data you want, the higher the setting. DCT allows a trickle of data to still come out in highly compressed situations, even if it means a slightly compromised result. In other words, you may not keep all the data when you compress it, but DCT allows you to keep the heart of it. (See this video for a more technical but still somewhat easy-to-follow description of DCT.)
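And here is a minimal numerical sketch of the same idea (an illustration only, not the actual JPEG codec, which quantizes and entropy-codes the coefficients rather than simply zeroing them; the smooth sample block and the keep-only-the-low-frequency-corner cutoff below are arbitrary choices for the demo):

```python
# Toy demo of the DCT step behind JPEG: transform an 8x8 block, drop the
# high-frequency coefficients, and reconstruct an approximation.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix; row k is the k-th cosine frequency."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

D = dct_matrix(8)

# A made-up smooth 8x8 patch of grayscale values, standing in for one JPEG tile.
xx, yy = np.meshgrid(np.arange(8), np.arange(8))
block = 128.0 + 12.0 * xx + 5.0 * yy

coeffs = D @ block @ D.T                              # forward 2-D DCT
keep = np.add.outer(np.arange(8), np.arange(8)) < 4   # keep only the low-frequency corner
approx = D.T @ (coeffs * keep) @ D                    # inverse DCT of the kept terms

print(f"kept {keep.sum()} of 64 coefficients")
print(f"worst per-pixel error: {np.abs(block - approx).max():.2f} gray levels")
```

For a smooth, photo-like patch such as this one, most of the energy sits in that low-frequency corner, so the reconstruction stays close to the original even though roughly 84 percent of the coefficients were discarded; swap in a noisy block (the screenshot-of-text case below) and the error jumps. Real encoders scale a quantization table with the quality setting instead of using a hard cutoff, which is what the quality percentages in the image captions refer to.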
DCT is everywhere. If you have ever seen a streaming video or an online radio stream that degraded in quality because your bandwidth suddenly declined, you’ve witnessed DCT being utilized in real time. A JPEG file doesn’t have to leverage the DCT with just one method, as JPEG: Still Image Data Compression Standard explains:

The JPEG standard describes a family of large image compression techniques, rather than a single compression technique. It provides a “tool kit” of compression techniques from which applications can select elements that satisfy their particular requirements.

The toolkit has four modes:
- Sequential DCT, which displays the compressed image in order, like a window shade slowly being rolled down
- Progressive DCT, which displays the full image in the lowest-resolution format, then adds detail as more information rolls in
- Sequential lossless, which uses the window shade format but doesn’t compress the image
- Hierarchical mode, which combines the prior three modes—so maybe it starts with a progressive mode, then loads DCT compression slowly, but then reaches a lossless final result

At the time the JPEG was being created, modems were extremely common. That meant images loaded slowly, making Progressive DCT the most fitting format for the early internet. Over time, the progressive DCT mode has become less common, as many computers can simply load the sequential DCT in one fell swoop.

That same forest, saved at 5 percent quality. Down to about 419 kilobytes. Original image: Irina Iriser

When an image is compressed with DCT, the change tends to be less noticeable in busier, more textured areas of the picture, like hair or foliage. Those areas are harder to compress, which means they keep their integrity longer. It tends to be more noticeable, however, with solid colors or in areas where the image sharply changes from one color to another—like text on a page. Ever screenshot a social media post, only for it to look noisy? Congratulations, you just made a JPEG file. Other formats, like PNG, do better with text, because their compression format is intended to be non-lossy. (Side note: PNG’s compression format, DEFLATE, was designed by Phil Katz, who also created the ZIP format. The PNG format uses it in part because it was a license-free compression format. So it turns out the brilliant coder with the sad life story improved the internet in multiple ways before his untimely passing.) In many ways, the JPEG is one tool in our image-making toolkit. Despite its age and maturity, it remains one of our best options for sharing photos on the internet. But it is not a tool for every setting—despite the fact that, like a wrench sometimes used as a hammer, we often leverage it that way.

Forgent Networks claimed to own the JPEG’s defining algorithm

The JPEG format gained popularity in the ’90s for reasons beyond the quality of the format. Patents also played a role: Starting in 1994, the tech company Unisys attempted to bill individual users who relied on GIF files, which used a patent the company owned. This made the free-to-use JPEG more popular. (This situation also led to the creation of the patent-free PNG format.) While the JPEG was standards-based, it could still have faced the same fate as the GIF, thanks to the quirks of the patent system.
A few years before the file format came to life, a pair of Compression Labs employees filed a patent application that dealt with the compression of motion graphics. By the time anyone noticed its similarity to JPEG compression, the format was ubiquitous.

Our forest, saved at 1 percent quality. This image is only about 239 KB in size, yet it’s still easily recognizable as the same photo. That’s the power of the JPEG. Original image: Irina Iriser

Then in 1997, a company named Forgent Networks acquired Compression Labs. The company eventually spotted the patent and began filing lawsuits over it, a series of events it saw as a stroke of good luck. “The patent, in some respects, is a lottery ticket,” Forgent Chief Financial Officer Jay Peterson told CNET in 2005. “If you told me five years ago that ‘You have the patent for JPEG,’ I wouldn’t have believed it.” While Forgent’s claim of ownership of the JPEG compression algorithm was tenuous, it ultimately saw more success with its legal battles than Unisys did. The company earned more than $100 million from digital camera makers before the patent finally ran out of steam around 2007. The company also attempted to extract licensing fees from the PC industry. Eventually, Forgent agreed to a modest $8 million settlement. As the company took an increasingly aggressive approach to its acquired patent, it began to lose battles both in the court of public opinion and in actual courtrooms. Critics pounced on examples of prior art, while courts limited the patent’s use to motion-based uses like video. By 2007, Forgent’s compression patent expired—and its litigation-heavy approach to business went away. That year, the company became Asure Software, which now specializes in payroll and HR solutions. Talk about a reboot.

Why the JPEG won’t die

The JPEG file format has served us well. It’s been difficult to remove the format from its perch. The JPEG 2000 format, for example, was intended to supplant it by offering more lossless options and better performance. The format is widely used by the Library of Congress and specialized sites like the Internet Archive; however, it is less popular as an end-user format.

See the forest JPEG degrade from its full resolution to 1 percent quality in this GIF. Original image: Irina Iriser

Other image technologies have had somewhat more luck getting past the JPEG format. The Google-supported WebP is popular with website developers (and controversial with end users). Meanwhile, the formats AVIF and HEIC, each developed by standards bodies, have largely outpaced both JPEG and JPEG 2000. Still, the JPEG will be difficult to kill at this juncture. These days, the format is similar to MP3 or ZIP files—two legacy formats too popular and widely used to kill. Other formats that compress the files better and do the same things more efficiently are out there, but it’s difficult to topple a format with a 30-year head start. Shaking off the JPEG is easier said than done. I think most people will be fine to keep it around.

Ernie Smith is the editor of Tedium, a long-running newsletter that hunts for the end of the long tail.

23 hours ago 4 votes
How to redraw a city

The planning trick that created Japan's famous urbanism

6 hours ago 1 vote
Discarded U.K. Clothing Dumped in Protected Wetlands in Ghana

Heaps of discarded clothing from the U.K. have been dumped in protected wetlands in Ghana, an investigation found. Read more on E360 →

an hour ago 1 vote
Wet Labs Shouldn’t Be Boring (for young scientists) | Out-Of-Pocket

This is the first touchpoint for science, we should make it more enticing

8 hours ago 1 vote