Watching movies and TV series that use digital visual effects to create fantastical worlds lets people escape reality for a few hours. Thanks to advances in the computer graphics technology used to produce films and shows, those worlds are highly realistic; in many cases, it can be difficult to tell what’s real and what isn’t. The groundbreaking rendering tools that make such realistic images possible, introduced by Pixar in 1988 as RenderMan, came after years of development by computer scientists Robert L. Cook, Loren Carpenter, Tom Porter, and Patrick M. Hanrahan. RenderMan, a project launched by computer graphics pioneer Edwin Catmull, is behind much of today’s computer-generated imagery and animation, including the recent fan favorites Avatar: The Way of Water, The Mandalorian, and Nimona. The technology was honored with an IEEE Milestone in December during a ceremony held at Pixar’s Emeryville, Calif., headquarters. The ceremony is available to watch on...
a year ago

More from IEEE Spectrum

32 Bits That Changed Microprocessor Design

In the late 1970s, a time when 8-bit processors were state of the art and CMOS was the underdog of semiconductor technology, engineers at AT&T’s Bell Labs took a bold leap into the future. They made a high-stakes bet to outpace IBM, Intel, and other competitors in chip performance by combining cutting-edge 3.5-micron CMOS fabrication with a novel 32-bit processor architecture. Although their creation—the Bellmac-32 microprocessor—never achieved the commercial fame of earlier ones such as Intel’s 4004 (released in 1971), its influence has proven far more enduring. Virtually every chip in smartphones, laptops, and tablets today relies on the complementary metal-oxide semiconductor principles that the Bellmac-32 pioneered.

As the 1980s approached, AT&T was grappling with transformation. For decades, the telecom giant—nicknamed “Ma Bell”—had dominated American voice communications, with its Western Electric subsidiary manufacturing nearly every telephone found in U.S. homes and offices. The U.S. federal government was pressing for antitrust-driven divestiture, but AT&T was granted an opening to expand into computing. With computing firms already entrenched in the market, AT&T couldn’t afford to play catch-up; its strategy was to leap ahead, and the Bellmac-32 was its springboard.

The Bellmac-32 chip series has now been honored with an IEEE Milestone. Dedication ceremonies are slated to be held this year at the Nokia Bell Labs campus in Murray Hill, N.J., and at the Computer History Museum in Mountain View, Calif.

A chip like no other

Rather than emulate the industry standard of 8-bit chips, AT&T executives challenged their Bell Labs engineers to deliver something revolutionary: the first commercially viable microprocessor capable of moving 32 bits in one clock cycle. It would require not just a new chip but also an entirely novel architecture—one that could handle telecommunications switching and serve as the backbone for future computing systems.

“We weren’t just building a faster chip,” says Michael Condry, who led the architecture team at Bell Labs’ Holmdel facility in New Jersey. “We were trying to design something that could carry both voice and computation into the future.”

This configuration of the Bellmac-32 microprocessor had an integrated memory management unit optimized for Unix-like operating systems. (Image: AT&T Archives and History Center)

At the time, CMOS technology was seen as a promising—but risky—alternative to the NMOS and PMOS designs then in use. NMOS chips, which relied solely on N-type transistors, were fast but power-hungry. PMOS chips, which depend on the movement of positively charged holes, were too slow. CMOS, with its hybrid design, offered the potential for both speed and energy savings. The benefits were so compelling that the industry soon saw that the need for double the number of transistors (NMOS and PMOS for each gate) was worth the tradeoff. As transistor sizes shrank along with the rapid advancement of semiconductor technology described by Moore’s Law, the cost of doubling up the transistor count soon became manageable and eventually became negligible. But when Bell Labs took its high-stakes gamble, large-scale CMOS fabrication was still unproven and looked to be comparatively costly.

That didn’t deter Bell Labs. By tapping expertise from its campuses in Holmdel and Murray Hill as well as in Naperville, Ill., the company assembled a dream team of semiconductor engineers.
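An aside for the circuit-minded: the energy case for CMOS described above is easy to put toy numbers on. The sketch below is mine; every component value is an illustrative assumption, not a Bellmac-32 figure. The key point is that a complementary gate has no steady path from supply to ground, so it pays only the switching cost, while an NMOS gate burns static current through its pull-up whenever its output sits low.

```python
# Toy per-gate power comparison, NMOS vs. CMOS. All values are
# illustrative assumptions for a late-1970s process, not real figures.

VDD = 5.0               # supply voltage (V)
F_CLK = 2e6             # clock rate (Hz)
C_LOAD = 0.5e-12        # switched capacitance per gate (F)
I_PULLUP = 50e-6        # NMOS static pull-up current when output is low (A)
ACTIVITY = 0.1          # fraction of cycles a given gate actually switches

# Dynamic (switching) power is the same mechanism for both families:
# P_dyn = activity * C * VDD^2 * f
p_dynamic = ACTIVITY * C_LOAD * VDD**2 * F_CLK

# An NMOS gate also draws static current roughly half the time (output low).
p_nmos = p_dynamic + 0.5 * I_PULLUP * VDD
p_cmos = p_dynamic      # ideal CMOS: no DC path from VDD to ground

print(f"dynamic (both): {p_dynamic * 1e6:6.2f} uW")
print(f"NMOS total:     {p_nmos * 1e6:6.2f} uW")
print(f"CMOS total:     {p_cmos * 1e6:6.2f} uW")
```

With these made-up numbers, the CMOS gate draws roughly a fiftieth of the power of its NMOS counterpart, which is why paying double the transistors per gate was a good trade.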
The team included Condry; Sung-Mo “Steve” Kang, a rising star in chip design; Victor Huang, another microprocessor chip designer; and dozens of AT&T Bell Labs employees. They set out in 1978 to master a new CMOS process and create a 32-bit microprocessor from scratch.

Designing the architecture

The architecture group led by Condry, an IEEE Life Fellow who would later become Intel’s CTO, focused on building a system that would natively support the Unix operating system and the C programming language. Both were in their infancy but destined for dominance. To cope with the era’s memory limitations—kilobytes were precious—they introduced a complex instruction set that required fewer steps to carry out and could be executed in a single clock cycle.

The engineers also built the chip to support the VersaModule Eurocard (VME) parallel bus, enabling distributed computing so several nodes could handle data processing in parallel. Making the chip VME-enabled also allowed it to be used for real-time control. The group wrote its own version of Unix, with real-time capabilities to ensure that the new chip design was compatible with industrial automation and similar applications.

The Bell Labs engineers also invented domino logic, which ramped up processing speed by reducing delays in complex logic gates (a behavioral sketch of the technique appears at the end of this article). Additional testing and verification techniques were developed and introduced via the Bellmac-32 Module, a sophisticated multi-chipset verification and testing project led by Huang that allowed the complex chip fabrication to have zero or near-zero errors; it was the first of its kind in VLSI testing. The engineers’ systematic plan for double- and triple-checking their colleagues’ work ultimately made the multiple-chipset family work together seamlessly as a complete microcomputer system.

Then came the hardest part: actually building the chip.

Floor maps and colored pencils

“The technology for layout, testing, and high-yield fabrication just wasn’t there,” recalls Kang, an IEEE Life Fellow who later became president of the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, South Korea. With no CAD tools available for full-chip verification, Kang says, the team resorted to printing oversize Calcomp plots. The schematics showed how the transistors, circuit lines, and interconnects should be arranged inside the chip to provide the desired outputs. The team assembled them on the floor with adhesive tape to create a massive square map more than 6 meters on a side. Kang and his colleagues traced every circuit by hand with colored pencils, searching for breaks, overlaps, or mishandled interconnects.

Getting it made

Once the physical design was locked in, the team faced another obstacle: manufacturing. The chips were fabricated at a Western Electric facility in Allentown, Pa., but Kang recalls that the yield rates (the percentage of chips on a silicon wafer that meet performance and quality standards) were dismal. To address that, Kang and his colleagues drove from New Jersey to the plant each day, rolled up their sleeves, and did whatever it took, including sweeping floors and calibrating test equipment, to build camaraderie and instill confidence that the most complicated product the plant workers had ever attempted to produce could indeed be made there.
“The team-building worked out well,” Kang says. “After several months, Western Electric was able to produce more than the required number of good chips.”

The first version of the Bellmac-32, which was ready by 1980, fell short of expectations. Instead of hitting a 4-megahertz performance target, it ran at just 2 MHz. The engineers discovered that the state-of-the-art Takeda Riken testing equipment they were using was flawed, with transmission-line effects between the probe and the test head leading to inaccurate measurements, so they worked with a Takeda Riken team to develop correction tables that rectified the measurement errors. The second generation of Bellmac chips had clock speeds that exceeded 6.2 MHz, sometimes reaching 9 MHz. That was blazing fast for its time: The 16-bit Intel 8088 processor inside IBM’s original PC, released in 1981, ran at 4.77 MHz.

Why Bellmac-32 didn’t go mainstream

Despite its technical promise, the Bellmac-32 did not find wide commercial use. According to Condry, AT&T’s pivot toward acquiring equipment manufacturer NCR, which it began eyeing in the late 1980s, meant the company chose to back a different line of chips. But by then, the Bellmac-32’s legacy was already growing. “Before Bellmac-32, NMOS was dominant,” Condry says. “But CMOS changed the market because it was shown to be a more effective implementation in the fab.”

In time, that realization reshaped the semiconductor landscape. CMOS would become the foundation for modern microprocessors, powering the digital revolution in desktops, smartphones, and more. The audacity of Bell Labs’ bet—to take an untested fabrication process and leapfrog an entire generation of chip architecture—stands as a landmark moment in technological history. As Kang puts it: “We were on the frontier of what was possible. We didn’t just follow the path—we made a new one.”

Huang, an IEEE Life Fellow who later became deputy director of the Institute of Microelectronics, Singapore, adds: “This included not only chip architecture and design, but also large-scale chip verification—with CAD but without today’s digital simulation tools or even breadboarding [which is the standard method for checking whether a circuit design for an electronic system that uses chips works before making permanent connections by soldering the circuit elements together].”

Condry, Kang, and Huang look back fondly on that period and express their admiration for the many AT&T employees whose skill and dedication made the Bellmac-32 chip series possible.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The IEEE North Jersey Section sponsored the nomination.
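A closing aside on the domino logic invented along the way. The sketch below is my behavioral illustration of the general technique, not a circuit-accurate model of the Bellmac-32’s gates: the gate precharges a dynamic node high while the clock is low, evaluates through a fast NMOS pull-down network while the clock is high, and drives its output through an inverter, so the output can make only a single low-to-high transition per cycle.

```python
# Behavioral sketch of a 2-input domino-logic AND gate (illustrative).

def domino_and(clk_phases, a, b):
    """Simulate precharge/evaluate phases; returns the output per phase."""
    outputs = []
    dynamic_node = True                    # assume the node starts precharged
    for clk in clk_phases:
        if clk == 0:
            dynamic_node = True            # precharge: pull the node high
        elif a and b:
            dynamic_node = False           # evaluate: pull-down network conducts
        outputs.append(not dynamic_node)   # static output inverter
    return outputs

print(domino_and([0, 1], a=True, b=True))   # [False, True]: output rises during evaluate
print(domino_and([0, 1], a=True, b=False))  # [False, False]: node stays precharged
```

Because outputs can only rise during evaluation, cascaded stages trip one after another like falling dominoes, keeping the slow PMOS pull-up networks out of the critical path.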

yesterday 4 votes
Teething Babies and Rainy Days Once Cut Calls Short

Humans are messy. We spill drinks, smudge screens, and bring our electronic devices into countless sticky situations. As anyone who has accidentally dropped their phone into a toilet or pool knows, moisture poses a particular problem. And it’s not a new one: From early telephones to modern cellphones, everyday liquids have frequently conflicted with devices that must stay dry. Consumers often take the blame when leaks and spills inevitably occur.

Rachel Plotnick, an associate professor of cinema and media studies at Indiana University Bloomington, studies the relationship between technology and society. Last year, she spoke to IEEE Spectrum about her research on how people interact with buttons and tactile controls. In her new book, License to Spill: Where Dry Devices Meet Liquid Lives (The MIT Press, 2025), Plotnick explores the dynamic between everyday wetness and media devices through historical and contemporary examples, including cameras, vinyl records, and laptops. This adapted excerpt looks back at analog telephones of the 1910s through 1930s, the common practices that interrupted service, and the “trouble men” who were sent to repair phones and reform messy users.

Teething babies were a recurring hazard for telephone cords. The Boston Daily Globe in 1908 recounted, for instance, how a mother only learned her lesson about her baby’s cord chewing when the baby received a shock—or “got stung”—and the phone service went out. These youthful oral fixations rarely caused harm to the chewer, but were “injurious” to the telephone cord.

License to Spill is Rachel Plotnick’s second book. Her first, Power Button: A History of Pleasure, Panic, and the Politics of Pushing (The MIT Press, 2018), explores the history and politics of push buttons. (Image: The MIT Press)

Wetness found its way to telephones from every direction, as the trade journal Telephony documented: Painters washed ceilings, which dripped; telephones sat near windows during storms; phone cords came in contact with moist radiators. A telephone chief operator who handled service complaints recounted that “a frequent combination in interior decoration is the canary bird and desk telephone occupying the same table. The canary bird includes the telephone in his morning bath,” thus leading to out-of-order service calls. Phone companies also blamed the “housewife” who damaged wiring by scrubbing her telephone with water or cleaning fluid, and men in offices who dangerously propped their wet umbrellas against the wire. Wetness lurked everywhere in people’s spaces and habits; phone companies argued that one could hardly expect proper service under such circumstances—especially if users didn’t learn to accommodate the phone’s need for dryness.

This differing appraisal of liquids caused problems when telephone customers expected service that would not falter and directed outrage at their provider when outages did occur. Consumers even sometimes admitted to swearing at the telephone receiver and haranguing operators. Telephone company employees, meanwhile, faced intense scrutiny and pressure to tend to telephone infrastructures. “Trouble” took two forms, then, in dealing with customers’ frustration over outages and in dealing with the damage from the wetness itself.

The Original Troubleshooters

Telephone breakdowns required determinations about the outage’s source. “Trouble men” and “trouble departments” hunted down the probable cause of the damage, which meant sussing out babies, sponges, damp locations, spills, and open windows. If customers wanted to lay blame at workers’ feet in these moments, then repairers labeled customers as abusers of the phone cord. One author attributed at least 50 percent of telephone trouble to cases where “someone has been careless or neglectful.” Trouble men employed medical metaphors to describe their work, as in “he is a physician, and he makes the ills that the telephone is heir to his life study.”

Even if a consumer knew the cord had gotten wet, they didn’t necessarily blame it as the cause of the outage. The repairer often used this as an opportunity to properly socialize the user about wetness and inappropriate telephone treatment. These conversations didn’t always go well: A 1918 article in Popular Science Monthly described an explosive argument between an infuriated woman and a phone company employee over a baby’s cord habits. The permissive mother and teething child had become emblematic of misuse, a photograph of them appearing in Bell Telephone News in 1917 as evidence of common trouble that a telephone (and its repairer) might encounter. However, no one blamed the baby; telephone workers unfailingly held mothers responsible as “bad” users.

Teething babies and the mothers who let them play with phone cords were often blamed for telephone troubles. (Image: The Telephone Review/License to Spill)

Repairers glorified their own expertise. One wire chief was celebrated as the “original ‘find-out artist’” who could determine a telephone’s underlying troubles even in tricky cases. Telephone company employees positioned themselves as experts who could attribute wetness’s causes to—in their estimation—uneducated (and even dimwitted) customers, who were often female. Women were often the earliest and most engaged phone users, adopting the device as a key mechanism for social relations, and so they became an easy target.

Cost of Wet Phone Cord Repairs

Though the phone industry and repairers were often framed as heroes, troubleshooting took its toll on overextended phone workers, and companies suffered a financial burden from repairs. One estimate by the American Telephone and Telegraph Company found that each time a company “clear[ed] wet cord trouble,” it cost a dollar. Phone companies portrayed the telephone as a fragile device that could be easily damaged by everyday life, aiming to make the subscriber a proactively “dry” and compliant user.

Everyday sources of wetness, including mops and mustard, could cause hours of phone interruption. (Image: Telephony/License to Spill)

Moisture-Proofing Telephone Cords

Although telephone companies put significant effort into reforming their subscribers, the increasing pervasiveness of telephony began to conflict with these abstinent aims. Thus, a new technological solution emerged that put the burden on moisture-proofing the wire. The Stromberg-Carlson Telephone Manufacturing Co. of Rochester, N.Y., began producing copper wire that featured an insulating enamel, two layers of silk, the company’s moisture-proof compound, and a layer of cotton. Called Duratex, the cord withstood a test in which the manufacturer submerged it in water for 48 hours. In its advertising, Stromberg-Carlson warned that many traditional cords—even if they seemed to dry out after wetting—had sustained interior damage so “gradual that it is seldom noticed until the subscriber complains of service.” The well-wrapped cord was likened to a character from The Pickwick Papers, with his many layers of clothing. The product’s hardiness would allow the desk telephone to “withstand any climate,” even one hostile to communication technology. This subtle change meant that the burden to adapt fell to the device rather than the user. As telephone wires began to “penetrate everywhere,” they were imagined as fostering constant and unimpeded connectivity that not even saliva or a spilled drink could interrupt. The move to cord protection was not accompanied by a great deal of fanfare, however. As part of telephone infrastructure, cords faded into the background of conversations.

Excerpted from License to Spill by Rachel Plotnick. Reprinted with permission from The MIT Press. Copyright 2025.

a week ago 10 votes
Video Friday: Robotic Hippotherapy Horse Riding Simulator

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

Today I learned that “hippotherapy” is not quite what I wanted it to be.

The integration of KUKA robots into robotic physiotherapy equipment offers numerous advantages, such as precise motion planning and control of robot-assisted therapy, individualized training, reduced therapist workload and patient progress monitoring. As a result, these robotic therapies can be superior to many conventional physical therapies in restabilizing patients’ limbs.

[ Kuka ]

MIT engineers are getting in on the robotic ping pong game with a powerful, lightweight design that returns shots with high-speed precision. The new table tennis bot comprises a multijointed robotic arm that is fixed to one end of a ping pong table and wields a standard ping pong paddle. Aided by several high-speed cameras and a high-bandwidth predictive control system, the robot quickly estimates the speed and trajectory of an incoming ball and executes one of several swing types — loop, drive, or chop — to precisely hit the ball to a desired location on the table with various types of spin. (A toy version of the trajectory-prediction step appears at the end of this post.)

[ MIT News ]

Pan flipping involves dynamically flipping various objects, such as eggs, burger buns, and meat patties. This demonstrates precision, agility, and the ability to adapt to different challenges in motion control. Our framework enables robots to learn highly dynamic movements.

[ GitHub ] via [ Human Centered Autonomy Lab ]

Thanks, Haonan!

An edible robot made by EPFL scientists leverages a combination of biodegradable fuel and surface tension to zip around the water’s surface, creating a safe – and nutritious – alternative to environmental monitoring devices made from artificial polymers and electronics.

[ EPFL ]

Traditional quadcopters excel in flight agility and maneuverability, but often face limitations in hovering efficiency and horizontal field of view. Nature-inspired rotary wings, while offering a broader perspective and enhanced hovering efficiency, are hampered by substantial angular momentum restrictions. In this study, we introduce QuadRotary, a novel vehicle that integrates the strengths of both flight characteristics through a reconfigurable design.

[ Paper ] via [ Singapore University of Technology and Design ]

I like the idea of a humanoid that uses jumping as a primary locomotion mode not because it has to, but because it’s fun.

[ PAL Robotics ]

I had not realized how much nuance there is to digging stuff up with a shovel.
[ Intelligent Motion Laboratory ]

A new 10,000-gallon water tank at the University of Michigan will help researchers design, build, and test a variety of autonomous underwater systems that could help robots map lakes and oceans and conduct inspections of ships and bridges. The tank, funded by the Office of Naval Research, allows roboticists to further test projects on robot control and behavior, marine sensing and perception, and multi-vehicle coordination. “The lore is that this helps to jumpstart research, as each testing tank is a living reservoir for all of the knowledge gained from within it,” said Jason Bundoff, Lead Engineer in Research at U-M’s Friedman Marine Hydrodynamics Laboratory. “You mix the waters from other tanks to imbue the newly founded tank with all of that living knowledge from the other tanks, which helps to keep the knowledge from being lost.”

[ Michigan Robotics ]

If you have a humanoid robot and you’re wondering how it should communicate, here’s the answer.

[ Pollen ]

Whose side are you on, Dusty? Even construction robots should be mindful about siding with the Empire, though; there can be consequences!

[ Dusty Robotics ]

This Michigan Robotics Seminar is by Danfei Xu from Georgia Tech, on “Generative Task and Motion Planning.”

Long-horizon planning is fundamental to our ability to solve complex physical problems, from using tools to cooking dinners. Despite recent progress in commonsense-rich foundation models, the ability to do the same is still lacking in robots, particularly with learning-based approaches. In this talk, I will present a body of work that aims to transform Task and Motion Planning—one of the most powerful computational frameworks in robot planning—into a fully generative model framework, enabling compositional generalization in a largely data-driven approach.

[ Michigan Robotics ]
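A footnote on the MIT table-tennis item above: the core of the prediction step is estimating the ball’s state from camera observations and extrapolating it to the hitting plane. Here is a deliberately simple ballistic sketch (my illustration, not the lab’s actual estimator; drag and spin are ignored):

```python
# Toy trajectory predictor: given two ball positions seen dt seconds
# apart, predict where the ball reaches the robot's hitting plane.

import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2)

def predict_crossing(p0, p1, dt, x_hit):
    """Return the predicted ball position when it reaches the plane x = x_hit."""
    v = (p1 - p0) / dt                    # finite-difference velocity estimate
    t = (x_hit - p1[0]) / v[0]            # time remaining to the hitting plane
    return p1 + v * t + 0.5 * G * t**2    # constant-acceleration extrapolation

p0 = np.array([2.00, 0.10, 0.30])  # ball observed 2 m from the paddle
p1 = np.array([1.90, 0.11, 0.31])  # same ball 5 ms later
print(predict_crossing(p0, p1, dt=0.005, x_hit=0.0))
```

A real system refines this estimate continuously as new frames arrive and adds aerodynamic and spin models; the point here is only the shape of the computation.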

2 weeks ago 16 votes
Amazon’s Vulcan Robots Now Stow Items Faster Than Humans

At an event in Dortmund, Germany today, Amazon announced a new robotic system called Vulcan, which the company is calling “its first robotic system with a genuine sense of touch—designed to transform how robots interact with the physical world.” In the short to medium term, the physical world that Amazon is most concerned with is its warehouses, and Vulcan is designed to assist (or take over, depending on your perspective) with stowing and picking items in its mobile robotic inventory system.

Related: Amazon’s Vulcan Robots Are Mastering Picking Packages

In two upcoming papers in IEEE Transactions on Robotics, Amazon researchers describe how both the stowing and picking sides of the system operate. We covered stowing in detail a couple of years ago, when we spoke with Aaron Parness, the director of applied science at Amazon Robotics. Parness and his team have made a lot of progress on stowing since then, improving speed and reliability over more than 500,000 stows in operational warehouses to the point where the average stowing robot is now slightly faster than the average stowing human. We spoke with Parness to get an update on stowing, as well as an in-depth look at how Vulcan handles picking, which you can find in this separate article. It’s a much different problem, and well worth a read.

Optimizing Amazon’s Stowing Process

Stowing is the process by which Amazon brings products into its warehouses and adds them to its inventory so that you can order them. Not surprisingly, Amazon has gone to extreme lengths to optimize this process to maximize efficiency in both space and time. Human stowers are presented with a mobile robotic pod full of fabric cubbies (bins) with elastic bands across the front of them to keep stuff from falling out. The human’s job is to find a promising space in a bin, pull the elastic band aside, and stuff the thing into that space. The item’s new home is recorded in Amazon’s system, the pod then drives back into the warehouse, and the next pod comes along, ready for the next item.

Different manipulation tools are used to interact with human-optimized bins. (Image: Amazon)

The new paper on stowing includes some interesting numbers about Amazon’s inventory-handling process that help put the scale of the problem in perspective. More than 14 billion items are stowed by hand every year at Amazon warehouses. Amazon is hoping that Vulcan robots will be able to stow 80 percent of these items at a rate of 300 items per hour, while operating 20 hours per day. It’s a very, very high bar.

After a lot of practice, Amazon’s robots are now quite good at the stowing task. Parness tells us that the stow system is operating three times as fast as it was 18 months ago, meaning that it’s actually a little bit faster than an average human. This is exciting, but as Parness explains, expert humans still put the robots to shame. “The fastest humans at this task are like Olympic athletes. They’re far faster than the robots, and they’re able to store items in pods at much higher densities.” High density is important because it means that more stuff can fit into warehouses that are physically closer to more people, which is especially relevant in urban areas where space is at a premium. The best humans can get very creative when it comes to this physical three-dimensional “Tetris-ing,” which the robots are still working on. Where robots do excel is planning ahead, and this is likely why the average robot stower is now able to outpace the average human stower—Tetris-ing is a mental process, too.
In the same way that good Tetris players are thinking about where the next piece is going to go, not just the current piece, robots are able to leverage a lot more information than humans can to optimize what gets stowed where and when, says Parness. “When you’re a person doing this task, you’ve got a buffer of 20 or 30 items, and you’re looking for an opportunity to fit those items into different bins, and having to remember which item might go into which space. But the robot knows all of the properties of all of our items at once, and we can also look at all of the bins at the same time along with the bins in the next couple of pods that are coming up. So we can do this optimization over the whole set of information in 100 milliseconds.”

Essentially, robots are far better at optimization within the planning side of Tetris-ing, while humans are (still) far better at the manipulation side, but that gap is closing as robots get more experienced at operating in clutter and contact. Amazon has had Vulcan stowing robots operating for over a year in live warehouses in Germany and Washington state to collect training data, and those robots have successfully stowed hundreds of thousands of items.

Stowing is of course only half of what Vulcan is designed to do. Picking offers all kinds of unique challenges too, and you can read our in-depth discussion with Parness on that topic right here.
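Two quick notes to make the scale and the planning concrete. First, the throughput target quoted earlier pencils out (my arithmetic, assuming 365 operating days) to 300 × 20 × 365 ≈ 2.2 million items per robot-year, so handling 80 percent of 14 billion annual stows would take on the order of 5,000 robots. Second, here is a toy best-fit planner in the spirit of the “whole set of information” quote above; it is a sketch of the general idea, not Amazon’s optimizer, and all item and bin values are invented:

```python
# Toy best-fit stow planner: consider every feasible (item, bin) pairing
# across all visible bins at once, biggest items first. Illustrative
# only; Amazon's production optimizer is not described at this level.

def plan_stows(items, bins):
    """items: list of (name, volume); bins: dict bin_name -> free volume."""
    assignments = {}
    for name, vol in sorted(items, key=lambda it: -it[1]):  # big items first
        # Prefer the tightest fit, preserving large gaps for large items.
        candidates = [(free - vol, b) for b, free in bins.items() if free >= vol]
        if not candidates:
            continue  # nothing fits; a real system would reroute the item
        _, best = min(candidates)
        assignments[name] = best
        bins[best] -= vol
    return assignments

items = [("lamp", 5), ("book", 3), ("mug", 2), ("cable", 1)]
bins = {"A": 4, "B": 6, "C": 3}
print(plan_stows(items, bins))
# -> {'lamp': 'B', 'book': 'C', 'mug': 'A', 'cable': 'B'}
```

A real system would score candidate placements on much more than free volume (fragility, packing density, and later retrieval cost all matter), but the shape of the search is similar.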

2 weeks ago 9 votes
Amazon’s Vulcan Robots Are Mastering Picking Packages

As far as I can make out, Amazon’s warehouses are highly structured, extremely organized, very tidy, absolute raging messes. Everything in an Amazon warehouse is (usually) exactly where it’s supposed to be, which is typically jammed into some pseudorandom fabric bin the size of a shoebox along with a bunch of other pseudorandom crap. Somehow, this turns out to be the most space- and time-efficient way of doing things, because (as we’ve written about before) you have to consider the process of stowing items away in a warehouse as well as the process of picking them, and that involves some compromises in favor of space and speed.

For humans, this isn’t so much of a problem. When someone orders something on Amazon, a human can root around in those bins, shove some things out of the way, and then pull out the item that they’re looking for. This is exactly the sort of thing that robots tend to be terrible at, because not only is this process slightly different every single time, it’s also very hard to define exactly how humans go about it.

Related: Amazon’s Vulcan Robots Now Stow Items Faster Than Humans

As you might expect, Amazon has been working very very hard on this picking problem. Today at an event in Germany, the company announced Vulcan, a robotic system that can both stow and pick items at human(ish) speeds. Last time we talked with Aaron Parness, the director of applied science at Amazon Robotics, our conversation was focused on stowing—putting items into bins. As part of today’s announcement, Amazon revealed that its robots are now slightly faster at stowing than the average human is. But in the stow context, there’s a limited amount that a robot really has to understand about what’s actually happening in the bin. Fundamentally, the stowing robot’s job is to squoosh whatever is currently in a bin as far to one side as possible in order to make enough room to cram a new item in. As long as the robot is at least somewhat careful not to crushify anything, it’s a relatively straightforward task, at least compared to picking.

The choices made when an item is stowed into a bin will affect how hard it is to get that item out of that bin later on—this is called “bin etiquette.” Amazon is trying to learn bin etiquette with AI to make picking more efficient. (Image: Amazon)

The defining problem of picking, as far as robots are concerned, is sensing and manipulation in clutter. “It’s a naturally contact-rich task, and we have to plan on that contact and react to it,” Parness says. And it’s not enough to solve these problems slowly and carefully, because Amazon Robotics is trying to put robots in production, which means that its systems are being directly compared to a not-so-small army of humans who are doing this exact same job very efficiently. “There’s a new science challenge here, which is to identify the right item,” explains Parness.

The thing to understand about identifying items in an Amazon warehouse is that there are a lot of them: something like 400 million unique items. One single floor of an Amazon warehouse can easily contain 15,000 pods, which is over a million bins, and Amazon has several hundred warehouses. This is a lot of stuff. In theory, Amazon knows exactly which items are in every single bin. Amazon also knows (again, in theory) the weight and dimensions of each of those items, and probably has some pictures of each item from previous times that the item has been stowed or picked.
This is a great starting point for item identification, but as Parness points out, “We have lots of items that aren’t feature rich—imagine all of the different things you might get in a brown cardboard box.”

Clutter and Contact

As challenging as it is to correctly identify an item in a bin that may be stuffed to the brim with nearly identical items, an even bigger challenge is actually getting that item that you just identified out of the bin. The hardware and software that humans have for doing this task is unmatched by any robot, which is always a problem, but the real complicating factor is dealing with items that are all jumbled together in a small fabric bin. And the picking process itself involves more than just extraction—once the item is out of the bin, you then have to get it to the next order-fulfillment step, which means dropping it into another bin or putting it on a conveyor or something.

“When we were originally starting out, we assumed we’d have to carry the item over some distance after we pulled it out of the bin,” explains Parness. “So we were thinking we needed pinch grasping.” A pinch grasp is when you grab something between a finger (or fingers) and your thumb, and at least for humans, it’s a versatile and reliable way of grabbing a wide variety of stuff. But as Parness notes, for robots in this context, it’s more complicated: “Even pinch grasping is not ideal because if you pinch the edge of a book, or the end of a plastic bag with something inside it, you don’t have pose control of the item and it may flop around unpredictably.”

At some point, Parness and his team realized that while an item did have to move farther than just out of the bin, it didn’t actually have to get moved by the picking robot itself. Instead, they came up with a lifting conveyor that positions itself directly outside of the bin being picked from, so that all the robot has to do is get the item out of the bin and onto the conveyor. “It doesn’t look that graceful right now,” admits Parness, but it’s a clever use of hardware to substantially simplify the manipulation problem, and it has the side benefit of allowing the robot to work more efficiently, since the conveyor can move the item along while the arm starts working on the next pick.

Amazon’s robots have different techniques for extracting items from bins, using different gripping hardware depending on what needs to be picked. The type of end effector that the system chooses and the grasping approach depend on what the item is, where it is in the bin, and also what it’s next to. It’s a complicated planning problem that Amazon is tackling with AI, as Parness explains: “We’re starting to build foundation models of items, including properties like how squishy they are, how fragile they are, and whether they tend to get stuck on other items or not. So we’re trying to learn those things, and it’s early stage for us, but we think reasoning about item properties is going to be important to get to that level of reliability that we need.”

Reliability has to be superhigh for Amazon (as with many other commercial robotic deployments) simply because small errors multiplied over huge deployments result in an unacceptable amount of screwing up. There’s a very, very long tail of unusual things that Amazon’s robots might encounter when trying to extract an item from a bin. Even if there’s some particularly weird bin situation that might only show up once in a million picks, that still ends up happening many times per day on the scale at which Amazon operates.
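The selection logic itself is learned, but a hand-coded caricature may help show the shape of the decision. Everything below is hypothetical; the item attributes and strategy names are my inventions, not Amazon’s:

```python
# Toy grasp-strategy selector reflecting the inputs described above
# (what the item is, where it sits, what it's next to). Illustrative
# only; Amazon's real planner is learned, not a rule table.

from dataclasses import dataclass

@dataclass
class Item:
    rigid: bool        # does the item hold its shape?
    porous: bool       # does air leak through (defeats suction)?
    exposed_top: bool  # is a flat top surface reachable?
    crowded: bool      # are neighbors pressed against it?

def choose_grasp(item: Item) -> str:
    if item.exposed_top and item.rigid and not item.porous:
        return "suction"            # fast, single-surface contact
    if not item.crowded:
        return "pinch"              # room to close fingers around it
    return "push-then-pinch"        # nudge neighbors aside first

print(choose_grasp(Item(rigid=True, porous=False, exposed_top=True, crowded=True)))
# -> suction
```

And when no strategy looks safe, the planner has one more option, covered next: hand the job to a human.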
Fortunately for Amazon, they’ve got humans around, and part of the reason that this robotic system can be effective in production at all is that if the robot gets stuck, or even just sees a bin that it knows is likely to cause problems, it can just give up, route that particular item to a human picker, and move on to the next one.

The other new technique that Amazon is implementing is a sort of modern approach to “visual servoing,” where the robot watches itself move and then adjusts its movement based on what it sees. As Parness explains: “It’s an important capability because it allows us to catch problems before they happen. I think that’s probably our biggest innovation, and it spans not just our problem, but problems across robotics.”

A (More) Automated Future

Parness was very clear that (for better or worse) Amazon isn’t thinking about its stowing and picking robots in terms of replacing humans completely. There’s that long tail of items that need a human touch, and it’s frankly hard to imagine any robotic-manipulation system capable enough to make at least occasional human help unnecessary in an environment like an Amazon warehouse, which somehow manages to maximize organization and chaos at the same time.

These stowing and picking robots have been undergoing live testing in an Amazon warehouse in Germany for the past year, where they’re already demonstrating ways in which human workers could directly benefit from their presence. For example, Amazon pods can be up to 2.5 meters tall, meaning that human workers need to use a stepladder to reach the highest bins and bend down to reach the lowest ones. If the robots were primarily tasked with interacting with these bins, it would help humans work faster while putting less stress on their bodies.

With the robots so far managing to keep up with human workers, Parness tells us that the emphasis going forward will be primarily on getting better at not screwing up: “I think our speed is in a really good spot. The thing we’re focused on now is getting that last bit of reliability, and that will be our next year of work.”

While it may seem like Amazon is optimizing for its own very specific use cases, Parness reiterates that the bigger picture here is using every last one of those 400 million items jumbled into bins as a unique opportunity to do fundamental research on fast, reliable manipulation in complex environments. “If you can build the science to handle high contact and high clutter, we’re going to use it everywhere,” says Parness. “It’s going to be useful for everything, from warehouses to your own home. What we’re working on now are just the first problems that are forcing us to develop these capabilities, but I think it’s the future of robotic manipulation.”
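For readers new to the term, “visual servoing” names a closed feedback loop: estimate the pose from vision, compare with the target, correct, repeat. Here is a generic minimal sketch of that loop, not Amazon’s implementation; perceive_pose and send_velocity are hypothetical stand-ins for a camera-based pose estimator and a velocity controller:

```python
# Minimal visual-servoing flavor: watch the gripper's actual pose,
# compare it with the target, and correct on every control tick.

import numpy as np

GAIN = 0.5          # proportional gain on pose error
TOLERANCE = 1e-3    # stop within a millimeter (units: meters)

def servo_to(target, perceive_pose, send_velocity, max_ticks=200):
    """Drive the end effector toward `target` using visual feedback."""
    for _ in range(max_ticks):
        pose = perceive_pose()              # pose estimated from cameras
        error = target - pose
        if np.linalg.norm(error) < TOLERANCE:
            return True                     # converged
        send_velocity(GAIN * error)         # correct before drift grows
    return False                            # give up; flag for a human

# Tiny self-contained demo with a simulated robot:
state = np.zeros(3)
ok = servo_to(np.array([0.2, -0.1, 0.35]),
              perceive_pose=lambda: state.copy(),
              send_velocity=lambda v: state.__iadd__(0.1 * v))  # crude integrator
print("converged:", ok)
```

The point of closing the loop on what the cameras actually see, rather than trusting the commanded motion, is exactly the “catch problems before they happen” capability Parness describes.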

2 weeks ago 12 votes

More in science

Animals as chemical factories

The price of purple

4 hours ago 1 vote
In Test, A.I. Weather Model Fails to Predict Freak Storm

Artificial intelligence is powering weather forecasts that are generally more accurate than conventional forecasts and are faster and cheaper to produce. But new research shows A.I. may fail to predict unprecedented weather events, a troubling finding as warming fuels new extremes. Read more on E360 →

yesterday 2 votes
Graduate Student Solves Classic Problem About the Limits of Addition

A new proof illuminates the hidden patterns that emerge when addition becomes impossible. (Quanta Magazine)

yesterday 2 votes
My very busy week

I’m not sure who scheduled ODSC and PyConUS during the same week, but I am unhappy with their decisions. Last Tuesday I presented a talk and co-presented a workshop at ODSC, and on Thursday I presented a tutorial at PyCon. If you would like to follow along with my very busy week, here are the resources:

Practical Bayesian Modeling with PyMC
Co-presented with Alex Fengler for ODSC East 2025
In this tutorial, we explore Bayesian regression using PyMC – the... Read More

(Probably Overthinking It)
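The excerpt cuts off before the tutorial material, but for flavor, here is a minimal Bayesian linear regression in PyMC of the kind the title suggests (my sketch, not code from the tutorial):

```python
# Minimal Bayesian linear regression in PyMC on synthetic data.

import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 50)
y = 1.5 + 2.0 * x + rng.normal(0, 0.3, size=50)  # true intercept 1.5, slope 2.0

with pm.Model():
    intercept = pm.Normal("intercept", 0, 10)    # weakly informative priors
    slope = pm.Normal("slope", 0, 10)
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("y_obs", mu=intercept + slope * x, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)

# Posterior means should land near the true values used to simulate y.
print(idata.posterior[["intercept", "slope"]].mean())
```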

yesterday 5 votes