The Modified Agile for Hardware Development (MAHD) Framework is designed for hardware teams seeking the benefits of Agile without the pitfalls of applying software-centric methods. Traditional development approaches, like waterfall, often result in delayed timelines, high risks, and misaligned priorities, while software-based Agile frameworks fail to account for hardware's complexity. MAHD addresses these challenges with a tailored process that blends Agile principles with hardware-specific strategies. Central to MAHD is its On-ramp process, a five-step method designed to kickstart projects with clarity and direction: teams define User Stories to capture customer needs, outline Product Attributes to guide development, and use the Focus Matrix to link solutions to outcomes. Iterative IPAC cycles, a hallmark of the MAHD Framework, ensure risks are addressed early and progress is continuously tracked. These cycles emphasize integration, prototyping, alignment, and...


More from IEEE Spectrum

Video Friday: Helix

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

We’re introducing Helix, a generalist Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control to overcome multiple longstanding challenges in robotics.

This is moderately impressive; my favorite part is probably the hand-offs and that extra little bit of HRI with what we’d call eye contact if these robots had faces. But keep in mind that you’re looking at close to the best case for robotic manipulation: if the robots had been given the bag instead of well-spaced objects on a single-color background, or if the fridge had a normal human amount of stuff in it, they might be having a much different time of it. Also, is it just me, or is the sound on this video very weird? Like, some things make noise, some things don’t, and the robots themselves occasionally sound more like someone just added in some “soft actuator sound” or something.
Also also, I’m of a suspicious nature, and when there is an abrupt cut between “robot grasps door” and “robot opens door,” I assume the worst. [ Figure ]

Researchers at EPFL have developed a highly agile flat swimming robot. This robot is smaller than a credit card and propels itself along the water surface using a pair of undulating soft fins. The fins are driven at resonance by artificial muscles, allowing the robot to perform complex maneuvers. In the future, this robot could be used to monitor water quality or to help measure fertilizer concentrations in rice fields. [ Paper ] via [ Science Robotics ]

I don’t know about you, but I always dance better when getting beaten with a stick. [ Unitree Robotics ]

This is big news, people: Sweet Bite Ham Ham, one of the greatest and most useless robots of all time, has a new treat. All yours for about $100, overseas shipping included. [ Ham Ham ] via [ Robotstart ]

MagicLab has announced the launch of its first-generation self-developed dexterous hand, the MagicHand S01. The MagicHand S01 has 11 degrees of freedom in a single hand and a load capacity of up to 5 kilograms, and in work environments it can carry loads of over 20 kilograms. [ MagicLab ] Thanks, Ni Tao!

No, I’m not creeped out at all, why? [ Clone Robotics ]

Happy 40th Birthday to the MIT Media Lab! Since 1985, the MIT Media Lab has provided a home for interdisciplinary research, transformative technologies, and innovative approaches to solving some of humanity’s greatest challenges. As we celebrate our 40th anniversary year, we’re looking ahead to decades more of imagining, designing, and inventing a future in which everyone has the opportunity to flourish.
[ MIT Media Lab ]

While most soft pneumatic grippers that operate with a single control parameter (such as pressure or airflow) are limited to a single grasping modality, this article introduces a new method for incorporating multiple grasping modalities into vacuum-driven soft grippers. This is achieved by combining stiffness manipulation with a bistable mechanism. Adjusting the airflow tunes the energy barrier of the bistable mechanism, enabling changes in triggering sensitivity and allowing swift transitions between grasping modes. This results in an exceptionally versatile gripper, capable of handling a diverse range of objects with varying sizes, shapes, stiffness, and roughness, controlled by a single parameter, airflow, and its interaction with objects. [ Paper ] via [ BruBotics ] Thanks, Bram!

In this article, we present a design concept in which a monolithic soft body is incorporated with a vibration-driven mechanism, called Leafbot. This proposed investigation aims to build a foundation for further terradynamics study of vibration-driven soft robots in more complicated and confined environments, with potential applications in inspection tasks. [ Paper ] via [ IEEE Transactions on Robotics ]

We present a hybrid aerial-ground robot that combines the versatility of a quadcopter with enhanced terrestrial mobility. The vehicle features a passive, reconfigurable single-wheeled leg, enabling seamless transitions between flight and two ground modes: a stable stance and a dynamic cruising configuration. [ Robotics and Intelligent Systems Laboratory ]

I’m not sure I’ve ever seen this trick performed by a robot with soft fingers before. [ Paper ]

There are a lot of robots involved in car manufacturing. Like, a lot. [ Kawasaki Robotics ]

Steve Willits shows us some recent autonomous drone work being done at the AirLab at CMU’s Robotics Institute. [ Carnegie Mellon University Robotics Institute ]

Somebody’s got to test all those luxury handbags and purses.
And by somebody, I mean somerobot. [ Qb Robotics ]

Do not trust people named Evan. [ Tufts University Human-Robot Interaction Lab ]

Meet the Mind: MIT Professor Andreea Bobu. [ MIT ]

Reinforcement Learning Triples Spot’s Running Speed

About a year ago, Boston Dynamics released a research version of its Spot quadruped robot, which comes with a low-level application programming interface (API) that allows direct control of Spot’s joints. Even back then, the rumor was that this API unlocked some significant performance improvements on Spot, including a much faster running speed. That rumor came from the Robotics and AI (RAI) Institute, formerly The AI Institute, formerly the Boston Dynamics AI Institute, and if you were at Marc Raibert’s talk at the ICRA@40 conference in Rotterdam last fall, you already know that it turned out not to be a rumor at all. Today, we’re able to share some of the work that the RAI Institute has been doing to apply reality-grounded reinforcement learning techniques to enable much higher performance from Spot. The same techniques can also help highly dynamic robots operate robustly, and there’s a brand-new hardware platform that shows this off: an autonomous bicycle that can jump.

See Spot Run

This video shows Spot running at a sustained speed of 5.2 meters per second (11.6 miles per hour). Out of the box, Spot’s top speed is 1.6 meters per second, meaning that RAI’s Spot has more than tripled (!) the quadruped’s factory speed. If Spot running this quickly looks a little strange, that’s probably because it is strange, in the sense that the way this robot dog’s legs and body move as it runs is not very much like a real dog at all. “The gait is not biological, but the robot isn’t biological,” explains Farbod Farshidian, roboticist at the RAI Institute. “Spot’s actuators are different from muscles, and its kinematics are different, so a gait that’s suitable for a dog to run fast isn’t necessarily best for this robot.” The closest Farshidian can come to categorizing how Spot moves is that it’s somewhat similar to a trotting gait, except with an added flight phase (with all four feet off the ground at once) that technically turns it into a run.
This flight phase is necessary, Farshidian says, because the robot needs that time to successively pull its feet forward fast enough to maintain its speed. This is a “discovered behavior,” in that the robot was not explicitly programmed to “run,” but rather was just required to find the best way of moving as fast as possible.

Reinforcement Learning Versus Model Predictive Control

The Spot controller that ships with the robot when you buy it from Boston Dynamics is based on model predictive control (MPC), which involves creating a software model that approximates the dynamics of the robot as best you can, and then solving an optimization problem for the tasks that you want the robot to do in real time. It’s a very predictable and reliable method for controlling a robot, but it’s also somewhat rigid, because that original software model won’t be close enough to reality to let you really push the limits of the robot. And if you try to say, “Okay, I’m just going to make a super-detailed software model of my robot and push the limits that way,” you get stuck, because the optimization problem has to be solved for whatever you want the robot to do, in real time, and the more complex the model is, the harder it is to do that quickly enough to be useful. Reinforcement learning (RL), on the other hand, learns offline. You can use as complex a model as you want, and then take all the time you need in simulation to train a control policy that can then be run very efficiently on the robot.

In simulation, a couple of Spots (or hundreds of Spots) can be trained in parallel for robust real-world performance. Credit: Robotics and AI Institute

In the example of Spot’s top speed, it’s simply not possible to model every last detail for all of the robot’s actuators within a model-based control system that would run in real time on the robot.
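The trade-off described above, where MPC solves an optimization online at every control step while RL spends its compute offline and runs a cheap policy at run time, can be sketched with a toy one-dimensional example. Everything here is illustrative: a hypothetical 11-state "move to the target" task, not RAI's actual pipeline or anything Spot-specific.

```python
import random

N_STATES, TARGET, ACTIONS = 11, 10, (-1, 0, 1)

def step(s, a):
    """Deterministic toy dynamics: move left/right on a line, clipped to bounds."""
    return min(max(s + a, 0), N_STATES - 1)

def cost(s):
    return abs(s - TARGET)

# MPC: at every control step, solve a small optimization over a short horizon.
# All the work happens online, so model complexity is capped by the time budget.
def mpc_action(s, horizon=3):
    def path_cost(state, h):
        if h == 0:
            return cost(state)
        return cost(state) + min(path_cost(step(state, a), h - 1) for a in ACTIONS)
    return min(ACTIONS, key=lambda a: path_cost(step(s, a), horizon - 1))

# RL: train for as long as you like offline (tabular Q-learning here);
# the resulting policy is just a table lookup at run time.
def train_q(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        for _ in range(30):
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = step(s, a)
            q[(s, a)] += alpha * (-cost(s2) + gamma * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])
            s = s2
    return q

def rollout(policy, s=0, steps=20):
    for _ in range(steps):
        s = step(s, policy(s))
    return s

random.seed(0)
Q = train_q()
final_mpc = rollout(mpc_action)
final_rl = rollout(lambda s: max(ACTIONS, key=lambda a: Q[(s, a)]))
```

Both controllers reach the target; the difference is where the computation lives. The article's point is that the offline budget lets RL train against arbitrarily detailed models, which a real-time MPC solver cannot afford.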
So instead, simplified (and typically very conservative) assumptions are made about what the actuators are actually doing so that you can expect safe and reliable performance. Farshidian explains that these assumptions make it difficult to develop a useful understanding of what the performance limitations actually are. “Many people in robotics know that one of the limitations of running fast is that you’re going to hit the torque and velocity maximum of your actuation system. So, people try to model that using the data sheets of the actuators. For us, the question that we wanted to answer was whether there might exist some other phenomena that was actually limiting performance.” Searching for these other phenomena involved bringing new data into the reinforcement learning pipeline, like detailed actuator models learned from the real-world performance of the robot. In Spot’s case, that provided the answer to high-speed running. It turned out that what was limiting Spot’s speed was not the actuators themselves, nor any of the robot’s kinematics: It was simply the batteries not being able to supply enough power. “This was a surprise for me,” Farshidian says, “because I thought we were going to hit the actuator limits first.” Spot’s power system is complex enough that there’s likely some additional wiggle room, and Farshidian says the only thing that prevented them from pushing Spot’s top speed past 5.2 m/s is that they didn’t have access to the battery voltages, so they weren’t able to incorporate that real-world data into their RL model. “If we had beefier batteries on there, we could have run faster. And if you model that phenomena as well in our simulator, I’m sure that we can push this farther.” Farshidian emphasizes that RAI’s technique is about much more than just getting Spot to run fast—it could also be applied to making Spot move more efficiently to maximize battery life, or more quietly to work better in an office or home environment.
Essentially, this is a generalizable tool that can find new ways of expanding the capabilities of any robotic system. And when real-world data is used to make a simulated robot better, you can ask the simulation to do more, with confidence that those simulated skills will successfully transfer back onto the real robot.

Ultra Mobility Vehicle: Teaching Robot Bikes to Jump

Reinforcement learning isn’t just good for maximizing the performance of a robot—it can also make that performance more reliable. The RAI Institute has been experimenting with a completely new kind of robot that it invented in-house: a little jumping bicycle called the Ultra Mobility Vehicle, or UMV, which was trained to do parkour using essentially the same RL pipeline for balancing and driving as was used for Spot’s high-speed running. There’s no independent physical stabilization system (like a gyroscope) keeping the UMV from falling over; it’s just a normal bike that can move forward and backward and turn its front wheel. As much mass as possible is then packed into the top bit, which actuators can rapidly accelerate up and down. “We’re demonstrating two things in this video,” says Marco Hutter, director of the RAI Institute’s Zurich office. “One is how reinforcement learning helps make the UMV very robust in its driving capabilities in diverse situations. And second, how understanding the robots’ dynamic capabilities allows us to do new things, like jumping on a table which is higher than the robot itself.”

“The key of RL in all of this is to discover new behavior and make this robust and reliable under conditions that are very hard to model. That’s where RL really, really shines.” —Marco Hutter, The RAI Institute

As impressive as the jumping is, for Hutter, it’s just as difficult (if not more difficult) to do maneuvers that may seem fairly simple, like riding backwards. “Going backwards is highly unstable,” Hutter explains.
“At least for us, it was not really possible to do that with a classical [MPC] controller, particularly over rough terrain or with disturbances.” Getting this robot out of the lab and onto terrain to do proper bike parkour is a work in progress that the RAI Institute says it’ll be able to demonstrate in the near future, but it’s really not about what this particular hardware platform can do—it’s about what any robot can do through RL and other learning-based methods, says Hutter. “The bigger picture here is that the hardware of such robotic systems can in theory do a lot more than we were able to achieve with our classic control algorithms. Understanding these hidden limits in hardware systems lets us improve performance and keep pushing the boundaries on control.”

Teaching the UMV to drive itself down stairs in sim results in a real robot that can handle stairs at any angle. Credit: Robotics and AI Institute

Reinforcement Learning for Robots Everywhere

Just a few weeks ago, the RAI Institute announced a new partnership with Boston Dynamics “to advance humanoid robots through reinforcement learning.” Humanoids are just another kind of robotic platform, albeit a significantly more complicated one with many more degrees of freedom and things to model and simulate. But when considering the limitations of model predictive control for this level of complexity, a reinforcement learning approach seems almost inevitable, especially when such an approach is already streamlined due to its ability to generalize. “One of the ambitions that we have as an institute is to have solutions which span across all kinds of different platforms,” says Hutter. “It’s about building tools, about building infrastructure, building the basis for this to be done in a broader context. So not only humanoids, but driving vehicles, quadrupeds, you name it.
But doing RL research and showcasing some nice first proof of concept is one thing—pushing it to work in the real world under all conditions, while pushing the boundaries in performance, is something else.” Transferring skills into the real world has always been a challenge for robots trained in simulation, precisely because simulation is so friendly to robots. “If you spend enough time,” Farshidian explains, “you can come up with a reward function where eventually the robot will do what you want. What often fails is when you want to transfer that sim behavior to the hardware, because reinforcement learning is very good at finding glitches in your simulator and leveraging them to do the task.” Simulation has been getting much, much better, with new tools, more accurate dynamics, and lots of computing power to throw at the problem. “It’s a hugely powerful ability that we can simulate so many things, and generate so much data almost for free,” Hutter says. But the usefulness of that data is in its connection to reality, making sure that what you’re simulating is accurate enough that a reinforcement learning approach will in fact solve for reality. Bringing physical data collected on real hardware back into the simulation, Hutter believes, is a very promising approach, whether it’s applied to running quadrupeds or jumping bicycles or humanoids. “The combination of the two—of simulation and reality—that’s what I would hypothesize is the right direction.”
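One common way to connect simulation to reality, as described above, is to ground the simulator's parameters in measurements taken from the real robot and then randomize around those measurements during training, so the learned policy cannot overfit to one particular (and inevitably imperfect) model. The sketch below is a generic illustration of that idea, not the RAI Institute's actual tooling; the measurement values and parameter names are hypothetical.

```python
import random
import statistics

# Hypothetical measurements of a real actuator's torque constant, in N*m/A.
measured_kt = [0.092, 0.095, 0.089, 0.094, 0.091]

mu = statistics.mean(measured_kt)
sigma = statistics.stdev(measured_kt)

def sample_sim_params(rng):
    """Draw one randomized simulator configuration grounded in real data."""
    return {
        # Centered on the measured mean, widened to cover measurement error.
        "torque_constant": rng.gauss(mu, 2 * sigma),
        # Unmeasured quantity: fall back to a deliberately broad range.
        "ground_friction": rng.uniform(0.4, 1.0),
    }

# Each training episode would run in a differently-parameterized simulator.
rng = random.Random(0)
configs = [sample_sim_params(rng) for _ in range(1000)]
```

The policy that survives training across all these configurations is more likely to transfer, because the real robot's parameters are (by construction) inside the distribution it was trained on.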

Saving Public Data Takes More Than Simple Snapshots

Shortly after the Trump administration took office in the United States in late January, more than 8,000 pages across several government websites and databases were taken down, the New York Times found. Though many of these have now been restored, thousands of pages were purged of references to gender and diversity initiatives, for example, and others, including the U.S. Agency for International Development (USAID) website, remain down. On 11 February, a federal judge ruled that the government agencies must restore public access to pages and datasets maintained by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). Ironically, even as many scientists fled to online archives in a panic, the Justice Department had argued that the physicians who brought the case were not harmed because the removed information was available on the Internet Archive’s Wayback Machine. In response, a federal judge wrote, “The Court is not persuaded,” noting that a user must know the original URL of an archived page in order to view it. The administration’s legal argument “was a bit of an interesting accolade,” says Mark Graham, director of the Wayback Machine, who believes the judge’s ruling was “apropos.” Over the past few weeks, the Internet Archive and other archival sites have received attention for preserving government databases and websites. But these projects have been ongoing for years. The Internet Archive, for example, was founded as a nonprofit dedicated to providing universal access to knowledge nearly 30 years ago, and it now records more than a billion URLs every day, says Graham. Since 2008, the Internet Archive has also hosted an accessible copy of the End of Term Web Archive, a collaboration that documents changes to federal government sites before and after administration changes. In the most recent collection, it has already archived more than 500 terabytes of material.
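The judge's point about needing the original URL is concrete: a Wayback Machine capture is addressed by a timestamp plus the original URL. The helpers below build those lookup URLs from the Wayback Machine's published URL scheme and its public availability API; the example URL and date are just placeholders.

```python
from urllib.parse import quote

# Direct-snapshot URL scheme: /web/<YYYYMMDD[hhmmss]>/<original URL>.
WAYBACK_SNAPSHOT = "https://web.archive.org/web/{timestamp}/{url}"
# Availability API: returns JSON describing the closest archived snapshot.
AVAILABILITY_API = "https://archive.org/wayback/available?url={url}&timestamp={timestamp}"

def snapshot_url(original_url: str, timestamp: str = "20250120") -> str:
    """Link to the capture nearest `timestamp`, if one exists."""
    return WAYBACK_SNAPSHOT.format(timestamp=timestamp, url=original_url)

def availability_query(original_url: str, timestamp: str = "20250120") -> str:
    """Query URL for the availability API; note you still need the original URL."""
    return AVAILABILITY_API.format(url=quote(original_url, safe=""), timestamp=timestamp)
```

Either way, the original URL is the lookup key, which is exactly why the court found "it's on the Wayback Machine" an inadequate substitute for a live page.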
Complementary Crawls

The Internet Archive’s strength is scale, Graham says. “We can often [preserve] things quickly, at scale. But we don’t have deep experience in analysis.” Meanwhile, groups like the Environmental Data and Governance Initiative and the Association of Health Care Journalists provide help for activists and academics identifying and documenting changes. The Library Innovation Lab at Harvard Law School has also joined the efforts with its archive of data.gov, a 16 TB collection that includes more than 311,000 public datasets and is being updated daily with new data. The project began in late 2024, when the library realized that datasets are often missed in other web crawls, says Jack Cushman, a software engineer and director of the Library Innovation Lab.

“You can miss anything where you have to interact with JavaScript or with a button or with a form.” —Jack Cushman, Library Innovation Lab

A typical crawl has no trouble capturing basic HTML, PDF, or CSV files. But archiving interactive web services that are driven by databases poses a challenge. It would be impossible to archive a site like Amazon, for example, says Graham. The datasets the Library Innovation Lab (LIL) is working to archive are similarly tricky to capture. “If you’re doing a web crawl and just clicking from link to link, as the End of Term archive does, you can miss anything where you have to interact with JavaScript or with a button or with a form, where you have to ask for permission and then register or download something,” explains Cushman. “We wanted to do something that was complementary to existing web crawls, and the way we did that was to go into APIs,” he says. By going into the APIs, which bypass web pages to access data directly, the LIL’s program could fetch a complete catalog of the datasets—whether CSV, Excel, XML, or other file types—and pull the associated URLs to create an archive.
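The API-first approach maps naturally onto paged requests. As a hedged sketch: data.gov's catalog runs on CKAN, whose package_search action accepts rows and start parameters, so the roughly 300 queries of 1,000 items each that the article describes could be generated like this (endpoint behavior assumed from CKAN's public API documentation; the network fetch is defined but not run here).

```python
import json
from urllib.request import urlopen

# data.gov's catalog is served by CKAN; package_search pages through datasets.
BASE = "https://catalog.data.gov/api/3/action/package_search"

def page_urls(total=300_000, per_page=1_000):
    """Build the ~300 paged queries of 1,000 items each described in the article."""
    return [f"{BASE}?rows={per_page}&start={start}"
            for start in range(0, total, per_page)]

def fetch_page(url):
    """Fetch one page of dataset metadata (live network call; not invoked here)."""
    with urlopen(url) as resp:
        return json.load(resp)["result"]["results"]

urls = page_urls()
```

Each response's result list holds dataset metadata, including the resource URLs that point at the actual CSV, Excel, or XML files to download, which is precisely the material a link-following crawler would miss.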
In the case of data.gov, Cushman and his colleagues wrote a script to send the right 300 queries that would fetch 1,000 items per query, then go through the 300,000 total items to gather the data. “What we’re looking for is areas where some automation will unlock a lot of new data that wouldn’t otherwise be unlocked,” says Cushman. The other important factor for the LIL archive was to make sure the data was in a usable format. “You might get something in a web crawl where [the data] is there across 100,000 web pages, but it’s very hard to get it back out into a spreadsheet or something that you can analyze,” Cushman says. Making it usable, both in the data format and user interface, helps create a sustainable archive.

Lots Of Copies Keep Stuff Safe

The key to preserving the internet’s data is a principle that goes by the acronym LOCKSS: Lots Of Copies Keep Stuff Safe. When the Internet Archive suffered a cyberattack last October, the Archive took the site down for three and a half weeks to audit the entire site and implement security upgrades. “Libraries have traditionally always been under attack, so this is no different,” Graham says. As part of its defense, the Archive now has several copies of the materials in disparate physical locations, both inside and outside the U.S. “The US government is the world’s largest publisher,” Graham notes. It publishes material on a wide range of topics, and “much of it is beneficial to people, not only in this country, but throughout the world, whether that is about energy or health or agriculture or security.” And the fact that many individuals and organizations are contributing to preservation of the digital world is actually a good thing. “The goal is for those copies to be diverse across every metric that you can think of. They should be on different kinds of media. They should be controlled by different people, with different funding sources, in different formats,” says Cushman.
“Every form of similarity between your backups creates a risk of loss.” The data.gov archive has its primary copy stored through a cloud service, with others as backup. The archive also includes open-source software to make it easy to replicate. In addition to maintaining copies, Cushman says it’s important to include cryptographic signatures and timestamps. Each time an archive is created, it’s signed with cryptographic proof of the creator’s email address and the time, which can help verify the validity of an archive.

An Ongoing Challenge

Since President Trump took office, a lot of material has been removed from US federal websites—quantifiably more than under previous new administrations, says Graham. On a global scale, however, this isn’t unprecedented, he adds. In the U.S., official government websites have been changed with each new administration since Bill Clinton’s, notes Jason Scott, a “free range archivist” at the Internet Archive and co-founder of digital preservation site Archive Team. “This one’s more chaotic,” Scott says. But “the web is a very high entropy entity ... Google is an archive like a supermarket is a food museum.” The job of digital archivists is a difficult one, especially with a backlog of sites that have existed across the evolution of internet standards. But these efforts are not new. “The ramping up will only be in terms of disk space and bandwidth resources, not the process that has been ongoing,” says Scott. For Cushman, working on this project has underscored the value of public data. “The government data that we have is like a GPS signal,” he says. “It doesn’t tell us where to go, but it tells us what’s around us, so that we can make decisions. Engaging with it for the first time this way has really helped me appreciate what a treasure we have.”
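The signing-and-timestamping step Cushman describes can be sketched generically. The article doesn't detail the actual scheme, so this minimal Python version substitutes a SHA-256 content digest plus a recorded creator and UTC timestamp for a full cryptographic signature; the function names and example payload are illustrative only.

```python
import hashlib
import json
from datetime import datetime, timezone

def archive_manifest(payload: bytes, creator: str) -> dict:
    """Record a content digest, creator, and creation time for later verification.
    (A stand-in for the real signing scheme, which the article doesn't specify.)"""
    return {
        "sha256": hashlib.sha256(payload).hexdigest(),
        "creator": creator,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def verify(payload: bytes, manifest: dict) -> bool:
    """True only if the payload still matches the digest recorded at archive time."""
    return hashlib.sha256(payload).hexdigest() == manifest["sha256"]

data = b"example dataset bytes"
m = archive_manifest(data, "archivist@example.org")
print(json.dumps(m, indent=2))
```

A real deployment would additionally sign the manifest with the creator's private key, so that the creator identity, and not just the content, is tamper-evident.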

Willie Hobbs Moore: STEM Trailblazer

At a time in American history when even the most intelligent Black women were expected to become, at most, teachers or nurses, Willie Hobbs Moore broke with societal expectations to become a noted physicist and engineer. Moore is probably best known for being the first Black woman to earn a Ph.D. in physics in the United States, in 1972. She is also renowned for being an unwavering advocate for getting more Black people into science, technology, engineering, and mathematics. Her achievements have inspired generations of Black students, and women especially, to believe that they could pursue a STEM career. Moore, who died in her Ann Arbor, Mich., home on 14 March 1994, two months shy of her 60th birthday, is the subject of the new book Willie Hobbs Moore—You’ve Got to Be Excellent! The biography, published by IEEE-USA, is the seventh in the organization’s Famous Women Engineers in History series. Moore attended the University of Michigan in Ann Arbor, where she earned bachelor’s and master’s degrees in electrical engineering and, in 1972, her barrier-breaking doctorate in physics. In 2013, the University of Michigan Women in Science and Engineering unit created the Willie Hobbs Moore Awards to honor students, staff, and faculty members who “demonstrate excellence promoting equity” in STEM fields. The university held a symposium in 2022 to honor Moore’s work and celebrate the 50th anniversary of her achievement. Physicist Donnell Walton, former director of the Corning West Technology Center in Silicon Valley and a National Society of Black Physicists board member, praised Moore, saying she showed that what’s possible is not limited by what’s expected. Walton befriended Moore while he was pursuing his doctorate in applied physics at the university, he says, adding that he admired the strength and perseverance it took for her to thrive in academic and professional arenas where she was the only Black woman.
Despite ingrained social norms that tended to push women and minorities into lower-status occupations, Moore refused to be dissuaded from her career. She conducted physics research at the University of Michigan and held several positions in industry before joining Ford Motor Co. in Dearborn, Mich., in 1977. She became a U.S. expert in Japanese quality systems and engineering design, improving Ford’s production processes. She rose through the ranks at the automaker and served as an executive who oversaw the warranty department within the company’s automobile assembly operation.

An early trailblazer

Moore was born in 1934 in Atlantic City, N.J. According to a Physics Today article that delved into her background, her father was a plumber and her mother worked part time as a hotel chambermaid. An A student throughout high school, Moore displayed a talent for science and mathematics. She became the first person in her family to earn a college degree. She began her studies at the Michigan engineering college in 1954—the same year that the U.S. Supreme Court ruled against legally mandated segregation in public schools. Moore was the only Black female undergraduate in the electrical engineering program. Her academic success makes it clear that being one of one was not an impediment. But race was occasionally an issue. In the same 2022 Physics Today article, Ronald E. Mickens, a physics professor at Clark Atlanta University, told a story about an incident from Moore’s undergraduate days that illustrates the point. One day she encountered the chairman of another engineering department, and, completely unprompted, he told her, “You don’t belong here. Even if you manage to finish, there is no place for you in the professional world you seek.”

“There will always be prejudiced people; you’ve got to be prepared to survive in spite of their attitudes.” —Willie Hobbs Moore

But she persevered, maintaining her standard of excellence in her academic pursuits.
She earned a bachelor’s degree in EE in 1958, followed by a master’s degree in EE in 1961. She was the first Black woman to earn those degrees at Michigan. She worked as an engineer at several companies before returning to the university in 1966 to begin working toward a doctorate. She conducted her graduate research under the direction of Samuel Krimm, a noted infrared spectroscopist. Krimm’s work focused on analyzing materials using infrared light so he could study their molecular structures. Moore’s dissertation was a theoretical analysis of secondary chlorides for polyvinyl chloride polymers. PVC, a type of plastic, is widely used in construction, health care, and packaging. Moore’s work led to the development of additives that gave PVC pipes greater thermal and mechanical stability and improved their durability. Moore paid for her doctoral studies by working part time at the university, KMS Industries, and Datamax Corp., all in Ann Arbor. Joining KMS as a systems analyst, she supported the optics design staff and established computer requirements for the optics division. She left KMS in 1968 to become a senior analyst at Datamax. In that role, she headed the analytics group, which evaluated the company’s products. After earning her Ph.D. in 1972, she spent the next five years as a postdoctoral fellow and lecturer with the university’s Macromolecular Research Center. She authored more than a dozen papers on protein spectroscopy—the science of analyzing proteins’ structure, composition, and activity by measuring how they interact with electromagnetic radiation. Her work appeared in several prestigious publications, including the Journal of Applied Physics, The Journal of Chemical Physics, and the Journal of Molecular Spectroscopy. Despite a promising career in academia, Moore left to work in industry.

Ford’s quality control queen

Moore joined Ford in 1977 as an assembly engineer.
In an interview with The Ann Arbor News, she recalled contending with racial hostility and overt accusations that she was underqualified and had been hired only to fill a quota as part of the company’s affirmative action program. She demonstrated her value to the organization and became an expert in Japanese methods of quality engineering and manufacturing, particularly those developed by Genichi Taguchi, a renowned engineer and statistician. The Taguchi method emphasized continuous improvement, waste reduction, and employee involvement in projects. Moore pushed Ford to use the approach, which led to higher-quality products and better efficiency. The changes proved critical to boosting the company’s competitiveness against Japanese automakers, which had begun to dominate the automobile market in the late 1970s and early 1980s. Eventually, Moore rose to the company’s executive ranks, overseeing the warranty department of Ford’s assembly operation. In 1985, Moore co-wrote the book Quality Engineering Products and Process Design Optimization with Yuin Wu, vice president of Taguchi Methods Training at ASI Consulting Group in Bingham Farms, Mich. ASI helps businesses develop strategies for improving productivity, engineering, and product quality. In their book, Moore and Wu wrote, “Philosophically, the Taguchi approach is technology rather than theory. It is inductive rather than deductive. It is an engineering tool. The Taguchi approach is concerned with productivity enhancement and cost-effectiveness.”

Encouraging more Blacks to study STEM

Moore was active in STEM education for minorities, as explored in an article about her published by the American Physical Society. She brought her skills and experience to volunteer activities, intending to produce more STEM professionals who looked like her. She was involved in community science and math programs in Ann Arbor sponsored by The Links, a service organization for Black women.
She also was active with Delta Sigma Theta, a historically Black, service-oriented sorority. She volunteered with the Saturday Academy, a community mentoring program that focuses on developing college-bound students’ life skills. Volunteers also provide subject-matter instruction. She advised minority engineering students: “There will always be prejudiced people; you’ve got to be prepared to survive in spite of their attitudes.” Black students she encountered recall her oft-repeated mantra: “You’ve got to be excellent!”

In a posthumous tribute essay about Moore, Walton recalled befriending her at the Saturday Academy while tutoring middle and high school students in science and mathematics. “Don Coleman, the former associate provost at Howard University and a good friend of mine,” Walton wrote, “noted that Dr. Hobbs Moore had tutored him when he was an engineering student at the University of Michigan. [Coleman] recalled that she taught the fundamentals and always made him feel as though she was merely reminding him of what he already knew rather than teaching him unfamiliar things.”

Walton recalled how dedicated Moore was to ensuring Black students were prepared to follow in her footsteps. He said she was a mainstay at the Saturday Academy until her 24-year battle with cancer made it impossible for her to continue. She was posthumously honored with the Bouchet Award at the National Conference of Black Physics Students in 1995. Edward A. Bouchet was the first Black person to earn a Ph.D. in a science (physics) in the United States. Walton, who said he admired Moore for her determination to light the way for succeeding generations, says the programs that helped him as a young student are no longer being pursued with the fervor they once were.
“Particularly right now,” he told the American Institute of Physics in 2024, “we’re seeing a retrenchment, a backlash against programs and initiatives that deal with the historical underrepresentation of women and other people who we know have a history in the United States of being excluded. And if we don’t have interventions in place, there’s nothing to say that it won’t continue.” In the interview, Walton said he is concerned that instead of there being more STEM professionals like Moore, there might be fewer.

A lasting legacy

Moore’s life is a testament to perseverance, excellence, and the power of mentorship. Her achievements prove that it’s possible to overcome the inertia of low societal expectations and improve the world.

The biography Willie Hobbs Moore—You’ve Got to Be Excellent! is available for free to members; the nonmember price is US $2.99.

Video Friday: PARTNR

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA

Enjoy today’s videos!

There is an immense amount of potential for innovation and development in the field of human-robot collaboration — and we’re excited to release Meta PARTNR, a research framework that includes a large-scale benchmark, dataset, and large planning model to jump-start additional research in this exciting field. [ Meta PARTNR ]

Humanoid is the first AI and robotics company in the UK, creating the world’s leading, commercially scalable, and safe humanoid robots. [ Humanoid ]

To complement our review paper, “Grand Challenges for Burrowing Soft Robots,” we present a compilation of soft burrowers, both organic and robotic. Soft organisms use specialized mechanisms for burrowing in granular media, which have inspired the design of many soft robots. To improve the burrowing efficacy of soft robots, there are many grand challenges that must be addressed by roboticists.
[ Faboratory Research ] at [ Yale University ]

Three small lunar rovers were packed up at NASA’s Jet Propulsion Laboratory for the first leg of their multistage journey to the Moon. These suitcase-size rovers, along with a base station and camera system that will record their travels on the lunar surface, make up NASA’s CADRE (Cooperative Autonomous Distributed Robotic Exploration) technology demonstration. [ NASA ]

MenteeBot V3.0 is a fully vertically integrated humanoid robot, with full-stack AI and proprietary hardware. [ Mentee Robotics ]

What do assistance robots look like? From robotic arms attached to a wheelchair to autonomous robots that can pick up and carry objects on their own, assistive robots are making a real difference to the lives of people with limited motor control. [ Cybathlon ]

Robots cannot perform reactive manipulation, and they mostly operate open-loop while interacting with their environment. Consequently, current manipulation algorithms are either very inefficient or work only in highly structured environments. In this paper, we present closed-loop control of a complex manipulation task where a robot uses a tool to interact with objects. [ Paper ] via [ Mitsubishi Electric Research Laboratories ] Thanks, Yuki!

When the future becomes the present, anything is possible. In our latest campaign, “The New Normal,” we highlight the journey our riders experience from first seeing Waymo to relishing the magic of their first ride. How did your first-ride feeling change the way you think about the possibilities of AVs? [ Waymo ]

One of a humanoid robot’s unique advantages lies in its bipedal mobility, allowing it to navigate diverse terrains with efficiency and agility. This capability enables Moby to move freely through various environments and assist with high-risk tasks in critical industries like construction, mining, and energy.
[ UCR ]

Although robots are just tools to us, it’s still important to make them somewhat expressive so they can better integrate into our world. So, we created a small animation of the robot waking up—one that it executes all by itself! [ Pollen Robotics ]

In this live demo, an OTTO AMR expert will walk through the key differences between AGVs and AMRs, highlighting how OTTO AMRs address challenges that AGVs cannot. [ OTTO ] by [ Rockwell Automation ]

This Carnegie Mellon University Robotics Institute Seminar is from CMU’s Aaron Johnson, on “Uncertainty and Contact with the World.” As robots move out of the lab and factory and into more challenging environments, uncertainty in the robot’s state, dynamics, and contact conditions becomes a fact of life. In this talk, I’ll present some recent work on handling uncertainty in dynamics and contact conditions, in order to both reduce that uncertainty where we can and generate strategies that do not require perfect knowledge of the world state. [ CMU RI ]

Video Friday: Helix

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

We’re introducing Helix, a generalist Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control to overcome multiple longstanding challenges in robotics.

This is moderately impressive; my favorite part is probably the hand-offs and that extra little bit of HRI with what we’d call eye contact if these robots had faces. But keep in mind that you’re looking at close to the best case for robotic manipulation: if the robots had been given the bag instead of well-spaced objects on a single-color background, or if the fridge had a normal human amount of stuff in it, they might be having a much different time of it.

Also, is it just me, or is the sound on this video very weird? Like, some things make noise, some things don’t, and the robots themselves occasionally sound more like someone just added in some ‘soft actuator sound’ or something.
Also also, I’m of a suspicious nature, and when there is an abrupt cut between ‘robot grasps door’ and ‘robot opens door,’ I assume the worst. [ Figure ]

Researchers at EPFL have developed a highly agile flat swimming robot. This robot is smaller than a credit card and propels itself across the water surface using a pair of undulating soft fins. The fins are driven at resonance by artificial muscles, allowing the robot to perform complex maneuvers. In the future, this robot could be used to monitor water quality or to measure fertilizer concentrations in rice fields. [ Paper ] via [ Science Robotics ]

I don’t know about you, but I always dance better when getting beaten with a stick. [ Unitree Robotics ]

This is big news, people: Sweet Bite Ham Ham, one of the greatest and most useless robots of all time, has a new treat. All yours for about $100, overseas shipping included. [ Ham Ham ] via [ Robotstart ]

MagicLab has announced the launch of its first-generation self-developed dexterous hand, the MagicHand S01. The MagicHand S01 has 11 degrees of freedom in a single hand, a load capacity of up to 5 kilograms per hand, and, in work environments, can carry loads of over 20 kilograms. [ MagicLab ] Thanks, Ni Tao!

No, I’m not creeped out at all, why? [ Clone Robotics ]

Happy 40th Birthday to the MIT Media Lab! Since 1985, the MIT Media Lab has provided a home for interdisciplinary research, transformative technologies, and innovative approaches to solving some of humanity’s greatest challenges. As we celebrate our 40th anniversary year, we’re looking ahead to decades more of imagining, designing, and inventing a future in which everyone has the opportunity to flourish.
[ MIT Media Lab ]

While most soft pneumatic grippers that operate with a single control parameter (such as pressure or airflow) are limited to a single grasping modality, this article introduces a new method for incorporating multiple grasping modalities into vacuum-driven soft grippers. This is achieved by combining stiffness manipulation with a bistable mechanism. Adjusting the airflow tunes the energy barrier of the bistable mechanism, enabling changes in triggering sensitivity and allowing swift transitions between grasping modes. The result is an exceptionally versatile gripper capable of handling a diverse range of objects with varying sizes, shapes, stiffness, and roughness, controlled by a single parameter, airflow, and its interaction with objects. [ Paper ] via [ BruBotics ] Thanks, Bram!

In this article, we present a design concept in which a monolithic soft body is incorporated with a vibration-driven mechanism, called Leafbot. This investigation aims to build a foundation for further terradynamics study of vibration-driven soft robots in more complicated and confined environments, with potential applications in inspection tasks. [ Paper ] via [ IEEE Transactions on Robotics ]

We present a hybrid aerial-ground robot that combines the versatility of a quadcopter with enhanced terrestrial mobility. The vehicle features a passive, reconfigurable single-wheeled leg, enabling seamless transitions between flight and two ground modes: a stable stance and a dynamic cruising configuration. [ Robotics and Intelligent Systems Laboratory ]

I’m not sure I’ve ever seen this trick performed by a robot with soft fingers before. [ Paper ]

There are a lot of robots involved in car manufacturing. Like, a lot. [ Kawasaki Robotics ]

Steve Willits shows us some recent autonomous drone work being done at the AirLab at CMU’s Robotics Institute. [ Carnegie Mellon University Robotics Institute ]

Somebody’s got to test all those luxury handbags and purses.
And by somebody, I mean somerobot. [ Qb Robotics ]

Do not trust people named Evan. [ Tufts University Human-Robot Interaction Lab ]

Meet the Mind: MIT Professor Andreea Bobu. [ MIT ]

Reinforcement Learning Triples Spot’s Running Speed

About a year ago, Boston Dynamics released a research version of its Spot quadruped robot, which comes with a low-level application programming interface (API) that allows direct control of Spot’s joints. Even back then, the rumor was that this API unlocked some significant performance improvements on Spot, including a much faster running speed. That rumor came from the Robotics and AI (RAI) Institute, formerly The AI Institute, formerly the Boston Dynamics AI Institute, and if you were at Marc Raibert’s talk at the ICRA@40 conference in Rotterdam last fall, you already know that it turned out not to be a rumor at all.

Today, we’re able to share some of the work that the RAI Institute has been doing to apply reality-grounded reinforcement learning techniques to enable much higher performance from Spot. The same techniques can also help highly dynamic robots operate robustly, and there’s a brand-new hardware platform that shows this off: an autonomous bicycle that can jump.

See Spot Run

This video shows Spot running at a sustained speed of 5.2 meters per second (11.6 miles per hour). Out of the box, Spot’s top speed is 1.6 meters per second, meaning that RAI’s Spot has more than tripled (!) the quadruped’s factory speed.

If Spot running this quickly looks a little strange, that’s probably because it is strange, in the sense that the way this robot dog’s legs and body move as it runs is not very much like a real dog at all. “The gait is not biological, but the robot isn’t biological,” explains Farbod Farshidian, a roboticist at the RAI Institute. “Spot’s actuators are different from muscles, and its kinematics are different, so a gait that’s suitable for a dog to run fast isn’t necessarily best for this robot.” The closest Farshidian can come to categorizing the way Spot moves is that it’s somewhat similar to a trotting gait, except with an added flight phase (with all four feet off the ground at once) that technically turns it into a run.
This flight phase is necessary, Farshidian says, because the robot needs that time to successively pull its feet forward fast enough to maintain its speed. This is a “discovered behavior,” in that the robot was not explicitly programmed to “run,” but rather was just required to find the best way of moving as fast as possible.

Reinforcement Learning Versus Model Predictive Control

The Spot controller that ships with the robot when you buy it from Boston Dynamics is based on model predictive control (MPC), which involves creating a software model that approximates the dynamics of the robot as best you can, and then solving an optimization problem for the tasks that you want the robot to do in real time. It’s a very predictable and reliable method for controlling a robot, but it’s also somewhat rigid, because that original software model won’t be close enough to reality to let you really push the limits of the robot. And if you try to say, “Okay, I’m just going to make a super detailed software model of my robot and push the limits that way,” you get stuck, because the optimization problem has to be solved for whatever you want the robot to do, in real time, and the more complex the model is, the harder it is to solve quickly enough to be useful.

Reinforcement learning (RL), on the other hand, learns offline. You can use as complex a model as you want, and then take all the time you need in simulation to train a control policy that can then be run very efficiently on the robot.

In simulation, a couple of Spots (or hundreds of Spots) can be trained in parallel for robust real-world performance. [ Robotics and AI Institute ]

In the example of Spot’s top speed, it’s simply not possible to model every last detail for all of the robot’s actuators within a model-based control system that would run in real time on the robot.
So instead, simplified (and typically very conservative) assumptions are made about what the actuators are actually doing so that you can expect safe and reliable performance. Farshidian explains that these assumptions make it difficult to develop a useful understanding of what the performance limitations actually are. “Many people in robotics know that one of the limitations of running fast is that you’re going to hit the torque and velocity maximum of your actuation system. So, people try to model that using the data sheets of the actuators. For us, the question that we wanted to answer was whether there might exist some other phenomena that was actually limiting performance.”

Searching for these other phenomena involved bringing new data into the reinforcement learning pipeline, like detailed actuator models learned from the real-world performance of the robot. In Spot’s case, that provided the answer to high-speed running. It turned out that what was limiting Spot’s speed was not the actuators themselves, nor any of the robot’s kinematics: It was simply the batteries not being able to supply enough power. “This was a surprise for me,” Farshidian says, “because I thought we were going to hit the actuator limits first.”

Spot’s power system is complex enough that there’s likely some additional wiggle room, and Farshidian says the only thing that prevented them from pushing Spot’s top speed past 5.2 m/s is that they didn’t have access to the battery voltages, so they weren’t able to incorporate that real-world data into their RL model. “If we had beefier batteries on there, we could have run faster. And if you model that phenomena as well in our simulator, I’m sure that we can push this farther.”

Farshidian emphasizes that RAI’s technique is about much more than just getting Spot to run fast—it could also be applied to making Spot move more efficiently to maximize battery life, or more quietly to work better in an office or home environment.
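Farshidian’s battery finding is straightforward to capture in a simulator with a simple power-budget model. The sketch below is a hypothetical toy model (not the RAI Institute’s actual actuator model): available joint torque is capped not only by the actuator’s data-sheet limit but also by the electrical power the battery can deliver at the current joint velocity.

```python
def power_limited_torque(tau_cmd, omega, tau_max, p_budget):
    """Clip a commanded joint torque (toy model).

    tau_cmd:  commanded torque, N*m
    omega:    current joint velocity, rad/s
    tau_max:  actuator torque limit from the data sheet, N*m
    p_budget: electrical power available to this joint, W
    """
    # Actuator-side limit: the data-sheet torque bound.
    tau = max(-tau_max, min(tau_max, tau_cmd))
    # Battery-side limit: mechanical power |tau * omega| must stay
    # within the budget, so less torque is available at high speed.
    if abs(omega) > 1e-9:
        tau_cap = p_budget / abs(omega)
        tau = max(-tau_cap, min(tau_cap, tau))
    return tau
```

At low speed the data-sheet limit binds; near top speed the power cap binds instead, so a battery shortfall can masquerade as an actuator limit, which is exactly the kind of hidden phenomenon the team was hunting for.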
Essentially, this is a generalizable tool that can find new ways of expanding the capabilities of any robotic system. And when real-world data is used to make a simulated robot better, you can ask the simulation to do more, with confidence that those simulated skills will successfully transfer back onto the real robot.

Ultra Mobility Vehicle: Teaching Robot Bikes to Jump

Reinforcement learning isn’t just good for maximizing the performance of a robot—it can also make that performance more reliable. The RAI Institute has been experimenting with a completely new kind of robot that it invented in-house: a little jumping bicycle called the Ultra Mobility Vehicle, or UMV, which was trained to do parkour using essentially the same RL pipeline for balancing and driving as was used for Spot’s high-speed running.

There’s no independent physical stabilization system (like a gyroscope) keeping the UMV from falling over; it’s just a normal bike that can move forwards and backwards and turn its front wheel. As much mass as possible is then packed into the top bit, which actuators can rapidly accelerate up and down.

“We’re demonstrating two things in this video,” says Marco Hutter, director of the RAI Institute’s Zurich office. “One is how reinforcement learning helps make the UMV very robust in its driving capabilities in diverse situations. And second, how understanding the robots’ dynamic capabilities allows us to do new things, like jumping on a table which is higher than the robot itself.”

“The key of RL in all of this is to discover new behavior and make this robust and reliable under conditions that are very hard to model. That’s where RL really, really shines.” —Marco Hutter, The RAI Institute

As impressive as the jumping is, for Hutter, it’s just as difficult (if not more difficult) to do maneuvers that may seem fairly simple, like riding backwards. “Going backwards is highly unstable,” Hutter explains.
“At least for us, it was not really possible to do that with a classical [MPC] controller, particularly over rough terrain or with disturbances.”

Getting this robot out of the lab and onto terrain to do proper bike parkour is a work in progress that the RAI Institute says it will be able to demonstrate in the near future, but it’s really not about what this particular hardware platform can do—it’s about what any robot can do through RL and other learning-based methods, says Hutter. “The bigger picture here is that the hardware of such robotic systems can in theory do a lot more than we were able to achieve with our classic control algorithms. Understanding these hidden limits in hardware systems lets us improve performance and keep pushing the boundaries on control.”

Teaching the UMV to drive itself down stairs in sim results in a real robot that can handle stairs at any angle. [ Robotics and AI Institute ]

Reinforcement Learning for Robots Everywhere

Just a few weeks ago, the RAI Institute announced a new partnership with Boston Dynamics “to advance humanoid robots through reinforcement learning.” Humanoids are just another kind of robotic platform, albeit a significantly more complicated one with many more degrees of freedom and things to model and simulate. But when considering the limitations of model predictive control for this level of complexity, a reinforcement learning approach seems almost inevitable, especially when such an approach is already streamlined due to its ability to generalize.

“One of the ambitions that we have as an institute is to have solutions which span across all kinds of different platforms,” says Hutter. “It’s about building tools, about building infrastructure, building the basis for this to be done in a broader context. So not only humanoids, but driving vehicles, quadrupeds, you name it.
But doing RL research and showcasing some nice first proof of concept is one thing—pushing it to work in the real world under all conditions, while pushing the boundaries in performance, is something else.”

Transferring skills into the real world has always been a challenge for robots trained in simulation, precisely because simulation is so friendly to robots. “If you spend enough time,” Farshidian explains, “you can come up with a reward function where eventually the robot will do what you want. What often fails is when you want to transfer that sim behavior to the hardware, because reinforcement learning is very good at finding glitches in your simulator and leveraging them to do the task.”

Simulation has been getting much, much better, with new tools, more accurate dynamics, and lots of computing power to throw at the problem. “It’s a hugely powerful ability that we can simulate so many things, and generate so much data almost for free,” Hutter says. But the usefulness of that data is in its connection to reality, making sure that what you’re simulating is accurate enough that a reinforcement learning approach will in fact solve for reality. Bringing physical data collected on real hardware back into the simulation, Hutter believes, is a very promising approach, whether it’s applied to running quadrupeds or jumping bicycles or humanoids. “The combination of the two—of simulation and reality—that’s what I would hypothesize is the right direction.”
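Hutter’s “combination of simulation and reality” often takes a concrete form: log joint velocities and torques on the real robot, fit a simple actuator model to that data, and plug the fitted model back into the simulator. The sketch below uses a hypothetical friction model and made-up data (not the RAI Institute’s pipeline), fitting tau_loss ≈ c0*sign(omega) + c1*omega by ordinary least squares:

```python
def fit_friction_model(omegas, tau_losses):
    """Fit tau_loss ~ c0*sign(omega) + c1*omega by ordinary least
    squares, using closed-form normal equations for the two params.

    omegas:     logged joint velocities, rad/s
    tau_losses: logged torque losses at those velocities, N*m
    """
    sgn = [(1.0 if w > 0 else -1.0) for w in omegas]  # sign feature
    # Normal-equation sums for features x1 = sign(w), x2 = w.
    s11 = sum(s * s for s in sgn)  # equals len(omegas)
    s12 = sum(s * w for s, w in zip(sgn, omegas))
    s22 = sum(w * w for w in omegas)
    b1 = sum(s * t for s, t in zip(sgn, tau_losses))
    b2 = sum(w * t for w, t in zip(omegas, tau_losses))
    det = s11 * s22 - s12 * s12  # nonzero when the data is informative
    c0 = (b1 * s22 - b2 * s12) / det
    c1 = (s11 * b2 - s12 * b1) / det
    return c0, c1
```

With the fitted c0 (Coulomb friction) and c1 (viscous friction), the simulator can subtract a realistic loss torque at each joint, narrowing exactly the kind of sim-to-real gap that Farshidian says RL policies are so good at exploiting.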
