Physical media fans need not panic yet—you’ll still be able to buy new Blu-ray movies for your collection. But for those who like to save copies of their own data onto the discs, the remaining options just became more limited: Sony announced last week that it’s ending all production of several recordable media formats—including Blu-ray discs, MiniDiscs, and MiniDV cassettes—with no successor models. “Considering the market environment and future growth potential of the market, we have decided to discontinue production,” a Sony representative said in a brief statement to IEEE Spectrum. Though availability is dwindling, most Blu-ray discs are unaffected. The discs being discontinued are currently sold only to consumers in Japan and in some professional markets elsewhere, according to Sony. Many consumers in Japan use blank Blu-ray discs to save TV programs, Sony separately told Gizmodo. Sony, which prototyped the first Blu-ray discs in 2000, has been selling commercial Blu-Ray...
9 hours ago

More from IEEE Spectrum

AIs and Robots Should Sound Robotic

AI can now generate voices that mimic every vocal nuance and tic of human speech, down to specific regional accents. And with just a few seconds of audio, AI can clone someone’s specific voice. AI agents will make calls on our behalf, conversing with others in natural language. All of that is happening, and will be commonplace soon. You can’t just label AI-generated speech. It will come in many different forms. So we need a way to recognize AI that works no matter the modality. It needs to work for long or short snippets of audio, even just a second long. It needs to work for any language, and in any cultural context. At the same time, we shouldn’t constrain the underlying system’s sophistication or language complexity. We have a simple proposal: all talking AIs and robots should use a ring modulator. In the mid-twentieth century, before it was easy to create actual robotic-sounding speech synthetically, ring modulators were used to make actors’ voices sound robotic. Over the last few decades, we have become accustomed to robotic voices, simply because text-to-speech systems were good enough to produce intelligible speech that was not human-like in its sound. Now we can use that same technology to make robotic speech that is indistinguishable from human voices sound robotic again. Responsible AI companies that provide voice synthesis or AI voice assistants in any form should add a ring modulator of some standard frequency (say, between 30 and 80 Hz) and of a minimum amplitude (say, 20 percent). That’s it. People will catch on quickly. Here are a couple of clips you can listen to for a sense of what we’re suggesting. The first is an AI-generated “podcast” of this article made by Google’s NotebookLM, featuring two AI “hosts.” NotebookLM created the podcast script and audio given only the text of this article.
The next two clips feature that same podcast with the AIs’ voices modulated more and less subtly by a ring modulator: Raw audio sample generated by Google’s NotebookLM Audio sample with added ring modulator (30 Hz, 25 percent) Audio sample with added ring modulator (30 Hz, 40 percent) We were able to generate the audio effect with a 50-line Python script generated by Anthropic’s Claude. Among the most well-known robot voices were those of the Daleks from Doctor Who in the 1960s. Back then, robot voices were difficult to synthesize, so the audio was actually an actor’s voice run through a ring modulator. It was set to around 30 Hz, as in our example, with the modulation depth (amplitude) varied depending on how strong the robotic effect was meant to be. Our expectation is that the AI industry will test and converge on a good balance of such parameters and settings, and will use better tools than a 50-line Python script, but this highlights how simple the effect is to achieve. We don’t expect scammers to follow our proposal: They’ll find a way no matter what. But that’s always true of security standards, and a rising tide lifts all boats. We think the bulk of the uses will be with popular voice APIs from major companies, and everyone should know that they’re talking with a robot.
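The authors’ 50-line Claude-generated script isn’t reproduced in the article, but the core effect is only a few lines of signal processing: multiply the audio by a low-frequency sine carrier and blend it with the original according to the modulation depth. A minimal sketch of our own (the function name, sample rate, and test tone are illustrative, not the authors’ code):

```python
import numpy as np

def ring_modulate(samples, sample_rate, carrier_hz=30.0, depth=0.25):
    """Blend a signal with its ring-modulated copy.

    depth=0 leaves the audio untouched; depth=1 is full ring modulation
    (pure multiplication by the carrier sine wave).
    """
    t = np.arange(len(samples)) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return (1.0 - depth) * samples + depth * samples * carrier

# One second of a 220 Hz tone, modulated at 30 Hz and 25 percent depth,
# matching the parameters of the first modulated clip above.
rate = 16_000
tone = np.sin(2 * np.pi * 220 * np.arange(rate) / rate)
robotic = ring_modulate(tone, rate, carrier_hz=30.0, depth=0.25)
```

Writing the result to a WAV file (for example with Python’s standard `wave` module) lets you hear the low-frequency warble the authors propose as an audible marker of machine speech.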

4 hours ago 1 votes
Just How Many Robots Can One Person Control at Once?

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore. Swarms of autonomous robots are increasingly being tested and deployed in complex missions, yet a certain level of human oversight during these missions is still required. That means a major question remains: How many robots—and how complex a mission—can a single human manage before becoming overwhelmed? In a study funded by the U.S. Defense Advanced Research Projects Agency (DARPA), experts show that humans can single-handedly and effectively manage a heterogeneous swarm of more than 100 autonomous ground and aerial vehicles, while feeling overwhelmed only for brief periods during a small portion of the mission overall. For instance, in a particularly challenging, multiday experiment in an urban setting, human controllers were overloaded with stress and workload only 3 percent of the time. The results were published 19 November in IEEE Transactions on Field Robotics. Julie A. Adams, the associate director of research at Oregon State University’s Collaborative Robotics and Intelligent Systems Institute, has been studying human interactions with robots and other complex systems, such as aircraft cockpits and nuclear power plant control rooms, for 35 years. She notes that robot swarms can be used to support missions where work may be particularly dangerous and hazardous for humans, such as monitoring wildfires. “Swarms can be used to provide persistent coverage of an area, such as monitoring for new fires or looters in the recently burned areas of Los Angeles,” Adams says.
“The information can be used to direct limited assets, such as firefighting units or water tankers, to new fires and hotspots, or to locations at which fires were thought to have been extinguished.” These kinds of missions can involve a mix of many different kinds of unmanned ground vehicles (such as the Aion Robotics R1 wheeled robot) and aerial autonomous vehicles (like the ModalAI VOXL M500 quadcopter), and a human controller may need to reassign individual robots to different tasks as the mission unfolds. Notably, some theories over the past few decades—and even Adams’ early thesis work—suggest that a single human has limited capacity to deploy very large numbers of robots. “These historical theories and the associated empirical results showed that as the number of ground robots increased, so did the human’s workload, which often resulted in reduced overall performance,” says Adams, noting that, although earlier research focused on unmanned ground vehicles (UGVs), which must deal with curbs and other physical barriers, unmanned aerial vehicles (UAVs) often encounter fewer physical barriers. Human controllers managed their swarms of autonomous vehicles with a virtual display. The fuchsia ring represents the area the person could see within their head-mounted display. DARPA As part of DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, Adams and her colleagues sought to explore whether these theories applied to very complex missions involving a mix of unmanned ground and air vehicles. In November 2021, at Fort Campbell in Kentucky, two human controllers took turns engaging in a series of missions over the course of three weeks with the objective of neutralizing an adversarial target. Both human controllers had significant experience controlling swarms, and participated in alternating shifts that ranged from 1.5 to 3 hours per day.
Testing How Big of a Swarm Humans Can Manage

During the tests, the human controllers were positioned in a designated area on the edge of the testing site, and used a virtual reconstruction of the environment to keep tabs on where vehicles were and what tasks they were assigned to. The largest mission shift involved 110 drones, 30 ground vehicles, and up to 50 virtual vehicles representing additional real-world vehicles. The robots had to navigate through the physical urban environment, as well as a series of virtual hazards represented using AprilTags—simplified QR codes that could represent imaginary hazards—that were scattered throughout the mission site. DARPA made the final field exercise exceptionally challenging by providing thousands of hazards and pieces of information to inform the search. “The complexity of the hazards was significant,” Adams says, noting that some hazards required multiple robots to interact with them simultaneously, and some hazards moved around the environment. Throughout each mission shift, the human controller’s physiological responses to the tasks at hand were monitored. For example, sensors collected data on heart-rate variability, posture, and even speech rate. The data were input into an established algorithm that estimates workload levels, which was used to determine when the controller was reaching a workload level that exceeded a normal range, called an “overload state.” Adams notes that, despite the complexity and the large number of robots to manage in this field exercise, the overload states were relatively few and brief—a handful of minutes during a mission shift. “The total percentage of estimated overload states was 3 percent of all workload estimates across all shifts for which we collected data,” she says.
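The study’s workload estimator itself isn’t spelled out here, but the headline statistic—the share of estimates that fall in an overload state—is a simple computation once per-interval workload scores exist. A hedged sketch (the function name, threshold, and sample values are our own illustration, not the study’s algorithm):

```python
def overload_summary(workload, upper_bound):
    """Fraction of workload estimates above the normal range, plus the
    longest consecutive run of overloaded samples.

    `workload` is a sequence of scalar workload estimates (one per
    monitoring interval); `upper_bound` is the top of the operator's
    normal range. Both are illustrative placeholders.
    """
    over = [w > upper_bound for w in workload]
    frac = sum(over) / len(over)
    longest = run = 0
    for flag in over:
        run = run + 1 if flag else 0
        longest = max(longest, run)
    return frac, longest

# 20 interval estimates, one of which exceeds the normal range:
frac, longest = overload_summary([0.4] * 19 + [0.9], upper_bound=0.8)
```

Multiplying the overload fraction by shift length gives the “handful of minutes” figure quoted above.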
The most common reason for a human commander to reach an overload state was having to generate multiple new tactics or inspect which vehicles in the launch zone were available for deployment. Adams notes that these findings suggest that—counter to past theories—the number of robots may be less influential on human swarm-control performance than previously thought. Her team is exploring other factors that may impact swarm-control missions, such as other human limitations, system designs, and UAS designs; the results will potentially inform U.S. Federal Aviation Administration drone regulations, she says.

4 days ago 4 votes
Video Friday: Hottest On The Ice

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC ICRA 2025: 19–23 May 2025, ATLANTA, GA IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN RSS 2025: 21–25 June 2025, LOS ANGELES IAS 2025: 30 June–4 July 2025, GENOA, ITALY ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL IEEE World Haptics: 8–11 July 2025, SUWON, KOREA IFAC Symposium on Robotics: 15–18 July 2025, PARIS RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL Enjoy today’s videos! Are wheeled quadrupeds going to run out of crazy new ways to move anytime soon? Looks like maybe not. [ DEEP Robotics ] A giant eye and tiny feet make this pipe inspection robot exceptionally cute, I think. [ tmsuk ] via [ Robotstart ] Agility seems to be one of the few humanoid companies talking seriously about safety. [ Agility Robotics ] A brain-computer interface, surgically placed in a research participant with tetraplegia, paralysis in all four limbs, provided an unprecedented level of control over a virtual quadcopter—just by thinking about moving their unresponsive fingers. In this video, you’ll see just how the participant of the study controlled the virtual quadcopter using their brain’s thought signals to move a virtual hand controller. [ University of Michigan ] Hair styling is a crucial aspect of personal grooming, significantly influenced by the appearance of front hair. While brushing is commonly used both to detangle hair and for styling purposes, existing research primarily focuses on robotic systems for detangling hair, with limited exploration into robotic hair styling. 
This research presents a novel robotic system designed to automatically adjust front hairstyles, with an emphasis on path planning for root-centric strand adjustment. [ Paper ] Thanks, Kento! If I’m understanding this correctly, if you’re careful it’s possible to introduce chaos into a blind juggling robot to switch synced juggling to alternate juggling. [ ETH Zurich ] Drones with beaks? Sure, why not. [ GRVC ] Check out this amazing demo preview video we shot in our offices here at OLogic prior to CES 2025. OLogic built this demo robot for MediaTek to show off all kinds of cool things running on a MediaTek Genio 700 processor. The robot is a Create3 base with a custom tower (similar to a TurtleBot) using a Pumpkin Genio 700 EVK, plus a lidar and an Orbbec Gemini 335 camera on it. The robot is running ROS 2 Nav and finds colored balls on the floor using an NVIDIA TAO model running on the Genio 700 and adds them to the map so the robot can find them. You can direct the robot through RViz to go pick up a ball and move it to wherever you want on the map. [ OLogic ] We explore the potential of multimodal large language models (LLMs) for enabling autonomous trash pickup robots to identify objects characterized as trash in complex, context-dependent scenarios. By constructing evaluation datasets with human agreement annotations, we demonstrate that LLMs excel in visually clear cases with high human consensus, while performance is lower in ambiguous cases, reflecting human uncertainty. To validate real-world applicability, we integrate GPT-4o with an open-vocabulary object detector and deploy it on a quadruped with a manipulator arm, using ROS 2, showing that it is possible to use this information for autonomous trash pickup in practical settings. [ University of Texas at Austin ]

6 days ago 8 votes
A Spy Satellite You’ve Never Heard of Helped Win the Cold War

In the early 1970s, the Cold War had reached a particularly frigid moment, and U.S. military and intelligence officials had a problem. The Soviet Navy was becoming a global maritime threat—and the United States did not have a global ocean-surveillance capability. Adding to the alarm was the emergence of a new Kirov class of nuclear-powered guided-missile battle cruisers, the largest Soviet vessels yet. For the United States, this situation meant that the perilous equilibrium of mutual assured destruction, MAD, which so far had dissuaded either side from launching a nuclear strike, could tilt in the wrong direction. The answer was a satellite program called Parcae, meant to help keep the Cold War from suddenly toggling to hot. The engineers working on Parcae would have to build the most capable orbiting electronic intelligence system ever. A Parcae satellite was just a few meters long, but it had four solar panels that extended several meters out from the body of the satellite. The rod emerging from the satellite was a gravity boom, which kept the orbiter’s signal antennas oriented toward Earth. NRO According to Dwayne Day, a historian of space technology for the National Academy of Sciences, the United States conducted large naval exercises in 1971, with U.S. ships broadcasting signals and several types of ELINT satellites attempting to detect them. The tests revealed worrisome weaknesses in the country’s intelligence-gathering satellite systems. One of the big advances of the Parcae program was a dispenser that could loft three satellites, which then functioned together in orbit as a group. Seen here are three Parcae satellites on the dispenser. Arthur Collier Even the mere existence of the satellites, which would be built by a band of veteran engineers at the U.S. Naval Research Laboratory (NRL) in Washington, D.C., would remain officially secret until July 2023. That’s when the National Reconnaissance Office declassified a one-page acknowledgment about Parcae.
Since its establishment in 1961, the NRO has directed and overseen the nation’s spy-satellite programs, including ones for photoreconnaissance, communications interception, signals intelligence, and radar. With this scant declassification, the Parcae program could at least be celebrated by name and its overall mission revealed during the NRL’s centennial celebration that year. Aspects of the Parcae program had been unofficially outed over the years by a few enterprising journalists in such venues as Aviation Week & Space Technology and The Space Review, by historians like Day, and even by a Russian military advisor in a Ministry of Defense journal. This article is based on these sources, along with additional interviews and written input from Navy engineers who designed, built, operated, and managed Parcae and its precursor satellite systems. They confirm a commonly held but nevertheless profound understanding about the United States during that era. Simply put, there was nothing quite like the paranoia and high stakes of the Cold War to spur engineers into creative frenzies that rapidly produced brilliant national-security technologies, including surveillance systems like Parcae.

A Spy Satellite with a Cosmic Cover Name

Although the NRO authorized and paid for Parcae, the responsibility to actually design and build it fell to the cold-warrior engineers at NRL and their contractor-partners at such places as Systems Engineering Laboratories and HRB Singer, a signal-analysis and -processing firm in State College, Pa. Parcae’s earliest precursor carried the Galactic Radiation and Background (GRAB) experiment, a name that served as cover for the satellite’s secret payload; it also had a bona fide solar-science payload housed in the same shell [see sidebar, “From Quartz-Crystal Detectors to Eavesdropping Satellites”]. On 22 June 1960, GRAB made it into orbit to become the world’s first spy satellite, though there was no opportunity to brag about it.
The existence of GRAB’s classified mission was an official secret until 1998. A second GRAB satellite launched in 1961, and the pair of satellites monitored Soviet radar systems for the National Security Agency and the Strategic Air Command. The NSA, headquartered at Fort Meade, Md., is responsible for many aspects of U.S. signals intelligence, notably intercepting and decrypting sensitive communications all over the world and devising machines and algorithms that protect U.S. official communications. The SAC was until 1992 in charge of the country’s strategic bombers and intercontinental ballistic missiles. The Poppy Block II satellites, which had a diameter of 61 centimeters, were outfitted with antennas to pick up signals from Soviet radars [top]. The signals were recorded and retransmitted to ground stations, such as this receiving console photographed in 1965, designated A-GR-2800. NRO The GRAB satellites tracked several thousand Soviet air-defense radars scattered across the vast Soviet landmass, picking up the radars’ pulses and transmitting them to ground stations in friendly countries around the world. It could take months to eke out useful intelligence from the data, which was hand-delivered to NSA and SAC. There, analysts would examine the data for “signals of interest,” like the proverbial needle in a haystack, interpret their significance, and package the results into reports. All this took days if not weeks, so GRAB data was mostly relevant for overall situational awareness and longer-term strategic planning. GRAB’s successor program, Poppy, was declassified in 2004. With multiple satellites in orbit, Poppy could geolocate emission sources, at least roughly. During the Poppy program, the NRL satellite team showed it was even possible, in principle, to get this information to end users within hours or even less by relaying it directly to ground stations, rather than recording the data first. These first instances of rapidly delivered intelligence fired the imaginations, and expectations, of U.S.
national-security leaders and offered a glimpse of the ocean-surveillance capabilities they wanted Parcae to provide.

How Parcae Inspired Modern Satellite Signals Intelligence

The first of the 12 Parcae missions launched in 1976 and the last, 20 years later. Over its long lifetime, the program had other cryptic cover names, among them White Cloud and Classic Wizard. According to NRO’s declassification memo, it stopped using the Parcae satellites in May 2008. Originally designed as an intercontinental ballistic missile (ICBM), the Atlas F was later repurposed to launch satellites, including Parcae. Peter Hunter Photo Collections Each Parcae launch used an Atlas F rocket to deliver three satellites in precise orbital formations, which were essential for their geolocation and tracking functions. (Later launches used the larger Titan IV-A rocket.) This triple launching capability was achieved with a satellite dispenser designed and built by an NRL team led by Peter Wilhelm. As chief engineer for NRL’s satellite-building efforts for some 60 years until his retirement in 2015, Wilhelm directed the development of more than 100 satellites, some of them still classified. The satellites generally worked in clusters of three (the name Parcae comes from the three Fates of Roman mythology), each detecting the radar and radio emissions from Soviet ships. To pinpoint a ship, the satellites were equipped with highly precise, synchronized clocks. Tiny differences in the time when each satellite received the radar signals emitted from the ship were then used to triangulate the ship’s location. The calculated location was updated each time the satellites passed over. A GRAB satellite was prepared for launch in 1960. Peter Wilhelm is standing, at right, in a patterned shirt. NRO Transmissions from the GRAB satellites were received in “huts” [left], likely in a country just outside Soviet borders. In between the two banks of receivers in this photo is the wheel used for manually steering the antennas.
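The clock-based geolocation described above is what is now called time-difference-of-arrival (TDOA) multilateration: differencing arrival times against one reference receiver cancels the unknown emission time, and each remaining difference constrains the emitter to a hyperbola. A toy two-dimensional sketch (the positions, units, and brute-force grid search are our own illustration; the real system solved a far harder problem with moving satellites in three dimensions):

```python
import numpy as np

C = 3.0e8  # propagation speed (speed of light), m/s

def locate_tdoa(receivers, toas, grid):
    """Pick the grid point whose predicted arrival-time differences best
    match the measured ones. Differences are taken against receiver 0,
    which removes the unknown emission time from the problem."""
    meas = toas - toas[0]
    best, best_err = None, float("inf")
    for p in grid:
        d = np.linalg.norm(receivers - p, axis=1) / C
        err = np.sum(((d - d[0]) - meas) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# Three "satellites" in a line 100 km apart; an emitter below them.
rx = np.array([[0.0, 0.0], [100e3, 0.0], [200e3, 0.0]])
emitter = np.array([120e3, -80e3])
toas = np.linalg.norm(rx - emitter, axis=1) / C  # emission at t = 0

# Search a 5 km grid over the region below the receivers.
grid = np.array([[x, y] for x in np.arange(0, 200e3, 5e3)
                        for y in np.arange(-150e3, 0, 5e3)])
est = locate_tdoa(rx, toas, grid)
```

With exact timing, the grid point at the true emitter location matches all the time differences; real systems contend with clock noise, changing geometry, and mirror-image ambiguities, which is why the synchronized clocks had to be so precise.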
These Yagi antennas [right] were linearly polarized. NRO The downlinked data went first to the Naval Security Group Command, which performed encryption and data-security functions for the Navy. The data was then relayed via communications satellites to naval facilities worldwide, where it was correlated and turned into intelligence. That intelligence, in the form of Ships Emitter Locating Reports, went out to watch officers and commanders aboard ships at sea and other users. A report might include information about, for example, a newly detected radar signal—the type of radar, its frequencies, pulse, scan rates, and location.

Early Minicomputers Spotted Signals of Interest

To scour the otherwise overwhelming torrents of raw ELINT data for signals of interest, the Parcae program included an intelligence-analysis data-processing system built around then-high-end computers. These were likely produced by Systems Engineering Laboratories, in Fort Lauderdale, Fla. SEL had produced the SEL-810 and SEL-86 minicomputers used in the Poppy program. These machines included a “real-time interrupt capability,” which enabled the computers to halt data processing to accept and store new data and then resume the processing where it had left off. That feature was useful for a system like Parcae, which continually harvested data. Also crucial to ferreting out important signals was the data-processing software, supplied by vendors whose identities remain classified. The SEL-810 minicomputer was the heart of a data-processing system built to scour the torrents of raw data from the Poppy satellites for signals of interest. Computer History Museum Over time, the Ships Emitter Locating Reports evolved from crude teletype printouts derived from raw intercept data to more user-friendly forms such as automatically displayed maps.
The reports delivered the intelligence, security, or military meaning of the intercepts in formats that naval commanders and other end users on the ground and in the air could grasp quickly and put to use.

Parcae Tech and the 2-Minute Warning

Harvesting and pinpointing radar signatures, though difficult to pull off, wasn’t even the most sobering tech challenge. Even more daunting was Parcae’s requirement to deliver “sensor-to-shooter” intelligence—from a satellite to a ship commander or weapons control station—within minutes. According to Navy Captain James “Mel” Stephenson, who was the first director of the NRO’s Operational Support Office, achieving this goal required advances all along the technology chain. That included the satellites, computer hardware, data-processing algorithms, communications and encryption protocols, broadcast channels, and end-user terminals.

From Quartz-Crystal Detectors to Eavesdropping Satellites

The seed technology for the U.S. Navy’s entire ELINT-satellite story goes back to World War II, when the Naval Research Laboratory (NRL) became a leading developer in the then-new business of electronic warfare and countermeasures. Think of monitoring an enemy’s radio-control signals, fooling its electronic reconnaissance probes, and evading its radar-detection systems. NRL’s foray into satellite-based signals intelligence emerged from a quartz-crystal-based radio-wave detector designed by NRL engineer Reid Mayo, who sometimes personally installed it on the periscopes of U.S. submarines. This device helped commanders save their submarines and the lives of those aboard by specifying when and from what direction enemy radars were probing their vessels. In the late 1950s, as the Space Age was lifting off, Mayo and his boss, Howard Lorenzen (who would later hire Lee M.
Hammarstrom), were perhaps the first to realize that the same technology should be able to “see” much larger landscapes of enemy radar activity if the detectors could be placed in orbit. Lorenzen was an influential, larger-than-life technology visionary often known as the father of electronic warfare. In 2008, the United States named a missile-range instrumentation ship, which supports and tracks missile launches, after him. Lorenzen’s and Mayo’s engineering concept of “raising the periscope” for the purpose of ELINT gathering was implemented on the first GRAB satellite. The ELINT package was a secret payload that piggybacked on a publicly announced scientific payload, Solrad, which collected first-of-its-kind data on the sun’s ultraviolet and X-ray radiation. That data would prove useful for modeling and predicting the behavior of the planet’s ionosphere, which influenced the far-flung radio communications near and dear to the Navy. Though the United States couldn’t brag about the GRAB mission even as the Soviet Union was scoring first after first in the space race, it was the world’s first successful spy payload in orbit, beating by a few months the first successful launch of Corona, the CIA’s maiden space-based photoreconnaissance program.

A key figure in the development of Parcae’s end-user terminals was Ed Mashman, an engineer who worked as a contractor on Parcae. The terminals had to be tailored according to where they would be used and who would be using them. One early series was known as Prototype Analysis Display Systems, even though the “prototypes” ended up deployed as operational units.
Before these display systems became available, Mashman recalled in an interview for IEEE Spectrum, “Much of the data that had been coming in from Classic Wizard just went into the burn bag, because they could not keep up with the high volume.” The intelligence analysts were still relying on an arduous process to determine if the information in the reports was alarming enough to require some kind of action, such as positioning U.S. naval vessels that were close enough to a Soviet vessel to launch an attack. To make such assessments, the analysts had to screen a huge number of teletype reports coming in from the satellites, manually plotting the data on a map to discern which ones might indicate a high-priority threat from the majority that did not. When the “prototype” display systems became available, Mashman recalls, the analysts could “all of a sudden, see it automatically plotted on a map and get useful information out of it…. When some really important thing came from Classic Wizard, it would [alert] the watch officer and show where it was and what it was.” These capabilities were developed during shoulder-to-shoulder work sessions between end users and engineers like Mashman. Those sessions led to an iterative process by which the ELINT system could deliver and package data in user-friendly ways and with a swiftness that was tactically useful. Parcae’s rapid-dissemination model flourished well beyond the end of the program and is one of Parcae’s most enduring legacies. For example, to rapidly distribute intelligence globally, Parcae’s engineering teams built a secure communications channel based on a complex mix of protocols, data-processing algorithms, and tailored transmission waveforms, among other elements. The communications network connecting these pieces became known as the Tactical Receive Equipment and Related Applications Broadcast. As recently as Operation Desert Storm, it was still being used. 
“During Desert Storm, we added imagery to the…broadcast, enabling it to reach the forces as soon as it was generated,” says Stephenson. Over the course of a 40-year career in national-security technologies, Lee M. Hammarstrom rose to the position of chief scientist of the National Reconnaissance Office. U.S. Naval Research Laboratory According to Hammarstrom, Parcae’s communications challenges had to be solved concurrently with the core challenge of managing and parsing the vast amounts of raw data into useful intelligence. Coping with this data deluge began with the satellites themselves, which some participants came to think of as “orbiting peripherals.” The term reflected the fact that the gathering of raw electronic signals was just the beginning of a complex system of complex systems. Even in the late 1960s, when Parcae’s predecessor Poppy was operational, the NRL team and its contractors had totally reconfigured the satellites, data-collection system, ground stations, computers, and other system elements for the task. Arthur Collier notes that in addition to supporting military operations, Parcae “was available to help provide maritime-domain awareness for tracking drug, arms, and human trafficking as well as general commercial shipping.”

a week ago 14 votes

More in science

Incorruptible Skepticism

Everything, apparently, has a second life on TikTok. At least this keeps us skeptics busy – we have to redebunk everything we have debunked over the last century because it is popping up again on social media, confusing and misinforming another generation. This video is a great example – a short video discussing the “incorruptibility” […] The post Incorruptible Skepticism first appeared on NeuroLogica Blog.

8 hours ago 1 votes
How Does Life Happen When There’s Barely Any Light?

Under the sea ice during the Arctic’s pitch-black polar night, cells power photosynthesis on the lowest light levels ever observed in nature. The post How Does Life Happen When There’s Barely Any Light? first appeared on Quanta Magazine.

yesterday 2 votes
Why housing shortages cause homelessness

It's not just about rents - it's also about the rooms friends and family can't afford to share

yesterday 2 votes
The Most Important Time in History Is Now

AGI Is Coming Sooner Due to o3, DeepSeek, and Other Cutting-Edge AI Developments

yesterday 2 votes