Robots | Popular Science https://www.popsci.com/category/robots/
Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That's Popular Science, 145 years strong.

Boston Dynamics gives Spot bot a furry makeover https://www.popsci.com/technology/furry-boston-dynamics-spot/ Tue, 30 Apr 2024 19:04:16 +0000
Boston Dynamics Spot robot in puppet dog costume sitting next to regular Spot robot.
That's certainly one way to honor 'International Dance Day.' Boston Dynamics/YouTube

'Sparkles' shows off the latest in robo-dog choreography.

Boston Dynamics may have relocated the bipedal Atlas to a nice farm upstate, but the company continues to let everyone know its four-legged line of Spot robots has a lot of life left in it. And after years of obvious dog-bot comparisons, Spot’s makers finally went ahead and commissioned a full cartoon canine getup for their latest video showcase. Sparkles is here, and like the rest of its Boston Dynamics family, it’s perfectly capable of cutting a rug.

Unlike, say, a mini Spot programmed to aid disaster zone search-and-rescue efforts or explore difficult-to-reach areas in nuclear reactors, Sparkles appears designed purely to offer viewers some levity. According to Boston Dynamics, the shimmering, blue, Muppet-like covering is a “custom costume designed just for Spot to explore the intersections of robotics, art, and entertainment” in honor of International Dance Day. In the brief clip, Sparkles can be seen performing a routine alongside a more standardized mini Spot, sans any extra attire.

But Spot bots such as this duo aren’t always programmed to dance for humanity’s applause—their intricate movements highlight the complex software built to take advantage of the machine’s overall maneuverability, balance, and precision. In this case, Sparkles and its partner were trained using Choreographer, a dance-dedicated system made available by Boston Dynamics with entertainment and media industry customers in mind.

[Related: RIP Atlas, the world’s beefiest humanoid robot.]

With Choreographer, Spot owners don’t need a degree in robotics or engineering to get their machines to move in rhythm. Instead, they select from “high-level instruction” options rather than keying in specific joint angle and torque parameters. Even if one of Boston Dynamics’ robots running Choreographer can’t quite pull off a user’s routine, it is coded to approximate the request as best as possible.

“If asked to do something physically impossible, or if faced with an environmental challenge like a slippery floor, Spot will find the possible motion most similar to what was requested and do that instead—analogously to what a human dancer would do,” the company explains.
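For a sense of what "high-level instruction" can look like in practice, here is a minimal Python sketch of that idea. The move names, rate limits, and fallback rule are invented for illustration; Boston Dynamics has not published Choreographer's internals, so this only mirrors the behavior the company describes: pick named moves rather than joint angles, and degrade infeasible requests to the nearest feasible motion.

```python
import difflib

# Hypothetical move table: name -> max repetition rate the hardware can
# execute, in Hz. These values are invented for illustration.
FEASIBLE_MOVES = {
    "step": 2.0,
    "sway": 1.5,
    "bourree": 3.0,
}

def plan_move(name: str, rate_hz: float) -> tuple[str, float]:
    """Return the requested move, downgraded to what the robot can execute."""
    if name not in FEASIBLE_MOVES:
        # Unknown move: substitute the closest known one instead of failing,
        # echoing the "most similar possible motion" behavior described above.
        name = difflib.get_close_matches(name, list(FEASIBLE_MOVES), n=1, cutoff=0.0)[0]
    # Clamp the requested tempo to the hardware limit.
    return name, min(rate_hz, FEASIBLE_MOVES[name])

routine = [("step", 1.0), ("step", 9.0), ("swey", 1.0)]  # last two infeasible
print([plan_move(n, r) for n, r in routine])
```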

Choreographer is behind some of Boston Dynamics’ most popular demo showcases, including the BTS dance-off and the “Uptown Funk” videos. It’s nice to see the robots’ moves are consistently improving—but nicer still is that, at least this time, people don’t need to think about a gun-toting dog bot. Or even what’s in store for humanity after that two-legged successor to Atlas finally hits the market.

Why animals run faster than their robot doppelgängers… for now https://www.popsci.com/technology/animals-run-faster-than-robots/ Wed, 24 Apr 2024 18:00:00 +0000
robot v roach
Animal-inspired robots consistently fail to outperform their organic inspirations despite often having better individual components. Animal Inspired Movement and Robotics Lab, CU Boulder

The sum is greater than its parts.

Modern robotics is awash with human-made machines mimicking the animal world. From stadium-surveying robot dogs to daddy long-legs-inspired exploration bots and just about everything in between, there’s no shortage of mechanized animal doppelgängers roaming the world. Advancements in AI systems, new synthetic materials, and 3D printing have greatly improved these machines’ ability to run, climb, and shimmy their way around obstacles, often in the name of scientific exploration or public safety.

But even with those technical advances and billions of dollars’ worth of investment poured into the robotics industry in recent years, these machines by and large still lag behind their biological equals in a head-to-head race. That basic observation underpins a new study by an interdisciplinary group of researchers published this week in the journal Science Robotics.

The researchers looked at five different “subsystems” associated with running and compared how they stack up between animals and their robot counterparts. Animals, which rely on a tapestry of delicate bones and tissues, initially seem worse than machines at almost every individual component level. Their true advantage, the researchers discovered, actually lies in their complex and interconnected control over their bodies. That fluid interoperability makes animals greater than the sum of their individual parts.

Researchers compared how animal-inspired robotics and their organic counterparts stacked up across five different subsystems associated with running. Credit: Animal Inspired Movement and Robotics Lab, CU Boulder

“The way things turned out is that, with only minor exceptions, the engineering subsystems outperform the biological equivalents—and sometimes radically outperformed them,” SRI International Senior Research Engineer and paper co-author Tom Libby said in a statement. “But also what’s very, very clear is that, if you compare animals to robots at the whole system level, in terms of movement, animals are amazing. And robots have yet to catch up.”

Animals benefit from biological complexity and generations of evolution 

Each of the five researchers focused on one specific subsystem associated with running in both animals and machines. These systems were broken down into power, frame, actuation, sensing, and control. Individually, machines beat out animals in almost all of these categories. In the case of frames, for example, robots with lightweight but strong carbon fiber bodies could support larger mass structures without buckling compared to animal bones. Similarly, the researchers concluded a robot’s computer-aided control system outperforms an animal’s nervous system in terms of overall latency and bandwidth. 

But even though robots seemingly have stronger, more robust individual parts, animals are nonetheless more adept at making them work seamlessly together as a cohesive “whole.” That difference plays out plainly when animals and robots are tested in real-world environments. While newer robots can certifiably accelerate quickly and even perform some acrobatic feats, they pale in comparison to their biological counterparts in terms of fluidity and adaptability. Robots can sometimes navigate tough terrain, but animals effortlessly overcome obstacles like mud, snow, vegetation, and rubble without thinking twice about what they are doing.

[ Related: Can this robot help solve a guide dog shortage? ]

“A wildebeest can migrate for thousands of [kilometers] over rough terrain, a mountain goat can climb up a literal cliff, finding footholds that don’t even seem to be there, and cockroaches can lose a leg and not slow down,” Simon Fraser University Department of Biomedical Physiology and Kinesiology professor Max Donelan wrote. “We have no robots capable of anything like this endurance, agility and robustness.”

Animals also have another huge leg up: time. Unlike advanced robots which have only really made strides in the past few decades, animals have had millions, or in some cases, billions of years of evolution on their side. Animals, the researchers note, have a “substantial headstart over engineering.” On the flip side, robots have done an admirable job of closing that gap with staggering speed. The researchers say they are “optimistic” that robots will someday outrun animals.

“It [advances in robots] will move faster, because evolution is undirected,” University of Washington Department of Electrical & Computer Engineering Associate Professor Sam Burden said. “There are ways that we can move much more quickly when we engineer robots than we can through evolution—but evolution has a massive head start.”

Researchers hope these findings could help future development of running robots. Armed with these findings, robot makers could decide to focus more of their time and effort on component integration rather than simply building ever better and stronger hardware. 

“The lesson we take from biology is that, although further improvements to components and subsystems are beneficial, the greatest opportunity to improve running robots is to make better use of existing parts,” the researchers wrote.

Daddy long-legs-inspired robot could one day squirm through Martian caves https://www.popsci.com/technology/spider-robot/ Wed, 17 Apr 2024 18:00:00 +0000
Close-up photos of ReachBot. BDML Stanford University

The spiderbot's extendable legs can grasp onto uneven rock surfaces and propel it forward.

Robotic engineers are no strangers to turning to nature for inspiration. In recent years, birds, dogs, extinct sea creatures, and even humans themselves have all served as jumping-off points for new mechanical designs. Now, researchers from Stanford are citing the Harvestman spider, better known as a daddy long-legs, as inspiration for a new robot design they believe could be better equipped to navigate uneven rocky caverns and lava tubes. One day, they hope, this spider-like design could even help robots navigate the icy caverns of the moon and Mars.

How does the spider robot work?

The researchers introduced their new machine, called the “ReachBot,” in a paper published today in the journal Science Robotics. ReachBot features multiple extendable boom limbs it can use to reach out for rocks and propel itself forward. Each limb comes attached with a three-fingered gripper that grabs onto the rocks and uses them as anchor points. The long-legged design means the robot’s limbs can potentially access the floor, ceiling, and walls of a lava tube or cave, which in turn provides increased leverage. This unique positioning, the researchers write, lets ReachBot “assume a wide variety of possible configurations, bracing stances, and force application options.”

Harvestman spider, better known as a daddy long-legs. DepositPhotos

ReachBot attempts to fill in a form-factor gap among existing exploration robots. Small robots, the researchers argue, are useful for navigating through tight corridors but typically have limited reach. Larger robots, by contrast, might be able to cover more area but can get bogged down by their hefty mass and mechanical complexity. ReachBot offers a compromise by relying on a small main body with limbs that can expand and reach out if necessary.

The robot utilizes a set of onboard sensors to scan the area ahead of it and look for convex rocks or other signs suggestive of a graspable area. Like a real spider, ReachBot doesn’t immediately assume rock surfaces are flat, but instead seeks “rounded features that the gripper can partially enclose.” Researchers say they tested the robot in simulation to help it improve its ability to correctly identify grippable surface areas and aid in footstep planning. Following the simulation, ReachBot was tested in the real world, in a lava tube near Pisgah Crater in the Mojave Desert.

“Results from the field test confirm the predictions of maximum grasp forces and underscore the importance of identifying and steering toward convex rock features that provide a strong grip,” the researchers write. “They also highlight a characteristic of grasp planning with ReachBot, which is that identifying, aiming for, and extending booms involves a higher level of commitment than grasping objects in manufacturing scenarios.”
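The gist of that grasp screening can be sketched in a few lines of Python. The curvature test, gripper span, and scoring below are illustrative stand-ins, not ReachBot's actual planner, but they capture the rule the researchers describe: reject flat or oversized features, and favor strongly convex ones the fingers can partially enclose.

```python
def grasp_score(curvature: float, width_m: float,
                gripper_span_m: float = 0.16) -> float:
    """Score a candidate rock feature; higher means more graspable.

    curvature: how convex the feature is (0 = flat), in arbitrary units.
    width_m: feature width; must fit inside the gripper's finger span.
    """
    if curvature <= 0.0 or width_m > gripper_span_m:
        return 0.0  # flat/concave, or too wide to partially enclose
    return curvature * (width_m / gripper_span_m)

candidates = [
    {"curvature": 0.0, "width_m": 0.30},  # flat wall: rejected
    {"curvature": 8.0, "width_m": 0.10},  # rounded knob: strong anchor
    {"curvature": 3.0, "width_m": 0.14},
]
best = max(candidates, key=lambda c: grasp_score(**c))
print("aim boom at:", best)
```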

ReachBot could help researchers explore deep caves and caverns on other planets

Researchers believe ReachBot’s arachnid design could have extraterrestrial applications. Lava tubes similar to the one in the Mojave Desert where the robot was tested also exist beneath the surfaces of the moon and Mars. On the Red Planet, researchers say, ancient subsurface environments remain relatively unchanged from the time when some believe the planet may have been habitable. These sheltered cavern areas, they write, “could provide sites for future human habitation.”

In theory, future exploratory space robots could use a design like ReachBot’s to explore deeper into areas contemporary robots find inaccessible. Elsewhere, researchers are exploring how three-legged jumping machines and four-legged, dog-inspired robots could similarly help scientists learn more about undiscovered areas of our solar system neighbors.

RIP Atlas, the world’s beefiest humanoid robot https://www.popsci.com/technology/rip-atlas-robot/ Wed, 17 Apr 2024 15:07:22 +0000
atlas robot goodbye
The Atlas humanoid robot previously showed off its ability to perform impressive acrobatics and choreographed dance routines. YouTube/Boston Dynamics

After 11 years, Boston Dynamics is retiring its iconic 330-pound bipedal robot and replacing it with a lighter, all-electric little sibling.

Atlas, the hulking 330-pound acrobatic Boston Dynamics robot that filled us with equal parts awe and horror, is officially going into retirement. The iconic 11-year-old bipedal giant will be replaced by a much lighter, all-electric successor. The transition marks the end of an era for Boston Dynamics as consumer and investor interest turns increasingly toward smaller, scalable humanoid robots capable of performing manufacturing tasks.

Boston Dynamics commemorated the original “hydraulic Atlas” with a farewell video this week. The video shows Atlas sprinting through an obstacle course before clumsily crashing to the ground. What follows is a collection of some of Atlas’ greatest hits over its 11-year lifespan. Video clips show the robot evolving over the years from a rigid, slow hunk of metal into a powerful robotic athlete capable of performing backflips and hurling large objects with seemingly superhuman strength. Of course, the video also shows Boston Dynamics engineers mercilessly shoving, attacking, and tripping Atlas throughout the years to test its capabilities.

Atlas was originally produced in 2013 for the Pentagon’s Defense Advanced Research Projects Agency. At the time, DARPA heralded Atlas as “one of the most advanced humanoid robots ever built.” The impressive and often imposing machine went viral on the internet with a collection of eye-grabbing videos showing it engaging in everything from wild acrobatics and synchronized dance routines to over-the-top construction site work.

Boston Dynamics pivots to lighter, more commercial humanoid robots 

In a surprise announcement Wednesday, Boston Dynamics revealed Atlas’ name will actually live on, though in a much different form factor. The company released a brief video showing off what appears to be a smaller, bipedal robot with a circular head lying on a test room floor.

In a blog post, the company described this new successor as a “fully electric Atlas robot designed for real-world applications.” The company says the new Atlas will have a broader range of motion than its predecessor. Hyundai, which acquired Boston Dynamics in 2021, will begin testing the new Atlas in its factories in the coming months, according to the blog post. 

Long-term commercialization and scalability appear to have been primary factors driving the old Atlas into retirement. Since the Hyundai acquisition, Boston Dynamics has pushed to widely sell other, more affordable products like its “Spot” quadruped and its “Stretch” warehouse robot. The old Atlas, by contrast, was never sold commercially and may have simply cost too much for any manufacturer to justify buying it. Boston Dynamics hinted at its commercial plans for the new Atlas in its most recent press release.

“Given our track record of successful commercialization, we are confident in our plan to not just create an impressive R&D project, but to deliver a valuable solution,” the company said. 

Despite being primarily an R&D tool, the original Atlas still played an important symbolic role, acting as a technical ceiling for what other bipedal robots could achieve. Its prominence helped usher in a new wave of human-like robots from startups like Figure and Agility Robotics, which are looking to combine robotics with large language models to make them more useful at completing tasks in the real world. Ironically, it’s looking like the new Atlas will now compete against the upstarts its predecessor inspired. 

[ Related: OpenAI wants to make a walking, talking humanoid robot smarter ]

Startup pitches a paintball-armed, AI-powered home security camera https://www.popsci.com/technology/paintball-armed-ai-home-security-camera/ Mon, 15 Apr 2024 14:51:01 +0000
PaintCam Eve shooting paintballs at home
PaintCam Eve supposedly will guard your home using the threat of volatile ammunition. Credit: PaintCam

PaintCam Eve also offers a teargas pellet upgrade.

It’s a bold pitch for homeowners: What if you let a small tech startup’s crowdfunded AI surveillance system dispense vigilante justice for you?

A Slovenia-based company called OZ-IT recently announced PaintCam Eve, a line of autonomous property monitoring devices that will utilize motion detection and facial recognition to guard against supposed intruders. In the company’s zany promo video, a voiceover promises Eve will protect owners from burglars, unwanted animal guests, and any hapless passersby who fail to heed its “zero compliance, zero tolerance” warning.

The consequences for shrugging off Eve’s threats: Getting blasted with paintballs, or perhaps even teargas pellets.

“Experience ultimate peace of mind,” PaintCam’s website declares, as Eve will offer owners a “perfect fusion of video security and physical presence” thanks to its “unintrusive [sic] design that stands as a beacon of safety.”

And to the naysayers worried Eve could indiscriminately bombard a neighbor’s child with a bruising paintball volley, or accidentally hock riot control chemicals at an unsuspecting Amazon Prime delivery driver? Have no fear—the robot’s “EVA” AI system will leverage live video streaming to a user’s app, as well as employ a facial recognition system that would allow designated people to pass by unscathed.

In the company’s promotional video, there appears to be a combination of automatic and manual screening capabilities. At one point, Eve is shown issuing a verbal warning to an intruder, offering them a five-second countdown to leave its designated perimeter. When the stranger fails to comply, Eve automatically fires a paintball at his chest. Later, a man watches from his PaintCam app’s livestream as his frantic daughter waves at Eve’s camera to spare her boyfriend, which her father allows.

“If an unknown face appears next to someone known—perhaps your daughter’s new boyfriend—PaintCam defers to your instructions,” reads a portion of the product’s website.

Presumably, determining pre-authorized visitors would involve them allowing 3D facial scans to be stored in Eve’s system for future reference. (Because facial recognition AI has such an accurate track record devoid of racial bias.) At the very least, Eve would require owners to clear each unknown newcomer. Either way, the details are sparse on PaintCam’s website.
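Pieced together from the promo video alone, Eve's screening behavior amounts to a simple decision loop. The sketch below is speculative (OZ-IT has published no spec), so the allowlist, owner-override hook, and countdown are all inferred for illustration:

```python
import time

# Faces the owner has pre-approved; purely hypothetical data.
KNOWN_FACES = {"owner", "daughter"}

def handle_visitor(face_id: str, owner_approves) -> str:
    """Speculative reconstruction of Eve's screening loop from the promo."""
    if face_id in KNOWN_FACES:
        return "pass"
    if owner_approves(face_id):        # e.g., daughter waves the boyfriend in
        KNOWN_FACES.add(face_id)
        return "pass"
    for remaining in range(5, 0, -1):  # the five-second verbal countdown
        print(f"Leave the perimeter: {remaining}...")
        time.sleep(1)
    return "fire_paintball"

# The boyfriend scene: the owner approves from the app, so no paint is spilled.
print(handle_visitor("unknown_boyfriend", owner_approves=lambda _: True))
```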

Gif of PaintCam scanning boyfriend
What true peace of mind looks like. Credit: PaintCam

But as New Atlas points out, there aren’t exactly a bunch of detailed specs or price ranges available just yet, beyond the allure of suburban crowd control gadgetry. OZ-IT vows Eve will include all the smart home security basics like live monitoring, object tracking, movement detection, and night vision, as well as video storage and playback capabilities.

There are apparently “Standard,” “Advanced,” and “Elite” versions of PaintCam Eve in the works. The basic tier only gets owners “smart security” and “app on/off” capabilities, while Eve+ also offers animal detection. Eve Pro apparently is the only one to include facial recognition, which implies the other two models could be a tad more… indiscriminate in their surveillance methodologies. It’s unclear how much extra you’ll need to shell out for the teargas tier, too.

PaintCam’s Kickstarter is set to go live on April 23. No word on release date for now, but whenever it arrives, Eve’s makers promise a “safer, more colorful future” for everyone. That’s certainly one way of describing it.

Watch a tripod robot test its asteroid leaping skills https://www.popsci.com/technology/spacehopper-zero-gravity/ Fri, 12 Apr 2024 13:35:48 +0000
SpaceHopper robot in midair during parabolic flight test
SpaceHopper is designed to harness an asteroid's microgravity to leap across its surface. Credit: ETH Zurich / Nicolas Courtioux

SpaceHopper maneuvered in zero gravity aboard a parabolic flight.

Before astronauts leave Earth’s gravity for days, weeks, or even months at a time, they practice aboard NASA’s famous parabolic flights. During these intense rides in modified passenger jets, trainees experience a series of stomach-churning climbs and dives that create brief zero-g environments. Recently, however, a robot received a similar education to its human counterparts—potentially ahead of its own journeys to space.

A couple years back, eight students at ETH Zürich in Switzerland helped design the SpaceHopper. Engineered specifically to handle low-gravity environments like asteroids, the small, three-legged bot is meant to (you guessed it) hop across its surroundings. Using a neural network trained in simulations with deep reinforcement learning, SpaceHopper is built to jump, coast along by leveraging an asteroid’s low gravity, then orient and stabilize itself mid-air before safely landing on the ground. From there, it repeats this process to efficiently span large distances.
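Some quick projectile math shows why hopping is such an efficient way to get around in microgravity. The numbers below are illustrative, not from the SpaceHopper paper, and the model assumes a uniform gravity field with no rotation (real asteroids are messier, and too hard a push can exceed escape velocity entirely):

```python
import math

def hop_range_m(v_mps: float, launch_deg: float, g_mps2: float) -> float:
    """Ballistic range R = v^2 * sin(2*theta) / g over flat ground."""
    return v_mps**2 * math.sin(math.radians(2 * launch_deg)) / g_mps2

v, theta = 1.0, 45.0  # a gentle 1 m/s push-off at the optimal angle
print(f"Earth (g=9.81):      {hop_range_m(v, theta, 9.81):10.2f} m")
print(f"Asteroid (g=0.0001): {hop_range_m(v, theta, 1e-4):10.2f} m")  # ~10 km
```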

But it’s one thing to design a machine that theoretically works in computer simulations—it’s another thing to build and test it in the real world.

Sending SpaceHopper to the nearest asteroid isn’t exactly a cost-effective or simple way to conduct a trial run. But thanks to the European Space Agency and Novespace, a company specializing in zero-g plane rides, the robot could test out its moves in the next best thing.

Over the course of a recent 30-minute parabolic flight, researchers let SpaceHopper perform in a small enclosure aboard Novespace’s Airbus A310 for upwards of 30 zero-g simulations, each lasting 20 to 25 seconds. In one experiment, handlers released the robot in midair once the plane hit zero gravity, then observed it resituate itself to specific orientations using only its leg movements. In a second test, the team programmed SpaceHopper to leap off the ground and reorient itself before gently colliding with a nearby safety net.

Because a parabolic flight creates a completely zero-g environment, SpaceHopper actually made its debut in less gravity than it would experience on a hypothetical asteroid. As a result, the robot couldn’t “land” as it would in a microgravity situation, but demonstrating its ability to orient and adjust in real time was still a major step forward for researchers.

[Related: NASA’s OSIRIS mission delivered asteroid samples to Earth.]

“Until that moment, we had no idea how well this would work, and what the robot would actually do,” SpaceHopper team member Fabio Bühler said in ETH Zürich’s recent highlight video. “That’s why we were so excited when we saw it worked. It was a massive weight off of our shoulders.”

SpaceHopper’s creators believe deploying their jumpy bot to an asteroid one day could help astronomers gain new insights into the universe’s history, as well as provide information about our solar system’s earliest eras. Additionally, many asteroids are filled with valuable rare earth metals—resources that could provide a huge benefit across numerous industries back home.

Watch two tiny, AI-powered robots play soccer https://www.popsci.com/technology/deepmind-robot-soccer/ Wed, 10 Apr 2024 18:00:00 +0000
Two robots playing soccer
Deep reinforcement learning allowed a pair of robots to play against one another. Credit: Google DeepMind / Tuomas Haarnoja

Google DeepMind's bipedal bots go head-to-head after years of prep.

Google DeepMind is now able to train tiny, off-the-shelf robots to square off on the soccer field. In a new paper published today in Science Robotics, researchers detail their recent efforts to adapt a machine learning subset known as deep reinforcement learning (deep RL) to teach bipedal bots a simplified version of the sport. The team notes that while similar experiments created extremely agile quadrupedal robots (see: Boston Dynamics Spot) in the past, much less work has been conducted for two-legged, humanoid machines. But new footage of the bots dribbling, defending, and shooting goals shows off just how good a coach deep reinforcement learning could be for humanoid machines.

While ultimately meant for massive tasks like climate forecasting and materials engineering, Google DeepMind’s systems can also absolutely obliterate human competitors in games like chess, Go, and even StarCraft II. But all those strategic maneuvers don’t require complex physical movement and coordination. So while DeepMind can study simulated soccer movements, it hasn’t been able to translate them to a physical playing field—but that’s quickly changing.

To make the miniature Messis, engineers first developed and trained two deep RL skill sets in computer simulations—the ability to get up from the ground, and the ability to score goals against an untrained opponent. From there, they virtually trained their system to play a full one-on-one soccer matchup by combining these skill sets, then randomly pairing the agents against partially trained copies of themselves.

[Related: Google DeepMind’s AI forecasting is outperforming the ‘gold standard’ model.]

“Thus, in the second stage, the agent learned to combine previously learned skills, refine them to the full soccer task, and predict and anticipate the opponent’s behavior,” researchers wrote in their paper introduction, later noting that, “During play, the agents transitioned between all of these behaviors fluidly.”
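Schematically, the two-stage recipe looks like the sketch below. The training functions are trivial stand-ins (the real work is deep RL in simulation), but the pipeline shape (pretrain skills, then self-play against frozen earlier copies) follows the paper's description:

```python
import random

def train_skill(skill_name: str) -> dict:
    """Stand-in for stage one: deep RL training of a single skill policy."""
    return {"skill": skill_name}

def train_against(agent: dict, opponent: dict) -> dict:
    """Stand-in for one self-play update of the full soccer policy."""
    return dict(agent, updates=agent["updates"] + 1)

# Stage 1: two skill policies, trained separately in simulation.
get_up = train_skill("get_up_from_ground")
scorer = train_skill("score_vs_untrained_opponent")

# Stage 2: one agent combines both skills, then plays full one-on-one matches
# against randomly sampled, partially trained snapshots of itself.
agent = {"skills": [get_up, scorer], "updates": 0}
snapshots = [dict(agent)]
for step in range(1000):
    opponent = random.choice(snapshots)  # an earlier, weaker copy of itself
    agent = train_against(agent, opponent)
    if step % 100 == 99:
        snapshots.append(dict(agent))    # freeze a new opponent snapshot
print(agent["updates"], "self-play updates,", len(snapshots), "snapshots")
```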

Thanks to the deep RL framework, DeepMind-powered agents soon learned to improve on existing abilities, including how to kick and shoot the soccer ball, block shots, and even defend their own goal against an attacking opponent by using its body as a shield.

During a series of one-on-one matches between robots utilizing the deep RL training, the two mechanical athletes walked, turned, kicked, and uprighted themselves faster than if engineers had simply supplied them a scripted baseline of skills. These weren’t minuscule improvements, either—compared to a non-adaptable scripted baseline, the robots walked 181 percent faster, turned 302 percent faster, kicked 34 percent faster, and took 63 percent less time to get up after falling. What’s more, the deep RL-trained robots also showed new, emergent behaviors like pivoting on their feet and spinning. Such actions would be extremely challenging to pre-script otherwise.

Screenshots of robots playing soccer
Credit: Google DeepMind

There’s still some work to do before DeepMind-powered robots make it to the RoboCup. For these initial tests, researchers completely relied on simulation-based deep RL training before transferring that information to physical robots. In the future, engineers want to combine both virtual and real-time reinforcement training for their bots. They also hope to scale up their robots, but that will require much more experimentation and fine-tuning.

The team believes that utilizing similar deep RL approaches for soccer, as well as many other tasks, could further improve bipedal robots’ movements and real-time adaptation capabilities. Still, it’s unlikely you’ll need to worry about DeepMind humanoid robots on full-sized soccer fields—or in the labor market—just yet. At the same time, given their continuous improvements, it’s probably not a bad idea to get ready to blow the whistle on them.

Apple’s ‘next big thing’ could be a robot butler https://www.popsci.com/technology/apple-home-robot/ Thu, 04 Apr 2024 17:31:24 +0000
A possible at-home autonomous robot is the latest example of Apple’s renewed push into AI-enabled products. DepositPhotos

The autonomous bot could one day follow you around and clean your room.

Apple’s vision for the future could involve an autonomous robot butler. That’s according to a new Bloomberg report which claims the iPhone maker is reallocating resources from its now-defunct car project and shifting them toward an at-home, mobile robot. Though still in the early stages of research, the possible autonomous robot highlights Apple’s renewed interest in AI-enabled technology.

The proposed robot, Bloomberg says, is intended for home use and could follow its users around. Apple engineers are reportedly considering using AI models to help the robot navigate through rooms. Presumably, that means it would also need cameras or other onboard sensors to see the world around it. 

[ Related: Watch us try to make ‘Butler in a Box’ work like it’s 1983 ]

Sources speaking with Bloomberg say Apple engineers are interested in a fully autonomous robot that can take care of everyday chores like cleaning up and washing dishes. Actually achieving that in the near term, however, remains technically challenging. Apple is reportedly also interested in having the robot function as a kind of mobile video conferencing tool.

There are still far more questions than answers about the supposed Apple bot. It’s unclear, for example, whether it will roll around on wheels like Amazon’s Astro home robot, or if it will follow the rising trend of bipedal robots that are shaped more like humans. It’s also unclear exactly how it would complete its tasks, when it will ship, or how much it would cost. It’s entirely possible the robot, like many other early-stage product ideas, may never see the light of day. Other unrealized Apple ideas left to the dustbin of history include an “AirPower” wireless charging mat and an early tablet-like prototype supposedly capable of sending and receiving faxes.

[ Related: OpenAI wants to make a walking, talking humanoid robot smarter ]

Without more details, one can only speculate how an Apple bot may one day be put to use. Could it follow in the footsteps of past robots capable of assembling a salad? Or maybe it could give its users a relaxing massage or simply open a carton of eggs, as past robots have done. Other “assistance robots” are also being developed to guide people with vision loss and help people with dementia find lost objects. It’s unclear if Apple intends to enter those spaces. Apple did not immediately respond to PopSci’s request for comment. 

Apple reportedly shifting from cars to robots  

News of the alleged Apple robot comes just two months after the company officially ended its electric vehicle ambitions. The car project, known publicly as “Titan,” dated back to at least 2014 but was plagued by leadership shakeups and repeated delays. Many of the roughly 2,000 Apple employees working on the car were reportedly folded into the company’s other AI-related projects. Some of the insights and technology developed for the car, Bloomberg notes, could end up making their way into the robot, if it’s in fact ever released.

The supposed robot project further signals Apple’s interest in tapping into AI products. Apple has faced criticism in recent months for looking like a generative AI laggard compared to top tech competitors like Google and Microsoft. To that end, the company is now reportedly in “active negotiations” with Google to bring its Gemini AI model to future iPhones. An AI-enabled robot, if successful, presents Apple with a unique opportunity to lean further into AI by doing what it does best: seamlessly integrating hardware and software.

Integrated large language models can help robots interact with humans

At-home robots aren’t entirely new. iRobot, the company behind the popular Roomba autonomous vacuum cleaner, claims it has sold more than 40 million of the devices worldwide. Amazon is also pursuing its own, more advanced at-home robot primarily used for home monitoring. Those robots, however, are far less sophisticated than what could be around the corner. Robotics firms like Figure are already taking advanced language models and integrating them into bipedal, humanoid machines capable of performing tasks and holding a conversation. Figure recently partnered with BMW to bring its robots to a South Carolina manufacturing facility where they will move sheet metal and perform other tasks. Amazon is reportedly already testing humanoid robots in some of its warehouses.

An Apple robot, if it ever materializes, will likely follow the trend of integrating large language models into advanced robotics in order to effectively communicate with its owner. Whether or not customers will actually welcome such a device into their homes remains to be seen. Some 61 percent of adults surveyed in 2021 by the Brookings Institution, a public policy think tank, said they were uncomfortable with robots. More than a third (37 percent) of adults polled by Pew Research said they were “more concerned than excited” regarding the prospect of more AI used in daily life.

Watch this robotic slide whistle quartet belt out Smash Mouth’s ‘All Star’ https://www.popsci.com/technology/slide-whistle-quartet/ Wed, 03 Apr 2024 21:00:00 +0000
Slide Whistle robot quartet
Somehow, it only took Tim Alex Jacobs two weeks to build. YouTube

Well, the notes start coming and they don't stop coming.

The slide whistle isn’t known as a particularly difficult instrument to play—there’s a reason they’re usually marketed to children. But designing, programming, and building a robotic slide whistle quartet? That takes a solid background in computer science, a maddening amount of trial-and-error, logistical adjustments to account for “shrinkflation,” and at least two weeks to make it all happen.

That said, if you’re confident in your technical abilities, you too can construct a portable slide-whistle symphony-in-a-box capable of belting out Smash Mouth’s seminal, Billboard-topping masterpiece “All Star.” Fast forward to the 4:47 mark to listen to the tune. 

Despite his initial apology for “crimes against all things musical,” it seems as though Tim Alex Jacobs isn’t feeling too guilty about his ongoing robot slide whistle hobby. Also known online as “mitxela,” Jacobs has documented his DIY musical endeavors on his YouTube channel for years. Plans to create MIDI-controlled, automated slide whistle systems have apparently been in the works since at least 2018, but it’s difficult to envision anything much more absurd than Jacobs’ latest iteration, which manages to link four separate instruments alongside motorized fans and mechanical controls, all within a latchable carrying case.

Aside from the overall wonky tones that come from slide whistles in general, Jacobs notes just how difficult it would be to calibrate four of them. What’s more, each whistle’s dedicated fan motor differs slightly from the others, making the resultant pressures unpredictable. To compensate, Jacobs drilled holes in the pumps to create intentional air leaks, allowing him to run the motors closer to full power than before without overheating.

[Related: Check out some of the past year’s most innovative musical inventions.]

“If we can run them at a higher power level, then the effects of friction will be less significant,” Jacobs explains. But although this reportedly helped a bit, he admits the results were “far from adequate.” Attaching contact microphones to each slide whistle was also a possibility, but the work involved in calibrating them to properly isolate the whistle tones simply wasn’t worth it.

So what was worth the effort? Well, programming the whistles to play “All Star” in its entirety, of course. The four instruments are in no way tuned to one another, but honestly, it probably wouldn’t be as entertaining if they somehow possessed perfect pitch.
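Perfect pitch aside, the math at the heart of any MIDI-driven slide whistle is short enough to sketch. Treating the whistle as an ideal closed pipe and ignoring end corrections and fan-pressure effects (precisely the real-world messiness that forces per-unit calibration), a note number maps to a slide position like so. This is illustrative only, not mitxela's firmware:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def midi_to_hz(note: int) -> float:
    """Equal temperament: A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def slide_length_m(note: int) -> float:
    """Ideal closed pipe: fundamental f = c / 4L, so L = c / 4f."""
    return SPEED_OF_SOUND / (4 * midi_to_hz(note))

for note in (60, 72, 84):  # C4, C5, C6
    print(f"MIDI {note}: {midi_to_hz(note):7.1f} Hz -> "
          f"slide at {slide_length_m(note) * 100:5.1f} cm")
```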

Jacobs appears to have plans for further fine tuning (so to speak) down the line, but it’s unclear if he’ll stick with Smash Mouth, or move onto another 90s pop-rock band.

A robot named ‘Emo’ can out-smile you by 840 milliseconds https://www.popsci.com/technology/emo-smile-robot-head/ Fri, 29 Mar 2024 14:00:00 +0000
Yuhang Hu working on Emo robot head
Emo contains 26 actuators to help mimic human smiles. John Abbott/Columbia Engineering

The bot's head and face are designed to simulate facial interactions in conversation with humans.

If you want your humanoid robot to realistically simulate facial expressions, it’s all about timing. And for the past five years, engineers at Columbia University’s Creative Machines Lab have been honing their robot’s reflexes down to the millisecond. Their results, detailed in a new study published in Science Robotics, are now available to see for yourself.

Meet Emo, the robot head capable of anticipating and mirroring human facial expressions, including smiles, within 840 milliseconds. But whether or not you’ll be left smiling at the end of the demonstration video remains to be seen.

AI is getting pretty good at mimicking human conversations—heavy emphasis on “mimicking.” But when it comes to visibly approximating emotions, AI’s physical robot counterparts still have a lot of catching up to do. A machine misjudging when to smile isn’t just awkward—it draws attention to its artificiality.

Human brains, in comparison, are incredibly adept at interpreting huge amounts of visual cues in real-time, and then responding accordingly with various facial movements. Apart from making it extremely difficult to teach AI-powered robots the nuances of expression, it’s also hard to build a mechanical face capable of realistic muscle movements that don’t veer into the uncanny.

[Related: Please think twice before letting AI scan your penis for STIs.]

Emo’s creators attempt to solve some of these issues, or at the very least, help narrow the gap between human and robot expressivity. To construct their new bot, a team led by AI and robotics expert Hod Lipson first designed a realistic robotic human head that includes 26 separate actuators to enable tiny facial expression features. Each of Emo’s pupils also contains a high-resolution camera to follow the eyes of its human conversation partner—another important, nonverbal visual cue for people. Finally, Lipson’s team layered a silicone “skin” over Emo’s mechanical parts to make it all a little less… you know, creepy.

From there, researchers built two separate AI models to work in tandem—one to predict human expressions from a target face’s minuscule movements, and another to quickly issue motor responses for a robot face. Using sample videos of human facial expressions, Emo’s AI then learned emotional intricacies frame-by-frame. Within just a few hours, Emo was capable of observing, interpreting, and responding to the little facial shifts people tend to make as they begin to smile. What’s more, it can now do so within about 840 milliseconds.
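In rough sketch form, that loop looks something like the Python below. The function bodies are trivial stand-ins (the real versions are trained neural networks), and the cue names are invented; only the two-model structure of anticipate, then actuate, comes from the study:

```python
NUM_ACTUATORS = 26  # Emo's face is driven by 26 actuators

def predict_expression(recent_frames: list) -> str:
    """Stand-in for the anticipation model, which runs ~840 ms ahead."""
    return "smile" if "mouth_corner_up" in recent_frames else "neutral"

def expression_to_motors(expression: str) -> list:
    """Stand-in for the inverse model: target expression -> motor commands."""
    level = 0.6 if expression == "smile" else 0.0
    return [level] * NUM_ACTUATORS

frames = ["neutral", "eyes_narrow", "mouth_corner_up"]  # pre-smile cues
target = predict_expression(frames)
print(target, "->", len(expression_to_motors(target)), "actuator commands")
```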

“I think predicting human facial expressions accurately is a revolution in [human-robot interactions],” Yuhang Hu, Columbia Engineering PhD student and study lead author, said earlier this week. “Traditionally, robots have not been designed to consider humans’ expressions during interactions. Now, the robot can integrate human facial expressions as feedback.”

Right now, Emo lacks any verbal interpretation skills, so it can only interact by analyzing human facial expressions. Lipson, Hu, and the rest of their collaborators hope to soon combine the physical abilities with a large language model system such as ChatGPT. If they can accomplish this, then Emo will be even closer to natural(ish) human interactions. Of course, there’s a lot more to relatability than smiles, smirks, and grins, which the scientists appear to be focusing on. (“The mimicking of expressions such as pouting or frowning should be approached with caution because these could potentially be misconstrued as mockery or convey unintended sentiments.”) However, at some point, the future robot overlords may need to know what to do with our grimaces and scowls.

Get ready for the robotic fish revolution https://www.popsci.com/technology/get-ready-for-the-robotic-fish-revolution/ Fri, 29 Mar 2024 12:00:00 +0000
a fish robot in an aquarium
Around the world, researchers developing robots that look and swim like fish say their aquatic automatons are cheaper, easier to use, and less disruptive to sea life than the remotely operated vehicles (ROVs) scientists use today. DepositPhotos

Scientists say swarms of robotic fish could soon make traditional underwater research vehicles obsolete.

This article was originally featured on Hakai Magazine, an online publication about science and society in coastal ecosystems. Read more stories like this at hakaimagazine.com.

Human technology has long drawn inspiration from the natural world: The first airplanes were modeled after birds. The designer of Velcro was inspired by the irksome burrs he often had to pick off his dog. And in recent years, engineers eager to explore the world’s oceans have been taking cues from the creatures that do it best: fish.

Around the world, researchers developing robots that look and swim like fish say their aquatic automatons are cheaper, easier to use, and less disruptive to sea life than the remotely operated vehicles (ROVs) scientists use today. In a recent review of the technology’s advances, scientists claim only a few technical problems stand in the way of a robotic fish revolution.

Over the past few decades, engineers have designed prototype robotic fish for a variety of purposes. While some are built to carry out specific tasks—such as tricking other fish in a lab, simulating fish hydrodynamics, or gathering plastics from the ocean—the majority are designed to traverse the seas while collecting data. These robotic explorers are typically equipped with video cameras to document any life forms they encounter and sensors to measure depth, temperature, and acidity. Some of these machines—including a robotic catfish named Charlie, developed by the CIA—can even take and store water samples.

While modern ROVs can already do all these tasks and more, the review’s authors argue that robotic fish will be the tools of the future.

“The jobs done by existing [ROVs] can be done by robotic fish,” says Weicheng Cui, a marine engineer at Westlake University in China and a coauthor of the review. And “what cannot be done by existing ROVs may [also] be done by robotic fish.”

Since the invention of the first tethered ROV in 1953—a contraption named Poodle—scientists have increasingly relied on ROVs to help them reach parts of the ocean that are too deep or dangerous for scuba divers. ROVs can go to depths that divers can’t reach, spend a virtually unlimited amount of time there, and bring back specimens, both living and not, from their trips.

While ROVs have been a boon for science, most models are large and expensive. The ROVs used by scientific organizations, such as the Monterey Bay Aquarium Research Institute (MBARI), the Woods Hole Oceanographic Institution, the Schmidt Ocean Institute, and OceanX, can weigh nearly as much as a rhinoceros and cost millions of dollars. Such large, high-end ROVs also require a crane to deploy and must be tethered to a mother ship while in the water.

In contrast, robotic fish are battery-powered bots that typically weigh only a few kilograms and cost a couple thousand dollars. Although some have been designed to resemble real fish, robotic fish typically come in neutral colors and resemble their biological counterparts in shape only. Yet, according to Tsam Lung You, an engineer at the University of Bristol in England who was not involved in the review, even the most unrealistic robot fish are less disruptive to aquatic life than the average ROV.

Unlike most ROVs that use propellers to get around, robotic fish swim like the animals that inspired them. Flexing their tails back and forth, robotic fish glide through the water quietly and don’t seem to disturb the surrounding marine life—an advantage for researchers looking to study underwater organisms in their natural environments.
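The quiet propulsion itself is simple to approximate: most robotic fish drive the tail with a smooth oscillation rather than a spinning propeller. Here is a minimal sketch, with illustrative amplitude and frequency values rather than any particular robot's parameters:

```python
import math

AMPLITUDE_DEG = 25.0  # peak tail deflection from the body's midline
FREQUENCY_HZ = 1.5    # tail beats per second

def tail_angle_deg(t_s: float) -> float:
    """Sinusoidal tail sweep, the basic body-caudal-fin swimming pattern."""
    return AMPLITUDE_DEG * math.sin(2 * math.pi * FREQUENCY_HZ * t_s)

for i in range(5):  # sample the first half second of motion
    t = i * 0.1
    print(f"t={t:.1f}s  tail angle: {tail_angle_deg(t):+6.1f} deg")
```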

Because robotic fish are small and stealthy, scientists may be able to use them to observe sensitive species or venture into the nooks and crannies of coral reefs, lava tubes, and undersea caves. Although robotic fish are highly maneuverable, current models have one big downside: their range is very limited. With no mother ship to supply them with power and limited room to hold batteries, today’s robotic fish can only spend a few hours in the water at a time.

For robotic fish to make modern ROVs obsolete, they’ll need a key piece that’s currently missing: a docking station where they can autonomously recharge their batteries. Cui envisions a future where schools of small robotic fish work together to accomplish big tasks and take turns docking at underwater charging stations powered by a renewable energy source, like wave power.

“Instead of one [ROV], we can use many robotic fish,” Cui says. “This will greatly increase the efficiency of deep-sea operations.”

This potential future relies on the development of autonomous underwater charging stations, but Cui and his colleagues believe these can be built using existing technologies. The potential docking station’s core, he says, would likely be a wireless charging system. Cui says this fishy future could come to fruition in under a decade if the demand is great enough.

Still, getting scientists to trade in their ROVs for schools of robotic fish may be a tough sell, says Paul Clarkson, the director of husbandry operations at the Monterey Bay Aquarium in California.

“For decades, we’ve benefited from using the remotely operated vehicles designed and operated by our research and technology partner, MBARI,” says Clarkson. “Their ROVs are an essential part of our work and research, and the capabilities they provide make them an irreplaceable tool.”

That said, he adds, “with the threats of climate change, habitat destruction, overfishing, and plastic pollution, we need to consider what advantages new innovations may offer in understanding our changing world.”

This article first appeared in Hakai Magazine and is republished here with permission.

Drones offer a glimpse inside Fukushima nuclear reactor 13 years after disaster https://www.popsci.com/environment/fukushima-reactor-drones/ Fri, 22 Mar 2024 18:00:00 +0000
Aerial view of Fukushima nuclear reactor meltdown
In this satellite view, the Fukushima Dai-ichi Nuclear Power plant is seen after a massive earthquake and subsequent tsunami on March 14, 2011 in Futaba, Japan. DigitalGlobe via Getty Images

The tiny robots could only explore a small portion of No. 1 reactor’s main structural support, showing the cleanup challenges ahead.

A team of miniature drones recently entered the radioactive ruins of one of Fukushima’s nuclear reactors in an attempt to help Japanese officials continue planning their decades-long cleanup effort. But if the images released earlier this week didn’t fully underscore just how much work is still needed, new footage from the tiny robots’ excursion certainly highlights the many challenges ahead.

On Thursday, Tokyo Electric Power Company Holdings (TEPCO), the Japanese utility organization that oversees the Fukushima Daiichi plant reclamation project, revealed three minutes of video recorded by a bread-slice-sized flying drone alongside a snake-like bot that provided its light. Obtained during TEPCO’s two-day probe, the new clip offers viewers some of the best looks yet at what remains of portions of the Fukushima Daiichi nuclear facility—specifically, the main structural support in its No. 1 reactor’s primary containment vessel.

The Fukushima plant suffered a catastrophic meltdown on March 11, 2011, after a magnitude 9.0 earthquake off the Japanese coast produced a 130-foot-tall tsunami that subsequently bore down on the region. Of the three reactors damaged during the disaster, No. 1 is considered the most severely impacted. A total of 880 tons of molten radioactive fuel debris is believed to remain within those reactors, with No. 1 believed to contain the largest amount. An estimated 160,000 people were evacuated from the surrounding areas, with only limited returns allowed the following year. Around 20,000 people are believed to have been killed during the tsunami itself.

Last week’s drone-gathered images and video show the remains of the No. 1 reactor’s control-rod drive mechanism, alongside other equipment attached to the core, which indicate the parts were dislodged during the meltdown. According to NHK World, “agglomerated or icicle-shaped objects” seen in certain areas could be nuclear fuel debris composed of “a mixture of molten nuclear fuel and surrounding devices.”

[Related: Japan begins releasing treated Fukushima waste water into the Pacific Ocean.]

Experts say only a fraction of the damage could be accessed by the drones due to logistical difficulties, and that the robots couldn’t reach the core bottom because of poor visibility. Similarly, radiation levels could not be ascertained during this mission, since the drones did not include instruments such as dosimeters so as to remain light enough to maneuver through the plant.

TEPCO now plans to analyze the drone data to better establish a plan of action to collect and remove the radioactive debris within Fukushima. In August 2023, officials began a multiphase project to release treated radioactive wastewater from the plant into the Pacific Ocean. While deemed safe by multiple agencies and watchdogs, the ongoing endeavor has received strong pushback from neighboring countries, including China.

The Japanese government and TEPCO have previously estimated cleanup will take 30 to 40 years, although critics believe that timeline is extremely optimistic.

The post Drones offer a glimpse inside Fukushima nuclear reactor 13 years after disaster appeared first on Popular Science.


‘Parkour’ robot dog can leap, jump, and crawl its way through complex obstacle courses https://www.popsci.com/technology/parkour-robot/ Thu, 14 Mar 2024 17:30:00 +0000 https://www.popsci.com/?p=606569
parkour robot
Researchers used a model trained on human parkour runs to teach this 100-pound robot to run through an obstacle course. YouTube

Researchers studied some very agile 'freerunners' to make this machine.

The post ‘Parkour’ robot dog can leap, jump, and crawl its way through complex obstacle courses appeared first on Popular Science.


Four-legged, dog-inspired quadruped robots have already proven capable of a variety of tasks, from remotely monitoring sports stadiums and guiding blind persons to inspecting potentially hazardous research areas. These robots are generally more agile than their hulking bipedal counterparts, but they have mostly failed to match the fluid grace and athleticism of their furry canine inspirations. Now, a new wall-scaling robot is pushing the boundary of what these quadrupeds are capable of, and it’s doing so with a bit of flair. 

Researchers from ETH Zurich are trying to close the mobility gap between robots and animals with a new highly mobile robot capable of running, jumping, and crawling its way through obstacle courses.


The researchers, who published their findings in Science Robotics this week, set out to teach ANYmal, a 100-pound quadruped robot made by the firm ANYbotics, how to mimic human “freerunners” who practice the sport of parkour.

In a nutshell, parkour centers on getting from one point to another in the fastest way possible, often by swiftly crawling and leaping through obstacles along the way. Parkour, which is performed on obstacle courses or even in dense urban areas, requires a combination of athleticism and rapid decision making. ANYmal was up to the task. The newly improved robot completed a basic parkour course moving at a clip of six feet per second.

How did ANYmal learn parkour? 

A video of the newly trained ANYmal in action shared by ETH Zurich shows the beefy red robot clambering up a small wooden staircase before leaping over a small gap to land on another platform. Without breaking its stride, the robot charges forward, then dives down to scramble underneath an obstacle in a motion resembling a scurrying insect. ANYmal then quickly pushes itself back up so it can climb vertically up another crate slightly taller than its body. The video shows the robot completing the course even when the obstacles are scrambled in different orders.

ANYmal uses onboard laser sensors to perceive its environment and create maps it can use to autonomously plan and execute a travel path. Four lightweight carbon legs and 12 identical motors propel it towards its target. The ETH researchers set out to improve the robot’s movement by using a neural network composed of three separate modules, each devoted to locomotion, perception, and navigation, respectively.

In their paper, the researchers say they developed the navigation element of ANYmal to understand the robot’s capabilities in walking, jumping, and crouching. Armed with that context, ANYmal can automatically adjust its behavior depending on the type of obstacle presented. The end result: a robot that can quickly identify and react to a range of obstacles and traverse them.
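For readers curious how such a three-module pipeline fits together, here is a minimal sketch. Every class, method, and threshold below is an illustrative assumption; none of it comes from the ANYbotics software or the paper itself.

```python
import numpy as np

# Hypothetical three-module pipeline: perception condenses raw sensor data,
# navigation picks a skill and a target, locomotion turns both into joint
# commands. Names and shapes are invented for illustration.

class PerceptionModule:
    def encode(self, lidar_points: np.ndarray) -> np.ndarray:
        # Stand-in for a learned encoder mapping a point cloud to a
        # fixed-size terrain embedding.
        return lidar_points.mean(axis=0)

class NavigationModule:
    def plan(self, terrain: np.ndarray, goal: np.ndarray):
        # Pick a skill the locomotion controller is known to handle,
        # here via a crude obstacle-height check.
        skill = "jump" if terrain[2] > 0.3 else "walk"
        return skill, goal

class LocomotionModule:
    def act(self, skill: str, target: np.ndarray) -> np.ndarray:
        # Map the commanded skill and target to 12 joint position targets,
        # one per motor on the quadruped. Zeros are placeholders.
        return np.zeros(12)

# One control tick: sensors -> terrain map -> skill choice -> joint commands.
perception, navigation, locomotion = PerceptionModule(), NavigationModule(), LocomotionModule()
terrain = perception.encode(np.random.rand(1024, 3))  # fake LiDAR scan
skill, target = navigation.plan(terrain, goal=np.array([2.0, 0.0, 0.0]))
joint_targets = locomotion.act(skill, target)
print(skill, joint_targets.shape)
```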

ETH doctoral student Nikita Rudin, one of the researchers working on ANYmal’s improvements, is reportedly a parkour enthusiast himself and tapped into that experience during the research.

“Before the project started, several of my researcher colleagues thought that legged robots had already reached the limits of their development potential, but I had a different opinion,” Rudin recently told Science Daily. “In fact, I was sure that a lot more could be done with the mechanics of legged robots.”

Researchers trained the model on examples of human freerunners completing parkour courses. ANYmal learned its new skills through basic trial and error in repeated simulated environments. Eventually, the robot was deployed on a physical obstacle course, where it used the lessons learned in simulation to complete the course.
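As a rough illustration of that trial-and-error process, the toy sketch below learns which skill clears which obstacle through rewarded guesses. The obstacles, skills, and reward values are invented, and the setup is vastly simpler than the physics simulations the team actually used.

```python
import random

# A toy stand-in for the simulated course: the "robot" picks a skill per
# obstacle and is rewarded only when that skill clears it.
OBSTACLES = ["gap", "box", "tunnel"]
CORRECT = {"gap": "jump", "box": "climb", "tunnel": "crouch"}
SKILLS = ["jump", "climb", "crouch"]

def train(episodes=5000, epsilon=0.1, lr=0.5):
    # Tabular action values Q[obstacle][skill], learned by trial and error.
    q = {o: {s: 0.0 for s in SKILLS} for o in OBSTACLES}
    for _ in range(episodes):
        obstacle = random.choice(OBSTACLES)
        if random.random() < epsilon:            # occasionally explore
            skill = random.choice(SKILLS)
        else:                                    # otherwise exploit best guess
            skill = max(q[obstacle], key=q[obstacle].get)
        reward = 1.0 if skill == CORRECT[obstacle] else -0.1
        q[obstacle][skill] += lr * (reward - q[obstacle][skill])
    return q

policy = train()
print({o: max(s, key=s.get) for o, s in policy.items()})  # learned skill per obstacle
```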

“By aiming to match the agility of free runners, we can better understand the limitations of each component in the pipeline from perception to actuation, circumvent those limits, and generally increase the capabilities of our robots, which in return paves the road for many new applications, such as search and rescue in collapsed buildings or complex natural terrains,” the researchers said in a statement.

Parkour inspiration could improve robots meant for search and rescue and space exploration

Unlike human freerunning, ANYmal’s new skills are intended for more than just looking cool. Looking to the future, researchers believe similar improvements could be applied to search and rescue robots to help them vault over obstacles or scurry into difficult-to-reach areas during missions. One day, the researchers note, these same types of advancements could even aid space exploration robots in traversing the harsh, rocky surfaces of the moon and other planets. The added agility could also benefit quadruped robots investigating hazardous areas of the Large Hadron Collider in Geneva, Switzerland.

Those use cases are, for now, still largely hypothetical. In the meantime, ANYmal will join the likes of Boston Dynamics’ Spot and Atlas robots on the ever-growing list of metal machines uncannily performing athletic feats once reserved for living beings.

The post ‘Parkour’ robot dog can leap, jump, and crawl its way through complex obstacle courses appeared first on Popular Science.


A robot tried to give me ‘the world’s most advanced massage’ https://www.popsci.com/technology/robot-massage-aescape/ Wed, 13 Mar 2024 13:00:00 +0000 https://www.popsci.com/?p=606313
woman lays on blue table with two robot arms touching her back
Do I look relaxed?. Annie Colbert/PopSci

I fear robots, but love a good back rub.

The post A robot tried to give me ‘the world’s most advanced massage’ appeared first on Popular Science.


The vibe of Aescape’s massage rooms is familiar: soft lighting, inviting earth tones, a fresh smell, and stylish chairs with fluffy pillows that you only use to daintily pile your clothes on top of. Like a typical spa, the room also has a massage table, but instead of the classic flat surface covered in a white sheet, it’s a large, pill-shaped device. And then there’s the pair of large robotic arms.

Aescape is a New York City-based company that has created an “AI-enhanced” massage table. As a person who is highly skeptical of robots but loves massages, I volunteered to find out if a robot massage can live up to the incredibly satisfying experience of a massage by a human.

Aescape calls itself “The World’s Most Advanced Massage,” but will it alleviate the stubborn back knots of a middle-aged mom who walks everywhere but refuses to stretch or properly hydrate and spends all day hunched over a laptop? On a Monday afternoon in February, my aches and pains and I visited Aescape’s offices and test labs for an appointment with the massage table to find out.

room with two chairs, wood on walls, massage table with robot arms
Futuristic spa vibes. Image: Annie Colbert/PopSci

Now at this point you may be wondering but scared to ask: Are you naked for the massage? The answer is no. Instead of stripping down and awkwardly positioning yourself under a sheet, Aescape provides “Aerwar apparel” for rent or purchase. The tight-fitting dark gray pants and matching shirt feel like a combination of yoga gear and a wetsuit. After squeezing into the stretchy ‘fit and pulling my hair back, I took a moment to appreciate how I looked like a humanoid sea lion.

two photos: one of a woman in tight workout clothes and a massage table with white robot arms
L: Mock turtlenecked and ready to go. R: The Aescape massage table. Images: Annie Colbert/PopSci

Now it was massage time. Aescape CEO Eric Litman helped walk me through the setup process, but the machines are designed to be self-service. Litman has spent seven years developing Aescape with $80 million in funding. He says that the idea for a fully automated, AI-driven massage table “was born out of a personal pain point” of being unable to get focused, regular treatment for chronic pain.

The entire massage happens while you lay on your stomach. Your face pokes through a hole that’s conveniently large enough to accommodate eyeglasses. I would normally remove my glasses for a massage, but the Aescape table includes a tablet about the size of a laptop screen under the face hole that helps you control all aspects of your massage. Without my glasses, I might end up picking a weak calf massage and bad nature-sounds music–nightmare scenario. I worried about my face feeling squished or ending up with the dreaded bagel-shaped indent across my forehead, cheeks, and chin, but the wide space for my upper face made me forget I was even wearing my glasses.

a woman with glasses in a hole
My face in a refreshingly comfortable hole. Image: Annie Colbert/PopSci

Before the robot hands get to work detangling my mess of muscles, I use the touchscreen to select a massage type (I go with a 30-minute “Total Back & Glutes Relief”), musical preference (chill surf music), and pressure (as strong as humanly, errr, robotly, possible). The table scans my body, collecting 1.1 million 3D data points to map out my treatment. Aescape also says that the table learns more about your body and preferences with each massage. “We use AI to personalize your massage-going experience. Our data, trained by [massage] therapists, learns from your feedback over time, adjusting treatments to your preferences,” Litman tells PopSci.
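Aescape hasn’t published how that feedback loop works, but conceptually it could resemble a simple running preference update like this hypothetical sketch, where regions, scales, and the update rule are all assumptions:

```python
# Toy illustration of "learns from your feedback over time": keep a running
# per-region pressure preference (1-10 scale) and blend in each session's
# adjustments. Regions and numbers are invented for illustration.

def update_preferences(prefs: dict, feedback: dict, alpha: float = 0.3) -> dict:
    # Exponential moving average: recent sessions matter more than old ones.
    merged = dict(prefs)
    for region, pressure in feedback.items():
        old = merged.get(region, 5.0)  # default to medium pressure
        merged[region] = (1 - alpha) * old + alpha * pressure
    return merged

prefs = {"traps": 5.0, "lower_back": 5.0, "calves": 5.0}
prefs = update_preferences(prefs, {"traps": 9.0, "calves": 3.0})
print(prefs)  # traps drift up toward 9, calves down toward 3
```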

One issue I’ve encountered while trying robotic back massagers or massage chairs is a one-size-fits-all approach that ends with rotating massage balls bruising my spine instead of providing any actual relief. By scanning each body, Aescape aims to account for the fact that bodies come in many different shapes and sizes. The scan takes less than a minute, and then it’s time for the robot arms to start digging in.

The massage started with a five-minute full-body acclimation that allowed me to adjust to the feeling of warm robot nubs (yes, they’re heated) pushing on my muscles. I worried the robot “hands” would feel like a frisky WALL-E, but the shape felt more like a large, firm human palm. And again, it was warm–like the underside of an overworked laptop. The touchscreen under my face laid out exactly what to expect for the next 30 minutes, showing where I was in the massage and what was coming next. I found this incredibly comforting as a person who spends half her massages trying to figure out the time instead of relaxing. The screen also shows a live view of exactly which muscles are being worked on, avoiding the surprise of “oh, I guess we’re doing legs now?”

two images: an image of a body and a tablet
L: Butt massage, Image: Annie Colbert/PopSci. R: The Aescape touchscreen. Image: Aescape

Throughout the 30-minute massage, I played with the pressure settings–softer on the legs, harder on the back–and found the minute-by-minute customizations helpful for getting exactly what I wanted. You also have the option to “move on” or “focus” on different body parts as the massage progresses. This is a huge win for people like me who are shy about asking a human masseuse to switch it up for fear of an awkward interaction. 

I thought the screen in my face might distract from the relaxation element of a typical massage but I found it easy to close my eyes and let the robot do its thing. 


OK, but how did it feel? Honestly, great. My expectation was that it would be pleasant, probably not tear off a limb (again, fear of robots), and feel moderately nice. But within the first pass of the robot arms on my extremely tight traps (the vaguely trapezoid-shaped muscles that sit between the shoulders and the neck), I felt pleasantly surprised. On the subway ride home, my body felt more relaxed and the next day I still felt a looseness in my back despite returning to child-lugging and computer-typing duties. 

I wouldn’t say the Aescape massage table is a one-to-one replacement for a traditional massage, because it’s currently limited to a face-down position and doesn’t get all the tiny muscles, but it’s a solid supplement to regular body care. The massage tables can be booked via an app and are slated to be integrated into hotels, spas, corporate offices, and fitness centers throughout 2024. The convenience and customization are major upsides for anyone looking for quick relief.

The Aescape massage table is launching in New York City at 10 Equinox locations this spring with 30-minute massages starting at $60.

The post A robot tried to give me ‘the world’s most advanced massage’ appeared first on Popular Science.


Hat-wearing cyborg jellyfish could one day explore the ocean depths https://www.popsci.com/technology/cyborg-jellyfish-biorobot/ Mon, 11 Mar 2024 16:30:00 +0000 https://www.popsci.com/?p=606077
Concept art of cyborg jellyfish with forebody attachments
An artist's rendering of jellyfish donning Caltech's sensor hat. Credit: Caltech/Rebecca Konte

A cheap pair of accessories may transform some of the Earth’s oldest creatures into high-tech, deep sea researchers.

The post Hat-wearing cyborg jellyfish could one day explore the ocean depths appeared first on Popular Science.


To better understand the ocean’s overall health, researchers hope to harness some of evolution’s simplest creatures as tools to assess aquatic ecosystems. All they need is $20 worth of materials, a 3D printer, and some jellyfish hats.

Jellyfish first began bobbing through Earth’s ancient oceans at least half a billion years ago, making them some of the planet’s oldest creatures. In all that time, however, their biology has remained pretty consistent—a bell-shaped, brainless head attached to a mass of tentacles, all of which is composed of around 95 percent water. Unfortunately, that same steady state can’t be said of their habitat, thanks to humanity’s ongoing environmental impacts.

Although it’s notoriously dangerous, technologically challenging, and expensive for humans to reach the ocean’s deepest regions, jellyfish do it all the time. Knowing this, a team of Caltech researchers, led by aeronautics and mechanical engineering professor John Dabiri, first created a jellyfish-inspired robot to explore the abyss. While the bot’s natural source material is Earth’s most energy-efficient swimmer, the mechanical imitation couldn’t quite match the real thing. Dabiri and colleagues soon realized another option: bringing the robotics to actual jellyfish.


“Since they don’t have a brain or the ability to sense pain, we’ve been able to collaborate with bioethicists to develop this biohybrid robotic application in a way that’s ethically principled,” Dabiri said in a recent profile.

First up was a pacemaker-like implant capable of controlling the animal’s speed. Given its efficiency, a jellyfish with the implant could swim three times as fast as normal while only requiring double the energy. After some additional tinkering, the team then designed a “forebody” that also harmlessly attaches to a jelly’s bell.
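A quick back-of-the-envelope calculation shows why those numbers add up to an efficiency win: double the power draw at triple the speed means less energy spent per meter traveled. The figures below are normalized placeholders, not measurements from the study.

```python
# Energy per unit distance is power divided by speed, so tripling speed
# while only doubling power cuts the per-meter energy cost by a third.
base_power, base_speed = 1.0, 1.0                    # normalized units
implant_power, implant_speed = 2.0 * base_power, 3.0 * base_speed

cost_per_meter_base = base_power / base_speed        # energy / distance
cost_per_meter_implant = implant_power / implant_speed

print(cost_per_meter_implant / cost_per_meter_base)  # 0.67 -> ~33% cheaper per meter
```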

This 3D-printed, hat-like addition not only houses electronics and sensors, but makes its wearer even faster. Its sleek shape is “much like the pointed end of an arrow,” described Simon Anuszczyk, the Caltech graduate student and study lead author who came up with the forebody design. In a specially built, three-story vertical aquarium, the hat-sporting cyborg jellyfish could swim 4.5 times faster than its regular counterparts.

[Related: Even without brains, jellyfish learn from their mistakes.]

By controlling their jellies’ vertical ascent and descent, Dabiri’s team believes the biohybrids could one day help gather deep ocean data previously obtainable only by using extremely costly research vessels and equipment. Although handlers can only control the up-and-down movement of their cyborg animals at the moment, researchers believe additional work could make them fully steerable in any direction. They’ll also need to develop a sensor array capable of withstanding the deep sea’s crushing pressures, but the team is confident they are up to the challenge.

“It’s well known that the ocean is critical for determining our present and future climate on land, and yet, we still know surprisingly little about the ocean, especially away from the surface,” Dabiri said. “Our goal is to finally move that needle by taking an unconventional approach inspired by one of the few animals that already successfully explores the entire ocean.”

The post Hat-wearing cyborg jellyfish could one day explore the ocean depths appeared first on Popular Science.


Can this robot help solve a guide dog shortage? https://www.popsci.com/technology/can-this-robot-help-solve-a-guide-dog-shortage/ Mon, 11 Mar 2024 14:55:43 +0000 https://www.popsci.com/?p=606035
The Glide guiding robot uses a combination of onboard sensors and cameras inspired by autonomous vehicles to detect hazards and lead visually impaired people toward their destinations.
The Glide guiding robot uses a combination of onboard sensors and cameras inspired by autonomous vehicles to detect hazards and lead visually impaired people toward their destinations. Courtesy Glidance

Millions of partially sighted and blind persons don't have guide dogs. Autonomous machines may one day close the accessibility gap.

The post Can this robot help solve a guide dog shortage? appeared first on Popular Science.


Glidance CEO Amos Miller is one of an estimated 253 million people worldwide who live with a moderate to severe vision impairment. The overwhelming majority of those people currently navigate the world without access to highly trained guide dogs or difficult-to-master walking canes. Miller, who was diagnosed with retinitis pigmentosa when he was five and has since lost most of his sight, believes his company’s autonomous guide robot could offer a solution to that mammoth accessibility gap. Miller says the five-pound “Glide” can spot obstacles and safely navigate blind people to their destinations, all at a fraction of the cost it takes to train and maintain highly specialized guide dogs.

“In order to make a material difference to somebody who’s not very confident getting out and about, you need something that is physically connected to the ground and guides you,” Miller told PopSci. “And that’s what Glide is.”

“…we have these autonomous vehicles, we have self-guided drones, we land these drones on Mars and I’m sitting there at the airport and waiting for somebody to come to guide me to my gate. There’s something off here.”

Miller isn’t alone. Researchers across multiple continents are conducting experiments and testing the viability of robots, some of which happen to look like dogs themselves, as aides for the blind. If successful, these devices could bring an added layer of accessibility to the large population of partially sighted and blind persons who’ve largely been left behind as autonomous technology has advanced around them. Glide builds off years of advances in robotics research and may represent one of the most promising new tools for aiding accessibility. Still, researchers say the technology isn’t poised to replace guide dogs anytime soon and would more likely fill accessibility gaps for people who are unable to own a dog or uninterested in owning one.

How does the Glide robot work?

Though researchers have explored the idea of guide robots for at least five years, companies are just now on the verge of bringing products to market. Glidance, which was founded last year, is developing a robot ally called Glide, which it describes as the world’s “first self-guided primary mobility aid.” It resembles a small vacuum cleaner with a handle and two small wheels. The 9-by-9-inch robot uses “passive kinetic guidance” in place of a motor, so users simply push it forward to start it. Glidance says the device can be charged using a standard electrical outlet and can last through up to eight hours of “active use.”

“It’s as easy and familiar as holding onto someone’s hand.”

Glidance says the Glide uses an array of onboard sensors, cameras, and AI to analyze the immediate surroundings and guide users away from hazards. A haptic sensor in the handle signals users when they should slow down. Miller says the device will work with navigation apps, allowing users to simply input a destination and have the device guide them toward it, not unlike an autonomous vehicle.
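Glidance hasn’t detailed its software, but the sense-assess-cue loop the company describes might conceptually resemble this simplified sketch, in which every name, threshold, and unit is an assumption:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sense -> assess -> cue loop: sensors flag the most urgent
# hazard, and the handle's haptics tell the user how to react.

@dataclass
class Hazard:
    distance_m: float   # meters to the nearest obstacle in the walking path
    bearing_deg: float  # negative = obstacle left of path, positive = right

def haptic_cue(hazard: Optional[Hazard]) -> str:
    # Map the detected hazard to a handle vibration pattern.
    if hazard is None:
        return "none"            # clear path: no vibration
    if hazard.distance_m < 1.0:
        return "slow_down"       # close obstacle: strong pulse to brake
    # Farther obstacles: steer away from the side the hazard is on.
    return "steer_left" if hazard.bearing_deg > 0 else "steer_right"

print(haptic_cue(Hazard(distance_m=0.8, bearing_deg=5.0)))    # slow_down
print(haptic_cue(Hazard(distance_m=2.5, bearing_deg=-12.0)))  # steer_right
```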


A video from this year’s Consumer Electronics Show in Las Vegas shows legendary blind musician Stevie Wonder using Glide to walk through a room.

Miller, who lost most of his eyesight by his early twenties, told PopSci he came up with the idea for the Glide device partly in response to mobility challenges he has faced in his own life, especially during travel. Despite rapid advances in robotics and autonomous technology in other sectors like automobiles, the technology appeared to have left blind people behind.

“There came this moment where you kind of say, okay, we have these autonomous vehicles, we have self-guided drones, we land these drones on Mars and I’m sitting there at the airport and waiting for somebody to come to guide me to my gate,” Miller said. “There’s something off here.” 

In the US, according to the guide dog breeder Guiding Eyes, only 2% of partially sighted and blind persons had guide dogs in 2017. Though these dogs are often provided for free, training and breeding them can cost upwards of $50,000 per dog, and only around half of those puppies pass their training. Working guide dogs are also typically retired and replaced after five or six years. All of those elements combined make the demand for these dogs far greater than the supply. Guide robots, in theory, could fill in part of that gap. 

Though past research has explored using Boston Dynamics’ Spot and other four-legged, dog-like robots as guides, Miller believes those approaches won’t realistically work at scale. Aside from their prohibitively high upfront costs, those large, bulky machines are also ill-suited for mobile partially sighted and blind persons who need to move around the world. Unlike a large quadruped, Miller says users can easily collapse Glide’s handle and take it with them on a bus or airplane. It’s also less likely to attract unwanted attention than a four-legged robot imbued with sci-fi horror stereotypes.

“People ask me, why doesn’t it have four legs and climb up steps and the reason is because it’s going to cost you $75,000 if it did,” Miller said. “If we are going to have impact at scale, we have to play the game, we have to leverage the incredible technology coming out of the autonomous vehicle industry but build a super simple device [that’s] powerful and affordable.”

Miller described the experience of using Glide as similar to holding onto another person’s arm for guidance. When the device ships, Miller says, users will be able to operate it in either a spontaneous or a directive mode. The former is intended for users who know roughly where they want to go but use Glide to keep them on a straight path and avoid potentially hazardous obstacles. The directive mode, by contrast, will let users enter a specific destination and have Glide guide them there through haptic feedback and audio. Miller gave the example of a traveler connecting Glide to an airport app, which then instructs the device how to get to a gate. Users can also pre-program their own routes, letting Glide remember the directions to a local coffee shop or other frequent destinations. (Glide will utilize voice commands and the accessibility options of existing apps.)

“You just walk and glide autonomously, Glide steers the way, avoids obstacles and gets you to your destination,” Miller said. “It’s as easy and familiar as holding onto someone’s hand.”

It’s unclear exactly when partially sighted and blind persons will get that hand-holding sensation, though. Glidance is taking preorders for the device and expects a beta version to ship later this year. Miller says the device will likely have an unspecified upfront cost as well as a subscription, with the subscription price varying depending on the level of features users want. Miller said Glidance plans to use a “Tesla model,” where Glide ships with a baseline set of features and then improves via over-the-air updates over time.

Researchers are testing various types of guide robots 

Previous research into guidance robots has produced mixed results. One 2022 study conducted by a University of Copenhagen researcher ran an experiment in which a person walked through a path obstructed by a pallet, first with their guide dog, and again while holding onto a Boston Dynamics Spot quadruped equipped with “sensing and semiotic capabilities.” The guide dog noticed the pallet and gently nudged its owner to the side, avoiding the obstacle altogether. Spot, on the other hand, also noticed the pallet but instructed the blind person to walk up and over it. Both the robot dog and the human stepped onto the pallet at an awkward, misaligned, and potentially dangerous angle.

More recently, researchers from Binghamton University in New York used a reinforcement learning AI system to quickly train a four-legged robot to successfully guide a blind person around obstacles in a lab hallway. Computer science associate professor Shiqi Zhang, who was involved with the project, told PopSci that research into assistance robots has accelerated in recent years thanks to rapid advances in artificial intelligence and greater access to once wildly expensive robots. Still, Zhang agrees the hulking size of those robots likely makes them ill-suited to assist blind people in everyday life.

“To serve the people with visual impairment, we need smaller size for charging, lower cost, and pretty easy to replace components,” Zhang said. “I think the goal is different and [Boston Dynamics’] Spot robots are not developed for this purpose. So there is a gap.”

Are robots better than guide dogs?

Miller stressed that he isn’t looking to replace guide dogs, which he described as the current “gold standard” for getting around. Instead, he hopes the Glide device can cater to the vastly underserved 98% of partially sighted and blind persons who either can’t access a dog or don’t want the responsibility of owning one. But Miller said there are certain aspects of a robot guide that could make it even more useful than a well-trained dog. Though guide dogs can learn some routes, that pales in comparison to the on-demand navigation tools available in a robot. And unlike guide dog owners, robot owners don’t have to spend weeks training them or navigating around laborious travel restrictions. Robots also, crucially, don’t shed or defecate.

“You get the dog without the responsibility of the dog,” Miller said. 

“I’m not trying to compete with the dog. I’m trying to provide a solution for people who don’t use them.”

Zhang says robots also potentially have a leg up on dogs thanks to their upgradability. Researchers and product engineers can feasibly patch bugs and add new software quickly via updates. The robots could also be upgraded over time with new sensors or other hardware. In other words, robots, unlike old dogs, can learn new tricks.

But that doesn’t mean it’s time to write off dogs altogether. For now, guide dogs are much more adept at navigating their physical environment than their robot counterparts. Dogs can adjust to slippery terrain and are highly adept at spotting potentially dangerous situations. And then there’s the stair issue. Dogs have no problem climbing to the second floor of an apartment building, but the same can’t be said for robots.

“Most dogs can do that [climbing stairs] without much difficulty, but it’s actually a pretty challenging problem for quadruped robots,” Zhang said. “Robots are able to do that, but it really requires a lot of computational resources, fancy sensors and real-time systems to get there.”

There’s also the intangible element of companionship dogs can provide, Zhang said. Some people simply like their dog and find comfort in their presence. While Miller said his device isn’t necessarily trying to compete with dogs on compassion, he claims he has already seen people talk to the robot as if it were an “embodied agent.” Unlike a dog, the robot can also talk back.

“Once we have voice interaction done I do think that there will be a sense of relationship,” Miller said.

But guide robots have skeptics as well. A spokesperson for the National Federation of the Blind told PopSci that many past inventions have tried to take the place of tried-and-true canes and guide dogs but failed. Jason McKee, an instructor at Guide Dogs Queensland in Australia, recently told the ABC that robotic dogs were impressive machinery and could be useful for traveling internationally but still couldn’t compare to a well-trained canine.

“Nothing will beat the companionship, the intuition of the dog, to be able to forward-think, see ahead and then make the movements to guide someone safely,” McKee said.

Miller, on the other hand, believes there’s a future where guide dogs and robots exist in harmony, complementing each other rather than competing. 

“I want to be very clear that my goal is to empower people who are not using dogs or canes today. I’m not trying to compete with the dog,” Miller said. “I’m trying to provide a solution for people who don’t use them.”

Update 03/12/24 2:44 PM: The weight of Glide has been updated. A voice command feature mention has been added.

The post Can this robot help solve a guide dog shortage? appeared first on Popular Science.


Oh good, the humanoid robots are running even faster now https://www.popsci.com/technology/fastest-humanoid-robot/ Tue, 05 Mar 2024 17:05:32 +0000 https://www.popsci.com/?p=605431
H1 V3.0 can also handle stairs, tight turns, and getting kicked by its designers.
H1 V3.0 can also handle stairs, tight turns, and getting kicked by its designers. YouTube

Shanghai's Unitree Robotics says its H1 robot trots at 7.38 mph—nearly two miles per hour faster than Boston Dynamics' Atlas.

The post Oh good, the humanoid robots are running even faster now appeared first on Popular Science.


Step aside, Atlas: A new bipedal bot has reportedly laid claim to the title of world’s fastest full-sized humanoid machine. According to Shanghai-based startup Unitree Robotics, its H1 V3.0 now clocks in at 7.38 mph while gingerly walking along a flat surface. With the previous Guinness World Record set at 5.59 mph by the Boston Dynamics robot, H1’s new self-reported achievement could be a pretty massive improvement. If that weren’t enough, it pulled off its new feat while apparently wearing pants. (Or, more specifically, chaps.)

[Related: OpenAI wants to make a walking, talking humanoid robot smarter.]

In a new video, Unitree’s H1 can also be seen trotting across a park courtyard, lifting and transporting a small crate, jumping, and ascending and descending stairs. It can even perform a choreographed, TikTok-esque dance troupe routine—basically an industry requirement at this point.


At 71 inches tall, H1 is about as tall as an average human, although considerably lighter at just 100 pounds. According to Unitree, the robot utilizes a 3D LiDAR sensor alongside a depth camera to supply 360-degree visual information. One other interesting feature of H1’s overall design is its hollow torso and limbs, which house all of the bot’s electrical routing. Although it doesn’t currently include articulated hands (they sort of look like wiffle balls at the moment), Unitree is reportedly developing the appendages to integrate into future versions.

Alongside its quadrupedal B1 robot, Unitree aims to take on existing competitors like Boston Dynamics by offering potentially more affordable products. H1’s current estimated price tag is somewhere between $90,000 and $150,000—that’s likely more than most people are willing to shell out for a robot (even a world record-holder), but with Atlas rumored to cost $150,000 minimum, it might prove attractive to researchers and other companies.

Major companies like Hyundai and Amazon (not to mention the military) are extremely interested in these two- and four-legged robots—either through integrating them into increasingly automated workplaces, or… strapping guns to them, apparently. In the meantime, startups including OpenAI are aiming to make these machines “smarter” and more responsive to real-time human interactions.

But while H1 is allegedly the fastest humanoid robot for the time being, it still doesn’t appear to be nearly as agile as the parkouring Atlas… or, it should be noted, as egg-friendly as Tesla’s latest Optimus prototype. And although both H1 and Atlas can walk faster than a lot of humans and keep pace with most joggers, their biological inspirations can still break away at a full sprint. For now, at least…

Oh, wait.

The post Oh good, the humanoid robots are running even faster now appeared first on Popular Science.


AI promised humanlike machines–in 1958 https://www.popsci.com/technology/ai-humanoid-robots-history/ Sun, 03 Mar 2024 17:00:00 +0000 https://www.popsci.com/?p=605203
vintage photo of scientists with a robot prototype
Frank Rosenblatt with the Mark I Perceptron, the first artificial neural network computer, unveiled in 1958. National Museum of the U.S. Navy/Flickr

We’ve been here before.

The post AI promised humanlike machines–in 1958 appeared first on Popular Science.


This article was originally featured on The Conversation.

A room-size computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.

The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.

Much like modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction of what comes next the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive long-form text-based responses and associate images with text to produce new images based on prompts. These systems get better and better as they interact more with users.
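To make that update rule concrete, here is a minimal software perceptron, using the classic error-driven weight update rather than Rosenblatt’s analog hardware; the toy dataset is chosen only because it is linearly separable.

```python
import numpy as np

# When the prediction is wrong, nudge the weights toward the correct
# answer -- the software analog of "altering its connections."

def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred          # 0 if correct, +/-1 if wrong
            w += lr * error * xi           # adjust the "connections"
            b += lr * error
    return w, b

# Tiny linearly separable example: learn logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([(1 if x @ w + b > 0 else 0) for x in X])  # [0, 1, 1, 1]
```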

AI boom and bust

In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.

It quickly became apparent that the AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language – a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.

However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new expert systems, AIs designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could make complex inferences from simple stories, the first driverless car was ready to hit the road, and robots that could read and play music were playing for live audiences.

But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because they couldn’t handle novel information.

The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers were tackling the problem of knowledge acquisition with data-driven approaches to machine learning that changed how AI acquired knowledge.

This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, made it easier to collect images, mine for data and distribute datasets for machine learning tasks.

Familiar refrains

Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the activities of LLMs like those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.

Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “GPT-4’s performance is strikingly close to human-level performance.”

But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.

For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm–unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.

Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.

Lessons to heed

In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.

The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.

The post AI promised humanlike machines–in 1958 appeared first on Popular Science.


OpenAI wants to make a walking, talking humanoid robot smarter https://www.popsci.com/technology/openai-wants-to-make-a-walking-talking-humanoid-robot-smarter/ Thu, 29 Feb 2024 13:00:00 +0000 https://www.popsci.com/?p=604845
OpenAI is partnering with Figure to help it develop a general purpose humanoid robot capable of working alongside humans and holding conversations.
OpenAI is partnering with Figure to help it develop a general purpose humanoid robot capable of working alongside humans and holding conversations. Figure

Figure’s founder Brett Adcock says a new partnership with OpenAI could help its robots hold conversations and learn from their mistakes over time.

The post OpenAI wants to make a walking, talking humanoid robot smarter appeared first on Popular Science.


Just a few years ago, attempts at autonomous, human-shaped bipedal robots seemed laughable and far-fetched. Two-legged robots competing in high-profile Pentagon challenges famously stumbled and fell their way through obstacle courses like inebriated pub-crawlers, while Tesla’s highly hyped humanoid bot, years later, turned out to be nothing more than a man dancing in a skin-tight bodysuit.

But despite those gaffes, robotics firms pressed on, and several now believe their walking machines could work alongside human manufacturing workers in only a few short years. Figure, one of the more prominent companies in the humanoid robot space, this week told PopSci it raised $675 million in funding from some of the tech industry’s biggest players, including Microsoft, Nvidia, and Amazon founder Jeff Bezos. The company also announced it has struck a new agreement with generative AI giant OpenAI to “develop next generation AI models for humanoid robots.” The partnership marks one of the most significant examples yet of an AI software company working to integrate its tools into physical robots.

[ Related: BMW plans to put humanoid robots in a South Carolina factory to do… something ]

Figure Founder and CEO Brett Adcock described the partnership as a “huge milestone for robotics.” Eventually, Adcock hopes the partnership with OpenAI will lead to a robot that can work side-by-side with humans, completing tasks and holding a conversation. By working with OpenAI, creators of the world’s most popular large language model, Adcock says Figure will be able to further improve the robot’s “semantic” understanding, which should make it more useful in work scenarios.

“I think it’s getting more clear that [humanoid robotics] is becoming more and more an engineering problem than it is a research problem,” Adcock said. “Actually being able to build a humanoid [robot] and put it into the world of useful work is actually starting to be possible.”

Why is OpenAI working with a humanoid robotics company? 

Founded in 2021, Figure is developing a 5-foot-6-inch, 130-pound bipedal “general purpose” robot it claims can lift objects of around 45 pounds and walk 2.7 miles per hour. Figure believes its robots could one day help address possible labor shortages in manufacturing jobs and generally “enable the automation of difficult, unsafe, or tedious tasks.” Though it’s unclear just how reliably current humanoid robots can actually execute those types of tasks, Figure recently released a video showing its Figure 01 model slowly walking toward a stack of crates, grabbing one with its two hands, and loading it onto a conveyor belt. The company claims the robot performed the entire job autonomously.


Supporters of humanoid-style robots say their bipedal form factor makes them more adept at climbing stairs and navigating uneven or unpredictable ground compared to more typical wheeled or tracked alternatives. The technology underpinning these types of robots has notably come a long way from the embarrassing stumbles of previous years. Speaking with Wired last year, Figure Chief Technology Officer Jerry Pratt said Figure’s robots could complete the Pentagon’s test course in a quarter of the time it took machines to finish it back in 2015, thanks in part to advances in computer vision technology. Other bipedal robots, like Boston Dynamics’ Atlas, can already perform backflips and chuck large objects.

Figure says its new “collaboration agreement” with OpenAI will combine OpenAI’s research with its own experience in robotics hardware and software. If successful, Figure believes the partnership will enhance its robot’s ability to “process and reason from language.” That ability to understand language and act on it could, in theory, allow the robots to better work alongside a human warehouse worker or take verbal commands.

“We see a tremendous advantage of having a large language model or multimodal model on the robot so that we can interact with it and give what we call ‘semantic understanding,’” Adcock said.

Over the long term, Adcock said, people interacting with the Figure robot should be able to speak with it in plain language. The robot can then create a list of tasks and complete them autonomously. The partnership with OpenAI could also help the Figure robot self-correct and learn from its past mistakes, which should lead to quicker improvement at tasks. The Figure robot already possesses the ability to speak, Adcock said, and can use its cameras to describe what it “sees” in front of it. It can also describe what may have happened in a given area over a period of time.
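Figure hasn’t published how that language-to-task pipeline works. The stub below only illustrates the general “plain language in, task list out” idea, with a hard-coded playbook standing in for a real language-model call; the command and steps are invented examples.

```python
# A real system would send the spoken command (plus camera context) to a
# multimodal model and parse its reply into discrete, executable steps.

def decompose(command: str) -> list[str]:
    playbook = {
        "restock the shelf": [
            "locate crate", "grasp crate", "walk to shelf", "place crate",
        ],
    }
    # Fall back to asking for help when the command is unrecognized.
    return playbook.get(command.lower(), ["ask a human for clarification"])

for step in decompose("Restock the shelf"):
    print(step)
```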

“We’ve always planned to come back to robotics and we see a path with Figure to explore what humanoid robots can achieve when powered by highly capable multimodal models,” OpenAI VP of Product and Partnerships Peter Welinder said in a statement sent to PopSci.

OpenAI and Figure aren’t the only ones trying to integrate language models into human-looking robots. Last year, Elon Musk biographer Walter Isaacson wrote an article for Time claiming the Tesla CEO was exploring ways to integrate his company’s improving Optimus humanoid robot with its “Dojo” supercomputer, with the goal of creating so-called artificial general intelligence, a term some researchers use to describe a machine capable of performing above human-level capability at many tasks.

Tech giants are betting big on Figure to win out in a brewing humanoid robot race 

Figure hopes the support from OpenAI, in addition to its massive new wave of funding, could speed up its timeline for making its product available commercially. The $675 million in funding Figure revealed this week was reportedly over $150 million more than the amount it had initially sought, according to Bloomberg. The company says it’s planning to use that capital to scale up its AI training and robotic manufacturing, and to hire new engineers. Figure currently has 80 employees.

But Figure isn’t the only company looking to commercialize humanoid robots. 1X Technologies AS, another humanoid robotics company with significant investment from OpenAI, recently raised $100 million. Oregon-based Agility Robotics, which demonstrated how its robots could perform a variety of simple warehouse tasks autonomously, is reportedly already testing machines in Amazon warehouses. Figure, for its part, recently announced a partnership with BMW to bring the humanoid robot to the carmaker’s Spartanburg, South Carolina manufacturing facility. 

All of these companies are racing to cement their place as an early dominant force in an industry some supporters believe could be a real money-maker in the near-future. In 2022, Goldman Sachs predicted the global humanoid robot market could reach $154 billion by 2035. If that sounds like a lot, it’s a fraction of the $3 trillion financial services company Macquarie estimates the industry could be worth by 2050. That’s roughly the value of Apple today. 

But much still has to happen before any of those lofty visions resemble reality. These still-developing technologies are just now being trialed and tested within major manufacturing facilities. The most impressive of these robots, like the dancing giants produced by Boston Dynamics, remain extremely expensive to manufacture. It’s also still unclear whether or not these robots can, or ever will, be able to respond to complex tasks with the same degree of flexibility as a human worker. 

Generally, it’s still unclear what exact problems these robots are best suited to solve. Both Elon Musk and Figure have said their machines could complete assignments too dangerous or unappealing for humans, though those exact use cases haven’t been articulated clearly. BMW, for example, previously told PopSci it was still “investigating concepts” when asked how it plans to deploy Figure’s robots. Adcock went a step further, suggesting the Figure robot could be used to move sheet metal or perform other body shop tasks. Adcock said Figure has five primary use cases for the robot in the facility in mind that it has not yet publicly announced.

The issue of what to do with these robots when they are made isn’t unique to Figure. In an interview with PopSci, Carnegie Mellon Department of Mechanical Engineering Associate Professor Ding Zhao called that issue of use-cases the “billion-dollar question.” 

“Generally speaking, we are still exploring the capabilities of humanoid robots, how effectively we can collect data and train them, and how to ensure their safety when they interact with the physical world.” 

Zhao went on to say companies building robots intended to work alongside humans will also have to invest heavily in safety, which he argued could match or even exceed development costs.

The robots themselves need to improve as well, especially in real world work environments that are less predictable and more “messy” than typical robot training facilities. Adcock says the robot’s speed at tasks and ability to handle larger and more diverse types of payloads will also need to increase. But all of those challenges, he argued, can be improved through powerful AI models like the type OpenAI is building. 

“We think we can solve a lot of this with AI systems,” Adcock said. “We really believe here that the future of general purpose robots is through learning, through AI learning.”

The post OpenAI wants to make a walking, talking humanoid robot smarter appeared first on Popular Science.


Researchers taught a robot dog to open a door with its leg https://www.popsci.com/technology/robot-dog-opens-door-with-leg/ Fri, 23 Feb 2024 20:00:00 +0000 https://www.popsci.com/?p=604111
Researchers taught this dog-inspired robot to open doors with its leg using a reinforcement learning model.
Researchers taught this dog-inspired robot to open doors with its leg using a reinforcement learning model. YouTube

The robot can bust open doors, carry a backpack, and even collect rock samples.

The post Researchers taught a robot dog to open a door with its leg appeared first on Popular Science.


Four-legged, dog-inspired robots have grown in popularity among scientists and first responders in recent years thanks to their unique ability to quickly and safely maneuver through areas hazardous or inaccessible to humans. Some of those robots, like Boston Dynamics’ Spot, can use large claws and other attachments to help them interact with the world around them. But those additional limbs aren’t always ideal, since they add weight and take up extra space, both of which could limit the robo-dog’s effectiveness in tight corridors.


Researchers at ETH Zurich’s Robotic Systems Lab in Switzerland tried to solve that dilemma by training their own robot dog to use one of its four limbs to complete tasks like opening a door and moving objects, while simultaneously relying on the other three limbs to walk and maintain balance. In a recently released paper documenting their findings, the researchers say this novel use of the robot’s limb could one day aid space exploration and other scenarios where weight and mechanical real estate are at a premium.

How did the robot dog use its leg? 

Researchers used a reinforcement learning model to teach the robot dog, an ANYmal model made by the firm ANYbotics, to complete a series of tasks in which it had to manipulate its environment. The model was given positive reinforcement when the robot placed its front right limb into the desired location. On the flip side, the model received negative reinforcement when the robot used jerky, potentially unsafe movements. From there, the robot learned to use its remaining three legs to balance and move around. Researchers were able to move the robot around using a joystick on a remote controller.
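The paper’s exact reward terms aren’t reproduced here, but reward shaping of this kind typically combines a tracking bonus with a smoothness penalty, as in this hypothetical sketch; the function name, terms, and coefficients are illustrative guesses, not the paper’s values.

```python
import numpy as np

# Positive reward for getting the foot to the commanded target, plus a
# penalty on sudden joint-velocity changes to discourage jerky motion.

def pedipulation_reward(foot_pos, target_pos, joint_vel, prev_joint_vel):
    # Reward grows toward 1.0 as the front-right foot approaches the target.
    tracking = np.exp(-np.linalg.norm(foot_pos - target_pos))
    # Penalize large accelerations, discouraging jerky, unsafe movements.
    smoothness = -0.01 * np.sum((joint_vel - prev_joint_vel) ** 2)
    return tracking + smoothness

r = pedipulation_reward(
    foot_pos=np.array([0.45, 0.10, 0.60]),
    target_pos=np.array([0.50, 0.10, 0.62]),
    joint_vel=np.zeros(12),
    prev_joint_vel=np.zeros(12),
)
print(round(r, 3))  # close to 1.0 when on target and moving smoothly
```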


Photos of the experiment show the robot raising its front limb and placing it beside a door handle before shifting its weight to open the door. Ironically, the motion isn’t dissimilar to a furry living dog lifting its leg to relieve itself on a fire hydrant. Elsewhere, the robot can be seen wrapping a backpack strap around its limb and then lifting the bag up and over into a plastic container. Researchers also experimented with attaching a claw-like gripper to the end of the leg, which allowed the robot to successfully grab and collect rock samples. The robot could also use its limb to move small obstacles out of its way and press otherwise difficult-to-reach buttons.

“Our work shows that numerous manipulation tasks can be solved by only doing pedipulation with quadrupedal robots,” the researchers wrote. “This insight will be relevant for future works on the design and control of legged mobile manipulators.”

Researchers trained the robot on irregular terrain to ensure it could still maintain balance even when faced with the less-than-ideal scenarios it may come across in the real world. To test the robot’s balance, the researchers placed it on top of a slick whiteboard with barely any friction and had it try to complete tasks. It slipped but didn’t fall.

Robots photo

How could this three-legged robo-dog be useful?

Though the three-legged approach wasn’t necessarily as effective as other models with attachable claws, the researchers say its real strength is its simplicity. By forfeiting additional claws or tools, their robot avoids adding unnecessary mechanical complexity. This approach also cuts down on weight and could reduce energy consumption, both of which could prove particularly useful in space exploration or remote search-and-rescue missions.

At present, quadruped robots’ general inability to manipulate their environment means they are relegated to inspection and surveillance tasks. The ETH Zurich researchers’ findings, however, hint toward a future where these robots can use AI models to learn about the world around them, use their limbs to interact with objects, and complete more complicated tasks.

The post Researchers taught a robot dog to open a door with its leg appeared first on Popular Science.

]]>
First remote, zero-gravity surgery performed on the ISS from Earth (on rubber) https://www.popsci.com/technology/remote-surgery-robot-iss/ Fri, 16 Feb 2024 15:00:00 +0000 https://www.popsci.com/?p=602988
Surgeon using spaceMIRA remote surgery tool on ISS
A team of surgeons used rubber bands to represent human tissue aboard the ISS. Credit: Virtual Incision

Surgeons in Nebraska controlled spaceMIRA from 250 miles below the ISS as it cut through simulated human tissue.

The post First remote, zero-gravity surgery performed on the ISS from Earth (on rubber) appeared first on Popular Science.

]]>
Surgeon using spaceMIRA remote surgery tool on ISS
A team of surgeons used rubber bands to represent human tissue aboard the ISS. Credit: Virtual Incision

Researchers successfully completed the first remote, zero-gravity “surgery” procedure aboard the International Space Station. Over the weekend, surgeons based at the University of Nebraska spent two hours testing out a small robotic arm dubbed the Miniaturized In Vivo Robotic Assistant, or spaceMIRA, aboard the ISS as it orbited roughly 250 miles above their heads. 

But don’t worry—no ISS astronauts were in need of desperate medical attention. Instead, the experiment utilized rubber bands to simulate human skin during its proof-of-concept demonstration on Saturday.

[Related: ‘Odie’ is en route for its potentially historic moon landing.]

Injuries are inevitable, but that little fact of life gets complicated when the nearest hospital is a seven-month, 300-million-mile journey away. And even if an incredibly skilled doctor is among the first people to step foot on Mars, they can’t be trained to handle every possible emergency. Certain issues, such as invasive surgeries, will likely require backup help. To mitigate these problems, remotely controlled operations could offer a solution in certain situations.

Designed by Virtual Incision, a startup developing remote-controlled medical tools for the world’s most isolated regions, spaceMIRA weighs only two pounds and takes up about as much shelf space as a toaster oven. One end of its wandlike body is topped with a pair of pronglike arms—a left one to grip, and a right one to cut.

[Related: 5 space robots that could heal human bodies—or even grow new ones ]

Speaking with CNN on Wednesday, Virtual Incision cofounder and chief technology officer Shane Farritor explained spaceMIRA’s engineering could offer Earthbound surgeons the hands and eyes needed to perform “a lot of procedures minimally invasively.”

On February 10, a six-surgeon team in Lincoln, Nebraska, took spaceMIRA (recently arrived aboard the ISS via a SpaceX Falcon 9 rocket) for its inaugural test drive. One arm gripped a mock tissue sample, and the other used scissors to dissect specific portions of the elastic rubber bands.

spaceMIRA prototype on desk
A version of the spaceMIRA (seen above) traveled to the ISS earlier this month. Credit: Virtual Incision

While researchers deemed the experiment a success, surgeons noted the difficulty in accounting for lag time. Communications between Earth and the ISS are delayed about 0.85 seconds—a minor inconvenience in most circumstances, but even milliseconds can mean the difference between life and death during certain medical emergencies. Once on the moon, Artemis astronauts and NASA headquarters will deal with a full 1.3 seconds of delay in each direction when sending and receiving data. On Mars, the first human explorers could face nearly an hour of waiting between firing off a message and receiving a response. Even taking recent laser communications breakthroughs into consideration, patience will remain a virtue for everyone involved in future lunar and Mars expeditions.
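
Those delay figures follow from the distances involved, with one caveat: the raw light time to the ISS is only about a millisecond, so the quoted 0.85 seconds is dominated by relay-satellite hops and network processing rather than physics. A quick back-of-the-envelope sketch of the light-time floor (distances approximate):

```python
# One-way light delay is distance / speed of light; a reply doubles it.
C_KM_PER_S = 299_792  # speed of light in km/s

destinations_km = {
    "ISS (low Earth orbit)": 400,           # ~250 miles up
    "Moon": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Mars (farthest)": 401_000_000,
}

for name, km in destinations_km.items():
    one_way_s = km / C_KM_PER_S
    print(f"{name:25s} one-way {one_way_s:10.3f} s,"
          f" round trip {2 * one_way_s:10.3f} s")
```

The moon row lands right at the 1.3-second figure quoted for Artemis, and the Mars rows bracket the multi-minute (up to the better part of an hour, round trip) waits future explorers will face.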

This means that, for the time being, devices like spaceMIRA are unlikely to help in split-second medical decisions. But for smaller issues (say, stitching up a lunar resident after a tumble), such medical tools could prove invaluable for everyone involved. In the meantime, Virtual Incision’s remote-controlled equipment could still find plenty of uses here on Earth.

The post First remote, zero-gravity surgery performed on the ISS from Earth (on rubber) appeared first on Popular Science.

]]>
This edible, wriggling robot mimics experience of eating moving food https://www.popsci.com/technology/edible-moving-soft-robot-japan/ Thu, 15 Feb 2024 22:00:00 +0000 https://www.popsci.com/?p=603044
Edible soft robot on table
The gelatin gummy component wriggles when inflated with air. Osaka University

In Japanese ‘odorigui’ cuisine, food is still alive. This gyrating robot is not.

The post This edible, wriggling robot mimics experience of eating moving food appeared first on Popular Science.

]]>
Edible soft robot on table
The gelatin gummy component wriggles when inflated with air. Osaka University

Remember the old reality show competition stunt of getting contestants to eat live bugs on primetime television? Consuming “food” while it’s still alive spans numerous cultures around the world. In Japan, for example, odorigui (or “dance-eating”) is a centuries-old tradition often involving squid, octopus, and tiny translucent fish known as ice gobies. Diners pop these still-living creatures into their mouths, as the wriggling is part of the overall meal experience.

To potentially better understand the psychology and emotional responses associated with consuming odorigui dishes, researchers designed their own stand-in—a moving gelatin robo-food combining 3D printing, kitchen cooking, and air pumps. The results appear not only tastier than your average reality show shock snack, but also mark a potential step toward creative culinary and medical applications.

… And yet, judging from this video, it’s undeniably still a little odd.

Engineering photo

Detailed in a study published earlier this month in PLOS One, a team at Japan’s University of Electro-Communications and Osaka University recently devised a pneumatically-driven handheld device to investigate what they dub “human-edible robot interaction,” or HERI. For the “edible” portion of HERI, researchers cooked up a gummy candy-like mixture using a little extra sugar and apple juice for flavor. 

After letting the liquid cure in molds that included two hollow airways, the team then attached the snack to a coffee mug-like holder. The design allowed researchers to inject air through the gelatin in different combinations—alternating airflow between each tube produced a side-to-side wagging motion, while simultaneous inflation offered a (slightly unnerving) pulsating movement.
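
The two motion modes fall directly out of how those two air channels are driven. Here is a minimal sketch of that control logic in Python, with a hypothetical `Valve` class standing in for whatever pump-and-valve hardware the team actually used; the timings are illustrative:

```python
import time

class Valve:
    """Stand-in for a real pneumatic valve driver (hypothetical API)."""
    def __init__(self, name):
        self.name = name
    def set(self, inflated: bool):
        print(f"{self.name}: {'inflate' if inflated else 'vent'}")

left, right = Valve("left channel"), Valve("right channel")

def wag(cycles=3, period_s=0.5):
    """Alternate airflow between the two tubes: side-to-side wagging."""
    for _ in range(cycles):
        left.set(True);  right.set(False); time.sleep(period_s)
        left.set(False); right.set(True);  time.sleep(period_s)

def pulsate(cycles=3, period_s=0.5):
    """Inflate and vent both tubes together: the pulsating movement."""
    for _ in range(cycles):
        left.set(True);  right.set(True);  time.sleep(period_s)
        left.set(False); right.set(False); time.sleep(period_s)

wag()
pulsate()
```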

And then, the taste tests.

The team directed 16 Osaka University students to grab the device holding their designated, writhing soft robot morsel, place the edible portion in their mouth, allow it to move about for 10 seconds, then chomp. Another (possibly relieved) group of control students also ate a normal, immobile gelatin gummy. Following their meals, each volunteer answered a survey including questions such as:

– Did you think what you just ate had animateness?

– Did you feel an emotion in what you just ate?

– Did you think what you just ate had intelligence?

– Did you feel guilty about what you just ate?

Perhaps unsurprisingly, it seems that a meal’s experience can be influenced by whether the thing you just put in your mouth is still moving around once it’s in there. Students described this sensation using the Japanese onomatopoeic terms gabu, or “grappling,” and kori-kori, meaning “crisp.” Movement also more frequently caused volunteers to feel a bit of guilt at eating a “still living” dish, as well as to attribute a sense of intelligence to it.

[Related: Scientists swear their lab-grown ‘beef rice’ tastes ‘pleasant’]

While the study is only an early look into some of the dynamics of odorigui, researchers believe more intricate soft robot designs could allow for more accurate experiments. Meanwhile, such research could lead to a “deepening understanding of ethical, social, and philosophical implications of eating,” as well as potential uses in medical studies involving oral and psychological connections. There’s also a possibility for “innovative culinary” experiences down the line, so who knows what might be coming to high-brow restaurants in the future—perhaps gyrating gyros, or wobbly waffles. Hopefully, nothing too macabre will wind up on menus. It’s certainly something researchers took into consideration during their tests.

“NOTE: During the experiment, we did not draw a face on the edible robot,” reads the fine print at the bottom of the demonstration video, presumably meaning they were just having a bit of fun with the project.

Which is good to hear. Otherwise, this whole thing might have come across as weird.

The post This edible, wriggling robot mimics experience of eating moving food appeared first on Popular Science.

]]>
A sea creature extinct for half a billion years inspired a new soft robot https://www.popsci.com/technology/extinct-sea-creature-soft-robot/ Sat, 10 Feb 2024 13:00:00 +0000 https://www.popsci.com/?p=602170
pleurocystitid soft robot
Pleurocystitid inspired soft robot on rocky beach. Desatnick et al. / Carnegie Mellon

Pleurocystitids arrived in the oceans alongside jellyfish. Although long gone, they may help guide the future of 'paleobionics.'

The post A sea creature extinct for half a billion years inspired a new soft robot appeared first on Popular Science.

]]>
pleurocystitid soft robot
Pleurocystitid inspired soft robot on rocky beach. Desatnick et al. / Carnegie Mellon

Plenty of robots are inspired by existing animals, but not as many take their cue from extinct creatures. To design their own new machine, Carnegie Mellon University researchers looked more than 500 million years back in time for guidance. Their result, presented during the 68th Biophysical Society Annual Meeting, is an underwater soft robot modeled after one of the sea urchin’s oldest ancestors.

[Related: Watch robot dogs train on obstacle courses to avoid tripping.]

Pleurocystitids swam the oceans around half a billion years ago—about the same time experts now believe jellyfish first appeared. While an ancient precursor to invertebrates such as sea stars, pleurocystitids featured a muscular, tail-like structure that likely allowed them to better maneuver underwater. After studying CT scans of the animal’s fossilized remains, researchers fed the data into a computer program to analyze and offer mobility simulations.

While no one knows for sure exactly how pleurocystitids moseyed around, the team determined the most logical possibility likely involved side-to-side sweeping tail motions that allowed the animal to propel itself across the ocean floor. This theory is also reinforced by fossil records, which indicate the animal’s tail lengthened over time to make it faster without much more energy expenditure. From there, engineers built their own tail-toting, soft robot pleurocystitid.

Evolution photo

To the casual viewer, footage of the mechanical monster clumsily inching across the ground may seem to hint at why the pleurocystitid is long gone. But according to Richard Desatnick, a Carnegie Mellon PhD student under the direction of mechanical engineering faculty Phil LeDuc and Carmel Majidi, the ancient animal likely deserves more credit.

“There are animals that were very successful for millions of years and the reason they died out wasn’t from a lack of success from their biology—there may have been a massive environmental change or extinction event,” Desatnick said in a recent profile.

Geologic records certainly reinforce such an argument. What’s more, given that today’s animal world barely accounts for one percent of all creatures to ever roam, swim, or soar above the planet, there is a wealth of potential biomechanical inspirations left to explore. Desatnick and his colleagues hope that their proof-of-concept pleurocystitid will help inspire new entries into a field they call paleobionics—the study of Earth’s animal past to guide some of tomorrow’s robotic creations.

The Carnegie Mellon team believes future iterations of their soft robot could offer a variety of uses—including surveying dangerous geological locations, and helping out with underwater machine repairs. More agile robo-pleurocystitids may one day glide through the waters. Even if nearby sea stars and urchins don’t recognize it, neither would exist without their shared source of inspiration.

The post A sea creature extinct for half a billion years inspired a new soft robot appeared first on Popular Science.

]]>
A four-legged ‘Robodog’ is patrolling the Large Hadron Collider https://www.popsci.com/technology/robodog-large-hadron-collider/ Thu, 08 Feb 2024 20:00:00 +0000 https://www.popsci.com/?p=602001
CERN’s four-legged Robodog can maneuver through cramped spaces and use sensors to spot fires, leaks, or other hazards.
CERN’s four-legged Robodog can maneuver through cramped spaces and use sensors to spot fires, leaks, or other hazards. CERN

The robot's quadruped locomotion helps it look for hazards in cramped and cluttered experiment spaces inaccessible to other robots.

The post A four-legged ‘Robodog’ is patrolling the Large Hadron Collider appeared first on Popular Science.

]]>
CERN’s four-legged Robodog can maneuver through cramped spaces and use sensors to spot fires, leaks, or other hazards.
CERN’s four-legged Robodog can maneuver through cramped spaces and use sensors to spot fires, leaks, or other hazards. CERN

Traversing the dark, underground areas of the Large Hadron Collider (LHC) in Geneva, Switzerland isn’t for the faint of heart. The world’s most powerful particle accelerator violently smashes protons and other subatomic particles together at nearly the speed of light, which can emit radiation at levels potentially harmful to humans. If that weren’t enough, long stretches of compact, cluttered areas and uneven surfaces throughout the facility make stable footing a necessity.

Scientists at the European Organization for Nuclear Research (CERN) are turning to four-legged, dog-inspired robots to solve that problem. This week, CERN showed off its recently developed CERNquadbot, which it said successfully completed its first radiation survey in CERN’s North Area, the facility’s largest experimental area. Looking forward, CERN plans to have its “Robodog” trot through other experiment caves to analyze areas and look for hazards.

Why does CERN need a robot dog?

The hazardous, sometimes cramped confines of the LHC’s experiment caverns pose challenges to human workers and past robot designs alike. Radiation spikes and other environmental hazards like fires and potential water leaks can make some areas temporarily inaccessible to humans. CERN’s past robots, while adept at using strong robotic arms to carry heavy objects over distance, struggle to traverse uneven ground. Stairs, similarly, are a nonstarter for these mostly wheeled and tracked machines.

That’s where CERN’s robot dog comes in. The CERNquadbot’s four dog-like legs allow it to move forward, backward, and side to side, all while adjusting for slight changes in the ground’s surface. A video of the robot at work shows it tic-tacking its four metal legs up and down as it navigates what looks like pavement and a metal grated floor, all the while using onboard sensors to analyze its surroundings. A human operator can be seen nearby directing the robot using a controller. For a touch of added flair, the robot can also briefly stand up on its two hind legs. The Robodog had to draw on all of that maneuverability during its recent test run in the North Area, which was reportedly filled with obstacles.

“There are large bundles of loose wires and pipes on the ground that slip and move, making them unpassable for wheeled robots and difficult even for humans,” CERN’s Controls, Electronics and Mechatronics robotics engineer Chris McGreavy said in a statement.

Thankfully for the CERN scientists, the Robodog rose to the occasion. And unlike a living dog, this one didn’t need a tasty treat as a reward.

“There were no issues at all: the robot was completely stable throughout the inspection,” McGreavy added. 

Particle Physics photo

Now, with the successful test completed, CERN says it’s upgrading the robot and preparing it and its successors to deploy in experiment caves, including the ALICE detector, which is used to study quark-gluon plasma. These areas often feature stairs and other complex surfaces that would stump CERN’s other, less maneuverable robots. Once inside, the robot dogs will monitor the area for hazards like fire and water leaks or quickly respond to alarms.

CERN directed PopSci to this blog post when we asked for more details regarding the robot. 

Dog-inspired robots are going where humans can’t

Quadruped robots have risen in popularity across numerous industries in recent years for their ability to nimbly access areas either too cumbersome or dangerous for humans and larger robots. Boston Dynamics’ “Spot,” possibly the most famous quadruped robot currently on the market, has been used to inspect dangerous offshore oil drilling sites, explore old abandoned mining facilities, and even monitor a major sports arena in Atlanta, Georgia. More controversially, law enforcement officials in New York City and at the southern US border have also turned to quadruped-style robots to explore areas otherwise deemed too hazardous for humans.

Still, CERN doesn’t expect its new Robodog to completely eliminate the need for the other models in its family of robots. Instead, the various robots will work together in tandem, using their respective strengths to fill in gaps with the ultimate goal of hopefully speeding up the process of scientific discovery.

The post A four-legged ‘Robodog’ is patrolling the Large Hadron Collider appeared first on Popular Science.

]]>
NYPD retires big, egg-shaped subway surveillance robot—for now https://www.popsci.com/technology/nypd-retires-k5-subway-robot/ Mon, 05 Feb 2024 20:30:00 +0000 https://www.popsci.com/?p=601463
People walk past the K5 robot used by the New York City Police Department (NYPD) in the Times Square subway station in New York on November 28, 2023.
People walk past the K5 robot used by the New York City Police Department (NYPD) in the Times Square subway station in New York on November 28, 2023. TIMOTHY A. CLARY/AFP via Getty Images

Privacy advocates have criticized the 'trash can on wheels.'

The post NYPD retires big, egg-shaped subway surveillance robot—for now appeared first on Popular Science.

]]>
People walk past the K5 robot used by the New York City Police Department (NYPD) in the Times Square subway station in New York on November 28, 2023.
People walk past the K5 robot used by the New York City Police Department (NYPD) in the Times Square subway station in New York on November 28, 2023. TIMOTHY A. CLARY/AFP via Getty Images

Commuters making their way through New York City’s bustling midtown subway stations will now do so without a roughly 400-pound autonomous robot lurking nearby. After a nearly six-month-long trial, the New York Police Department is ending its use of an eye-catching “K5” mobile surveillance robot once heralded by city officials as a high-tech, lower-labor-cost solution to deter crime. Many New Yorkers and privacy advocates poked fun at the odd, egg-shaped robot, which some said seemed more like an expensive, eye-grabbing gimmick than a meaningful security investment. The K5 may be gone for now, though city officials haven’t ruled out redeploying it in the future.

A spokesperson for New York’s Deputy Commissioner of Public Information told PopSci that the controversial robot, manufactured by the firm Knightscope, had “completed its pilot deployment in the NYC subway system.” As of last week, the robot was no longer deployed in the transit system. A reporter at The New York Times spotted it parked, all alone, in a vacant storefront. A separate New York Daily News report, citing a spokesperson from mayor Eric Adams’ administration, revealed the robot has actually been sitting in storage since early December.

Why the NYPD used the robot

Knightscope describes its K5 as a “fully autonomous” security robot outfitted with four cameras capable of recording video but not audio. The robot can reach a maximum speed of 3 miles per hour and has a 360-degree range of motion. It cannot walk up stairs. Hospitals, warehouses, malls, and other private businesses have turned to the K5 in recent years to patrol and survey their premises. Adams initially championed the K5 last year for its purported ability to patrol for long hours without needing rest.

“This is below minimum wage,” Adams said during a press conference last year. “No bathroom breaks, no meal breaks.” The NYPD reportedly paid $9 per hour to lease the K5. In total, the K5 pilot program reportedly cost the NYPD $12,250.
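
Taken together, those two reported figures imply roughly how much time the K5 actually spent on lease, assuming the total covered only the hourly rate and no other fees:

```python
# Implied lease time from the reported numbers (assumption: the $12,250
# total covers only the $9/hour lease, with no setup or other fees).
total_cost_usd = 12_250
hourly_rate_usd = 9

hours = total_cost_usd / hourly_rate_usd
print(f"~{hours:,.0f} lease hours")                   # ~1,361 hours
print(f"~{hours / 6:.0f} six-hour overnight shifts")  # ~227 shifts
```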

When Adams announced the NYPD’s use of the K5 last year, he said the robot would patrol the subway during late-night hours, between 12 a.m. and 6 a.m. In practice, though, it’s unclear how often the robot actually made those rounds. Aside from filming travelers, the K5 also has a button that, according to Adams and the NYPD, uses a 16-microphone array to connect people to a live representative who can answer questions or take reports of a potentially concerning incident. It’s unclear whether or not the K5’s short stint in the subway had any meaningful impact on crime or security.

On its website, Knightscope claims its technologies are “known to be effective at reducing crime.” In reality, the hulking, egg-shaped robot received more attention for attracting selfies than for its surreptitious surveillance. A security officer named Kelvin Caines recently told The New York Times NYPD officers would “never let it [the robot] do anything.” He claimed he rarely saw the K5 separated from its charging station. The K5 was also regularly seen with an officer chaperone by its side, in part to prevent the robot from being vandalized. That human overseer prevented the K5 from truly fulfilling its “autonomous” pitch.

Robots photo

Knightscope Chief Client Officer Stacy Stephens told PopSci the company was unable to discuss specific details regarding its relationship with the NYPD, though he took issue with previous reporting suggesting the NYPD had retired the robot for good. A spokesperson for mayor Eric Adams’ administration told the Times it’s “reviewing options for the K5’s next deployment as part of the pilot.”

Police robots draw public backlash 

This wouldn’t be the first time New York turned away from a robot only to redeploy it later. In 2021, the NYPD cut short its contract with the robotics firm Boston Dynamics following a wave of public backlash to the department’s use of its dog-shaped “Spot” robot. The NYPD reintroduced several Spot robots two years later with the goal of deploying them in areas too dangerous for police or firefighters to access. 

Privacy and civil liberties groups were skeptical of the K5 robot from the start, with some calling it both a privacy risk and a waste of resources. Some organizations, like the New York-based Surveillance Technology Oversight Project (STOP), feared real-time images collected by the K5 could be fed into existing facial recognition systems. Those types of facial recognition systems, which notoriously struggle to accurately identify nonwhite people, have led to the wrongful arrest of at least seven people in the US in recent years, nearly all of whom were Black.

“I said this was a trash can on wheels, but it looks like the wheels aren’t even working at this point,” Surveillance Technology Oversight Project Executive Director Albert Fox Cahn said in a statement. “With major crimes down and the mayor mandating budget cuts across city agencies, why are we spending so much money on these gadgets?”

Shane Ferro, a staff attorney with the Digital Forensics Unit at the Legal Aid Society agreed with that assessment. 

“The Adams’ Administration continues to be distracted by false claims of high-tech solutions to age-old issues,” Ferro said in a statement. “The NYPD subway robot is an unnecessary expense and public gimmick that serves no legitimate safety purpose.” 

Police robots and drones gain traction despite public apprehension

New York’s police department has ramped up its use of robots, drones, facial recognition tools, and other controversial policing tech since Adams took office, even as other cities like Boston have voted to ban similar tools. In total, New York reportedly spent nearly $3 billion on drones, robots, and other surveillance tools between 2007 and 2019. Adams isn’t alone in his embrace of new technologies, either. The Electronic Frontier Foundation, a California-based civil liberties organization, estimates more than 1,400 police departments across the US currently use drones in some form. Boston Dynamics’ Spot robot, meanwhile, has reportedly been deployed in the field by law enforcement in Houston, Los Angeles, and St. Petersburg, Florida in recent years.

Physical police robots, more so than other forms of new policing tech, often draw backlash from local residents and community leaders who fear they could be misused or even outfitted with weapons. That’s not completely outside of the realm of possibility. In 2016, Dallas Police strapped an explosive device to a Remotec Andros Mark V-A1 robot and detonated it in order to kill an armed suspect. More recently, San Francisco officials approved a policy that would permit police to use remote controlled robots to kill suspects, only to have the policy reversed following a torrent of public dissent.

For the time being at least, it looks like New York won’t have police robots roaming through its subway system. Overall policing trends, however, suggest robots assisting police may become more commonplace over time.

The post NYPD retires big, egg-shaped subway surveillance robot—for now appeared first on Popular Science.

]]>
Watch this cool, useless biohybrid robot take a stroll https://www.popsci.com/technology/biohybrid-robot-legs-walking-underwater/ Fri, 26 Jan 2024 19:30:00 +0000 https://www.popsci.com/?p=600404
Biohybrid robot legs in underwater container
Moving just 5.4mm per minute isn't much, but it's a start. Credit: Shoji Takeuchi research group, University of Tokyo

Part rubber, part rat muscle tissue, it could inspire future, more helpful machines.

The post Watch this cool, useless biohybrid robot take a stroll appeared first on Popular Science.

]]>
Biohybrid robot legs in underwater container
Moving just 5.4mm per minute isn't much, but it's a start. Credit: Shoji Takeuchi research group, University of Tokyo

As impressive as many biohybrid robotic projects are, they aren’t exactly known for their hairpin turns. In fact, it’s still pretty difficult to design an agile machine merging artificial materials and biological tissue. But if a future generation of biohybrids does manage to one day clear that hurdle, they could owe it to a tiny pair of cute, albeit pretty much useless, robo-legs.

Animals photo

Researchers at the University of Tokyo detailed their 3cm tall creation in a new study published today in Matter. By combining 3D-printed parts, rubber, and lab-cultivated rat muscle tissue cells, the team managed to create a proof-of-concept minibot capable of turning on a 90-degree pivot while suspended in water. To make it work, one “leg” receives minute electrical pulses that in turn contract its rat muscle actuators, while the other serves as its fixed point of support. In doing so, the biohybrid prototype manages to pivot at an angle previously unobtainable by similar robotic designs.

[Related: Meet xenobots, tiny machines made out of living parts.]

It’s a pretty big deal… although an incredibly slow one. According to researchers, their robot moves at an incremental 5.4mm per minute thanks to electrical stimulations issued through the water at five-second intervals. But before you judge the little guy too harshly, take it from team member Shoji Takeuchi:

“This is still basic research. We are not at the stage where this robot itself can be used anywhere,” he stipulated while speaking to New Scientist.
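
Basic, but easy to quantify. The two figures above, 5.4mm per minute and one stimulation every five seconds, pin down how far each contraction actually carries the robot:

```python
# Per-pulse progress implied by the reported figures
speed_mm_per_min = 5.4
pulse_interval_s = 5

pulses_per_min = 60 / pulse_interval_s           # 12 contractions per minute
mm_per_pulse = speed_mm_per_min / pulses_per_min
print(f"{mm_per_pulse:.2f} mm per contraction")  # ~0.45 mm each
```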

As it stands (so to speak), the biohybrid can’t even remain upright underwater without a buoy support system. It also needs constant supervision, and a watery conduit to stimulate the muscle actuators. Takeuchi says getting it onto dry land would require much thicker muscle designs, additional joints, and some kind of nutrient system to keep the tissue cultures alive and kicking.

Writing in their paper, the researchers believe their advancements could potentially “contribute to a deeper understanding of biological locomotion mechanisms,” as well as “pave the way [for] further mimicking the intricacies of the human gait mechanism” in biohybrid robots.

After a few more years pumping weights at the gym on dry land (i.e. advancements in the lab), more complex robot iterations could possibly find their way back into water as deep-sea explorers. Science also notes biohybrid designs may eventually be deployed in search-and-rescue missions. It may sound somewhat spooky to find yourself saved by a biorobot built from rat muscles—but it’s either that or the spider bots.

The post Watch this cool, useless biohybrid robot take a stroll appeared first on Popular Science.

]]>
BMW plans to put humanoid robots in a South Carolina factory to do… something https://www.popsci.com/technology/bmw-humanoid-robot/ Tue, 23 Jan 2024 20:00:00 +0000 https://www.popsci.com/?p=599867
BMW is exploring ways to use Figure’s 5’6, 130-pound humanoid robot in its South Carolina manufacturing facility.
BMW is exploring ways to use Figure’s 5’6, 130-pound humanoid robot in its South Carolina manufacturing facility. Figure

In a race with Tesla, the automaker is 'exploring the latest technology' but remains vague on its automation goals.

The post BMW plans to put humanoid robots in a South Carolina factory to do… something appeared first on Popular Science.

]]>
BMW is exploring ways to use Figure’s 5’6, 130-pound humanoid robot in its South Carolina manufacturing facility.
BMW is exploring ways to use Figure’s 5’6, 130-pound humanoid robot in its South Carolina manufacturing facility. Figure

Shiny, silver human-shaped robots the size of lightweight boxers are getting ready to start shuffling their way through BMW’s US factory floors. The carmaker recently reached a commercial agreement with the robotics startup Figure to bring its eponymously named “general purpose” humanoid robot to BMW’s manufacturing facilities, starting with its Spartanburg, South Carolina location. The agreement puts BMW in a race with Tesla and other automakers who’ve embraced a vision of humanoid robots in an effort to further automate their already tech-filled facilities. It’s clear the robots are coming, but nobody really seems to know exactly what to do with them just yet. 

How does the robot work?

The Figure robot is a 5’6, 130-pound bipedal hunk of metal capable of lifting around 45 pounds and walking up to 2.7 miles per hour. Figure, which aspires to make “the world’s first commercially viable general purpose humanoid robot,” says its Figure 01 model can operate for around five hours before needing to recharge. Though it’s unclear exactly how the robot will work in an automobile factory setting, Figure believes its device generally will “enable the automation of difficult, unsafe, or tedious tasks.”

As to what the humanoid robots will be doing at the factory exactly, a Figure spokesperson told PopSci that “tasks have yet to be announced publicly.” A BMW spokesperson said that the automaker is “investigating concepts.”

Robots photo

The agreement between BMW and Figure features multiple phases. At first, the companies will look to “identify initial use cases” for the robots. Once those are established, the robots will make their debut at BMW’s Spartanburg, South Carolina manufacturing facility. A BMW spokesperson from the company’s South Carolina facility told PopSci it’s investigating ways to use the humanoid robot in the facility and said it could prove useful in situations where two hands are needed to grip certain objects. The BMW spokesperson did not provide more specific use cases and said there isn’t currently a timetable for when the robot could arrive on site.

“BMW is always exploring the latest technology to make our processes more efficient. Companies that invest in innovation such as this are more sustainable, become more productive, and have a competitive advantage,” the spokesperson told PopSci. “We need the right tools for the future, and this is just one tool in our toolbox that can be used.”

A spokesperson for Figure, meanwhile, told PopSci their goal is to have robots perform in the BMW manufacturing facility sometime in 2024. 

Robots in car factories aren’t anything new, but up until now they’ve mostly been single-purpose machines only capable of performing specific preset tasks. Robotics manufacturers like Figure believe their new humanoid robots, made in the image of humans, could act as a type of generalist able to walk a factory floor and perform various tasks. The inclusion of hands, for example, could help the Figure bot open doors and use tools. Arms and legs, meanwhile, could help the robot climb stairs, traverse terrain, and lift heavy boxes.

[ Related: Hyundai’s robot-heavy EV factory in Singapore is fully operational ]

“Single-purpose robotics have saturated the commercial market for decades, but the potential of general purpose robotics is completely untapped,” Figure Founder and CEO Brett Adcock said in a statement. “Figure’s robots will enable companies to increase productivity, reduce costs, and create a safer and more consistent environment.”

“Figure 01 brings together the dexterity of the human form and cutting edge AI to go beyond single-function robots and lend support across manufacturing, logistics, warehousing, and retail,” Figure notes on its website. 

Figure vaguely says it’s using AI to build “intelligent embodied agents” capable of interacting with unique and unstructured real-world environments. Recently, the company released a video claiming to show the Figure robot using AI to learn how to make a cup of Keurig coffee after ingesting 10 hours of footage.

Robots photo

Carmakers are racing to bring humanoid robots to factory floors 

BMW’s new agreement with Figure comes around three years after Tesla announced its own plans to introduce artificial intelligence-enabled humanoid robots to its factory floors. At the time, Tesla’s robot was actually a dancing man wearing tight spandex. Since then, Tesla has shown several prototypes of its “Optimus” robot, which features a similar body design to the Figure model. The latest iteration of Optimus can reportedly squat and fondle eggs, but it’s still unclear exactly how that will translate to building cars. Tesla CEO Elon Musk previously told investors Optimus’ importance would “become apparent in several years” and even suggested the staggering machine could one day be worth more than Tesla’s automobiles. Tesla did not immediately respond to PopSci’s request for comment.

Regardless of whether or not Musk’s predictions come true, other large-scale industrialists are taking note. In 2021, automotive giant Hyundai completed an estimated $1.1 billion acquisition of Boston Dynamics, which is best known for creating odd videos of hulking humanoid robots performing backflips and various forms of calisthenics. Outside of the auto industry, Amazon recently revealed it was testing Digit, a bipedal humanoid robot from the firm Agility Robotics, which it said could one day work alongside employees in warehouses.

So, why all the interest in robots now? Recent reporting from The Wall Street Journal suggests automakers like BMW see expanded automation through robotics as a way to offset rising labor costs and cut product prices. US autoworkers represented by the United Auto Workers union recently agreed to a new contract with Ford, Stellantis, and General Motors that includes a 25% wage increase over the course of four years. Other automakers like Toyota and Hyundai responded with their own wage increases. Humanoid robots, while costly to produce and untested in terms of reliability, could prove theoretically attractive investments for carmakers looking to offset rising labor costs, if the machines learn to outperform trained humans at making cars.

Still, the sci-fi promise of a relentless, hyper-efficient, never-sleeping robot workforce remains, for the time being at least, mostly speculative. Even Figure’s robot will reportedly have to walk itself to a charging station every few hours for a brief break.

The post BMW plans to put humanoid robots in a South Carolina factory to do… something appeared first on Popular Science.

]]>
These micro-robots were inspired by mini-bugs and water striders https://www.popsci.com/technology/water-strider-micro-bug-robots/ Thu, 18 Jan 2024 16:17:06 +0000 https://www.popsci.com/?p=599278
Water skimmer inspired robot atop water
One robot weighs 55 milligrams, while its partner is just eight milligrams. Bob Hubner, WSU Photo Services

A lack of moving parts may make them some of the smallest of their kind ever.

The post These micro-robots were inspired by mini-bugs and water striders appeared first on Popular Science.

]]>
Water skimmer inspired robot atop water
One robot weighs 55 milligrams, while its partner is just eight milligrams. Bob Hubner, WSU Photo Services

The design theory behind a pair of tiny insect-inspired robots may one day find its way into environmental monitoring, surgical procedures, and search-and-rescue missions—all while, reportedly, setting records in the process. Modeled after a mini-bug and a water strider, the two bots respectively weigh in at eight and 55 milligrams, and may mark the “smallest, lightest, and fastest fully functional” micro-robots in the world, according to Washington State University.

Developed by a team of WSU researchers and recently presented at the IEEE Robotics and Automation Society’s International Conference on Intelligent Robots and Systems, the robots’ teenyness is largely owed to their novel movement actuators weighing less than a milligram each. To construct the parts, a group led by associate professor of engineering Néstor O. Pérez-Arancibia relied on a material known as a shape memory alloy. Although shape memory alloys change form when heated, they can “remember” their original shapes and return to them after cooling. Because of this, the two micro-bots do not require standard motors, and therefore have no need for bulky moving parts.

[Related: Bat-like echolocation could help these robots find lost people.]

Both the mini-bug and the water strider robots’ actuators are composed of two shape memory alloy wires, each just 1/1000th of an inch wide. Small electrical currents heat and cool the wires, allowing the actuators to move their fins or limbs as fast as 40 times a second while also lifting over 150 times their weight.
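
In control terms, an actuator like this is just two resistive wires heated out of phase: as one wire warms and contracts, the other cools and relaxes, flapping the attached fin or limb back and forth. A toy sketch of that drive pattern (the square-wave scheme is an assumption for illustration, not WSU’s published controller):

```python
def sma_drive(t, freq_hz=40):
    """Return (wire_a_on, wire_b_on) at time t for an antagonistic
    shape-memory-alloy pair driven out of phase at freq_hz."""
    phase = (t * freq_hz) % 1.0
    return (phase < 0.5, phase >= 0.5)

# Sample one full 25 ms flap cycle at 40 Hz
for i in range(10):
    t = i * 0.0025
    a, b = sma_drive(t)
    print(f"t={t * 1000:5.1f} ms  wire A {'ON ' if a else 'off'}"
          f"  wire B {'ON ' if b else 'off'}")
```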

“They’re very mechanically sound. The development of the very lightweight actuator opens up new realms in micro-robotics,” Conor Trygstad, a mechanical and materials engineering PhD student and study lead author, explained in WSU’s recent spotlight.

But although the robots are impressive when compared to their mechanical peers, the pair “still [lag] behind their biological relatives,” conceded Trygstad.

Water strider and mini-bug robots next to quarter for size comparison
Credit: WSU

Both machines can currently traverse their environments at about six millimeters a second; a five-milligram ant, by comparison, speeds along at about a meter per second. Part of this limitation is owed to the micro-robots’ designs—the water strider bot can flap its limbs to propel itself atop water, but its natural inspiration actually rows with its legs to move much faster. For now, the robots also require wired power sources, severely limiting any real-world implementations for the moment.

Going forward, however, the team intends to mimic other bug species while also creating a new water strider iteration capable of switching between moving atop water, and underneath its surface. Relying on catalytic combustion or integrating small batteries could also vastly increase the robots’ utility and range of use. If the breakthrough designs continue improving, similar micro-robots could one day be deployed to monitor hard-to-reach or dangerous environments, help with miniature fabrication techniques and surgical procedures, and even aid artificial pollination efforts.

The post These micro-robots were inspired by mini-bugs and water striders appeared first on Popular Science.

]]>
NASA is headed for the moon next week, and it’s bringing lots of weird stuff https://www.popsci.com/science/nasa-vulcan-lunar-lander/ Thu, 04 Jan 2024 20:52:10 +0000 https://www.popsci.com/?p=597513
Rendering of Astrobotic Peregrine lunar lander on moon's surface
The Astrobotic Peregrine lander is scheduled to make its soft lunar landing in late February. Astrobotic

United Launch Alliance's unmanned spacecraft takes off on January 8, 2024, carrying new tools, tiny robots, and... Gene Roddenberry’s ashes.

The post NASA is headed for the moon next week, and it’s bringing lots of weird stuff appeared first on Popular Science.

]]>
Rendering of Astrobotic Peregrine lunar lander on moon's surface
The Astrobotic Peregrine lander is scheduled to make its soft lunar landing in late February. Astrobotic

A rocket stocked with scientific instruments, technological gadgets, and… bitcoin (literally) is about to head for the moon’s surface. United Launch Alliance’s NASA-funded Vulcan Centaur is slated to lift off in the early hours of January 8 from Cape Canaveral, Florida, to begin its nearly two-month journey. After traveling some 238,900 miles, the roughly 2,829-pound Peregrine lander, built by private space company Astrobotic, should arrive at the Gruithuisen Domes within the moon’s Sinus Viscositatis region. If successful, it will mark the first US landing on Earth’s satellite since NASA’s Apollo 17 mission in 1972.

As Gizmodo notes, more than 20 payloads from six countries will be aboard the Peregrine lander—some meant for research, others purely symbolic gestures ahead of Artemis astronauts’ planned touchdown later this decade.

[Related: Why scientists think it’s time to declare a new lunar epoch.]

The technology aboard

NASA intends to utilize a number of new tools and analysis tech aboard the lander, including a Near-Infrared Volatile Spectrometer System (NIRVSS) and Neutron Spectrometer System (NSS) meant for identifying substances such as water on the lunar surface. A Laser Retro-Reflector Array (LRA) will also provide incredibly precise distance measurements between the moon and Earth, while the Linear Energy Transfer Spectrometer (LETS) will assess lunar surface radiation to advance future astronauts’ safety.
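
The retroreflector is the simplest instrument of the bunch: it just bounces a laser pulse straight back, so distance falls out of the round-trip timing. A minimal worked example (the round-trip time below is illustrative, chosen to match the mean Earth-moon distance):

```python
C_M_PER_S = 299_792_458  # speed of light in m/s

def distance_from_round_trip(rt_seconds):
    """A retroreflector returns the pulse along its incoming path,
    so one-way distance is half the round-trip light time."""
    return C_M_PER_S * rt_seconds / 2

# A ~2.564 s round trip corresponds to the mean Earth-moon distance
print(f"{distance_from_round_trip(2.564) / 1000:,.0f} km")  # ~384,000 km
```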

Similar to LETS, Germany’s M-42 radiation detector will analyze potential mission dangers, while Mexico’s Colmena robot swarm will deploy and assemble to form a solar panel. Not to be outdone, Carnegie Mellon University’s tiny, student-built Iris lunar rover could become the first US robotic rover on the moon if all goes as planned. In addition, the university is also sending off a MoonArk lightweight time capsule containing poems, music, nano-scale objects, Earth samples, and images.

Also, that

Despite the industry’s many critics, a portion of Vulcan’s inventory will also center on cryptocurrency—namely, Bitcoin. Thanks to BitMex and Bitcoin Magazine, a physical Bitcoin engraved with a private encryption key will be deposited on the lunar surface for “future explorers” to recover, along with a few other shiny crypto objects.

Stranger things

Although primarily intended to signify humanity’s future on the moon, next week’s launch also includes literal remnants of its past. Two memorial space companies, Celestis and Elysium Space, will also have cargo aboard the Vulcan rocket: DNA from legendary science fiction author Arthur C. Clarke, as well as the cremated ashes of multiple original Star Trek actors and of the show’s creator, Gene Roddenberry.

And all that’s just a portion of the larger inventory list intended to travel in the Vulcan rocket next week. For a more detailed look at additional payload info, including a hunk of Mount Everest, head over to Gizmodo.

The post NASA is headed for the moon next week, and it’s bringing lots of weird stuff appeared first on Popular Science.

]]>
Watch an AI-leveraging robot beat humans in this classic maze puzzle game https://www.popsci.com/technology/cyberrunner-maze-game-robot/ Thu, 21 Dec 2023 15:30:00 +0000 https://www.popsci.com/?p=596498
CyberRunner robot capable of playing Labyrinth maze game
CyberRunner learned to successfully play Labyrinth after barely 5 hours of training. ETH Zurich

After hours of learning, CyberRunner can guide a marble through Labyrinth in just 14.5 seconds.

The post Watch an AI-leveraging robot beat humans in this classic maze puzzle game appeared first on Popular Science.

]]>
CyberRunner robot capable of playing Labyrinth maze game
CyberRunner learned to successfully play Labyrinth after barely 5 hours of training. ETH Zurich

Artificial intelligence programs easily and consistently outplay human competitors in cognitively intensive games like chess, poker, and Go—but it’s much harder for robots to beat their biological rivals in games requiring physical dexterity. That performance gap appears to be shortening, however, starting with a classic children’s puzzle game.

Researchers at Switzerland’s ETH Zurich recently unveiled CyberRunner, their new robotic system that leveraged precise physical controls, visual learning, and AI reinforcement training in order to learn how to play Labyrinth faster than a human.

AI photo

Labyrinth and its many variants generally consist of a box topped with a flat wooden plane that tilts across an x and y axis using external control knobs. Atop the board is a maze featuring numerous gaps. The goal is to move a marble or a metal ball from start to finish without it falling into one of those holes. It can be a… frustrating game, to say the least. But with ample practice and patience, players can generally learn to steady their controls enough to steer their marble through to safety in a relatively short timespan.

CyberRunner, in contrast, reportedly mastered the dexterity required to complete the game in barely 5 hours. Not only that, but researchers claim it can now complete the maze in just under 14.5 seconds—over 6 percent faster than the existing human record.

The key to CyberRunner’s newfound maze expertise is a combination of real-time reinforcement learning and visual input from overhead cameras. Hours’ worth of trial-and-error Labyrinth runs are stored in CyberRunner’s memory, allowing it to learn step-by-step how to best navigate the marble along its route.

[Related: This AI program could teach you to be better at chess.]

“Importantly, the robot does not stop playing to learn; the algorithm runs concurrently with the robot playing the game,” reads the project’s description. “As a result, the robot keeps getting better, run after run.”
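
That concurrency is the notable part: gameplay keeps filling a buffer of experience while a separate training loop consumes it. The open-source release contains the real system; below is only a bare-bones Python sketch of the general pattern, with toy stand-ins for the camera, controller, and learner:

```python
import random
import threading
import time

replay_buffer = []              # experience shared by playing and learning
buffer_lock = threading.Lock()
stop = threading.Event()

def play_loop():
    """Keep playing: act, observe, store transitions (toy stand-in)."""
    while not stop.is_set():
        transition = (random.random(), random.random())  # (obs, reward)
        with buffer_lock:
            replay_buffer.append(transition)
        time.sleep(0.01)        # pretend this is one control step

def learn_loop(steps=200):
    """Train concurrently on whatever experience exists so far."""
    for _ in range(steps):
        with buffer_lock:
            if replay_buffer:
                batch = random.sample(replay_buffer,
                                      min(8, len(replay_buffer)))
                # a gradient update on `batch` would go here
        time.sleep(0.005)
    stop.set()

threading.Thread(target=play_loop).start()
learn_loop()
print(f"collected {len(replay_buffer)} transitions while learning")
```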

CyberRunner not only learned the fastest way to beat the game—but it did so by finding faults in the maze design itself. Over the course of testing possible pathways, the AI program uncovered shortcuts that let it shave time off its runs, essentially writing its own Labyrinth cheat codes by sidestepping the maze’s marked pathways.

CyberRunner’s designers have made the project completely open-source, with an aim for other researchers around the world to utilize and improve upon the program’s capabilities.

“Prior to CyberRunner, only organizations with large budgets and custom-made experimental infrastructure could perform research in this area,” project collaborator and ETH Zurich professor Raffaello D’Andrea said in a statement this week. “Now, for less than 200 dollars, anyone can engage in cutting-edge AI research. Furthermore, once thousands of CyberRunners are out in the real-world, it will be possible to engage in large-scale experiments, where learning happens in parallel, on a global scale.”

The post Watch an AI-leveraging robot beat humans in this classic maze puzzle game appeared first on Popular Science.

]]>
Tesla’s Optimus robot can now squat and fondle eggs https://www.popsci.com/technology/tesla-optimus-robot-update/ Wed, 13 Dec 2023 19:30:00 +0000 https://www.popsci.com/?p=595389
Tesla Optimus robot handling an egg in demo video
Optimus' new hands include tactile sensing capabilities in all its fingers. X / Tesla

Elon Musk once said it will help create 'a future where there is no poverty.'

The post Tesla’s Optimus robot can now squat and fondle eggs appeared first on Popular Science.

]]>
Tesla Optimus robot handling an egg in demo video
Optimus' new hands include tactile sensing capabilities in all its fingers. X / Tesla

The last time Elon Musk publicly debuted a prototype of his humanoid robot, Optimus could “raise the roof” and wave at the politely enthused crowd attending Tesla’s October 2022 AI Day celebration. While not as advanced, agile, handy, or otherwise useful as existing bipedal robots, the “Bumblebee” proof-of-concept certainly improved upon the company’s first iteration—a person dressed as a robot.

On Wednesday night, Musk surprised everyone with a two-minute highlight reel posted to his social media platform, X, showcasing “Optimus Gen 2,” the latest iteration on display. In a major step forward, the now sleekly-encased robot can walk and handle an egg without breaking it. (Musk has previously stated he intends Optimus to be able to pick up and transport objects as heavy as 45 pounds.) 

Unlike last year’s Bumblebee demo, Tesla’s December 12 update only shows pre-taped, in-house footage of Gen 2 performing squats and stiffly striding across a Tesla showroom floor. That said, the new preview claims the third Optimus can accomplish such perambulations 30 percent quicker than before (an exact speed isn’t provided in the video) while weighing roughly 22 lbs less than Bumblebee. It also now includes “articulated foot sections” within its “human foot geometry.”

The main focus, however, appears to be the robot’s “faster… brand-new” five-fingered hands capable of registering and interpreting tactile sensations. To demonstrate, Optimus picks up an egg, transfers it between hands, and places it back down while a superimposed screen displays its finger pressure readings. 
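
Tesla hasn’t published how that controller works, but the classic recipe for holding an egg without crushing it is force-limited closing: tighten the fingers until the tactile sensors report just enough pressure to hold the object, stopping well below its breaking load. A hypothetical sketch, with made-up hardware callbacks:

```python
def grip(read_force, close_step, hold_force_n=1.0, max_force_n=5.0):
    """Close a gripper until tactile sensors report hold_force_n newtons.
    read_force() and close_step() are hypothetical hardware callbacks;
    both thresholds are illustrative, set far below an egg's breaking load."""
    while True:
        f = read_force()
        if f >= max_force_n:
            raise RuntimeError("force limit exceeded; aborting grip")
        if f >= hold_force_n:
            return f   # firm enough to lift, gentle enough not to crack
        close_step()   # tighten slightly, then re-measure

# Toy demo: a simulated sensor that ramps up as the fingers close
state = {"force": 0.0}
final = grip(lambda: state["force"],
             lambda: state.__setitem__("force", state["force"] + 0.3))
print(f"holding at {final:.1f} N")
```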

[Related: Tesla’s Optimus humanoid robot can shuffle across stage, ‘raise the roof’]

The clip does not include an estimated release window or updated price point. In the past, Musk said production could begin as soon as this year, but revised that launch date in 2022 to somewhere 3-5 years down the line. If Optimus does make it off the factory line—and onto factory floors as a surrogate labor force—it will enter an industry rife with similar work robots.

During Tesla’s October 2022 AI Day event, Musk expressed his belief that Optimus will one day “help millions of people” through labor contributions that aid in creating “a future of abundance, a future where there is no poverty, where people can have whatever you want in terms of products and services.”

Musk previously offered a ballpark cost for Optimus at somewhere under $20,000—although his accuracy in such guesstimates isn’t great. The company’s much-delayed Cybertruck, for example, finally received its production launch event last month with a base price costing roughly one Optimus more than originally stated.

The post Tesla’s Optimus robot can now squat and fondle eggs appeared first on Popular Science.

]]>
Meet NeRmo, the mouse robot with backbone https://www.popsci.com/technology/mouse-robot-backbone/ Wed, 06 Dec 2023 19:00:00 +0000 https://www.popsci.com/?p=594063
NeRmo mouse robot standing against blue background
NeRmo's agility and speed is owed largely to the inclusion of a realistic, flexible spine. Zhenshan Bing, et al.

Most quadruped robots ditch spine-derived designs for simplicity’s sake. NeRmo embraces the complex system.

The post Meet NeRmo, the mouse robot with backbone appeared first on Popular Science.

]]>
NeRmo mouse robot standing against blue background
NeRmo's agility and speed is owed largely to the inclusion of a realistic, flexible spine. Zhenshan Bing, et al.

Four-legged robots like Boston Dynamics’ Spot and Cheetah owe almost all their agility to fancy footwork. While they may visually move much like their mammalian counterparts, the anatomical inspirations largely stop at their legs. In biology, however, a quadrupedal animal’s movement, flexibility, and intricate motor functions stem almost entirely from its spine. Replicating that complex system of stacked vertebrae in robots is much more difficult than the legs—but if artificial spines could be integrated into such designs, engineers could open up entirely new avenues of precise maneuverability.

[Related: A new tail accessory propels this robot dog across streams.]

Now, engineers are reportedly a few steps closer to spine-centric quadruped bots thanks to a research team’s very uncanny, rodent-inspired robot. Writing in Science Robotics on Wednesday, collaborators across Germany and China unveiled NeRmo, a biomimetic, four-legged robot that relies on a novel motor-tendon framework to scurry its way around environments.

As far as looks go, NeRmo mirrors a mouse’s skeletal system—though the ears, while cute, are likely superfluous. The robot’s rigid front half houses its electronics systems, while its latter half functions much as an actual flexible spine would, with four lumbar and lateral joints. Artificial tendons threaded through the spine, as well as through the robot’s elbow and knee joints, give NeRmo even more mouselike movements alongside quicker turning times.

Animals photo

According to collaborators at the Technical University of Munich, University of Technology Nuremberg, and China’s Sun Yat-Sen University, NeRmo’s tendon-pulley system precludes the need for any musculature while still allowing for smooth flexion across the lateral and sagittal planes, i.e., side-to-side and up-and-down.
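
Kinematically, that spine is a short serial chain: shortening a lateral tendon distributes bend across the four joints, and the robot’s total heading change is simply the sum of the joint angles. A simplified sketch of the mapping (the linear tendon-to-angle model and even load distribution are assumptions, not the paper’s published kinematics):

```python
import math

N_JOINTS = 4              # NeRmo's four lumbar/lateral spine joints
GAIN_RAD_PER_MM = 0.05    # assumed tendon-shortening-to-bend gain

def spine_bend(tendon_pull_mm):
    """Distribute one tendon's pull evenly across the lateral joints
    (even distribution is a simplifying assumption)."""
    per_joint = GAIN_RAD_PER_MM * tendon_pull_mm / N_JOINTS
    return [per_joint] * N_JOINTS

angles = spine_bend(tendon_pull_mm=8.0)
print(f"per-joint bend: {math.degrees(angles[0]):.1f} deg, "
      f"total heading change: {math.degrees(sum(angles)):.1f} deg")
```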

To test their new design, the team ran NeRmo through a series of four experiments to demonstrate static balancing, straight-line walking, agile turning, and maze navigation. Each trial included two rounds—one with the spinal system engaged, and another with it disabled. Across the board, NeRmo performed its tasks better, faster, and more accurately when it integrated the spine into its movements.

Maze navigation, however, was NeRmo’s true shining moment. With its spine engaged, the mouse-bot completed its labyrinth runs an average of 30 percent faster than simply waddling through without spinal support.

Although the design is still in its early stages, the researchers believe further tweaking and integration of spinal systems into future quadruped robots could vastly improve their functionality. If NeRmo wasn’t proof enough, think of it this way—MIT’s Cheetah can gallop at 13 feet per second with just one actuated joint mimicking spinal flexion in the sagittal plane. NeRmo, meanwhile, has eight joints.

Watch this eel robot effortlessly glide underwater https://www.popsci.com/technology/eel-robot-migration-study/ Fri, 01 Dec 2023 19:00:00 +0000 https://www.popsci.com/?p=593348
1-guilla eel robot without casing
Elongated anguilliform swimmers, like eels, demonstrate exceptional swimming efficiency during their migration period, travelling thousands of kilometres without feeding. To explore and decompose this type of swimming, 1-guilla, the anguilliform, eel-like robot was designed. Alexandros Anastasiadis, Annalisa Rossi, Laura Paez, et al.

Researchers built the robot to investigate how eels migrate on empty stomachs.

A research team from the Swiss Federal Institute of Technology recently designed and built their own swimming robot modeled on oceanic eels. Despite its relatively simple design, the bot’s award-winning underwater undulations could provide key insights into its eel inspirations’ biology.

As New Scientist first highlighted on November 30, a video showcases the collaborators’ work. The clip highlights the abilities of 1-guilla, the team’s nearly three-foot-long, waterproof robot. Featuring eight motorized segments, a malleable tail fin, and a head piece containing its frontal battery and computational unit, 1-guilla was named in honor of the more technical term for an eel’s body form—anguilliform. The video of the machine’s aquatic journeys recently took home a Gallery of Fluid Motion award at last month’s annual meeting of the American Physical Society’s Division of Fluid Dynamics.

While their anguilliform body plan allows flesh-and-blood eels to migrate thousands of miles without eating, biologists are not fully sure how the fish accomplish such a feat. Enter 1-guilla, whose body movements can be tinkered with by its designers to explore various physical patterns, as well as the interplay between energy efficiency and speed.

During testing, a “standing wave” motion occurred when 1-guilla repeatedly alternated between an S-shape and its original, straight position—which merely thrashed the water without producing forward motion. Researchers then programmed 1-guilla to undulate so that an S-shape traveled down its body. During this phase, the robot created a “traveling wave” motion allowing it to move forward. Increasing the “amplitude” of its body bending alongside lengthening its S-shape “wavelength” also led to a speedier swim.
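
To picture the two gaits, consider a minimal sketch of per-segment joint-angle commands; the amplitude, wavelength, and frequency are illustrative stand-ins, not 1-guilla’s actual parameters:

```python
import math

N_SEGMENTS = 8     # 1-guilla has eight motorized segments
AMPLITUDE = 0.5    # peak joint angle in radians (illustrative)
WAVELENGTH = 8.0   # body wavelength in segments (illustrative)
FREQ_HZ = 1.0

def standing_wave(i: int, t: float) -> float:
    # Every joint flexes in phase: S-shape <-> straight, little net thrust.
    return AMPLITUDE * math.sin(2 * math.pi * i / WAVELENGTH) * math.sin(2 * math.pi * FREQ_HZ * t)

def traveling_wave(i: int, t: float) -> float:
    # A phase lag down the body makes the S-shape travel tailward,
    # pushing water backward and the robot forward.
    return AMPLITUDE * math.sin(2 * math.pi * (FREQ_HZ * t - i / WAVELENGTH))

t = 0.25
print([round(traveling_wave(i, t), 2) for i in range(N_SEGMENTS)])
```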

But the main influence on how quickly 1-guilla could move through water was its tail fin. Increasing the tail’s angle to its maximum 45-degree range offered the most speed—but at a steep cost. Maximum range, perhaps predictably, requires maximum energy usage, which isn’t exactly a winning strategy for traveling long distances.

[Related: NASA hopes its snake robot can search for alien life on Saturn’s moon Enceladus.]

“To calculate efficiency, the motor’s power consumption (P) is divided by its speed (U) to get the Cost of Transport (CoT),” the team explains in its demonstration video.

The more 1-guilla’s motions resembled traveling waves, the lower its cost of transport. Knowing this, the researchers hypothesize that overall efficiency, not the fastest speed possible, is the key to an actual eel’s lengthy migration while on a comparatively empty stomach.
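
In code, the quoted metric reduces to a one-line ratio. The power and speed figures in this sketch are invented purely to show why a traveling wave wins:

```python
def cost_of_transport(power_w: float, speed_m_s: float) -> float:
    """CoT = P / U: energy spent per meter traveled (joules per meter).
    Lower is better for a long migration on an empty stomach."""
    return power_w / speed_m_s

# Invented numbers: same power budget, but the traveling wave moves faster.
print(cost_of_transport(power_w=12.0, speed_m_s=0.30))  # traveling wave: 40 J/m
print(cost_of_transport(power_w=12.0, speed_m_s=0.10))  # standing-wave thrash: 120 J/m
```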

Serpentine robots are all the rage right now. NASA, for example, is putting the final touches on its aptly named Exobiology Extant Life Surveyor (EELS) prototype. Ostensibly 1-guilla’s 16-foot-long, 200-pound bigger sibling, EELS could one day find itself traversing both the surface and underground passageways on Saturn’s icy, possibly life-hosting moon, Enceladus. Meanwhile, MIT engineers recently unveiled their own three-foot-long, modular eel-bot made from simple lattice-like structures known as “voxels.”

Army ants could teach robots a thing or two https://www.popsci.com/technology/robot-swarm-army-ants/ Wed, 22 Nov 2023 18:00:00 +0000 https://www.popsci.com/?p=591264
Army ants building living bridge between two ledges in lab
Ants' tiny brains can still coordinate to build complex structures using their own bodies. Credit: Isabella Muratore

Army ants use their bodies to build bridges. Robots could soon take a cue from the tiny insect’s ability to collaborate.

Apart from their nasty stings, army ant colonies are often known for their stunning, intricate architectural feats using their own bodies. When worker ant hunting parties encounter obstacles such as fallen tree branches, gaps in foliage, or small streams, the tiny insects will join forces to create a bridge for the remaining ant brethren to traverse. It’s as impressive as it is somewhat disconcerting—these are living, crawling buildings, after all. But one research team isn’t studying the coordination between minuscule bugs to benefit future construction projects; they are looking into how army ant teamwork could be mimicked by robots.

“Army ants create structures using decentralized collective intelligence processes,” Isabella Muratore, a postdoctoral researcher at the New Jersey Institute of Technology specializing in army ant building techniques, explains to PopSci over email. “This means that each ant follows a set of rules about how to behave based on sensory input and this leads to the creation of architectural forms without the need for any prior planning or commands from a leader.”
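
One way to picture such a decentralized rule is a toy sketch (inspired by published army ant bridge studies, not Muratore’s model) in which each ant locks into a bridge while traffic over its back stays high and steps out when it drops:

```python
import random

def ant_bridge_step(in_bridge: bool, crossings_per_min: int, threshold: int = 3) -> bool:
    """One local rule per ant, no leader and no blueprint: stay locked into
    the bridge while traffic over your back is high, leave when it drops,
    and join only where heavy traffic is stalling at a gap."""
    if in_bridge:
        return crossings_per_min >= threshold      # dissolve underused bridges
    return crossings_per_min >= 2 * threshold      # join only at busy gaps

# One tick of a ten-ant colony with random local traffic readings:
traffic = [random.randint(0, 10) for _ in range(10)]
colony = [ant_bridge_step(False, t) for t in traffic]
print(colony)
```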

[Related: These robots reached a team consensus like a swarm of bees.]

Along with engineers from NJIT and Northwestern University, Muratore and her entomologist colleagues developed a series of tests meant to gauge army ant workers’ reactions and logistical responses to environmental impediments. After placing obstacles in the ants’ forest paths, Muratore filmed and later analyzed the herds’ subsequent adaptations to continue along their routes. Utilizing prior modeling work, the team also tested whether the ant bridges could withstand sudden, small changes in obstacle length using an adjustable spacing device.

Muratore and others recently presented their findings at this year’s annual Entomological Society of America conference. According to their observations, army ants generally choose to construct bridges in the most efficient locations—places wide enough to necessitate a building project while simultaneously using the least number of ants possible. The number of bridges needed during a sojourn also influences the ants’ collective decisions on resource allocation.

David Hu, a Georgia Institute of Technology engineering professor focused on fire ant raft constructions during flooding, recently likened the insects to neurons in one big, creepy-crawly brain while speaking to NPR on the subject. Instead of individual ants determining bridge dimensions and locations, each ant contributes to the decisions in their own small way.

[Related: Robot jellyfish swarms could soon help clean the oceans of plastic.]

Muratore and her collaborators believe an army ant’s collaborative capabilities could soon help engineers program swarms of robots based on the insect’s behavioral principles and brains. Ants vary across species, but they can still pack a surprising amount of information into brains of roughly 1.1 microliters in volume.

Replicating that brainpower requires relatively low energy costs. Scaling it across a multitude of robots could remain comparatively cheap, while exponentially increasing their functionality. This could allow them to “flexibly adapt to a variety of challenges, such as linking together to form bridges over gaps of different lengths in the most efficient manner possible,” Muratore writes to PopSci.

Robotic teamwork is crucial to deploying the machines across a number of industries and scenarios, from outer space exploration to ocean cleanup projects to search-and-rescue efforts in areas too dangerous for humans to access. In these instances, coordinating quickly and efficiently not only saves time and energy, it could save lives.

Hyundai’s robot-heavy EV factory in Singapore is fully operational https://www.popsci.com/technology/hyundai-singapore-factory/ Tue, 21 Nov 2023 18:15:00 +0000 https://www.popsci.com/?p=590969
Robot dog at Hyundai factory working on car
Over 200 robots will work alongside human employees at the new facility. Hyundai

The seven-story facility includes a rooftop test track and ‘Smart Farm.’

After three years of construction and limited operations, the next-generation Hyundai Motor Group Innovation Center production facility in Singapore is officially online and fully functioning. Announced on November 20, the 935,380-square-foot, seven-floor facility relies on 200 robots to handle over 60 percent of all “repetitive and laborious” responsibilities, allowing human employees to focus on “more creative and productive duties,” according to the company.

In a key departure from traditional conveyor-belt factories, the HMGICS centers on what the South Korean vehicle manufacturer calls a “cell-based production system” alongside a “digital twin Meta-Factory.” Instead of siloed responsibilities for automated machinery and human workers, the two often cooperate using technology such as virtual and augmented reality. As Hyundai explains, while employees simulate production tasks in a digital space using VR/AR, for example, robots will physically move, inspect, and assemble various vehicle components.

[Related: Everything we love about Hyundai’s newest EV.]

By combining robotics, AI, and the Internet of Things, Hyundai believes the HMGICS can offer a “human-centric manufacturing innovation system,” Alpesh Patel, VP and head of the factory’s Technology Innovation Group, said in Monday’s announcement.

Atop the HMGICS building is a vehicle test track over 2,000 feet long, as well as a robotically assisted “Smart Farm” capable of growing up to nine different crops. While a car factory vegetable garden may sound somewhat odd, it actually complements the Singapore government’s ongoing “30 by 30” initiative.

Due to the region’s rocky geology, Singapore can only utilize about one percent of its land for agriculture—an estimated 90 percent of all food in the area must be imported. Announced in 2022, Singapore’s 30 by 30 program aims to boost local self-sufficiency by increasing domestic yields to 30 percent of all consumables by the decade’s end using a combination of sustainable urban growth methods. According to Hyundai’s announcement, the HMGICS Smart Farm is meant to showcase farm productivity within compact settings—while also offering visitors some of its harvested crops. The rest of the produce will be donated to local communities, as well as featured on the menu at a new Smart Farm-to-table restaurant scheduled to open at the HMGICS in spring 2024.

[Related: Controversial ‘robotaxi’ startup loses CEO.]

HMGICS is expected to produce up to 30,000 electric vehicles annually, and currently focuses on the IONIQ 5, as well as its autonomous robotaxi variant. Beginning in 2024, the facility will also produce Hyundai’s IONIQ 6. If all goes according to plan, the HMGICS will be just one of multiple cell-based production system centers.

This 3D-printed soft robotic hand has ‘bones,’ ‘ligaments,’ and ‘tendons’ https://www.popsci.com/technology/3d-printed-soft-robot-hand/ Wed, 15 Nov 2023 20:00:00 +0000 https://www.popsci.com/?p=589875
Side by side of 3D printed robot hand gripping pen and bottle
Researchers 3D-printed a robotic hand, a six-legged robot, a 'heart' pump, and a metamaterial cube. ETH Zurich / Thomas Buchner

3D-printed designs are usually limited to fast-drying polymers, but a new method enables wild, soft robotic possibilities.

To call soft robotic hands “complex” is a bit of an understatement. These designs consider a number of engineering factors, including the elasticity and durability of materials. This usually entails separate 3D-printing processes for each component, often with multiple plastics and polymers. Now, however, engineers working together from ETH Zurich and the MIT spin-off company, Inkbit, can create extremely intricate products with a 3D-printer utilizing a laser scanner and feedback learning. The researchers’ impressive results already include a six-legged gripper robot, an artificial “heart” pump, sturdy metamaterials, as well as an articulating soft robotic hand complete with artificial tendons, ligaments, and bones.

[Related: Watch a robot hand only use its ‘skin’ to feel and grab objects.]

Traditional 3D-printers use fast-curing polyacrylate plastics. In this process, UV lamps quickly harden a malleable plastic gel as it is layered via the printer nozzle, while a scraping tool removes surface imperfections along the way. While effective, the rapid solidification can limit a product’s form, function, and flexibility. But trying to swap out the fast-curing plastic for slow-curing polymers like epoxies and thiolenes mucks up the machinery, meaning many soft robotic components require separate manufacturing methods.

Knowing this, the designers wondered if adding scanning technology alongside rapid printing adjustments could solve the slow-curing hurdle. As detailed in a new paper published in Nature, the resulting system not only offers a solution, but demonstrates slow-curing polymers’ potential across a number of 3D-printed designs.

Instead of scraping away imperfections layer-by-layer, three-dimensional scanning offers near-instantaneous information on surface irregularities. This data is sent to the printer’s feedback mechanism, which then adjusts the necessary material amount “in real time and with pinpoint accuracy,” Wojciech Matusik, an electrical engineering and computer science professor at MIT and study co-author, said in a recent project profile from ETH Zurich.
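
A minimal sketch of that feedback idea, assuming a simple per-location height map and a proportional correction; the gain and dose limits are invented for illustration:

```python
import numpy as np

def plan_next_layer(scan_mm: np.ndarray, target_mm: np.ndarray,
                    nominal_dose: float = 1.0, gain: float = 0.8) -> np.ndarray:
    """Closed-loop deposition: wherever the scan shows the surface running
    low, command proportionally more material on the next pass; where it
    runs high, command less. No scraper required."""
    error = target_mm - scan_mm           # positive = under-filled region
    dose = nominal_dose + gain * error    # per-location material command
    return np.clip(dose, 0.0, 2.0 * nominal_dose)

scan = np.array([[0.98, 1.03], [1.00, 0.95]])   # measured layer heights (mm)
target = np.ones((2, 2))                        # intended layer height (mm)
print(plan_next_layer(scan, target))
```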

To demonstrate their new method’s potential, researchers created a quartet of diverse 3D-printed projects using slow-curing polymers—a resilient metamaterial cube, a heart-like fluid pump capable of transporting liquids through its system, a six-legged robot topped with a sensor-informed two-pronged gripper, as well as an articulating hand capable of grasping objects using embedded sensor pads.

While refinements to production methods, polymers’ chemical compositions, and lifespans are still needed, the team believes the comparatively fast and adaptable 3D-printing method could one day lead to a host of novel industrial, architectural, and robotic designs. Soft robots, for example, pose less risk of injury when working alongside humans, and can handle fragile goods better than their standard metal counterparts. Already, however, the existing advances have produced designs once impossible for 3D printers.

These robots reached a team consensus like a swarm of bees https://www.popsci.com/technology/bee-robot-communication/ Wed, 08 Nov 2023 18:30:00 +0000 https://www.popsci.com/?p=587785
Image of kilobots atop photo of bees
The tiny robots communicate using multicolored LED lights. Credit: Unsplash / University of Barcelona / PopSci

Scout bees vote for new hive locations with a 'dance.' These bots use blinking lights.

Bees are extremely adept at communicating, even though their brains weigh just two milligrams. They’re so efficient at reaching a consensus, in fact, that researchers created a mini-robot team inspired by their ‘conversations.’

In the search for a new nesting spot, scout bees are known to conduct “waggle dances” to indicate their preferred hive location—slowly winning over swarmmates to join in the process. The moves are tiny but complex, involving figure-eight patterns traced while shaking their bodies at rapid speed. The bees with the most popular dance earn final say on where to build. While the three-centimeter-wide “kilobots” under the watch of a team at Spain’s University of Barcelona can’t shimmy and shake just yet, they do signal to one another much like bees.

[Related: Bee brains could teach robots to make split-second decisions.]

As detailed in their preprint paper submitted in late October, the team first attached a colored LED light alongside an infrared-light receiver and emitter atop each of a total of 35 kilobots. They then programmed the bots using a modified version of a previously designed mathematical model based on scout bee behavior. From there, the team placed varying numbers of kilobots within an enclosure and let them jitter through their new environment on their trio of toothpick-like legs. During over 70 tests, researchers ordered certain bot clusters to advertise their preferred nesting location “opinion” via signaling between their LED lights’ red, blue, and green hues.
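
The team’s actual model is adapted from scout bee dynamics; as a toy stand-in, even a simple local-majority rule shows how a blinking-light swarm can settle on one opinion without a leader:

```python
import random

COLORS = ["red", "green", "blue"]

def consensus_step(opinions: list) -> list:
    """Each bot samples the blinking LEDs of a few random swarmmates
    (a stand-in for true neighbors) and adopts the locally most common
    color. Repeated rounds drive the swarm toward a single opinion."""
    new_opinions = []
    for own in opinions:
        seen = random.sample(opinions, 3) + [own]
        new_opinions.append(max(set(seen), key=seen.count))
    return new_opinions

swarm = [random.choice(COLORS) for _ in range(35)]   # 35 bots, as in the paper
for _ in range(50):
    swarm = consensus_step(swarm)
print(set(swarm))   # usually collapses to a single color
```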

Every kilobot team achieved a group consensus within roughly 30 minutes, no matter the team size or environmental density. Such reliable decision making—even in machines capable of transmitting just 9 bytes of information at a time—could one day prove invaluable across a number of industries.

[Related: Bat-like echolocation could help these robots find lost people.]

“We believe that in the near future there are going to be simple robots that will do jobs that we don’t want to do, and it will be very important that they make decisions in a decentralized, autonomous manner,” Carmen Miguel, one of the study’s co-authors, explained to New Scientist on November 7.

During invasive medical procedures, for instance, tiny robots could maneuver within a patient’s body, communicating with one another without the need for complex electronics. Similarly, cheap bots could coordinate with one another while deployed during search-and-rescue missions. In such scenarios, the environmental dangers often prevent the use of expensive robots due to risk of damage or destruction.

Above it all, however, the University of Barcelona team believes their work draws attention to often underappreciated aspects of everyday existence. The team’s paper abstract concludes: “By shedding light on this crucial layer of complexity… we emphasize the significance of factors typically overlooked but essential to living systems and life itself.”

Why electric knifefish ‘shimmy’ https://www.popsci.com/environment/electric-knifefish-shimmy/ Thu, 26 Oct 2023 15:00:00 +0000 https://www.popsci.com/?p=583514
A long torpedo-shaped fish swims among green plants.
Knifefish like the black ghost knifefish are known for their shimmying motions and electrical pulses, and live in freshwater lakes and rivers in Central and South America. Deposit Photos

Quick movements heighten animal senses—even in humans.

Animals have a wide range of ways to make sense of the world around them. Dogs sniff the air around them. Dolphins use echolocation. Humans glance at each other. For the electric knifefish, “shimmying” around in the water like a tadpole helps it make sense of its watery world. But knifefish are not the only ones that wiggle with purpose. In a study published October 26 in the journal Nature Machine Intelligence, scientists describe a wide range of organisms that perform these same wavy patterns of movement to feel out the environment around them. 

[Related: Five animals that can sense things you can’t.]

The team behind this study was interested in what the nervous system does when animals move to improve their perception of the world, and if that behavior could be translated to robotic control systems.

“Amoeba don’t even have a nervous system, and yet they adopt behavior that has a lot in common with a human’s postural balance or fish hiding in a tube,” study co-author and Johns Hopkins University mechanical engineer Noah Cowan said in a statement. “These organisms [knifefish and amoebas] are quite far apart from each other in the tree of life, suggesting that evolution converged on the same solution through very different underlying mechanisms.”

Shimmying in the dark

Knifefish are blade-shaped fish found in freshwater lakes and rivers in Central and South America. They can reach three feet long and eat insects, crustaceans, and other fish. In the wild, they are hardwired to hide from predators. They send out weak electric discharges to sense predators’ locations and find shelter. Wiggling around rapidly helps them actively sense their surroundings for a place to hide.

While watching electric knifefish in an observation tank, the team noticed that when it was dark, the fish shimmied back and forth significantly more frequently. The fish swayed more gently with occasional bursts of quick movements when the lights were on. 

“We found that the best strategy is to briefly switch into explore mode when uncertainty is too high, and then switch back to exploit mode when uncertainty is back down,” co-author and Johns Hopkins computational cell biologist and neuroethologist Debojyoti Biswas said in a statement. When a predator could be nearby, the knifefish will quickly search for somewhere to hide. If the fish feel safe, they can return to a more normal, less wiggly state and find food.
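
That strategy can be pictured as a hysteresis threshold on sensory uncertainty. This is a toy sketch with made-up thresholds, not the study’s actual model:

```python
def choose_mode(uncertainty: float, current: str, hi: float = 0.7, lo: float = 0.3) -> str:
    """Hysteresis switch: burst into 'explore' (vigorous shimmying) when
    sensory uncertainty climbs too high, then settle back into 'exploit'
    once it has dropped again."""
    if current == "exploit" and uncertainty > hi:
        return "explore"
    if current == "explore" and uncertainty < lo:
        return "exploit"
    return current

mode = "exploit"
for u in [0.2, 0.5, 0.8, 0.6, 0.4, 0.1]:   # illustrative uncertainty trace
    mode = choose_mode(u, mode)
    print(f"uncertainty={u:.1f} -> {mode}")
```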

Exciting the senses

In the study, the team created a model that simulates the key sensing behaviors of the fish. They used work from other labs and spotted these same sensory-dependent movements in other organisms including amoeba, moths, cockroaches, moles, bats, mice, and even humans.

According to the authors, this is the first time scientists have deciphered this mode-switching strategy in fish and linked the behavior across species. They believe that all organisms have a brain computation that manages uncertainty in their environment.

[Related: How cats and dogs see the world.]

“If you go to a grocery store, you’ll notice people standing in line will change between being stationary and moving around while waiting,” Cowan said. “We think that’s the same thing going on, that to maintain a stable balance you actually have to occasionally move around and excite your sensors like the knifefish. We found the statistical characteristics of those movements are ubiquitous across a wide range of animals, including humans.”

Understanding these sensory mechanisms and their nuances could help improve search-and-rescue drones, space rovers, and other autonomous robots. These same characteristics for looking around could be built into future robots to help them perceive the space around them. The team also plans to explore how these mechanisms work in living things—even in plants.

The Marines used a ‘robotic goat’ to fire a rocket-propelled grenade https://www.popsci.com/technology/marines-robotic-goat-fires-weapon/ Tue, 24 Oct 2023 20:46:42 +0000 https://www.popsci.com/?p=582921
The robotic goat with the M72 Light Anti-Tank Weapon on Sept. 9, 2023.
The robotic goat with the M72 Light Anti-Tank Weapon on Sept. 9, 2023. Justin J. Marty / US Marines

Here's why the US military put a light anti-tank weapon on the back of a robotic quadruped.

On September 9, Marines at Twentynine Palms, California, strapped a rocket launcher to the back of a commercially available robotic goat as part of a tactical training exercise. In a video of the test, the robotic goat is set up for safety on a firing range within a little sandbagged shelter, cleared to fire, and then the rocket-propelled grenade launches off the goat’s back. (While most quadrupedal robots of this size are referred to as robot dogs, the Marine Corps referred to the robot in question as a robotic goat.) The test, which featured one of several new and autonomy-adjacent technologies demonstrated that day, offers a glimpse into what robot-assisted combat of the present and the future could look like.

The test was conducted by the Tactical Training and Exercise Control group, together with the Office of Naval Research, and it took place at the Marine Air Ground Task Force Training Command, which is the largest Marine Corps base. The rocket-propelled grenade launcher used was an M72 Light Anti-tank Weapon (or LAW). The weapon is a NATO standard, and thousands of the weapons have been delivered to Ukraine since it was invaded by Russia in February 2022.

The M72 LAW has been in service with US forces since 1963. Weighing just 5.5 pounds, the weapon is light, cheap enough to discard after firing, and dead simple to use. A Marine Corps guide notes that it is a standard tool of infantry battalions (each comprising roughly 800 Marines). The weapon is also not specific to any line of service and “can be fired by any Marine with basic infantry skills.”

[Related: The US military’s tiniest drone feels like it flew straight out of a sci-fi film]

The rockets fired by the launcher can travel up to 3,280 feet, but are most effective at a range of 650 feet. That’s a dangerously close distance to be near a tank, as it places the person trying to destroy the tank within range of not just the tank’s big cannon but also any machine guns it may have for self-defense. This danger is exacerbated for armies fighting in open fields, but the M72 was designed for the density and obstructions of urban combat. All of those features, from simplicity to disposability to close-range firing, make it an ideal weapon to mount on a remote-controlled robot shooter.

“Instead of having a Marine handle the weapon system, manipulate the safeties, we could put a remote trigger mechanism on it that allowed it to all be done remotely,” said Aaron Safadi, in a release on the test. Safadi is the officer in charge of the emerging technology integration section of the Tactical Training and Exercise Control group. “The Marine could be behind cover and concealment, the weapon system could go forward, and the Marine could manipulate the safeties from a safe place while allowing that weapon system to get closer to its target.”

The robot goat on which the Marines tested the M72 is, as a Marine emphatically explains in the video, a tool for testing and not the intended robot for carrying it into combat. As reported by The War Zone, “the underlying quadrupedal robot is a Chinese-made Unitree Go1, which is readily available for purchase online, including through Amazon.” (The War Zone is owned by PopSci’s parent company, Recurrent Ventures.)

In the past, security concerns about using off-the-shelf robotics and drones made in China have led to the Department of Defense banning their use without explicit waivers for permission. That’s left the Pentagon in a sometimes tricky spot, as the overwhelming majority of commercial manufacture of such robots is in China, to the point that even models branded Made in USA have Chinese components.

Both Ukraine and Russia have adopted off-the-shelf commercial robots for use in their war against each other. The low price point of the Go1 goat robot suggests it could follow a similar pattern, should it prove useful as a remote-control firing platform. Should the Marine Corps want a different mount for the M72, it could turn to a platform like the Ghost Robotics Q-UGV. This four-legged robotic dog has already seen use patrolling an Air Force base in Florida, and in 2021 Ghost demonstrated a version of the Q-UGV with a gun mounted on its back at a defense technology exposition.

To mount the M72 on the robot goat, the robot first dons a metal box with firing controls and safety switches on its back. After firing, the box can be opened, the spent launcher discarded, and the robot is ready to take on a new round. It is easy to see the appeal of such a system in combat. With the M72 designed to punch through armor or defenses at short range, the driver could use a video-game-like controller to scout ahead, watching through the robot’s cameras as eyes. Sensors on the side of the robot help it avoid other obstacles. Once it’s in position, the robot’s rocket could be launched, and if the robot survives the encounter, it could let the Marine witness the destruction before advancing.

Bringing tanks or other armored vehicles into cities is already a fraught decision, as urban combat necessitates reduced perception. Cities, even ones reduced to rubble, can hide all sorts of waiting unpleasantness. For urban defenders and assaulters alike, the ability to mount weapons on robotic scouts, even and especially disposable robots with disposable weapons, offers a way to dip a first toe into urban combat without exposing troops to excess danger.

Watch a video of the robot goat, and other items tested in the training exercise, below:

[Embedded video]

This seafaring robot ‘eats’ stinky seaweed and dumps it in deep water https://www.popsci.com/technology/algaray-seaweed-robot/ Tue, 24 Oct 2023 18:00:00 +0000 https://www.popsci.com/?p=582851
AlgaRay robot floating atop water in Antigua
After gathering the seaweed, AlgaRay can dive below the surface to deposit its cargo near the ocean floor. Seaweed Generation/University of Exeter

The AlgaRay scoops up invasive sargassum seaweed before it washes onto shores. It could even alleviate CO2 pollution in the process.

If you’ve ever spent time on a beach in the Gulf of Mexico or the Caribbean, there is a solid chance you stumbled across a slimy mass of stinky, sulfurous-smelling seaweed. The specific marine plant in question during those gross encounters is likely sargassum—while helpful for absorbing CO2, sargassum is also incredibly invasive, and can wreak havoc on both shoreline and ocean ecosystems. Cleanup efforts can cost tens of thousands of dollars while disrupting both tourist and fishing industries, but a recent aquatic robot project is showing immense promise in alleviating sargassum stress. In fact, AlgaRay’s recent successes have even earned it a spot on Time’s Best Inventions of 2023.

Co-designed by Seaweed Generation, a nonprofit organization dedicated to utilizing the versatile plant to help mitigate and remove carbon emissions, an AlgaRay prototype is currently patrolling off the coast of Antigua. There, the roughly 9-foot-wide robot scoops up clumps of sargassum until its storage capacity is filled, at which point the autonomous bot dives 200 meters below the surface.

[Related: Rocks may be able to release carbon dioxide as well as store it.]

At this depth, the air pockets that make sargassum leaves so buoyant are so compressed by the water pressure that the seaweed simply can’t float anymore. Once released by AlgaRay, it sinks to the ocean floor. According to a new writeup by Seaweed Generation’s partners at the University of Exeter, the robot can repeat this process between four and six times every hour. And thanks to a combination of solar panels, lithium batteries, and navigational tools connected to Starlink’s satellite internet constellation, AlgaRay will “ultimately be able to work almost non-stop.”
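
The underlying physics is essentially Boyle’s law: seawater adds roughly one atmosphere of pressure per 10 meters of depth, so a flexible gas pocket shrinks in proportion. A simplified sketch (isothermal, ignoring the seaweed’s tissue):

```python
ATM_PER_METER = 1.0 / 10.0   # seawater adds roughly 1 atm per 10 m of depth

def relative_gas_volume(depth_m: float) -> float:
    """Boyle's law for a flexible air pocket: volume is inversely
    proportional to pressure, so buoyant lift shrinks with depth."""
    pressure_atm = 1.0 + ATM_PER_METER * depth_m
    return 1.0 / pressure_atm

print(f"{relative_gas_volume(200):.0%} of surface gas volume left at 200 m")
# -> about 5%: far too little lift, so the released seaweed keeps sinking
```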

Of course, ocean ecosystems are complex and delicate balancing acts at any depth. AlgaRay’s designers are well aware of this, and stress that its ocean floor CO2 deposits won’t be carried out recklessly. They also note that sargassum blooms—exacerbated by human ecological disruption—are already causing major issues across the world.

“Sargassum inundations… cause environmental, social and economic disruption across the Caribbean, Central US and West African regions,” Seaweed Generation CEO Paddy Estridge and Chief of Staff Blythe Taylor explain on the organization’s website. “Massive influxes of seaweed wash ashore and rot, releasing not just the absorbed CO2 but hydrogen sulfide gasses, decimating fragile coastal ecosystems including mangroves and seagrass meadows and killing countless marine animals.”

[Related: The US is investing more than $1 billion in carbon capture, but big oil is still involved.]

Estridge and Taylor write that humans “need to tread carefully” when it comes to depositing biomass within the deep ocean to ensure there are no “negative impacts or implications on the surrounding environment and organisms.” At the same time, researchers already know sargassum naturally dies and sinks to the bottom of the ocean.

Still, “we can’t assume either a positive or negative impact to sinking sargassum, so a cautious pathway and detailed monitoring has been built into our approach,” Estridge and Taylor write. “The scale of our operations are such that we can measure any change to the ocean environment on the surface, mid or deep ocean. Right now, and for the next few years our operations are literally a drop in the ocean (or a teaspoon of Sargassum per m2).”

As the name might imply, the AlgaRay is inspired by manta rays, which glide through ocean waters while using their mouths to filter and eat algae. In time, future iterations of the robot could even rival manta rays’ massive sizes. A nearly 33-foot-wide version is in the works to collect upwards of 16 metric tons of seaweed at a time—equal to around two metric tons of CO2. With careful monitoring of deep sea repositories, fleets of AlgaRay robots could soon offer an efficient, creative means to remove CO2 from the atmosphere.

“The [Intergovernmental Panel on Climate Change]  has been very clear that we need to be able to remove (not offset, remove) 10 billion [metric tons] of carbon a year from the atmosphere by 2050 to have a hope of avoiding utter catastrophe for all people and all earth life,” write Estridge and Taylor. Knowing this, AlgaRay bots may be a key ally for helping meet that goal. If nothing else, perhaps some beaches will be a little less overrun with rotting seaweed every year. 

Watch what happens when AI teaches a robot ‘hand’ to twirl a pen https://www.popsci.com/technology/nvidia-eureka-ai-training/ Fri, 20 Oct 2023 19:10:00 +0000 https://www.popsci.com/?p=581803
Animation of multiple robot hands twirling pens in computer simulation
You don't even need humans to help train some AI programs now. NVIDIA Research

The results are better than what most humans can manage.

Researchers are training robots to perform an ever-growing number of tasks through trial-and-error reinforcement learning, which is often laborious and time-consuming. To help out, humans are now enlisting large language model AI to speed up the training process. In a recent experiment, this resulted in some incredibly dexterous albeit simulated robots.

A team at NVIDIA Research directed an AI protocol powered by OpenAI’s GPT-4 to teach a simulation of a robotic hand nearly 30 complex tasks, including tossing a ball, pushing blocks, pressing switches, and some seriously impressive pen-twirling abilities.

[Related: These AI-powered robot arms are delicate enough to pick up Pringles chips.]

NVIDIA’s new Eureka “AI agent” utilizes GPT-4 by asking the large language model (LLM) to write its own reward-based reinforcement learning software code. According to the company, Eureka doesn’t need intricate prompting or even pre-written templates; instead, it simply begins honing a program, then adheres to any subsequent external human feedback.

In the company’s announcement, Linxi “Jim” Fan, a senior research scientist at NVIDIA, described Eureka as a “unique combination” of LLMs and GPU-accelerated simulation programming. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists,” Fan added.

Judging from NVIDIA’s demonstration video, a Eureka-trained robotic hand can pull off pen-spinning tricks to rival, if not beat, extremely dexterous humans.

After testing its training protocol within an advanced simulation program, Eureka then analyzes its collected data and directs the LLM to further improve upon its design. The end result is a virtually self-iterative AI protocol capable of successfully encoding a variety of robotic hand designs to manipulate scissors, twirl pens, and open cabinets within a physics-accurate simulated environment.
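
NVIDIA describes an outer loop of generate, train, and reflect. The sketch below captures that shape with stand-in stubs; none of these function names are NVIDIA’s actual API:

```python
import random

# Stand-in stubs: in the real system these would call GPT-4 and a
# GPU-accelerated physics simulator.
def ask_llm(task: str, feedback: str) -> str:
    return f"# candidate reward function for: {task} (feedback: {feedback!r})"

def train_in_simulator(reward_code: str):
    return random.random(), "mean_reward=..., success_rate=..."

def eureka_loop(task: str, rounds: int = 3, candidates: int = 4) -> str:
    """Outer loop as NVIDIA describes it: the LLM drafts candidate reward
    functions, each is scored by actually training a policy against it in
    simulation, and training statistics flow back as feedback."""
    best_code, best_score, feedback = "", float("-inf"), ""
    for _ in range(rounds):
        stats = ""
        for _ in range(candidates):
            reward_code = ask_llm(task, feedback)
            score, stats = train_in_simulator(reward_code)
            if score > best_score:
                best_code, best_score = reward_code, score
        feedback = stats   # the round's statistics guide the next drafts
    return best_code

print(eureka_loop("spin a pen between the fingers"))
```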

Eureka’s alternatives to human-written trial-and-error learning programs aren’t just effective—in most cases, they’re actually better than those authored by humans. In the team’s open-source research paper findings, Eureka-designed reward programs outperformed humans’ code in over 80 percent of the tasks—amounting to an average performance improvement of over 50 percent in the robotic simulations.

[Related: How researchers trained a budget robot dog to do tricks.]

“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” Anima Anandkumar, NVIDIA’s senior director of AI research and one of the Eureka paper’s co-authors, said in the company’s announcement. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”

This weird-looking British ship will keep an eye out for sabotage beneath the surface https://www.popsci.com/technology/british-ship-proteus-surveillance/ Fri, 20 Oct 2023 14:00:37 +0000 https://www.popsci.com/?p=581582
The Proteus.
The Proteus. Ministry of Defence

It's called the Proteus, and it's a surveillance vessel.

On October 10, the Royal Fleet Auxiliary dedicated a ship called the Proteus in a ceremony on the River Thames. The vessel, which looks like someone started building a ship and then stopped halfway through, is the first in the fleet’s Multi-Role Ocean Surveillance program, and is a conversion from a civilian vessel. 

In its new role, the Proteus will keep a protective eye on underwater infrastructure deemed vitally important, and will command underwater robots as part of that task. Before being converted to military use, the RFA Proteus was the Norwegian-built MV Topaz Tangaroa, and it was used to support oil platforms.

Underwater infrastructure, especially pipelines and communications cables, makes the United Kingdom inextricably connected to the world around it. While these structures are hard to get to, as they rest on the seafloor, they are not impossible to reach. Commercial vessels, like the oil rig tenders the Proteus was adapted from, can reach below the surface with cranes and see beneath it through remotely operated submarines. Dedicated military submarines can also access seafloor cables. By keeping an eye on underwater infrastructure, the Proteus increases the chance that saboteurs will be caught and, more importantly, improves the odds that damage can be found and repaired quickly.

“Proteus will serve as a testbed for advancing science and technological development enabling the UK to maintain the competitive edge beneath the waves,” reads the Royal Navy’s announcement of the ship’s dedication.

The conversion from Topaz Tangaroa to Proteus took just 11 months from purchase to dedication, with work completed in September. The 6,600-ton vessel is operated by a crew of just 26 from the Royal Fleet Auxiliary, while the surveillance, survey, and warfare systems on the Proteus are crewed by 60 specialists from the Royal Navy. As the Topaz Tangaroa, the vessel was equipped for subsea construction, installation, light maintenance, and inspection work, as well as survey and remotely operated vehicle operations. The Proteus retains its forward-mounted helipad, which looks like a hexagonal brim worn above the bow of the ship.

Most striking about the Proteus is its large, flat rear deck, which features a massive crane as well as 10,700 square feet of working space, as much as five tennis courts. Helpful to the ship’s role as a home base for robot submersibles is a covered “moon pool” in the deck that, when uncovered, lets the ship launch submersibles directly into the ocean beneath it.

“This is an entirely new mission for the Royal Fleet Auxiliary – and one we relish,” Commodore David Eagles RFA, the head of the Royal Fleet Auxiliary, said upon announcement of the vessel in January.

Proteus is named for one of the sons of the sea god Poseidon in Greek mythology, with Proteus having domain over rivers and the changing nature of the sea. While dedicated on a river, the ship is designed for deep-sea operation, with a ballast system providing stability as it works in the high seas. 

“Primarily for reasons of operational security, the [Royal Navy] has so far said little about the [Multi-Role Ocean Surveillance] concept of operations and the areas where Proteus will be employed,” suggests independent analysts Navy Lookout, as part of an in-depth guide on the ship. “It is unclear if she is primarily intended to be a reactive asset, to respond to suspicious activity and potentially be involved in repairs if damage occurs. The more plausible alternative is that she will initially be employed in more of a deterrent role, deploying a series of UUVs [Uncrewed Underwater Vehicles] and sensors that monitor vulnerable sites and send periodic reports back to the ship or headquarters ashore. Part of the task will be about handling large amounts of sensor data looking for anomalies that may indicate preparations for attacks or non-kenetic malign activity.”

In the background of the UK’s push for underwater surveillance are actual attacks and sabotage on underwater pipelines. In September 2022, an explosion caused damage and leaks in the Nord Stream gas pipeline between Russia and Germany. While active transfer of gas had been halted for diplomatic reasons following Russia’s February 2022 invasion of Ukraine, the pipeline still held gas in it at the time of the explosion. While theories abound for possible culprits, there is not yet a conclusive account of which nation was both capable and interested enough to cause such destruction.

The Proteus is just the first of two ships with this task. “The first of two dedicated subsea surveillance ships will join the fleet this Summer, bolstering our capabilities and security against threats posed now and into the future,” UK Defence Secretary Ben Wallace said in January. “It is paramount at a time when we face Putin’s illegal invasion of Ukraine, that we prioritise capabilities that will protect our critical national infrastructure.”

While the Proteus is unlikely to fully deter such acts, having it in place will make it easier for the Royal Navy to identify signs of sabotage. Watch a video of the Proteus below:

[Embedded video]

AI design for a ‘walking’ robot is a squishy purple glob https://www.popsci.com/technology/ai-robot-blob/ Fri, 13 Oct 2023 15:30:00 +0000 https://www.popsci.com/?p=579501
AI-designed multi-legged robots on table
They may not look like much, but they skipped past billions of years' of evolution to get those little legs. Northwestern University

During testing, the creation could walk half its body length per second—roughly half as fast as the average human stride.

Sam Kriegman and his colleagues made headlines a few years back with their “xenobots”—synthetic robots designed by AI and built from biological tissue samples. While experts continue to debate how best to classify such a creation, Kriegman’s team at Northwestern University has been hard at work on a similarly mind-bending project meshing artificial intelligence, evolutionary design, and robotics.

[Related: Meet xenobots, tiny machines made out of living parts.]

As detailed in a new paper published earlier this month in the Proceedings of the National Academy of Sciences, researchers recently tasked an AI model with a seemingly straightforward prompt: design a robot capable of walking across a flat surface. Although the program delivered original, working examples within literal seconds, the new robots “[look] nothing like any animal that has ever walked the earth,” Kriegman said in Northwestern’s October 3 writeup.

And judging from video footage of the purple multi-“legged” blob-bots, it’s hard to disagree:

[Embedded video]

After offering their prompt to the AI program, the researchers simply watched it analyze and iterate upon a total of nine designs. Within just 26 seconds, the artificial intelligence managed to fast-forward past billions of years of natural evolution to determine that legged movement is the most effective method of mobility. From there, Kriegman’s team imported the final schematics into a 3D printer, which molded a jiggly, soap-bar-sized block of silicone imbued with pneumatically actuated musculature and three “legs.” Repeatedly pumping air in and out of the musculature made the robot’s limbs expand and contract, producing movement. During testing, the robot could walk half its body length per second—roughly half as fast as the average human stride.
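
As a rough intuition only (the paper’s own search is far more sophisticated), the workflow resembles a mutate-and-keep loop over a design vector scored by simulation:

```python
import random

def simulated_speed(design: list) -> float:
    """Stand-in for a physics simulation scoring a candidate body plan."""
    return 1.0 - sum((x - 0.6) ** 2 for x in design)   # toy fitness landscape

def instant_design(n_params: int = 12, iterations: int = 9) -> list:
    """Toy design loop: perturb the body, keep the change if the simulated
    robot walks faster. Nine iterations, echoing the nine designs the
    Northwestern program worked through."""
    design = [random.random() for _ in range(n_params)]
    for _ in range(iterations):
        candidate = [x + random.gauss(0, 0.1) for x in design]
        if simulated_speed(candidate) > simulated_speed(design):
            design = candidate
    return design

print([round(x, 2) for x in instant_design()])
```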

“It’s interesting because we didn’t tell the AI that a robot should have legs,” Kriegman said. “It rediscovered that legs are a good way to move around on land. Legged locomotion is, in fact, the most efficient form of terrestrial movement.”

[Related: Disney’s new bipedal robot could have waddled out of a cartoon.]

If all this weren’t impressive enough, the process—dubbed “instant evolution” by Kriegman and colleagues—took place on a “lightweight personal computer,” not a massive, energy-intensive supercomputer requiring huge datasets. According to Kriegman, previous AI-generated evolutionary bot designs could take weeks of trial and error on high-powered computing systems.

“If combined with automated fabrication and scaled up to more challenging tasks, this advance promises near-instantaneous design, manufacture, and deployment of unique and useful machines for medical, environmental, vehicular, and space-based tasks,” Kriegman and co-authors wrote in their abstract.

“When people look at this robot, they might see a useless gadget,” Kriegman said. “I see the birth of a brand-new organism.”

Futuristic aircraft and robotic loaders dazzled at a Dallas tech summit https://www.popsci.com/technology/up-summit-2023-aircraft-equipment/ Thu, 12 Oct 2023 20:00:00 +0000 https://www.popsci.com/?p=579128
This bizarre-looking flying machine is an ultralight aircraft called the Black Fly, and it holds precisely one person. The company that makes it, Pivotal, recently changed their name from Opener. They plan to start selling a similar model to this one, called Helix, which will cost $190,000. The operator doesn’t need to be a pilot, and the small aircraft also has an emergency parachute. The eight propellers and two wings allow it to fly, and it can travel for about 20 miles or 20 minutes. Rob Verger

Check out these photos of cargo drones, electric flying machines, Army gear, and remote-controlled construction equipment at a Texas event.

Last week at a ranch outside Dallas, Texas, hundreds of people gathered to hobnob and discuss topics like transportation, aviation, drones, and more. Some were clad in cowboy hats. The event, called the UP.Summit, included investors, politicians, business leaders, and representatives from large companies like Airbus, Bell, and Boeing, as well as relatively newer players like Beta Technologies and Joby Aviation that are working on electric aircraft.

On display were gear and hardware from companies like Wisk, Zipline, Jedsy, and more.

Take a look at some of the flying machines and other gadgets and equipment that were at the event, which is put on by investment firm UP.Partners.

This helicopter-like prototype aircraft is called a Volocopter, and it holds one person. Up top are 18 all-electric propellers mounted on a ring that’s about 26 feet in diameter. It can fly for about 20 minutes and has a range of about 11 or 12 miles. Rob Verger
The CEO of Bulgaria-based Dronamics, Svilen Rangelov, tells PopSci that this aircraft is basically a “flying delivery van.” The drone has a wingspan of about 50 feet, measures about 25 feet long, and is called the Black Swan, even though it’s white. Rangelov says that it can carry about 770 pounds of packages a distance of some 1,550 miles, and that ground-based pilots operate or oversee the aircraft as it flies. The company plans to start operating delivery flights in Greece early next year. (The aircraft in the photo is a replica and can’t actually fly.) Rob Verger
This piece of construction equipment is a John Deere wheel loader, but on top of the cab is special gear from a company called Teleo that allows the machine to be remotely operated from large distances. Popular Science had the chance to control a piece of construction equipment called a compact track loader in California from a base station in Texas, and observed a Teleo employee at the same Texas station operate a different large construction vehicle—a Komatsu WA500-8 wheel loader—in Oulu, Finland. Rob Verger
This small robotic helicopter is roughly 22 feet long and 7.5 feet high, and is called the Mosquito. It’s a development aircraft for a company called Rain that’s working on software to snuff out wildfires early. “We’re building technology to stop wildfires before they grow out of control, when they’re the size of a single tree, not when they’re the size of a warzone,” says Maxwell Brodie, the CEO of Rain. They’re collaborating with Sikorsky, which has already developed the tech for a Black Hawk helicopter to fly itself. Brodie says the plan is to eventually pre-position autonomous, uncrewed helicopters (big ones like Black Hawks, not this Mosquito) with Rain’s software so they can tackle wildfires quickly, while they’re still small. Rob Verger
The goggle-like pieces of gear on top of the backpacks are the latest iteration—version 1.2—of the Army’s IVAS (Integrated Visual Augmentation System), which has been a challenging technology to get right and has a history of causing issues like nausea. The goal is to give a soldier a head-up display that can show a compass heading, map, or other information right in front of their eyes. Think of them as augmented reality goggles for soldiers that continue to be a work in progress; they’re made by Microsoft. Rob Verger
This is the tail rotor of an Airbus H160 helicopter. Notice how it’s tilted, or canted, ever so slightly? The 10-degree tilt gives the helicopter a tiny bit of lift—about 1 percent. (The vast majority comes from the main rotor, up top.) While some tail rotors just have blades that spin freely in the air, the ones that are enclosed like this are called Fenestrons. Rob Verger
Like the uncrewed flying machine from Dronamics, this drone’s sole purpose is to carry cargo. But unlike the Dronamics vehicle, it can take off and land vertically using eight electric motors and propellers. (It has another four props for forward flight.) It’s also hybrid electric—an onboard engine and generator create the electricity the system needs. “Jet fuel goes in, 700 volts of electric power comes out, and that electrical power drives the propulsion, and charges the onboard battery,” explains David Merrill, CEO and cofounder of Elroy Air, the company behind the drone. The drone, called the Chaparral, carries cargo in the canoe-like container below it. Merrill says that its range is about 300 miles with a 300-pound payload. They’re working with the Air Force and FedEx. (The drone in the photograph is a full-sized replica of the real thing.) Rob Verger

Disney’s new bipedal robot could have waddled out of a cartoon https://www.popsci.com/technology/disney-robot-cute-animation/ Tue, 10 Oct 2023 18:00:00 +0000 https://www.popsci.com/?p=578352

Its only job (for now) is to be absolutely adorable.

Creating real-world robots that have the same magnetism as our favorite animated characters is no simple task. Walt Disney Imagineering/YouTube

Some robots are cuter than others—but Disney may have just revealed a contender for the most adorable yet. Last week at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Detroit, a team of researchers from Disney Research Studios in Zurich revealed a charismatic, child-sized bipedal bot that looks like a cross between a cleaned-up WALL-E and a baby chick. With stubby legs, a box-like head, and wiggly antennae, it doesn’t need to do much to look loveable.

But this little robot packs plenty of personality into the way it moves—that boxy head has four degrees of freedom, according to IEEE Spectrum, meaning it can look up, down, and around, and tilt in a perplexed manner. Its five-degree-of-freedom legs and hips allow it to balance and waddle around indoors or out, and even catch itself when given a playful shove.


“Most roboticists are focused on getting their bipedal robots to reliably walk,” Disney research scientist Morgan Pope tells IEEE Spectrum. “At Disney, that might not be enough—our robots may have to strut, prance, sneak, trot, or meander to convey the emotion that we need them to.”

[Related: Why humans feel bad for awkward robots.]

While Disney has long been one of the biggest names in animation, creating real-world characters with the same magnetism as our favorite movie characters is complicated—after all, animation tools don’t always play fair with the laws of physics, team lead and research scientist Moritz Bächer added.

Enter a reinforcement learning-based pipeline that helps bring together animation magic and real-world physicality. The system is highly tunable, and can reportedly train a new robot behavior on a single PC. These behaviors can be tweaked, essentially allowing the mostly 3D-printed robot to handle itself in public and stay in character. This process also opens up a whole new world of possibilities for making new robotic characters with different personalities, legs, arms, or other components.
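
Disney hasn’t released the pipeline itself, but the core idea it describes, rewarding a simulated robot for tracking an animator’s reference motion while penalizing physically costly ways of doing so, can be sketched in a few lines. Everything below (the joint values, the weight, the function name) is an illustrative assumption, not Disney’s code:

```python
# A minimal sketch of animation-tracking reward shaping, NOT Disney's actual
# pipeline: score a motion higher when it matches an animator's reference
# pose and lower when it burns unnecessary torque.
import numpy as np

def reward(joint_angles, reference_angles, joint_torques, w_energy=0.01):
    """Higher when the robot tracks the animated reference cheaply."""
    tracking_error = np.sum((joint_angles - reference_angles) ** 2)
    energy_cost = np.sum(joint_torques ** 2)
    return -tracking_error - w_energy * energy_cost

# Toy usage: score a candidate motion against a reference "waddle" frame.
reference = np.array([0.3, -0.1, 0.25, -0.05, 0.1])   # target joint angles (rad)
candidate = reference + np.random.normal(0, 0.05, 5)  # a slightly-off attempt
torques = np.random.normal(0, 1.0, 5)                 # torques used to get there
print(reward(candidate, reference, torques))
```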

[Related: Robotic ‘Super Monster Wolves’ are guarding Japanese towns against bears.]

These kinds of developments are not only fun, but could also prove useful as humans and robots increasingly find themselves in closer quarters. Amazon has been playing around with automation for over a decade, and robots are finding their way into healthcare, conservation, and even into our burrito bowls. The team at Disney argues that a robot that can show you a little bit of emotion or intent can go a long way in bridging the gap between people and potential new robot friends.

College students invented an easy device for cerebral palsy patients to drink on their own https://www.popsci.com/technology/robocup-cerebral-palsy/ Mon, 09 Oct 2023 16:00:00 +0000 https://www.popsci.com/?p=577668

Two undergraduates worked alongside disability advocate Gary Lynn to create the open source 'RoboCup.'

Gary Lynn demonstrates the RoboCup. Brandon Martin/Rice University

“Are you drinking enough water?”

The question is so ubiquitous that it’s become meme canon in recent years. But what may be an annoying reminder to one person is often a logistical challenge for people dealing with mobility issues like cerebral palsy (CP). After learning about the potential physical hurdles involved in staying hydrated, two undergraduate engineering students at Rice University set out to design a robotic tool to help disabled users easily access their drinks as needed. The result, appropriately dubbed “RoboCup,” is not only a simple, relatively easy-to-construct device—it’s one whose plans are already available to anyone online for free.

According to a recent university profile, Thomas Kutcher and Rafe Neathery began work on their invention after being approached by Gary Lynn, a local Houstonian living with CP who oversees a nonprofit dedicated to raising awareness for the condition. Kutcher, a bioengineering major, hopes the RoboCup will remove the need for additional caregiver aid and thus “grant users greater freedom.”

[Related: How much water should you drink in a day?]

RoboCup was by no means perfect from the outset, and the undergraduates reportedly went through numerous iterations before settling on their current design. To optimize the tool to help as many people as possible, Kutcher and Neathery spoke with caregiving and research professionals about how best to improve their schematics.

“They really liked our project and confirmed its potential, but they also pointed out that in order to reach as many people as possible, we needed to incorporate more options for building the device, such as different types of sensors, valves and mechanisms for mounting the device on different wheelchair types,” Kutcher said in their October 6 profile.


The biggest challenge, according to the duo, was balancing simplicity against functionality and durability. In the end, the pair swapped out an early CamelBak-style version for a mounted cup-and-straw design, which is reportedly both more aesthetically pleasing to users and less intrusive.

In a demonstration video, Lynn is shown activating a small sensor near his left hand, which automatically pivots an adjustable straw towards his mouth. He can then drink as much as he wants, then alert the sensor again to swivel the straw back to a neutral position.
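
The Rice team’s open-source plans spell out the actual wiring and parts; purely to illustrate the control logic described above, here is a hypothetical sketch in which each sensor trigger toggles a servo-mounted straw between a parked angle and a drinking angle. The Servo class, angles, and debounce delay are stand-ins, not the RoboCup’s real API:

```python
# Hypothetical RoboCup-style control logic: a proximity-sensor trigger
# swivels the straw toward the mouth; the next trigger parks it again.
import time

class Servo:
    """Stand-in for a real servo driver (e.g., a PWM library on a microcontroller)."""
    def __init__(self):
        self.angle = 0

    def move_to(self, angle_deg):
        self.angle = angle_deg
        print(f"straw at {angle_deg} degrees")

PARKED_DEG, DRINK_DEG = 0, 90          # assumed positions
servo, straw_out = Servo(), False

def on_sensor_triggered():
    """Each trigger swivels the straw to the other position."""
    global straw_out
    straw_out = not straw_out
    servo.move_to(DRINK_DEG if straw_out else PARKED_DEG)
    time.sleep(0.5)                    # debounce so one wave counts once

on_sensor_triggered()  # straw swings to the mouth
on_sensor_triggered()  # straw returns to neutral
```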

Lynn, who tested the various versions of RoboCup, endorsed the device’s ability to offer disabled users more independence in their daily lives, and believes that “getting to do this little task by themselves will enhance the confidence of the person using the device.”

Initially intended as a single-semester project, the RoboCup is something Kutcher and Neathery now plan to keep refining, including investigating ways it could be adapted for people dealing with other forms of mobility issues. In the meantime, the RoboCup is entered in World Cerebral Palsy Day’s “Remarkable Designa-thon,” which promotes new products and services meant to help those with CP. And, as it just so happens, voting is open to the public from October 6-13.

Watch robot dogs train on obstacle courses to avoid tripping https://www.popsci.com/technology/dog-robot-vine-course/ Fri, 06 Oct 2023 18:00:00 +0000 https://www.popsci.com/?p=577508

Four-legged robots have a tough time traipsing through heavy vegetation, but a new stride pattern could help.

Better navigation of complex environments could help robots walk in the wild. Carnegie Mellon University

Four-legged robots can pull off a lot of complex tasks, but there’s a reason you don’t often see them navigating “busy” environments like forests or vine-laden overgrowth. Despite all their abilities, most onboard AI systems remain pretty bad at responding to so many physical variables in real time. It might feel like second nature to us, but in such situations it only takes the slightest misstep to send a quadrupedal robot tumbling.

After subjecting their own dog bot to a barrage of obstacle course runs, however, a team at Carnegie Mellon University’s College of Engineering is now offering a solid step forward, so to speak, for robots deployed in the wild. According to researchers, teaching a quadrupedal robot to reactively retract its legs while walking provides the best gait for both navigating and untangling out of obstacles in its way.

[Related: How researchers trained a budget robot dog to do tricks.]

“Real-world obstacles might be stiff like a rock or soft like a vine, and we want robots to have strategies that prevent tripping on either,” Justin Yim, a University of Illinois Urbana-Champaign engineering professor and project collaborator, said in CMU’s recent highlight.

The engineers compared multiple stride strategies on a quadrupedal robot as it tried to walk across a short distance interrupted by several low-hanging ropes. The robot quickly entangled itself while high-stepping, or walking with its knees angled forward, but retracting its limbs immediately after detecting an obstacle allowed it to smoothly cross the stretch of floor.
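
The CMU paper describes the winning strategy qualitatively; as a rough illustration (not the team’s controller), the retract-on-contact idea can be sketched as a swing loop that watches a force signal and abandons the sweep when it spikes. The canned force readings and threshold below are invented:

```python
# A schematic sketch of the retract-on-contact strategy: sweep the foot
# forward, and if the measured contact force spikes mid-swing, retract and
# lift the foot before sweeping again. Readings are canned stand-ins for a
# real leg's torque sensors.
def swing_leg(force_readings, contact_threshold=2.0):
    """Return the number of sweeps needed to complete one unobstructed step."""
    for attempt, sweep in enumerate(force_readings, start=1):
        snagged = False
        for x, force in enumerate(sweep):
            if force > contact_threshold:   # caught on a rope or vine
                print(f"sweep {attempt}: contact at x={x}, retract and lift")
                snagged = True
                break                        # abandon sweep, pull foot back
        if not snagged:
            print(f"sweep {attempt}: step completed")
            return attempt
    return None

# First sweep hits an obstacle partway through; the retracted, higher second
# sweep clears it.
swing_leg([[0.1, 0.2, 3.5, 0.1], [0.1, 0.2, 0.3, 0.2]])
```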


“When you take robots outdoors, the entire problem of interacting with the environment becomes exponentially more difficult because you have to be more deliberate in everything that you do,” David Ologan, a mechanical engineering master’s student, told CMU. “Your system has to be robust enough to handle any unforeseen circumstances or obstructions that you might encounter. It’s interesting to tackle that problem that hasn’t necessarily been solved yet.”

[Related: This robot dog learned a new trick—balancing like a cat.]

Although wheeled robots may still prove better suited to urban environments, where the ground is generally flatter and infrastructure such as ramps is more common, walking bots could hypothetically prove much more useful in outdoor settings. Researchers believe integrating their reactive retraction response into existing AI navigation systems could help robots during outdoor search-and-rescue missions. The newly engineered daintiness might also help quadrupedal robots conduct environmental surveying without damaging their surroundings.

“The potential for legged robots in outdoor, vegetation-based environments is interesting to see,” said Ologan. “If you live in a city, a wheeled platform is probably a better option… There is a trade-off between being able to do more complex actions and being efficient with your movements.”

How researchers trained a budget robot dog to do tricks https://www.popsci.com/technology/parkour-algorithm-robodog/ Thu, 05 Oct 2023 22:00:00 +0000 https://www.popsci.com/?p=577333

A new 'parkour algorithm' teaches robodogs in virtual settings first.

A robot dog doing parkour. Zipeng Fu / YouTube

While bipedal human-like androids are a staple of sci-fi movies, for many potential real-world tasks, like rescuing people from burning buildings, flooded streets, or the freezing wilds, four-legged “robodogs” are better. In a new paper due to be presented at the Conference on Robot Learning (CoRL) next month in Atlanta, researchers at Stanford University and the Shanghai Qi Zhi Institute propose a novel, simplified machine learning technique for training a vision-based algorithm that enables (relatively) cheap, off-the-shelf robots to climb, leap, crawl, and run around the real world. The researchers claim the robots can do “parkour” all by themselves.

Traditionally, teaching robots to navigate the world has been an expensive challenge. Boston Dynamics’ Atlas robots can dance, throw things, and parkour their way around complex environments, but they are the result of more than a decade of DARPA-funded research. As the researchers explain in the paper, “the massive engineering efforts needed for modeling the robot and its surrounding environments for predictive control and the high hardware cost prevent people from reproducing parkour behaviors given a reasonable budget.” However, recent advances in artificial intelligence have demonstrated that training an algorithm in a computer simulation and then installing it in a robot can be a cost-effective way to teach robots to walk, climb stairs, and mimic animals, so the researchers set out to do the same for parkour on low-cost hardware.

The researchers used two-stage reinforcement learning to train the parkour algorithm. In the first “soft dynamics” step, the virtual robots were allowed to penetrate and collide with the simulated objects but were encouraged—using a simple reward mechanism—to minimize penetrations as well as the mechanical energy necessary to clear each obstacle and move forward. The virtual robots weren’t given any instructions—they had to puzzle out how best to move forward for themselves, which is how the algorithm learns what does and doesn’t work.

In the second “hard dynamics” fine-tuning stage, the same reward mechanism was used but the robots were no longer allowed to collide with obstacles. Again, the virtual robots had to figure out what techniques worked best to proceed forward while minimizing energy expenditure. All this training allowed the researchers to develop a “single vision-based parkour policy” for each skill that could be deployed in real robots.
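
The paper’s exact reward terms and weights aren’t reproduced here, but the two-stage structure just described can be sketched as follows, with invented weights: in the “soft dynamics” stage penetration is allowed but penalized, while in the “hard dynamics” stage any collision invalidates the rollout.

```python
# A sketch of the two-stage reward structure described above (illustrative
# weights, not the paper's): reward forward progress, penalize mechanical
# energy, and treat obstacle penetration differently per stage.
def parkour_reward(forward_progress, penetration_depth, mechanical_energy,
                   stage="soft", w_pen=5.0, w_energy=0.1):
    if stage == "hard" and penetration_depth > 0:
        return None          # collision: invalid rollout in hard dynamics
    reward = forward_progress - w_energy * mechanical_energy
    if stage == "soft":
        reward -= w_pen * penetration_depth   # discourage, don't forbid
    return reward

print(parkour_reward(1.0, 0.2, 3.0, stage="soft"))  # penalized but scored
print(parkour_reward(1.0, 0.2, 3.0, stage="hard"))  # None: disallowed
```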

And the results were incredibly effective. Although the team was working with small robots that stand just over 10 inches tall, their relative performance was pretty impressive—especially given the simple reward system and virtual training program. The off-the-shelf robots were able to scale objects up to 15.75 inches high (1.53x their height), leap over gaps 23.6 inches wide (1.5x their length), crawl beneath barriers as low as 7.9 inches (0.76x their height), and tilt so they could squeeze through gaps a fraction of an inch narrower than their width.

According to an interview with the researchers in Stanford News, the biggest advance is that the new training technique enables the robodogs to act autonomously using just their onboard computer and camera. In other words, there’s no human with a remote control. The robots are assessing the obstacle they have to clear, selecting the most appropriate approach from their repertoire of skills, and executing it—and if they fail, they try again.

The researchers noted that the biggest limitation with their training method is that the simulated environments have to be manually designed. So, going forward, the team hopes to explore “advances in 3D-vision and graphics to construct diverse simulation environments automatically from large-scale real-world data.” That could enable them to train even more adventurous robodogs.

Of course, this Stanford team isn’t the only research group exploring robodogs. In the past year or two, we’ve seen quadrupedal robots of varying shapes and sizes that can paw open doors, climb walls and ceilings, sprint on sand, and balance along beams. But for all that, we’re still a while away from seeing rescue robodogs out in the wild. It seems Labradors aren’t out of a job just yet.

See them in action below:

[Video: the robots climbing, leaping, and crawling through obstacle courses]

An ‘electronic tongue’ could help robots taste food like humans https://www.popsci.com/technology/electronic-tongue-ai-robot/ Wed, 04 Oct 2023 20:00:00 +0000 https://www.popsci.com/?p=577156

A combination of ultra-thin sensors marks the first step in machines being able to mimic our tastes.

The sensor could one day help AI systems develop their own taste palates. Das Research Lab/Penn State

AI programs can already respond to sensory stimuli like touch, sight, smell, and sound—so why not taste? Engineering researchers at Penn State hope to one day accomplish just that, and have designed an “electronic tongue” capable of detecting gas and chemical molecules with components only a few atoms thick. Although not capable of “craving” a late-night snack just yet, the team is hopeful its new design could one day pair with robots to help create AI-influenced diets, curate restaurant menus, and even train people to broaden their own palates.

Unfortunately, human eating habits aren’t based solely on what we nutritionally require; they are also determined by flavor preferences. This comes in handy when our taste buds tell our brains to avoid foul-tasting, potentially poisonous foods, but it also is the reason you sometimes can’t stop yourself from grabbing that extra donut or slice of cake. This push-and-pull requires a certain amount of psychological cognition and development—something robots currently lack.

[Related: A new artificial skin could be more sensitive than the real thing]

“Human behavior is easy to observe but difficult to measure, and that makes it difficult to replicate in a robot and make it emotionally intelligent. There is no real way right now to do that,” Saptarshi Das, an associate professor of engineering science and mechanics, said in an October 4 statement. Das is a corresponding author of the team’s findings, which were published last month in the journal Nature Communications, and helped design the robotic system capable of “tasting” molecules.

To create their flat, square “electronic gustatory complex,” the team combined chemitransistors—graphene-based sensors that detect gas and chemical molecules—with molybdenum disulfide memtransistors capable of simulating neurons. The two components worked in tandem, capitalizing on their respective strengths to simulate the ability to “taste” molecular inputs.

“Graphene is an excellent chemical sensor, [but] it is not great for circuitry and logic, which is needed to mimic the brain circuit,” said Andrew Pannone, an engineering science and mechanics grad student and study co-author, in a press release this week. “For that reason, we used molybdenum disulfide… By combining these nanomaterials, we have taken the strengths from each of them to create the circuit that mimics the gustatory system.”

When analyzing salt, for example, the electronic tongue detected the presence of sodium ions, thereby “tasting” the sodium chloride input. The design is reportedly flexible enough to apply to all five major taste profiles: salty, sour, bitter, sweet, and umami. Hypothetically, researchers could arrange similar graphene device arrays that mirror the approximately 10,000 different taste receptors located on a human tongue.
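
To make the idea concrete, here is a hypothetical illustration of the classification step such a system ultimately needs: matching a sensor array’s response pattern against reference profiles for the five tastes. The response numbers are invented; the real device derives its signals from graphene chemitransistors processed by memtransistor circuitry.

```python
# Hypothetical taste classification: pick the reference profile most similar
# (by cosine similarity) to a five-channel sensor reading. All numbers are
# invented for illustration.
import numpy as np

PROFILES = {                      # invented reference response patterns
    "salty":  np.array([0.9, 0.1, 0.2, 0.1, 0.1]),
    "sour":   np.array([0.1, 0.9, 0.1, 0.2, 0.1]),
    "bitter": np.array([0.2, 0.1, 0.9, 0.1, 0.2]),
    "sweet":  np.array([0.1, 0.2, 0.1, 0.9, 0.1]),
    "umami":  np.array([0.1, 0.1, 0.2, 0.1, 0.9]),
}

def classify(reading):
    """Return the taste profile most similar to the sensor reading."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(PROFILES, key=lambda name: cosine(reading, PROFILES[name]))

print(classify(np.array([0.8, 0.2, 0.1, 0.2, 0.1])))  # -> "salty"
```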

[Related: How to enhance your senses of smell and taste]

“The example I think of is people who train their tongue and become a wine taster. Perhaps in the future we can have an AI system that you can train to be an even better wine taster,” Das said in the statement.

This robot trio mimics the life cycle of a frog https://www.popsci.com/environment/frog-robot-trio-video/ Wed, 04 Oct 2023 14:00:00 +0000 https://www.popsci.com/?p=577051

Search-and-rescue operations could one day feature a fleet of frog-bots to help save the day.

The robots are inspired by frogs' multiple life stages. Colorado State University

New quadrupedal robots, based on years of research alongside some amphibian inspiration, could one day crawl and shimmy their way into search-and-rescue operations. As detailed in a new paper recently published in Nature Communications, the robotic trio developed by a team at Colorado State University can swim, walk, and crawl depending on their environments’ obstacles—thanks in large part to lightweight artificial muscles that don’t require heavy onboard power sources.

[Related: Four-legged dog robots could one day explore the moon.]

The new systems, which have been in development since 2017, were designed by a team led by CSU Department of Mechanical Engineering professor Jianguo Zhao, and rely on materials that change rigidity depending on temperature.

“Our embedded morphing scheme uses a lightweight artificial muscle similar to a human muscle, and it contracts when electricity is applied,” Zhao explained in the project’s October 2 announcement. “By embedding these artificial muscles in the spine of the robot or in its skin, we can achieve a variety of shape-types. Altogether, this approach offers a promising path towards developing robots that can navigate and work in difficult environments.”

[Video: the robots swimming, hopping, and crawling across varied terrain]

Aside from the electrical properties, the robots owe their movements in large part to frogs—or, rather, frogs’ multiple life stages. “They start as tadpoles with tails for swimming before developing legs that let them jump, crawl or swim,” Zhao continued. “We take inspiration from those transformations, but achieving animal-like embedded shape morphing in robots remains challenging and is something we hope this work will continue to address.”

Judging from the video montage, it’s easy to see the frog analogy. Depending on its surroundings and terrain, the robots can curve their limbs to “swim,” then adjust them accordingly to scale a rocky hurdle that mimics a shoreline. On dry land, Zhao’s robots can “hop” along by repeatedly rotating their limbs 360 degrees to push forward. A third version of the robot can flatten itself to skitter through small openings, as well as hang onto a ledge to help transition across gaps.

For now, the robots require remote control, but future iterations could rely on sensor- and camera-based analysis of their environments for navigation, and even morph as needed to handle their surroundings.

Robotic ‘Super Monster Wolves’ are guarding Japanese towns against bears https://www.popsci.com/technology/robot-wolves-guard-bear/ Tue, 03 Oct 2023 21:00:00 +0000 https://www.popsci.com/?p=576879

First introduced to combat invasive wild boars, the robo-wolf may now help deter wandering black and brown bears, experts believe.

It may not look like a real wolf to you, but it does the trick against boars and bears. Wolf Kamuy

Stories about solar-powered robotic wolves first surfaced back in 2017, after Japanese researchers began testing prototypes to combat wild boars’ devastating encroachment into farmland. Since then, a company called Wolf Kamuy has expanded sales of its sentry products, which feature menacing fangs, fur, flashing red LED “eyes,” and a head that shakes side to side while emitting a 90-decibel howl. But boars aren’t the only problem plaguing rural Japanese communities. According to recent reports, Wolf Kamuy is now offering many of its faux wolves as bear deterrents.

[Related: How to watch Alaska’s fat bears.]

It turns out the “Super Monster Wolf” isn’t just effective at protecting farmers’ crops—it’s also pretty good at protecting the farmers themselves. As the BBC reported on October 1, bears are an increasingly difficult, sometimes even deadly nuisance in many areas of Japan thanks to a combination of serious factors, including climate change, deforestation, and urban expansion. What’s more, bear populations in regions such as Hokkaido appear to be increasing even as Japan faces an aging population and declining birth rates. According to the BBC, some researchers estimate there are more than 22,000 bears around Hokkaido. The region has recorded at least 150 bear attacks over the past six decades—including four fatalities in 2021 alone. Meanwhile, bears continue to wander into the more crowded towns and cities bordering wildlife areas.

Enter: the Super Monster Wolf. By installing the guard bots in urban locales, experts hope to deter bears from wandering into populated areas where they could harm both humans and themselves. Researchers previously estimated that a robo-wolf’s howls effectively deter bears from encroaching within approximately 1 square kilometer (about 0.38 square miles) of its installation—arguably better than many electric fence perimeters. With strategic placement, Super Monster Wolves could help elderly communities, and protect the bears.
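
Taking the reported figure at face value, that roughly 1 square kilometer of coverage implies an effective radius of about 0.56 km, which allows a back-of-envelope estimate of how many units a town might need. The 3 km boundary below is an invented example:

```python
# Back-of-envelope coverage estimate using the ~1 square km figure cited
# above: treat each robo-wolf's deterrence zone as a circle of that area and
# space units one diameter apart along a forest-facing boundary.
import math

AREA_KM2 = 1.0                              # reported effective deterrence area
radius_km = math.sqrt(AREA_KM2 / math.pi)   # ~0.56 km effective radius

boundary_km = 3.0                           # hypothetical boundary length
units = math.ceil(boundary_km / (2 * radius_km))
print(f"radius ~{radius_km:.2f} km -> about {units} units for {boundary_km} km")
```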

Of course, humanity cannot solely rely on an army of robot wolves to protect us from bear attacks. Bears (not to mention countless other species) face immense existential threats in the face of ongoing climate change calamities, and it’s not the bears’ fault they are increasingly desperate to find food sources. The best remedy, therefore, is to continue focusing on climate solutions like conservation, renewable energy, and sustainable urban planning, rather than stopgaps like the (admittedly rad) Super Monster Wolf.

Watch Chipotle’s latest robot prototype plunk ingredients into a burrito bowl https://www.popsci.com/technology/chipotle-burrito-bowl-salad-robot/ Tue, 03 Oct 2023 12:00:00 +0000 https://www.popsci.com/?p=576646

Human workers will still have to add the guacamole.

Chipotle also announced an avocado-pitting robot earlier this year. Chipotle

Back in July, Chipotle revealed the “Autocado”—an AI-guided avocado-pitting robot prototype meant to help handle America’s insatiable guacamole habit while simultaneously reducing food waste. Today, the fast-casual chain announced its next automated endeavor—a prep station capable of assembling entrees on its own.

[Related: Chipotle is testing an avocado-pitting, -cutting, and -scooping robot.]

According to the company’s official reveal this morning, its newest robotic prototype—a collaboration with food-service automation startup Hyphen—can assemble virtually any combination of available base ingredients for Chipotle’s burrito bowls and salads underneath human employees’ workspace. Staff, meanwhile, are reportedly freed up to focus on making other, presumably more structurally complex dishes such as burritos, quesadillas, tacos, and kid’s meals. Watch the robot prototype plop food into little piles in the bowl under the workspace here:

[Video: Chipotle’s automated makeline assembling a burrito bowl]

As orders arrive via Chipotle’s website, app, or another third-party service like UberEats, burrito bowls and salads are automatically routed within the makeline, where an assembly system passes dishes beneath the various ingredient containers. Precise portions are then doled out accordingly, after which the customer’s order surfaces via a small elevator system on the machine’s left side. Chipotle employees can then add any additional chips, salsas, and guacamole, as well as an entree lid before sending off the orders for delivery.
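
Hyphen’s system is proprietary, but the routing logic described above can be sketched at a high level: a bowl travels down a line of hoppers and receives a set portion from each hopper its recipe calls for. The station order and portion sizes below are assumptions for illustration.

```python
# A simplified sketch of sequencing a digital order on an automated makeline
# (assumed layout and portions, not Hyphen's real system): the bowl receives
# ingredients only from the hoppers its recipe needs, then surfaces for lids.
MAKELINE = ["rice", "beans", "protein", "salsa", "cheese", "lettuce"]
PORTIONS_OZ = {"rice": 4.0, "beans": 4.0, "protein": 4.0,
               "salsa": 2.0, "cheese": 1.0, "lettuce": 1.0}

def assemble(order):
    """Dispense each requested ingredient as the bowl passes its hopper."""
    for station in MAKELINE:
        if station in order:
            print(f"dispense {PORTIONS_OZ[station]} oz {station}")
    print("raise bowl to counter for lids, chips, and guac")

assemble({"rice", "beans", "salsa"})
```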

[Related: What robots can and can’t do for a restaurant.]

Chipotle estimates around 65 percent of all its digital orders are salads and burrito bowls, so its so-called “cobot” (“collaborative” plus “robot”) could hypothetically handle a huge portion of existing kitchen prep. The automated process may also yield more accurate orders, the company states.

Worker advocates frequently voice concern about automation and its effect on human jobs. And Chipotle isn’t the only chain in question—companies like Wendy’s and Panera continue to experiment with their own automation plans. Curt Garner, Chipotle’s Chief Customer and Technology Officer, described the company’s long-term goal of having the automated digital makeline “be the centerpiece of all our restaurants’ digital kitchens.”

For now, however, the new burrito bowl bot can only be found at the Chipotle Cultivate Center in Irvine, California—presumably alongside the Autocado.

This gigantic mech suit can be yours for $3 million https://www.popsci.com/technology/archax-mech-suit-robot/ Mon, 02 Oct 2023 15:00:00 +0000 https://www.popsci.com/?p=576477

The 15-foot-tall Archax is first and foremost meant to be very 'cool.'

The Archax has two transport modes, and is named after the archaeopteryx. YouTube

Five mech suits capable of morphing between robotic and vehicular modes are now available for pre-order from a Japanese startup overseen by 25-year-old inventor Ryo Yoshida. At nearly 15 feet tall and weighing in at around 3.5 tons, one of Tsubame Industries’ “Archax” joyrides can be all yours—if you happen to have an extra $3 million burning a hole in your pocket.

News of the production update came courtesy of Reuters on Monday; the outlet spoke with Yoshida about the thinking behind the futuristic colossus, which gets its name from the famous winged dinosaur archaeopteryx.

[Related: Robotic exoskeletons are storming out of sci-fi and onto your squishy human body.]

“Japan is very good at animation, games, robots and automobiles so I thought it would be great if I could create a product that compressed all these elements into one,” he said at the time. “I wanted to create something that says, ‘This is Japan.’”

To pilot the steel and iron-framed Archax, individuals must first climb a small ladder and enter a cockpit situated within the robot’s chest. Once sealed inside, a system of nine cameras connected to four view screens allows riders to see the world around them alongside information such as battery life, speed, tilt angle, and positioning. Archax can travel upwards of 6 mph in either of two setups—a four-wheeled upright robotic mode, and a more streamlined vehicle mode in which the cockpit reclines 17 degrees while the chair remains upright. Meanwhile, a set of joysticks alongside two floor pedals controls the mech suit’s movement, as well as its articulated arms and hands.


Unlike countless other robotic creations on the market, however, Archax isn’t designed for rigorous real-world encounters. For now it’s meant to be, per the company’s own description, “cool.”

But that doesn’t mean Yoshida and his team at Tsubame don’t hope to build future Archax models better equipped for real-world use. Yoshida says such pilotable robotic suits could find applications in search-and-rescue operations, disaster relief, and even the space industry. For now, however, Tsubame sounds perfectly satisfied with the machine’s luxury toy status.

“Archax is not just a big robot that you can ride inside. A person can climb into the cockpit and control the vehicle at will. Each part moves with sufficient speed, rigidity, and power,” reads the product’s description.

“And it’s cool,” Tsubame Industries reiterates.

A new drone might help cops stop high-speed car chases https://www.popsci.com/technology/skydio-x10-cop-drone/ Tue, 26 Sep 2023 17:00:00 +0000 https://www.popsci.com/?p=574631

Skydio wants its 'intelligent flying machines' to become part of law enforcement's 'basic infrastructure.' Little regulation stands in their way.

Skydio’s newest drone is designed specifically to act as a remote-controlled first responder. Skydio

A new high-tech surveillance drone developed by California-based startup Skydio includes infrared sensors, cameras capable of reading license plates as far as 800 feet away, and a top speed of 45 mph. Skydio hopes “intelligent flying machines,” like its new X10 drone, will become part of the “basic infrastructure” supporting law enforcement, government organizations, and private businesses. Such an infrastructure is already developing across the country. Meanwhile, critics are renewing their privacy and civil liberties concerns about what they believe remains a dangerously unregulated industry.

Skydio first unveiled its new X10 on September 20, which Wired detailed in a new rundown on Tuesday. The company’s latest model is part of a push to “get drones everywhere they can be useful in public safety,” according to CEO Adam Bry during last week’s launch event. Prior to the X10’s release, Skydio had reportedly sold over 40,000 other “intelligent flying machines” to more than 1,500 clients over the past decade, including the US Army Rangers and the UK’s Ministry of Defence. Skydio execs, however, openly express their desire to expand drone adoption even further via a self-explanatory concept dubbed “drone as first responder” (DFR).

[Related: The Army skips off-the-shelf drones for a new custom quadcopter.]

In such scenarios, drones like the X10 can be deployed from a backpack or car trunk in less than 40 seconds by on-the-scene patrol officers. From there, the drones can be piloted via onboard 5G connectivity by operators at remote facilities and command centers. Skydio believes drones like the X10 are equipped with enough cutting-edge tools to potentially even aid in stopping high-speed car chases.

To allow for this kind of support, however, drone operators increasingly need clearance from the FAA for what are known as beyond visual line of sight (BVLOS) flights. Such a green light allows drone pilots to control fleets from centralized locations instead of remaining onsite. BVLOS clearances are currently major goals for retail companies like Walmart and Amazon, as well as shipping giants like UPS, which need such certifications to deliver to customers at logistically practical distances. According to Skydio, the company has already supported customers in “getting over 20 waivers” for BVLOS flight, although its X10 announcement does not provide specifics as to how.

A man in combat gear holding an X10 drone at night. Credit: Skydio

Drone usage continues to rise across countless industries, both commercial and law-enforcement related. As the ACLU explains, drones’ uses in scientific research, mapping, and search-and-rescue missions are undeniable, but “deployed without proper regulation, drones capable of monitoring personal conversations would cause unprecedented invasions of our privacy rights.”

Meanwhile, civil rights advocates continue to warn that there is very little oversight of drone use over the public during events such as political demonstrations and protests, or even simply large gatherings and music festivals.

“Any adoption of drones, regardless of the time of day or visibility conditions when deployed, should include robust policies, consideration of community privacy rights, auditable paper trails recording the reasons for deployment and the information captured, and transparency around the other equipment being deployed as part of the drone,” Beryl Lipton, an investigative researcher for the Electronic Frontier Foundation, tells PopSci.

“The addition of night vision capabilities to drones can enable multiple kinds of 24-hour police surveillance,” Lipton adds.

Despite Skydio’s stated goals, critics continue to push back against claims that such technology benefits the public, arguing instead that it violates privacy rights while disproportionately targeting marginalized communities. Organizations such as the New York Civil Liberties Union cite police drones deployed at protests across 15 cities in the wake of the 2020 murder of George Floyd.

[Related: Here is what a Tesla Cybertruck cop car could look like]

Skydio has stated in the past that it does not support weaponized drones, although as Wired reports, the company maintains an active partnership with Axon, maker of police tech like Tasers. Currently, Skydio is only integrating its drone fleets with Axon software sold to law enforcement for evidence management and incident response.

Last year, Axon announced plans to develop a line of Taser-armed drones shortly after the Uvalde school shooting massacre. The news prompted near-immediate backlash, causing Axon to backtrack less than a week later—but not before the majority of the company’s AI Ethics board resigned in protest.

Update 09/26/23 1:25pm: This article has been updated to include a response from the Electronic Frontier Foundation.

This massive armored vehicle has a giant plow for clearing Russian mines https://www.popsci.com/technology/mine-clearing-tank/ Fri, 22 Sep 2023 13:36:50 +0000 https://www.popsci.com/?p=573451

Eight machines like this one are already in Ukraine to do the dangerous work of dealing with minefields.

This is a Mine-Clearing Tank. Pearson Engineering

At the DSEI international arms show held in London earlier this month, German defense company FFG showed off a tank-like vehicle it had already sent to Ukraine. The Mine Clearing Tank, or MCT, is a tracked and armored vehicle, based on the WISENT 1 armored platform, designed specifically to clear minefields and protect the vehicle’s crew while doing so. As Russia’s February 2022 invasion of Ukraine continues well into its second year, vehicles like this one show both the present need there and the tools that may ultimately be required for Ukraine to reclaim Russian-occupied territory.

The current shape of the war in Ukraine is largely determined by minefields, trenches, and artillery. Russia holds long defensive lines, where mines guard the approaches to trenches, and trenches protect soldiers as they shoot at people and vehicles. Artillery, in turn, allows Russian forces to strike at Ukrainian forces from behind these defensive lines, making both assault and getting ready for assault difficult. This style of fortification is hardly unique; it’s been a feature of modern trench warfare since at least World War I. 

Getting through defensive positions is a hard task. On September 20, the German Ministry of Defense posted a list of the equipment it has so far sent to Ukraine. The section on “Military Engineering Capabilities” covers an extensive range of tools designed to clear minefields. It includes eight mine-clearing tanks of the WISENT 1 variety, 11 mine plows that can go on Ukraine’s Soviet-pattern T-72 tanks, three remote-controlled mine-clearing robots, 12 Ahlmann backhoe loaders designed for mine clearing, and the material needed for explosive ordnance disposal.

The MCT WISENT 1 weighs 44.5 tons, a weight that includes its heavy armor, crew protection features, and the powerful engines it needs to lift and move the vehicle’s mine-clearing plow. The plow itself weighs 3.5 tons, and is wider than the vehicle itself.

“During the clearing operation, the mines are lifted out of the ground and diverted via the mine clearing shield to both sides of the lane, where they are later neutralized by EOD forces. If mines explode, ‘only’ the mine clearance equipment will be damaged. If mines slip through and detonate under the vehicle, the crew is protected from serious injuries,” reports Gerhard Heiming for European Security & Technology.

Among the protections for the crew are anti-mine seats, designed to divert the energy from blasts away from the occupants. The role of a mine-clearing vehicle is, after all, to drive a path through a minefield, dislodging explosives placed explicitly to prevent this from happening. As the MCT WISENT 1 clears a path, it can also mark the lane it has cleared.

Enemy mine

Mines are designed to make passage difficult, but not impossible. What makes them so effective is that many of the techniques to clear them thoroughly are slow, tedious, time-consuming tasks, often undertaken by soldiers with hand tools.

“The dragon’s teeth of this war are land mines, sometimes rated the most devilish defense weapons man ever devised,” opens How Axis Land Mines Work, a story from the April 1944 issue of Popular Science. “Cheap to make, light to transport, and easy to install, it is as hard to find as a sniper, as dangerous to disarm as a commando. To cope with it, the Army Engineers have developed a corps of specialists who have one of the most nerve-wracking assignments in the book.”

The story goes on to detail anti-tank and anti-personnel mines, which are the two categories broadly in use today. With different explosive payloads and pressure triggers, the work of mine-clearing is about ensuring all the mines are swept aside, so dismounted soldiers and troops in trucks alike can have safe passage through a cleared route.

The MCT WISENT 1 builds upon lessons and technologies for mine-clearing first developed and used at scale in World War II. Even before the 2022 invasion by Russia, Ukraine had a massive mine-clearing operation, working on disposing of explosives left from World War II through to the 2014-2022 Donbass war. The peacetime work of mine clearing can be thorough and slow.

For an army on the move, and looking to break through enemy lines and attack the less-well-defended points beyond the front, the ability of an armored mine-sweeper to clear a lane can be enough to shift the tide of battle, and with it perhaps a stalled front.

Why humans feel bad for awkward robots https://www.popsci.com/technology/human-robot-embarrassment/ Wed, 20 Sep 2023 18:30:00 +0000 https://www.popsci.com/?p=572966

Secondhand embarrassment is related to empathy.

A grimacing smiley face. Bernard Hermant / Unsplash

When someone does something cringey, it’s only human nature to feel embarrassed for them. If a friend slips and falls on a wet floor, it makes sense to feel self-conscious on their behalf. It’s a sign of empathy, according to science, and it determines how people cooperate, connect, and treat one another. What happens, though, when the second person in this situation is replaced with a robot?

Experiencing secondhand embarrassment lights up areas in the human brain associated with pain and the recognition of emotions. In that vein, social anxiety is linked to heightened empathy, but also comes with a reduced capacity to actually understand the other person’s emotions, known as cognitive empathy. And of course, the more socially close and invested a person is in another, the more acutely they’ll feel this bystander discomfort. 

Interestingly, new research from Toyohashi University of Technology in Japan found that humans can have the same sort of secondhand embarrassment when they see a robot commit a social faux pas. A detailed report was published in the journal Scientific Reports last week. 

To test this phenomenon, human subjects were immersed in a virtual environment where both human and robot avatars were present. The researchers then put these avatars, both the ones representing humans and the ones depicting bots, through awkward situations like stumbling in a crowd, running into a sliding door, or dancing clumsily in public. 

Researchers then measured subjects’ skin conductance, or the electrical activity of the sweat glands, which correlates with arousal states like stress and other heightened emotions. Participants also filled out a questionnaire about their emotional responses to each virtual social situation.
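
As a rough illustration of how such signals are typically quantified (not necessarily this study’s pipeline), a skin conductance response can be scored as the peak rise above a pre-stimulus baseline:

```python
# A minimal sketch of scoring a skin conductance response: compare the
# post-stimulus peak to a pre-stimulus baseline. The trace below is a
# simulated signal in microsiemens, not real study data.
def scr_amplitude(signal, stimulus_idx, baseline_s=2, window_s=5, hz=10):
    """Peak rise above baseline in a window after the stimulus."""
    n_base = baseline_s * hz
    baseline = sum(signal[stimulus_idx - n_base:stimulus_idx]) / n_base
    window = signal[stimulus_idx:stimulus_idx + window_s * hz]
    return max(window) - baseline

# Simulated trace: flat at 2.0 uS, rising after the "awkward moment" at sample 50.
trace = [2.0] * 50 + [2.0 + 0.02 * i for i in range(50)]
print(round(scr_amplitude(trace, stimulus_idx=50), 2))  # ~0.98 uS rise
```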

[Related: Do we trust robots enough to put them in charge?]

The data indicates that humans felt embarrassment on behalf of both the human and robot avatars when they were in a socially awkward scenario, although they perceived the situation as more “real” for the human avatar than for the robot.

Still, the team says that the results show that “humans can empathize with robots in embarrassing situations, suggesting that humans assume the robots can be aware of being witnessed and have some degree of self-consciousness based on self-reflection and self-evaluation,” they wrote in the paper. But it also matters what the robot looks like: “The appearance of the robot may affect the empathic embarrassment because humans empathize more strongly with more human-looking robots and less with more mechanical-looking robots when they are mistreated by humans.”

Previous research into this area has turned up similar themes. Last year, a study out of France found that humans would unconsciously sync their movements with those of humanoid robots in a bid to fit in socially. And imbuing robot speech with more emotional undertones makes robots more acceptable to humans.

Despite the interesting findings in this recent study, the team from Toyohashi University of Technology acknowledges that a larger sample size, as well as real-world humans and robots, would make the conclusions more convincing. 

“Our study provides valuable insights into the evolving nature of human-robot relationships. As technology continues to integrate into our daily lives, understanding the emotional responses we have towards robots is crucial,” Harin Hapuarachchi, the lead researcher on the project, said in a press release. “This research opens up new avenues for exploring the boundaries of human empathy and the potential challenges and benefits of human-robot interactions.”

What’s in the US military’s historic lost and found: nukes, jets, and drones https://www.popsci.com/technology/lost-military-f35-drones-nuclear-weapons/ Wed, 20 Sep 2023 11:00:00 +0000 https://www.popsci.com/?p=572760

The F-35 in South Carolina is not the first important asset to go missing for a spell.

An F-35B seen in South Carolina on Aug. 17, 2023. Kyle Baskin / US Marine Corps

For roughly 24 hours, between the afternoon of September 17 and the evening of September 18, the United States Marine Corps couldn’t find one of its F-35B stealth fighter jets. The pilot had ejected, but it took the military a spell to find the jet, and in the process it put out a call for the public to keep their eyes peeled for the plane. Joint Base Charleston confirmed Monday evening that a debris field, believed to be the crashed plane, had been found two hours northeast of the base.

So how does the military lose a stealth jet? That’s the $100-million question. F-35 unit prices vary by model and the lot in which they are purchased; recent F-35B purchases have cost as much as $108 million per jet and as little as $78.3 million. Meanwhile, F-35A models, which the Air Force flies, now cost around $69.9 million, though older lots cost up to $89.2 million. 

The nature of stealth helps explain how it’s possible, in 2023, for the Department of Defense to lose track of one of its own jets, prompting a call for citizens to help search. Stealth is a technology designed to hide planes from radar, so that stealth fighters and bombers can attack buildings, ships, vehicles, and other targets in war with less fear of getting detected and shot down by enemy aircraft and anti-air missiles. To achieve this sort of radar-invisibility, stealth planes have physical shapes that reduce radar signature, along with special coatings that dampen the reflectivity of radio waves.

Because stealth characteristics are built into jets like the F-35 series, the F-22 fighter, and the B-2 and B-21 bombers, those aircraft are simply harder for radars to track. One way to keep track of where planes are is a transponder, which sends out a signal announcing the aircraft’s location. Transponders are useful for commercial and military aircraft, and required for almost all flights in US skies, as they allow aircraft to avoid each other. The Washington Post reported that the F-35B’s transponder was not working at the time the pilot ejected, leading the military to ask the public for help locating the plane.

Another way to make stealth jets more visible, and to conceal the true ability of their radar-avoiding shape, is to include high-radar-visibility augmentation, as is sometimes done at air shows. The military sometimes augments the F-35's radar cross-section during public or semi-public flights so the jet will look different on a radar from how it would during an actual combat mission, retired Air Force General Hawk Carlisle told Defense News.

Public transponder records, as reported by the War Zone (which is owned by PopSci’s parent company, Recurrent), show the search pattern the Air Force used to try to locate the lost F-35B before finding the debris field. If other techniques were used to find the plane beyond visual search, it is likely the military will want to keep those secret, as details about how to find a stealth plane could undermine the massive investment already put into stealth jets.

Even if it briefly created a flurry of media attention, the case of the temporarily missing F-35B is just the latest incident of the US military losing control of something powerful and important. Here are several others.

Lost drones

For as long as the military has operated drones, some of those drones have gotten lost. The two instances below each bear some similarity to this week's wild F-35 hunt.

A plane called the Kettering Bug was built during World War I as an “aerial torpedo,” or a flying uncrewed bomb that would, in the fixed trench combat of the time, travel a set distance and then shed its wings to crash into an enemy position with explosive force. The war ended before the Bug could see action, but this predecessor of both drones and cruise missiles was tested as a secret weapon in the United States. 

On October 4, 1918, the biplane bomb took off, and then flew off track. The US Army searched the area near its Dayton, Ohio launch site, asking the public if they had seen a missing plane. Several of the witnesses reported what appeared to be a plane with a drunk pilot, and the Army went along with those stories, saying the pilot had jumped out and was being treated. The plane, as an uncrewed weapon, had no human pilot on board. Rather than reveal the secret weapon, the Army let witnesses believe they had seen something other than the aerial torpedo. The Army found the wreckage of the Bug, recovered its reusable mechanical parts, and burned the wrecked fuselage on the spot.

Almost a century later in 2017, the US Army lost an RQ-7B Shadow drone, which was launched from a base in southern Arizona on January 31, then discovered over a week later on February 9, having crashed into a tree outside of Denver. The Shadow drone has a stated range of under 80 miles, though that range is how far it can fly while remaining in contact with the ground station used by human operators. Shadow drones can also fly for nine hours, with a cruising speed of 81 mph, so the 630-mile journey was within the distance the drone could technically cover. While drones like the Shadow are programmed to search for lost communications signals, autonomous flight features mean that a failure to connect can lead to unusual journeys, like the one the Shadow took.
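
As a quick sanity check on those figures (illustrative arithmetic only, not the Army's incident analysis), the stated endurance and cruising speed do put the 630-mile journey within the Shadow's theoretical reach:

```python
# Rough range check using the publicly stated Shadow figures above.
endurance_hours = 9.0   # stated maximum flight time
cruise_mph = 81.0       # stated cruising speed
journey_miles = 630.0   # southern Arizona to the crash site near Denver

theoretical_range = endurance_hours * cruise_mph  # 729 miles
print(f"Theoretical range: {theoretical_range:.0f} miles")
print(f"Journey within reach: {journey_miles <= theoretical_range}")  # True
```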

Lost jets

The F-35B that went missing in South Carolina is just the latest such plane to crash and require search and recovery. In November 2021, a British F-35B operating from the HMS Queen Elizabeth crashed into the Mediterranean. The pilot ejected safely, but the sunken stealth jet, once found, required a maritime salvage operation. 

Then, in January 2022, the US Navy lost an F-35C in the South China Sea. The plane approached too low on a landing, skidded across the deck, and then fell off the deck’s edge into the ocean after the pilot had ejected. The incident injured seven sailors, including the pilot.  The sunken stealth jet had to be recovered from a depth of 12,400 feet, using a specialized remotely operated vessel.

While in both cases these crashes featured witnesses in the general vicinity who knew where the lost planes ended up, the recovery took on a similar sense of importance, as even a crashed and sunken jet could reveal crucial details of the aircraft’s design and operation to another country, had one of them gotten there first.

Lost nukes

While jets are often the most expensive piece of hardware lost in a crash, there’s also the cargo to consider. In February 1958, the US Air Force lost a Mark 15 thermonuclear bomb off the coast of Tybee Island, Georgia, following a mid-air collision with an F-86 fighter jet. To date, the bomb has never been found in its watery resting place, despite extensive searching by the US Navy in the months after the incident.

In January 1961, a B-52 bomber transporting two nuclear bombs started to fall apart in the sky above North Carolina. The two bombs crashed into the ground, either as part of the plane or released independently (accounts vary), and neither bomb detonated. But both bombs did come close to detonation, as several safety triggers were activated in the fall, and the whole incident prompted a change to how easy it was to arm US nuclear bombs.

The incident over North Carolina was just one of several nuclear near-misses that came from the transport and failure of systems around US nuclear bombs. In January 1966, a US bomber collided with the tanker refueling it above the village of Palomares in Spain, releasing one nuclear weapon into the sea and three onto land, where two of them cracked open and dispersed the bomb’s plutonium into the wind. The three bombs on land were found and recovered quickly, and the fourth bomb was recovered from the sea after an extensive underwater salvage operation. Cleanup work on the site where the bombs scattered plutonium continued into the 2010s.

The post What’s in the US military’s historic lost and found: nukes, jets, and drones appeared first on Popular Science.


]]>
Mini explosions give this little robot a big bounce https://www.popsci.com/technology/explosive-power-robot/ Fri, 15 Sep 2023 19:00:00 +0000 https://www.popsci.com/?p=570862
Tiny robot standing on perch
Miniature internal combustion engines power this small robot. YouTube

The bug-inspired bot can carry 22 times its own weight and leap almost as high as hopping insects.

The post Mini explosions give this little robot a big bounce appeared first on Popular Science.

]]>

Electrical power and battery arrays remain the go-to routes for juicing up robots, but sometimes old-school explosives can still do the trick. A team at Cornell University recently demonstrated just that idea via a tiny new robot whose small-scale actuators are fueled by what amount to miniature internal-combustion engines. Even at minuscule levels, the bug-sized quadrupedal bot’s design allows it to launch nearly as high as many leaping insects, while also carrying and walking with a load 22 times its own weight.

As detailed in a paper published on September 14 in Science, researchers created a propulsion unit by assembling a 3D-printed combustion chamber with an inflatable elastomeric membrane, electrodes, and tiny fuel-injection tubing. When the electrodes introduce a small spark, the membrane balloons in just half a millisecond with 9.5 newtons of force. The process can then be repeated as quickly as 100 times per second.

“The high frequencies, speeds, and strengths allow [the] actuators to provide microrobots with locomotion capabilities that were previously available only to much larger robots,” writes Northwestern University Assistant Professor of Materials Science and Engineering Ryan Truby in a related essay within Science.

[Related: This small, squishy robot is cuter than its cockroach inspiration.]

But as IEEE Spectrum explains, even the smallest explosions can wear down or damage materials over time. Knowing this, the engineering team designed the elastic membrane using flame-resistant material alongside an integrated flame arrestor to control the timing and size of each little kaboom. The result is an extremely durable propulsion unit that the team estimates can operate continuously for over 750,000 cycles (roughly 8.5 hours for the robot) before any noticeable performance degradation. In video demonstrations, the team showcased their 29 mm-long, 1.5 g robot vertically leaping 59 centimeters, even while carrying comparably massive amounts of weight. To “walk,” the robot fires its actuators at breakneck speed, and it turns by selectively engaging the same engines.
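
Two quick calculations help put those numbers in perspective (this is our own back-of-envelope arithmetic, not the Cornell team's analysis): a single 9.5-newton pulse is several hundred times the 1.5-gram robot's weight, and 750,000 cycles over roughly 8.5 hours averages out to about 25 firings per second, well under the 100-per-second maximum.

```python
# Rough sanity checks on the figures quoted above (illustrative only).
G = 9.81  # standard gravity, m/s^2

robot_mass_kg = 1.5e-3        # 1.5 g robot
peak_force_n = 9.5            # force of a single combustion pulse
weight_n = robot_mass_kg * G  # ~0.015 N

print(f"Force-to-weight ratio: {peak_force_n / weight_n:.0f}x")  # ~645x

cycles = 750_000
duration_s = 8.5 * 3600       # "roughly 8.5 hours" of continuous firing
print(f"Average rate: {cycles / duration_s:.1f} cycles/s")       # ~24.5
```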

Robots photo

Moving forward (so to speak), the team wants to hone their bot’s ability to actually slow its actuators to allow for more precise movement, as well as the ability to “run.” The robot is also currently tethered via power cables, so creating wireless iterations would be integral to deploying the device in a real-world scenario, such as a disaster zone or other hard-to-reach environments.

“One idea we want to explore in the future is using aggregates of these small and powerful actuators as large, variable recruitment musculature in large robots,” Robert F. Shepherd, head of Cornell’s Organic Robotics Lab and study co-author, told IEEE Spectrum. “Putting thousands of these actuators in bundles over a rigid endoskeleton could allow for dexterous and fast land-based hybrid robots.”

Explosive robot muscles—what could go wrong?

The post Mini explosions give this little robot a big bounce appeared first on Popular Science.


]]>
Microflier robots use the science of origami to fall like leaves https://www.popsci.com/technology/microflier-origami-robots/ Wed, 13 Sep 2023 19:00:00 +0000 https://www.popsci.com/?p=570105
Robotic origami microflier
Researchers at the University of Washington developed small robotic devices that can change how they move through the air by 'snapping' into a folded position during their descent. Mark Stone/University of Washington

The newest origami robots can change shape within milliseconds after dropping from drones.

The post Microflier robots use the science of origami to fall like leaves appeared first on Popular Science.

]]>

Origami has inspired yet another robot—in this case, one that dynamically changes its shape after dropping from drones in order to glide through the air while collecting environmental data. As detailed via a new study published in Science Robotics, researchers at the University of Washington relied on the traditional Miura-ori folding method (itself inspired by leaves’ geometric patterns) to underpin their new “microfliers.”

According to study co-senior author Vikram Iyer, a UW assistant professor of computer science and engineering, the microfliers first fall “chaotically” from drones in an unfolded, flat state, much akin to an elm leaf’s descent. Using tiny onboard pressure sensors to measure altitude, alongside timers and Bluetooth signals, the robots then morph midair, changing how airflow acts on their new structure. This allows for a more stable descent, like that of a falling maple leaf.

[Related: Foldable robots with intricate transistors can squeeze into extreme situations.]

“Using origami opens up a new design space for microfliers,” Iyer said in the University of Washington’s announcement. “This highly energy efficient method allows us to have battery-free control over microflier descent, which was not possible before.”

Because of the microfliers’ light weight—about 400 milligrams, or roughly half as heavy as a nail—the robots can already travel the length of a football field when dropped from just 40 meters (131 feet) in the air. Battery-free, solar-fueled actuators kick in at customizable times to control how and when the robots’ shapes interact with the surrounding air, thus steering their descents. Researchers believe unfurling the bots at different times will allow them to cover greater areas, and because folding initiates in just 25 milliseconds, the timing can be extremely precise. Although the current robots only transition in a single direction, researchers hope future versions will fold in both directions, allowing for more precise landings during turbulent weather.
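
Those numbers imply the microfliers cover a bit more than two meters of ground for every meter of altitude lost. The sketch below works that out and mocks up the altitude-or-timer fold trigger described above; the threshold values and function names are hypothetical stand-ins, not the UW team's code.

```python
# Descent math from the figures above, plus a hypothetical fold trigger.
drop_height_m = 40.0    # stated drop altitude (131 feet)
field_length_m = 91.44  # a football field, ~100 yards

# Ground covered per meter of altitude lost while tumbling/gliding.
print(f"Coverage ratio: {field_length_m / drop_height_m:.1f}")  # ~2.3

def should_fold(altitude_m: float, elapsed_s: float,
                fold_altitude_m: float = 20.0,  # hypothetical threshold
                fold_time_s: float = 5.0) -> bool:
    """Fold when a pressure-derived altitude floor or an onboard timer
    is reached, mirroring the triggers described above."""
    return altitude_m <= fold_altitude_m or elapsed_s >= fold_time_s
```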

Time lapse image of origami microflier changing shape during descent

The team believes such microfliers could be easily deployed as useful sensors during environmental and atmospheric surveying. The current models can transmit air temperature and pressure data via Bluetooth signals as far as 60 meters (196 feet) away, but researchers think both their reach and capabilities could be expanded in the future.

Origami is increasingly inspiring new, creative robots.  Earlier this year, researchers at UCLA developed flexible “mechanobots” that can squeeze their way into incredibly narrow environments. Meanwhile, the folding art’s principles are showing immense potential within engineering and building advancements, such as MIT’s recent developments in origami-inspired plate lattice designs for cars, planes, and spacecraft.

The post Microflier robots use the science of origami to fall like leaves appeared first on Popular Science.


]]>
The Ascento Guard patrol robot puts a cartoonish spin on security enforcement https://www.popsci.com/technology/ascento-guard-robot/ Tue, 12 Sep 2023 18:00:00 +0000 https://www.popsci.com/?p=569688
Ascento Guard robot
The new robot literally puts a friendly face on perimeter surveillance. Ascento

A startup's new security guard bot boasts two wheels—and eyebrows.

The post The Ascento Guard patrol robot puts a cartoonish spin on security enforcement appeared first on Popular Science.

]]>

Multiple companies around the world now offer robotic security guards for property and event surveillance, but Ascento appears to be the only one, at least currently, to sell mechanical patrollers boasting eyebrows. On September 12, the Swiss-based startup announced the launch of its latest autonomous outdoor security robot, the Ascento Guard, which puts a cartoon-esque spin on security enforcement.

[Related: Meet Garmi, a robot nurse and companion for Germany’s elderly population.]

The robot’s central chassis includes a pair of circular “eye” stand-ins that blink, along with rectangular, orange hazard lights positioned as eyebrows. When charging, for example, an Ascento Guard’s eyes are “closed” to mimic sleeping, but they open as the robot engages in patrol responsibilities. Perhaps the most distinctive design choice, however, is its agile “wheel-leg” setup, which seemingly allows for more precise movements across a variety of terrains. Showcase footage accompanying the announcement highlights the robot’s various features for patrolling “large, outdoor, private properties.” Per the company’s announcement, it already counts manufacturing facilities, data centers, pharmaceutical production centers, and warehouses as clients.

AI photo

According to Ascento co-founder and CEO, Alessandro Morra, the global security industry currently faces a staff turnover rate as high as 47 percent each year. “Labor shortages mean a lack of qualified personnel available to do the work which involves long shifts, during anti-social hours or in bad weather,” Morra said via the company’s September 12 announcement. “The traditional approach is to use either people or fixed installed cameras… The Ascento Guard provides the best of both worlds.”

Each Ascento Guard reportedly requires only a few hours’ worth of setup time before becoming virtually autonomous via programmable patrol schedules. During its working hours, the all-weather robot is equipped to survey perimeters at a walking speed of approximately 2.8 mph, as well as monitor for fires or break-ins via thermal and infrared cameras. On-board speakers and microphones also allow for end-to-end encrypted two-way communications, while its video cameras can “control parking lots,” per Ascento’s announcement—video footage shows an Ascento Guard scanning car license plates, for example.

While robot security guards are nothing new by now, the Ascento Guard’s decidedly anthropomorphic design, a style typically reserved for elderly care and assistance robots, is certainly a new way to combat potential public skepticism, not to mention the labor and privacy concerns raised by experts about similar automated creations. Ascento’s reveal follows a new funding round backed by a host of industry heavyweights, including the European Space Agency incubator ESA BIC and Tim Kentley-Klay, founder of the autonomous taxi company Zoox.

The post The Ascento Guard patrol robot puts a cartoonish spin on security enforcement appeared first on Popular Science.


]]>
The US military’s tiniest drone feels like it flew straight out of a sci-fi film https://www.popsci.com/technology/black-hornet-drone/ Tue, 12 Sep 2023 11:00:00 +0000 https://www.popsci.com/?p=569223
the black hornet drone
The Black Hornet in flight. The wire hanging down is the aircraft's antenna. Teledyne FLIR

The Black Hornet reconnaissance drone is minuscule and highly maneuverable—and even explored the collapsed parking garage in New York City in April.

The post The US military’s tiniest drone feels like it flew straight out of a sci-fi film appeared first on Popular Science.

]]>

On April 18 in New York City, a parking garage in lower Manhattan collapsed, killing one person—the garage’s manager, Willis Moore. Much of the media coverage surrounding that event focused on a robotic dog that the New York City Fire Department used on the scene, a mechanical quadruped painted like a dalmatian and named Bergh. But another robot explored the collapsed structure that spring day—an exceptionally tiny and quiet drone flown by militaries that looks exactly like a little helicopter.

It’s called the Black Hornet. It weighs less than 1.2 ounces, takes off from its operator’s hand, and streams back video to a screen so people can see what the drone sees and make decisions before approaching a structure that might have hostile forces or other hazards inside it. 

Here’s how this 6.6-inch-long drone works, what it’s like to fly it, and how it was used that April day following the deadly structural collapse. 

black hornet drone
The drone is small enough to take off—and then finish its flight—in an operator’s hand. Rob Verger

Restaurant reconnaissance

Popular Science received a demonstration of the drone on August 10, and had the chance to fly it, in a space on the ground floor of a New York City hotel near Central Park. 

Rob Laskovich, a former Navy SEAL and the lead trainer for the Black Hornet with Teledyne FLIR, the company that makes the diminutive drone, explains that the drone’s low “noise signature” makes it virtually undetectable when it’s more than 10 feet away from people and 10 feet in the air. “It almost disappears,” he says. “And the size of this thing—it’s able to get into very tight corners.” 

Because it’s so quiet and so maneuverable, the itty-bitty drone offers a way to gather information about what’s in a space up to a mile away or more, and to stream that video (at a resolution of 640 by 480 pixels) over an encrypted radio link back to the base station. This latest version of the Black Hornet also doesn’t need access to GPS to fly, meaning it can operate inside a building or in other “GPS-denied” spaces. It carries no weapons. 

Laskovich removes one of the toy-sized Black Hornets from a case; there are three of them in this kit, meaning two can be charging while another one is flying. The drone has a nearly invisible wire antenna that requires a flick of the finger to make it hang down off the back. The Black Hornet, he says, is “almost like a mini Black Hawk helicopter.” It is indeed just like a miniature helicopter; it has a top rotor to give it lift and a tail rotor to prevent it from spinning around in circles—the anti-torque system. 

Mission control for the little bird involves a small non-touchscreen display and a button-filled controller designed to be used with one hand. Laskovich selects “indoor mode” for the flight. “To start it, it’s a simple twist,” he says, giving the Black Hornet a little lateral twist back and forth with his left hand. Suddenly, the top rotor starts spinning. Then he spins the tiny chopper around a bit more, “to kind of let it know where it’s at,” he says. He moves the aircraft up and down. 

“What it’s doing, it’s reading the environment right now,” he adds. “Once it’s got a good read on where it’s at, the tail rotor is going to start spinning, and the aircraft will take off.” And that’s exactly what happens. The wee whirlybird departs from his hand, and then it’s airborne in the room. The sound it makes is a bit like a mosquito. 

On the screen on the table in front of us is the view from the drone’s cameras, complete with the space’s black and white tiled floor; two employees walk past it, captured on video. A few moments later he turns it so it’s looking at us at our spot in a corner booth, and on the screen I see the drone’s view of me, Laskovich, and Chris Skrocki, a senior regional sales manager with Teledyne FLIR, standing by the table. 

Laskovich says this is the smallest drone in use by the US Department of Defense; Teledyne FLIR says that the US Army, Navy, Marines, and Air Force have the drone on hand. Earlier this summer, the company announced that it would produce 1,000 of these itty-bitty aircraft for the Norwegian Ministry of Defense, which would send them to Ukraine, adding to 300 that had already been sent. Skrocki notes that a kit of three drones and other equipment can cost “in the neighborhood of about $85,000.”

Eventually Laskovich pilots the chopper back to him and grabs it out of the air from the bottom, as if he were a gentle King Kong plucking a full-sized helicopter out of the sky, and uses the hand controller to turn it off. 

Kitchen confidential 

The demonstration that Laskovich had conducted was with a Black Hornet model that uses cameras to see the world like a typical camera sensor does. Then he demonstrates an aircraft that has thermal vision. (That’s different from night vision, by the way.) On the base station’s screen, the heat the drone sees can be depicted in different ways: a white-hot mode that renders warm objects bright, a black-hot mode that inverts this, or two different “fuse” modes, the second of which is highly colorful, with oranges and reds and purples. The last of these, with its bright colors, Laskovich calls “Predator mode” because, he says, “it looks like the old movie Predator.”

Laskovich launches the thermal drone with a whir and he flies it away from our booth, up towards a red EXIT sign hanging from a high ceiling and then off towards an open kitchen. I watch to see what the drone sees via the screen on the table in front of me. He gets it closer and closer to the kitchen area and eventually puts it into “Predator mode.” 

A figure is clearly visible on the drone’s feed, working in the general kitchen area. “And the cool part about it, they have no idea there’s a drone overhead right now,” he says. He toggles through the different thermal settings again: in one of the drone’s modes, a body looks black, then in another, white. He descends a bit to clear a screen-type installation that hangs from the ceiling over the kitchen area and pushes further into the cooking space. At one point, the drone, via the screen in front of me, reveals plates on metal shelving. 

“There’s your serving station right there,” he says. “We’re right in the kitchen right now.” He notes that thanks to “ambient noise,” any people nearby likely can’t detect the aircraft. He flies the drone back to us and I can see the black and white tile floor, and then the drone’s view of me and Laskovich sitting at our table. He cycles through the different thermal settings once more, landing on Predator mode again, revealing both me and Laskovich in bright orange and yellow. 

In a military context, the drone’s ideal use case, Laskovich explains, is to provide operators a way to see, from some distance away, what’s going on in a specific place, like a house that might be sheltering hostile forces. “It’s the ability to have real-time information of what’s going on on a target, without compromising your unit,” he says.

One of the thermal views is colloquially called “Predator mode.” In the image above, the author is on the left and Rob Laskovich is on the right. courtesy Teledyne FLIR

Flight lessons

Eventually, it’s my turn to learn to fly this little helo. The action is all controlled by a small gray hand unit with an antenna that enables communication to the drone. On the front of the control stick are a bunch of buttons, and on the back are two more. Some of them control what the camera does. Others control the flight of the machine itself. One of them is a “stop and hover” button. Two of the buttons are for yaw, which makes the helicopter pivot to the left or right. The two on the back tell the helicopter to ascend or descend—the altitude control. The trick in flying it, Laskovich says, is to look at the screen while you’re operating the drone, not the drone itself. 

I hold the helicopter in my left hand, and after I put the system in “indoor mode,” Laskovich tells me, “you’re ready to fly.” 

I twist the Black Hornet back and forth and the top rotor starts spinning with a whir. After some more calibration moves, the tail rotor starts spinning, too. I let it go and it zips up out of my hand. “You’re flying,” says Laskovich, who then proceeds to tell me which buttons to press to make the drone do different things. 

launching a black hornet drone
After the top rotor and the tail rotor begin spinning, the next step is just to let the drone go. Teledyne FLIR / Popular Science

I fly it for a bit around the space, and after about seven minutes, I use my left hand to grab onto the bottom part of the machine and then hit three buttons simultaneously on the controller to kill the chopper’s power. Suddenly, the rotor and tail stop spinning. The aircraft remains in my left hand, a tiny little flying machine that feels a bit like it flew out of a science fiction movie. 

Flying this aircraft, which will hold a stable hover all on its own, is much easier than managing the controls of a real helicopter, which I, a non-pilot, once very briefly had the chance to try under the watchful tutelage of an actual aviator and former Coast Guard commander. 

black hornet drone
The drone can terminate its flight in the pilot’s hand. Teledyne FLIR / Popular Science

The garage collapse

On April 18, Skrocki was in New York City on business when he heard via text message that the parking garage had collapsed. He had the Black Hornet on hand, so he contacted the New York Police Department and offered the drone’s use. They said yes, and he headed down to the scene of the collapse and eventually sent the drone into the collapsed structure “under coordination with the guys there on scene,” Skrocki says. 

He recalls what he saw in there, via the Black Hornet. “There were some vehicles that were vertically stacked, a very busy scene,” he says. “It just absolutely appeared unstable.” When the flight was over, as Skrocki notes in a post on LinkedIn that includes a bit of video, he landed the drone in a hat. The Black Hornet doesn’t store the video it records locally on the device itself, but the base station does, and Skrocki noted on LinkedIn that “Mission data including the stills/video was provided to FDNY.”

Besides the robotic dog, the FDNY has DJI drones, and they said that they used one specific DJI model, an Avata, that day for recon in the garage. As for the Black Hornet, the FDNY said in an emailed statement to PopSci: “It was used after we were already done surveying the building. The DJI Avata did most if not all of the imagery inside the building. The black hornet was used as we had the device present and wanted to see its capabilities. We continue to use the DJI Avata for interior missions.” The FDNY does not have its own Black Hornet. 

Beyond military uses, Skrocki says that the Black Hornet can help in a public safety context or with police departments, giving first responders an eye on a situation where an armed suspect might be suicidal or have a hostage, for example. The drone could provide a way for watchers to know exactly when to try to move in.

In New York state, the Erie County Sheriff’s Office has a Black Hornet set that includes three small aircraft. And Teledyne FLIR says that the Connecticut State Police has the drone, although via email a spokesperson for that police force said: “We cannot confirm we have Black Hornet Drones.” 

The New York City Police Department has controversially obtained two robotic dogs, a fact that spurred the executive director of the New York Civil Liberties Union to tell The New York Times in April: “And all we’re left with is Digidog running around town as this dystopian surveillance machine of questionable value and quite potentially serious privacy consequences.” 

Stuart Schrader, an associate research professor at Johns Hopkins University’s Center for Africana Studies, highlights the potential for military-level technology in civilian hands to experience a type of “mission creep.”

“It seems quite sensible to not put humans or [real] dogs in danger to do the [parking garage] search, and use a drone instead,” Schrader says. “But I think that the reality is what we see with various types of surveillance technologies—and other technologies that are dual-use technologies where they have military origins—it’s just that most police departments or emergency departments have very infrequent cause to use them.” And that’s where the mission creep can come in. 

In the absence of a parking garage collapse or other actual disaster, departments may feel the need to use the expensive tools they already have in other more general situations. From there, the tech could be deployed, Schrader says, “in really kind of mundane circumstances that might not warrant it, because it’s not a crisis or emergency situation, but actually it’s just used to potentiate the power of police to gain access for surveillance.”

The post The US military’s tiniest drone feels like it flew straight out of a sci-fi film appeared first on Popular Science.


]]>
The newest moon-bound robot will roll around like a tennis ball https://www.popsci.com/technology/japan-lunar-ball-robot/ Mon, 11 Sep 2023 17:00:00 +0000 https://www.popsci.com/?p=569255
JAXA LEV-2 lunar probe on sand
This lunar probe was inspired by children's toys. JAXA/TOMY/Sony/Doshisha University

Japan's LEV-2 lunar probe is inspired by children's toys, and could make history by the end of the year.

The post The newest moon-bound robot will roll around like a tennis ball appeared first on Popular Science.

]]>

If all goes according to plan, a tennis ball-sized robot modeled after a children’s toy will soon briefly explore the moon’s surface as part of Japan’s first soft lunar landing. As recently highlighted by Space.com, the Japanese space agency, JAXA, is currently overseeing its Smart Lander for Investigating Moon (SLIM) probe mission, which launched on September 6 alongside the country’s XRISM X-ray satellite payload. Unlike more powerful launches, the less-than-9-foot-wide SLIM will take between three and four months to reach lunar orbit, after which it will survey the roughly 1,000-foot-wide Shioli Crater landing site from afar for about another month.

Afterwards, however, the lander will descend towards the moon, deploying the Lunar Excursion Vehicle 2 (LEV-2) once it reaches around six feet above the surface. The probe’s sphere-shaped casing will then divide into two halves on either side of a small camera system. From there, LEV-2 will begin hobbling around the SLIM landing site and surrounding area for around two hours, until its battery reserve is depleted.

[Related: India’s successful moon landing makes lunar history.]

Per JAXA’s description, LEV-2 was developed by its Space Exploration Innovation Hub Center associate senior researcher Hirano Daichi. Daichi collaborated with a team from Doshisha University as well as the toy manufacturer TOMY to create the tiny space explorer. Meanwhile, Sony provided the two cameras that will survey the moon. According to Daichi, the team turned to children’s toys for their “robust and safe design… which reduced the number of components used in the vehicle as much as possible and increased its reliability.”

“This robot was developed successfully within the limited size and mass using the downsizing and weight reduction technologies and the shape changing mechanism developed for toys by TOMY,” continued Daichi.

Moons photo

If successful, JAXA engineers hope the soft lunar landing method can be adapted to larger craft in the future, including those piloted by human astronauts. “By creating the SLIM lander humans will make a qualitative shift towards being able to land where we want and not just where it is easy to land, as had been the case before,” reads JAXA’s project description. “By achieving this, it will become possible to land on planets even more resource scarce than the moon.”

Beyond just this project, it’s been an active time for lunar exploration. In August, India completed the first successful lunar landing at the moon’s south pole via its Chandrayaan-3 probe. Last year, NASA’s Artemis-1 rocket also kickstarted the space agency’s long-standing goal of establishing a permanent moon base.

The post The newest moon-bound robot will roll around like a tennis ball appeared first on Popular Science.


]]>
This wormy robot can wriggle its way around a jet engine https://www.popsci.com/technology/ge-aerospace-sensiworm-robot/ Sat, 09 Sep 2023 11:00:00 +0000 https://www.popsci.com/?p=568999
an inchworm robot climbing up a smooth surface
Sensiworm can crawl around a jet engine. GE Aerospace

It's soft enough to squeeze into tight spaces.

The post This wormy robot can wriggle its way around a jet engine appeared first on Popular Science.

]]>

A new wormy robot could help with jet engine inspections at GE Aerospace, according to an announcement this week. Sensiworm, short for “Soft ElectroNics Skin-Innervated Robotic Worm,” is the newest outgrowth in GE’s line of worm robots, which includes a “giant earthworm” for tunneling and the “Pipeworm” for pipeline inspection. 

Jet engines are complex devices made up of many moving parts. They have to withstand high heat, constant motion, and varying degrees of pressure. Because they need to perform at their best, they often need to undergo routine cleaning and inspection. Typically, this is done with human eyes and with a device like a borescope, which is a skinny tube with a camera that’s snaked into the engine (technically known as a turbofan). But with Sensiworm, GE promises to make this process less tedious, and to make it possible “on wing,” meaning the turbofan doesn’t need to be removed from the wing for the inspection. 

Like an inchworm, Sensiworm moves forward on its own, using two sticky suction-like parts on its underside to squish into crevices and scrunch around the curves of the engine to find areas where there are cracks or corrosion, or to check whether the heat-protecting thermal barrier coatings are as thick as they should be. 

It comes with cameras and sensors onboard, and is tethered via a long, thin wire. In a demo video, the robot showed that it can navigate around obstacles, hang on to a spinning turbine, and sniff out gas leaks. 

These “mini-robot companions” could add an extra pair of eyes and ears, expanding the inspection capabilities of human service operators for on-wing inspections without having to take anything apart. “With their soft, compliant design, they could inspect every inch of jet engine transmitting live video and real-time data about the condition of parts that operators typically check,” GE Aerospace said in a press release

“Currently, our demonstrations have primarily been focused on the inspection of engines,” Deepak Trivedi, principal robotics engineer at GE Aerospace Research, noted in the statement. “But we’re developing new capabilities that would allow these robots to execute repair once they find a defect as well.”

Flexible, squiggling robots have found lots of uses in many industries. Engineers have designed them for medical applications, search and rescues, military operations, and even space ventures

Watch Sensiworm at work below: 

Engineering photo

The post This wormy robot can wriggle its way around a jet engine appeared first on Popular Science.


]]>
Will we ever be able to trust health advice from an AI? https://www.popsci.com/health/will-we-ever-be-able-to-trust-health-advice-from-an-ai/ Tue, 05 Sep 2023 13:00:00 +0000 https://www.popsci.com/?p=567169
robot doctor talks to elderly person sitting in chair
AI-generated illustration by Dan Saelinger

Medical AI chatbots have the potential to counsel patients, but wrong replies and biased care remain major risks.

The post Will we ever be able to trust health advice from an AI? appeared first on Popular Science.

]]>

IF A PATIENT KNEW their doctor was going to give them bad information during an upcoming appointment, they’d cancel immediately. Generative artificial intelligence models such as ChatGPT, however, frequently “hallucinate”—tech industry lingo for making stuff up. So why would anyone want to use an AI for medical purposes?

Here’s the optimistic scenario: AI tools get trained on vetted medical literature, as some models in development already do, but they also scan patient records and smartwatch data. Then, like other generative AI, they produce text, photos, and even video—personalized to each user and accurate enough to be helpful. The dystopian version: Governments, insurance companies, and entrepreneurs push flawed AI to cut costs, leaving patients desperate for medical care from human clinicians. 

Right now, it’s easy to imagine things going wrong, especially because AI has already been accused of spewing harmful advice online. In late spring, the National Eating Disorders Association temporarily disabled its chatbot after a user claimed it encouraged unhealthy diet habits. But people in the US can still download apps that use AI to evaluate symptoms. And some doctors are trying to use the technology, despite its underlying problems, to communicate more sympathetically with patients. 

ChatGPT and other large language models are “very confident, they’re very articulate, and they’re very often wrong,” says Mark Dredze, a professor of computer science at Johns Hopkins University. In short, AI has a long way to go before people can trust its medical tips. 

Still, Dredze is optimistic about the technology’s future. ChatGPT already gives advice that’s comparable to the recommendations physicians offer on Reddit forums, his newly published research has found. And future generative models might complement trips to the doctor, rather than replace consults completely, says Katie Link, a machine-learning engineer who specializes in healthcare for Hugging Face, an open-source AI platform. They could more thoroughly explain treatments and conditions after visits, for example, or help prevent misunderstandings due to language barriers.

In an even rosier outlook, Oishi Banerjee, an artificial intelligence and healthcare researcher at Harvard Medical School, envisions AI systems that would weave together multiple data sources. Using photos, patient records, information from wearable sensors, and more, they could “deliver good care anywhere to anyone,” she says. Weird rash on your arm? She imagines a dermatology app able to analyze a photo and comb through your recent diet, location data, and medical history to find the right treatment for you.

As medical AI develops, the industry must keep growing amounts of patient data secure. But regulators can lay the groundwork now for responsible progress, says Marzyeh Ghassemi, who leads a machine-learning lab at MIT. Many hospitals already sell anonymized patient data to tech companies such as Google; US agencies could require them to add that information to national data sets to improve medical AI models, Ghassemi suggests. Additionally, federal audits could review the accuracy of AI tools used by hospitals and medical groups and cut off valuable Medicare and Medicaid funding for substandard software. Doctors shouldn’t just be handed AI tools, either; they should receive extensive training on how to use them.

It’s easy to see how AI companies might tempt organizations and patients to sign up for services that can’t be trusted to produce accurate results. Lawmakers, healthcare providers, tech giants, and entrepreneurs need to move ahead with caution. Lives depend on it.


The post Will we ever be able to trust health advice from an AI? appeared first on Popular Science.


]]>
Australia is eyeing uncrewed vessels to patrol the vast Pacific Ocean https://www.popsci.com/technology/australia-pacific-submarine-strategy-autonomy/ Sat, 02 Sep 2023 11:00:00 +0000 https://www.popsci.com/?p=567346
US submarine in Australia
The USS Mississippi in Australia in 2022. It's a Virginia-class fast-attack submarine. John Hall / US Marine Corps

The Pacific is strategically important, and Australia already has a deal with the US and UK involving nuclear-powered submarines.

The post Australia is eyeing uncrewed vessels to patrol the vast Pacific Ocean appeared first on Popular Science.

]]>

The Pacific Ocean is vast, strategically important, and soon to be patrolled by another navy with nuclear-powered submarines. Earlier this year, Australia finalized a deal with the United States and the United Kingdom to acquire its own nuclear-powered attack submarines, and to share in duties patrolling the Pacific. These submarines will be incorporated into the broader functions of the Royal Australian Navy, where they will work alongside other vessels to track, monitor, and, if need be, fight other submarines, especially those of other nations armed with nuclear missiles. 

But because the ocean is so massive, the Royal Australian Navy wants to make sure that its new submarines are guided in their search by fleets of autonomous boats and subs, also looking for the atomic needle in an aquatic haystack—enemy submarines armed with missiles carrying nuclear warheads. To that end, on August 21, Thales Australia announced it was developing an existing facility as part of a bid to incorporate autonomous technology into vessels that can support Australia’s new nuclear-powered fleet. This autonomous technology will first be developed around more conventional roles, like undersea mine clearing, though it is part of a broader picture for establishing nuclear deterrence in the Pacific.

To understand why this is a big deal, it’s important to look at two changed realities of power in the Pacific. The United States and the United Kingdom are allies of Australia, and have been for a long time. A big concern shared by these powers is what happens if tensions over the Pacific with China escalate into a shooting war.

Nuclear submarines

In March of this year, the United States, Australia, and the United Kingdom announced the details of AUKUS, a partnership between the three countries that will involve the development of new submarines, and shared submarine patrols in the Pacific. 

Australia has never developed nuclear weapons of its own, while the United States and the United Kingdom were the first and third countries, respectively, to test nuclear weapons. By basing American and British nuclear-powered (but not armed) submarines in Australia, the deal works to incorporate Australia into a shared concept of nuclear deterrence. In other words, the logic is that if Russia or China or any other nuclear-armed state were to try to threaten Australia with nuclear weapons, they’d be threatening the United States and the United Kingdom, too.

So while Australia is not a nuclear-armed country, it plans to host the submarine fleets of its nuclear-armed allies. None of these submarines are developed to launch nuclear missiles, but they are built to look for and hunt nuclear-armed submarines, and they carry conventional weapons like cruise missiles that can hit targets on land or at sea.

The role of autonomy

Here’s where the new complex announced by Thales comes in. The announcement from Thales says that the new facility will help the “development and integration of autonomous vessels in support of Australia’s nuclear deterrence capability.” 

Australia is one of many nations developing autonomous vessels for the sea. These types of self-navigating robots have important advantages over human-crewed ones. So long as they have power, they can continuously monitor the sea without a need to return to harbor or host a crew. Underwater, direct communication can be hard, so autonomous submarines are well suited to conducting long-lasting undersea patrols. And because the ocean is so truly massive, autonomous ships allow humans to monitor the sea over great distances, as robots do the hard work of sailing and surveying.

That makes autonomous ships useful for detecting and, depending on the sophistication of the given machine, tracking the ships and submarines of other navies. Notably, Australia’s 2025 plan for a “Warfare Innovation Navy” outlines possible roles for underwater autonomous vehicles, like scouting and serving as communications relays. The document also emphasizes that this is new technology, and Australia will work together with industry partners and allies on the “development of doctrine, concepts and tactics; standards and data sharing; test and evaluation; and common frameworks and capability maturity assessments.”

Mine-hunting ships

In the short term, Australia is looking to augment its adoption of nuclear-powered attack submarines by modernizing the rest of its Navy. This includes the replacement of its existing mine-hunting fleet. Mine-hunting is important but unglamorous work; sea mines are quick to place and persist until they’re detonated, defused, or naturally decay. Ensuring safe passage for naval vessels often means using smaller ships that scan beneath the sea using sonar to detect mines. Once found, the vessels then remain in place, and send out either tethered robots or human divers to defuse the mines. Australia has already retired two of its Huon-class minehunters, surface ships that can deploy robots and divers, and is set to replace the remaining four in its inventory. 

In its announcement, Thales emphasized the role it will play in replacing and developing the next-generation of minehunters. And tools developed to hunt mines can also help hunt subs with nuclear weapons on them. Both tasks involve locating underwater objects at a safe distance, and the stakes are much lower in figuring it out first with minehunting.

Developing new minehunters is likely an area where the Royal Australian Navy and industry will figure out significant parts of autonomy. Mine hunting and clearing is a task particularly suited towards naval robots, as mines are fixed targets, and the risk is primarily borne by the machine doing the defusing. Sensors developed to find and track mines, as well as communications tools that allow mine robots to communicate with command ships, could prove adaptable to other areas of naval patrol and warfare.

The post Australia is eyeing uncrewed vessels to patrol the vast Pacific Ocean appeared first on Popular Science.


]]>
This small, squishy robot is cuter than its cockroach inspiration https://www.popsci.com/technology/clari-cockroach-robot/ Fri, 01 Sep 2023 15:00:00 +0000 https://www.popsci.com/?p=567534
The CLARI mini-robot created by Kaushik Jayaram, assistant professor, mechanical engineering and Heiko Kabutz, PhD student, mechanical engineering at the University of Colorado Boulder
CLARI could one day traverse collapsed buildings in search of survivors. Casey Cass/CU Boulder

CLARI is lighter than a ping pong ball, but capable of morphing its body to fit in the tiniest of spaces.

The post This small, squishy robot is cuter than its cockroach inspiration appeared first on Popular Science.

]]>

A multi-legged robot inspired by everyday bugs could soon come to your aid in a literal and figurative pinch. In a new study published via Advanced Intelligent Systems, University of Colorado Boulder researchers recently unveiled their Compliant Legged Articulated Robotic Insect, aka CLARI. The cute, modular bot is lighter than a ping pong ball and small enough that multiple units can fit in your hand. But don’t let its size and weight fool you—CLARI is optimized to squeeze into tight spaces via an extremely malleable body structure. The bug-like bot shows immense promise as an exploratory tool for small areas, such as the interiors of jet engines, and even for search-and-rescue missions.

[Related: This bumblebee-inspired bot can bounce back after injuring a wing.]

According to assistant professor of mechanical engineering and study co-author Kaushik Jayaram, CLARI’s inspiration is owed largely to the everyday cockroach. As a graduate student, Jayaram engineered a robot capable of compressing to just half its height, much like roaches fitting through tiny crevices in buildings.

“We were able to squeeze through vertical gaps, but that got me thinking: That’s one way to compress. What are others?” said Jayaram in an August 30 statement.

Fast forward a few years to CLARI, a new iteration that builds upon previous soft robotic advancements. In its standard shape, CLARI resembles a square with four articulating legs, each controlled by its own dual actuators and circuitry. When encountering a difficult environment, however, the team’s robot can narrow from 1.3 inches wide to just 0.8 inches. With more refinement, Jayaram’s team believes future CLARI robots could become even more malleable.

Insects photo

“What we want are general-purpose robots that can change shape and adapt to whatever the environmental conditions are,” Jayaram said. He likens the ultimate version to an amoeba, “which has no well-defined shape but can change depending on whether it needs to move fast or engulf some food.”

Instead of dining opportunities, however, CLARI bots could use their unique structures and various leg configurations to traverse disaster zones in search of missing victims, or inspect the innards of machinery without needing to take apart the entire product. Right now, CLARI still requires wired connections for both power and controls, but Jayaram’s team hopes to eventually create wireless models capable of independent movement and exploration.

“Most robots today basically look like a cube,” Jayaram said. “Why should they all be the same? Animals come in all shapes and sizes.”

The post This small, squishy robot is cuter than its cockroach inspiration appeared first on Popular Science.


]]>
This drug-delivery soft robot may help solve medical implants’ scar tissue problem https://www.popsci.com/technology/soft-robot-drug-ai/ Thu, 31 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=567276
Professor Garry Duffy and Dr Rachel Beatty show the soft robotic implant developed by University of Galway and MIT
The implant uses mechanotherapy to adjust its shape and size, thus avoiding scar tissue buildup. Martina Regan

The new design could one day provide continuous, consistent drug dispersal without succumbing to fibrosis complications.

The post This drug-delivery soft robot may help solve medical implants’ scar tissue problem appeared first on Popular Science.

]]>

Scar tissue, also known as fibrosis, is the scourge of medical device implants. Even when receiving potentially life saving drug treatments, patients’ bodies often form scarring around the foreign object, thus eventually forcing the implant to malfunction or fail. This reaction can drastically limit a procedure’s efficacy, but a new breakthrough combining soft robotics and artificial intelligence could soon clear the troublesome hurdle.

According to a new study published with Science Robotics, a collaboration between researchers at MIT and the University of Galway resulted in new medical device tech that relies on AI and a malleable body to evade scar tissue buildup. 

“Imagine a therapeutic implant that can also sense its environment and respond as needed using AI,” Rachel Beatty, co-lead author and postdoctoral candidate at the University of Galway, said in a statement. “This approach could generate revolutionary changes in implantable drug delivery for a range of chronic diseases.”

The technology’s secret weapon is its conductive, porous membrane capable of detecting when it is becoming blocked by scar tissue. When this begins to occur, a machine learning algorithm kicks in to oversee an emerging treatment known as mechanotherapy, in which soft robotic implants inflate and deflate at various speeds and sizes to deter scar tissue formation.
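
To make the control loop concrete, here is a minimal Python sketch of the sensing-and-response cycle described above, with a fixed threshold standing in for the study’s machine learning model. Every name, number, and the toy fibrosis model is an assumption for illustration, not the device’s published code.

```python
# Illustrative sketch of the closed-loop idea described above: the porous
# membrane's conductance drops as scar tissue blocks it, and the controller
# responds by changing the actuation regime. A fixed threshold stands in for
# the study's machine learning model; all names and numbers are assumptions.

def membrane_conductance(blockage: float) -> float:
    """Toy model: conductance falls linearly as fibrosis covers the membrane."""
    return max(0.0, 1.0 - blockage)

def choose_actuation(conductance: float) -> dict:
    """Pick an inflate/deflate regime based on how occluded the membrane is."""
    if conductance < 0.6:  # assumed threshold for "scar tissue forming"
        return {"amplitude": 1.0, "period_s": 5.0}   # vigorous cycling deters buildup
    return {"amplitude": 0.3, "period_s": 60.0}      # gentle maintenance cycling

# Simulate scar tissue slowly accumulating across five checks.
for day, blockage in enumerate([0.0, 0.1, 0.25, 0.45, 0.6]):
    c = membrane_conductance(blockage)
    print(f"day {day}: conductance={c:.2f} -> {choose_actuation(c)}")
```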

[Related: A micro-thin smart bandage can quickly heal and monitor wounds.]

Ellen Roche, an MIT professor of mechanical engineering and study co-author, explains that personalized, precision drug delivery systems could greatly benefit from responding to individuals’ immune system responses. Additionally, such devices could reduce “off-target effects” while ensuring the right drug dosages are delivered at the right times.

“The work presented here is a step towards that goal,” she added in a statement.

In training simulations, the team’s device could develop personalized, consistent dosage regimens in situations involving significant fibrosis. According to researchers, the new device’s AI could effectively control drug release even in a “worst-case scenario of very thick and dense scar tissue,” per the August 31 announcement.

According to Garry Duffy, the study’s senior author and a professor of anatomy and regenerative medicine at the University of Galway, the team initially focused on using the new robot for diabetes treatment. “Insulin delivery cannulas fail due to the foreign body response and have to be replaced often (approx. every 3-5 days),” Duffy told PopSci via email. “If we can increase the longevity of the cannula, we can then maintain the cannula for longer with less changes of the set required by the person living with diabetes.”

Beyond diabetes, they envision a future where the device can be easily adapted to a variety of medical situations and drug delivery regimens. The advances, Duffy added in the August 31 statement, could soon “provide consistent and responsive dosing over long periods, without clinician involvement, enhancing efficacy and reducing the need for device replacement because of fibrosis.”

The post This drug-delivery soft robot may help solve medical implants’ scar tissue problem appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Future leaping robots could take a cue from seed-launching witch hazel plants https://www.popsci.com/environment/witch-hazel-seed-robot/ Tue, 29 Aug 2023 15:00:00 +0000 https://www.popsci.com/?p=566561
Close-up of witch hazel plant
Witch hazel can eject seeds from their shells as fast as 30 feet-per-second. Deposit Photos

Despite their small size, witch hazel seed pods pack a powerful punch.

The post Future leaping robots could take a cue from seed-launching witch hazel plants appeared first on Popular Science.

]]>
Close-up of witch hazel plant
Witch hazel can eject seeds from their shells as fast as 30 feet-per-second. Deposit Photos

Witch hazel plants have various medicinal uses, but a closer look at their propagation technique reveals something resembling cannon fire more than convalescence. As witch hazels’ woody seed capsules dry and warp, they split open and build pressure against the seeds themselves. Eventually, the pressure ejects the seeds from their pods in an impressive display of force for something so small. Getting a detailed look at that process, however, is difficult to do with the naked eye.

“If you blink you’ll miss it,” Justin Jorge, a Duke University biomechanical engineering graduate student, explained in a statement earlier this month.

As detailed in a paper recently published in the Journal of the Royal Society Interface, training high-powered cameras on witch hazel seed launches is providing a better glimpse at how the delicate plants can exert so much comparative force. In time, their findings could influence a new generation of leaping robots.

[Related: Leaping robots take physics lessons from grasshoppers.]

The witch hazel deep dive comes courtesy of senior author and Duke University professor of biology Sheila Patek, whom Jorge works alongside as part of his PhD research. According to the study, Patek’s team first trained a high-speed video camera, capturing 100,000 frames per second, on three varieties of seed-bearing witch hazel plants collected from Duke Gardens and Duke Forest. Researchers then waited for the plants to propagate and examined the seeds’ launch speeds. The playbacks proved both impressive and surprising.

Upon review, witch hazel seeds reach speeds upwards of 30 feet per second within just half a millisecond of leaving their pods. What’s more, the speed is largely uniform across plant breeds, despite seed masses ranging from as little as 15 milligrams to 10 times that.

Robots photo

“We found that the launch speeds were all roughly the same,” continued Jorge. “Given the order of magnitude difference in seed masses, I was not expecting that at all.”

Further investigation revealed that witch hazel plant varieties’ seed capsules are proportional to the size of the seeds themselves—heavier seeds mean larger pods, thus a greater reserve of elastic energy. This ensures that, regardless of plant or seed size, the rapid-fire launch speed remains consistent.
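
That proportionality alone is enough to explain the uniform launch speeds. A back-of-the-envelope energy balance, idealized to ignore friction and air resistance:

```latex
% Idealized: all stored elastic energy E becomes the seed's kinetic energy.
E = \tfrac{1}{2} m v^{2}
\quad\Longrightarrow\quad
v = \sqrt{\frac{2E}{m}}
% If pods scale with their seeds so that E = k m for some constant k, then
v = \sqrt{\frac{2km}{m}} = \sqrt{2k},
% independent of the seed mass m.
```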

Jorge explained that while most people may associate springiness with coils, rubber bands, or archery bows, biology allows for “all these weird, complex shapes.” It stands to reason, then, that these unique designs could improve synthetic springs, such as those found within certain small jumping robots.

“People ask me all the time, ‘why are you looking at seed-shooting plants?’” said Jorge. “It’s the weirdness of their springs.”

The post Future leaping robots could take a cue from seed-launching witch hazel plants appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
These AI-powered robot arms are delicate enough to pick up Pringles chips https://www.popsci.com/technology/robot-arms-pringles/ Thu, 24 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=565256
Robot arms lifting a single Pringles chip
The 'Bi-Touch' system relies on deep reinforcement learning to accomplish delicate tasks. Yijiong Lin

Using deep reinforcement learning and 'proprioception,' the two robotic limbs can pick up extremely fragile objects.

The post These AI-powered robot arms are delicate enough to pick up Pringles chips appeared first on Popular Science.

]]>
Robot arms lifting a single Pringles chip
The 'Bi-Touch' system relies on deep reinforcement learning to accomplish delicate tasks. Yijiong Lin

A bimanual robot controlled by a new artificial intelligence system responds to real-time tactile feedback so precisely that it can pick up individual Pringles chips without breaking them. Despite the delicacy required for such a feat, the AI program’s methodology allows it to learn specific tasks solely through simulated scenarios in just a couple of hours.

Researchers at the University of Bristol’s Bristol Robotics Laboratory detailed their new “Bi-Touch” system in a paper published on August 23 in IEEE Robotics and Automation Letters. In it, the team highlights how their AI directs its pair of robotic limbs to “solve tasks even under unexpected perturbations and manipulate delicate objects in a gentle way,” lead author and engineering professor Yijiong Lin said in a statement on Thursday.

What makes the team’s advancement so promising is its use of two robotic arms, versus the single limb seen in most tactile robotics projects. Despite doubling the number of limbs, however, training takes just a few hours. To accomplish this, researchers first train their AI in a simulation environment, then apply the finalized Bi-Touch system to their physical robot arms.

[Related: This agile robotic hand can handle objects just by touch.]

“With our Bi-Touch system, we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch,” Lin continued. “And more importantly, we can directly apply these agents from the virtual world to the real world without further training.”

The Bi-Touch system’s success is owed to its reliance on Deep Reinforcement Learning (Deep-RL), in which robots attempt tasks through copious trial-and-error experimentation. When successful, researchers give the AI a “reward,” much like when training a pet. Over time, the AI learns the best steps to achieve its given goal—in this case, using the two limbs, each capped with a single soft pad, to pick up and maneuver objects such as a foam brain mold, a plastic apple, and an individual Pringles chip. With no visual inputs, the Bi-Touch system relies solely on proprioceptive feedback such as force, physical positioning, and self-movement.
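
To make the reward idea concrete, here is a toy, self-contained sketch of that trial-and-error loop. It is not the Bi-Touch codebase: the force values, rewards, and simple Q-learning update are invented purely to illustrate how repeated rewards steer an agent toward a gentle-but-firm grip.

```python
# Toy illustration of reward-driven trial and error (Q-learning) on a
# one-step "grip force" problem: too much force breaks the chip, too
# little drops it. A sketch of the learning principle only.

import random

ACTIONS = [0.1, 0.3, 0.5, 0.7, 0.9]   # candidate grip forces (arbitrary units)

def reward(force: float) -> float:
    if force < 0.25: return -1.0       # chip slips out of the pads
    if force > 0.55: return -1.0       # chip snaps under pressure
    return 1.0                         # clean pick-up earns the "reward"

q = {a: 0.0 for a in ACTIONS}          # learned value estimate per action
alpha, epsilon = 0.1, 0.2              # learning rate, exploration rate

for trial in range(500):
    # Explore occasionally, otherwise exploit the best-known force.
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])  # nudge the estimate toward the outcome

print(max(q, key=q.get))  # settles on a gentle-but-firm force (0.3 or 0.5)
```

No one tells the agent the right force; it only learns whether each attempt succeeded, which is the essence of the reward signal described above.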

The team hopes that their new Bi-Touch system could one day be deployed in industries such as fruit picking and domestic services, and potentially even be integrated into artificial limbs to recreate touch sensations. According to the researchers, the Bi-Touch system’s use of “affordable software and hardware,” coupled with the impending open-source release of its code, ensures additional teams around the world can experiment with and adapt the program to their goals.

The post These AI-powered robot arms are delicate enough to pick up Pringles chips appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
These nifty drones can lock together in mid-air to form a bigger, stronger robot https://www.popsci.com/technology/drones-assemble-air/ Wed, 23 Aug 2023 22:00:00 +0000 https://www.popsci.com/?p=564938
two drones coming together while flying
Drones, assemble. University of Tokyo / Advanced Intelligent Systems

Teamwork makes the dream work.

The post These nifty drones can lock together in mid-air to form a bigger, stronger robot appeared first on Popular Science.

]]>
two drones coming together while flying
Drones, assemble. University of Tokyo / Advanced Intelligent Systems

A drone’s size affects what it can—or can’t—do. If a drone is too small, it may be limited in the types of tasks it can complete, or the amount of heavy lifting it can do. But if a drone is too big, it may be difficult to get into the air or to navigate around tricky structures, even if it makes up for those shortcomings in other ways.

A solution that a group of engineers from the University of Tokyo came up with is to create a set of drone units that can assemble and disassemble in the air. That way, they can break up to fit into tight spaces, but can also combine to become stronger if needed. 

Last month, the detailed design behind this type of system, called Tilted-Rotor-Equipped Aerial Robot With Autonomous In-Flight Assembly and Disassembly Ability (TRADY), was described in the journal Advanced Intelligent Systems.

The drones used in the demonstration look like normal quadcopters but with an extra component (a plug or jack). The drone with the plug and the drone with the jack are designed to lock into one another, like two pieces of a jigsaw puzzle. 

[Related: To build a better crawly robot, add legs—lots of legs]

“The team developed a docking system for TRADY that takes its inspiration from the aerial refueling mechanism found in jet fighters. A funnel-shaped unit on one side of the mechanism means any errors lining up the two units are compensated for,” according to Advanced Science News. To stabilize the units once they intertwine, “the team also developed a unique coupling system in the form of magnets that can be switched on and off.”

Engineering photo
The assembly mechanism, illustrated. University of Tokyo / Advanced Intelligent Systems

Although they used only two units in their test runs, the authors wrote in the paper that this methodology “can be easily applied to more units by installing both the plug type and the jack type of docking mechanisms in a single unit.”

To control these drones, the researchers developed two systems: a distributed control system for operating each unit independently, which can be switched to a unified control system once the units are docked. An onboard PC conveys the position of each drone, allowing the units to angle themselves appropriately for coming together and apart.
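
A minimal sketch of what that hand-off between control modes might look like, using simple proportional position control. All names, gains, and the controller itself are illustrative assumptions, not the TRADY authors’ implementation:

```python
# Hypothetical sketch of a TRADY-style control hand-off: each unit runs its
# own position controller until docking, after which one controller commands
# the assembly as a single body.

def p_controller(pos, target, gain=1.0):
    """Proportional command toward a target, per axis."""
    return tuple(gain * (t - p) for p, t in zip(pos, target))

def distributed_step(units, approach_targets):
    # Pre-docking: every unit independently chases its own approach waypoint.
    return {name: p_controller(pos, approach_targets[name])
            for name, pos in units.items()}

def unified_step(units, assembly_target):
    # Post-docking: command the docked pair's center of mass as one body.
    com = tuple(sum(axis) / len(units) for axis in zip(*units.values()))
    return p_controller(com, assembly_target)

units = {"plug": (0.0, 0.0, 1.0), "jack": (0.4, 0.0, 1.0)}  # positions via onboard PC
print(distributed_step(units, {"plug": (0.2, 0.0, 1.0), "jack": (0.2, 0.0, 1.0)}))
print(unified_step(units, (1.0, 0.0, 1.5)))  # after the magnets latch
```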

Other than testing the smoothness of the assembly and disassembly process, the team put these units to work by giving them tasks to do, such as inserting a peg into a pipe and opening a valve. The TRADY units were able to complete both challenges.

“As a future prospect, we intend to design a new docking mechanism equipped with joints that will enable the robot to alter rotor directions after assembly. This will expand the robot’s controllability in a more significant manner,” the researchers wrote. “Furthermore, expanding the system by utilizing three or more units remains a future challenge.” 

Engineering photo
Here are the assembled drone units working to turn a valve. University of Tokyo / Advanced Intelligent Systems

The post These nifty drones can lock together in mid-air to form a bigger, stronger robot appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Ukraine claims it built a battle drone called SkyKnight that can carry a bomb https://www.popsci.com/technology/ukraine-skyknight-drone/ Tue, 22 Aug 2023 22:09:09 +0000 https://www.popsci.com/?p=564533
ukraine troops training exercise
Ukrainian soldiers during a training exercise in 2017. Anthony Jones / US Army

The announcement came via the Ministry of Defense’s Telegram account.

The post Ukraine claims it built a battle drone called SkyKnight that can carry a bomb appeared first on Popular Science.

]]>
ukraine troops training exercise
Ukrainian soldiers during a training exercise in 2017. Anthony Jones / US Army

For 18 months, Russia’s invasion of Ukraine has been fought largely from the ground. Neither Russia nor Ukraine has been able to establish air superiority, or the ability to completely rule the sky at the other’s expense. While Ukraine is working to gradually build up a new air force using NATO-model fighters like the F-16 (which nations including Denmark and the Netherlands have pledged to the country), it is also using a range of drones to drop death from the sky. On August 19, the Ukrainian Ministry of Defense announced a small new armed drone for military use: the SkyKnight.

The announcement of the new UAV was posted to the Ministry of Defense’s Telegram account, and features an image of the SkyKnight drone. The vehicle is compact, and features four limbs like a common quadcopter, but each limb sports two rotors, making the drone an octocopter. A sensor is fitted on the front of the drone, with a camera facing forward, and what appear to be batteries are strapped, in an unusual configuration, to the top of the drone’s hull. Underneath, it holds a 2.5 kg (5.5 lbs) bomb. That’s between three and five times as heavy as a hand grenade, and would be a large explosive for a drone of this size.

“This can be used against stationary and moving targets – anything from tanks, armored vehicles, artillery and other systems, to infantry units on the move and in trenches, and against any target that is identified as a Russian military one,” says Samuel Bendett, an analyst at the Center for Naval Analysis and adjunct senior fellow at the Center for New American Security. “This payload can be effective and devastating against infantry units, as evidenced from multiple videos of similar attacks by quadcopters.”

Before the massive invasion of Ukraine in February 2022, the country fought a long, though more geographically confined, war against Russian-backed separatists in the Donbas region of Eastern Ukraine. Using quadcopters as bombers was a regular occurrence in that war, as when Ukrainian forces used a DJI Mavic quadcopter to drop a bomb on trenches in 2018. While the Mavic was not built for war, it is a simple, easy-to-use machine that could be modified in the field to carry a small explosive and a release claw. Paired with the drone’s cameras and human operators watching from a control screen, soldiers could get a bird’s-eye view of their human targets, and then attack from above.

This tactic persisted in the larger war from February 2022, where small drones joined medium and larger drones in the arsenals of both warring nations. The war in Ukraine is hardly the first to see extensive use of drones, but none so far have matched it in sheer scale.

“Never before have so many drones been used in a military confrontation,” writes Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Many, possibly the majority, of the drones used by Ukrainian forces were originally designed for commercial purposes or for hobbyists.” 

The SkyKnight is described as domestically produced, a product of present Ukrainian industry built for this specific war. It appears to share parts with the broader hobbyist drone market, and its assembly, complete with strapped-on batteries and exposed wires (at least according to how it’s depicted on Telegram), speaks to ease of assembly over finicky obsession with form.

In the announcement of the SkyKnight, the Ministry of Defense says that if the pilot has any familiarity with DJI or Autel drones, which stabilize themselves in flight, then the pilot can learn to fly the SkyKnight in about a week.

“DJI and Autel are a staple [Uncrewed Aerial Vehicle] across the Ukrainian military, with many thousands fielded since the start of the Russian invasion,” says Bendett. “DJI especially as a go-to drone for ISR, target tracking, artillery spotting and light combat missions. Ukrainian forces and drone operators have amassed a lot of experience flying these Chinese-made drones.”

Domestic manufacture is important, not just because of the shorter supply lines, but because DJI’s response to the conflict has been to ban the sale of its drones to Ukraine and Russia.

“The Chinese manufacturer DJI produces most of these systems,” writes Franke. “It officially suspended operations in Ukraine and Russia a few weeks into the war, but its drones, most notably the Mavic type, remain among the most used and most sought-after systems.”

By making its own self-detonating drone weapons, Ukraine is able to use the drones as a direct weapon, which can attack from above and is hard to see or stop. In a war where soldiers describe fighting without quadcopters as being “like blind kittens,” a flying camera with a bomb attached makes soldiers deadly, at greater range, and in new ways.

Beyond the airframe and remote control, the Ministry of Defense boasts that the SkyKnight has an automatic flight mode, and can continue to fly towards a target selected by the operator even if the operator loses communication with the drone.

“Ukraine is investing a lot of resources in domestic combat drone production to meet the challenge from the Russian military that is increasingly fielding more quadcopter and FPV-type drones,” says Bendett. “This SkyKnight needs to be manufactured in sufficient quantities to start making a difference on the battlefield.”

The post Ukraine claims it built a battle drone called SkyKnight that can carry a bomb appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
See the coolest and strangest machines from the World Robot Conference https://www.popsci.com/technology/world-robot-conference-2023/ Tue, 22 Aug 2023 19:00:00 +0000 https://www.popsci.com/?p=564388
2023 World Robot Conference In Beijing robots
Humanoid robots are on display during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 16, 2023 in Beijing, China. VCG / VCG via Getty Images

From cute bionic cats to giant welding arms, check out this year's bots in pictures.

The post See the coolest and strangest machines from the World Robot Conference appeared first on Popular Science.

]]>
2023 World Robot Conference In Beijing robots
Humanoid robots are on display during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 16, 2023 in Beijing, China. VCG / VCG via Getty Images

The annual World Robot Conference wrapped up today in Beijing after a full week of humanoid, dog, butterfly, and industrial showcases. First launched in 2015, the WRC serves as a meetup for designers, investors, students, researchers, and curious visitors to check out the latest advancements in AI-powered machinery.

From four-legged assistants, to human-like expressive heads, to bipedal “cyborgs,” WRC offered some of the latest, greatest, and strangest projects currently in the works. Check out the gallery below highlighting which robots dazzled onlookers and could soon move from the conference showroom to the everyday world.

A child interacts with a bionic cat during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 19, 2023 in Beijing, China. Photo: Du Jianpo / VCG via Getty Images
Welding robots assemble a car during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 19, 2023 in Beijing, China. Photo by Zhan Min / VCG via Getty Images
Humanoid robots perform a dance during the 2023 World Robot Conference in Beijing, China, August 18, 2023. In the first half of 2023, the output of China’s industrial robots and service robots increased by 5.4% and 9.6%, respectively. Photo: CFOTO / Future Publishing via Getty Images
Humanoid robots are on display at the booth of EX Robots during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 18, 2023 in Beijing, China. Photo: Song Yu / VCG via Getty Images
People visit brain-computer interface (BCI) exhibition area during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 19, 2023 in Beijing, China. Photo: Zhan Min / VCG via Getty Images
UBTECH Panda Robot performs during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 18, 2023 in Beijing, China. Photo: VCG / VCG via Getty Images
Humanoid robot is on display during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 19, 2023 in Beijing, China. Photo: Zhan Min/VCG via Getty Images
Unitree robotic dog is on display during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 16, 2023 in Beijing, China. Photo: VCG/VCG via Getty Images

The post See the coolest and strangest machines from the World Robot Conference appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
These bathroom-cleaning bots won’t replace human janitors any time soon https://www.popsci.com/technology/somatic-bathroom-robot/ Tue, 22 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=564294
Artist rendering of Somatic bathroom cleaning robot entering restroom
Somatic offers a robot service that automates many janitorial tasks. Somatic

Somatic offers a fleet of automated bathroom cleaning robots for businesses. Some experts are wary.

The post These bathroom-cleaning bots won’t replace human janitors any time soon appeared first on Popular Science.

]]>
Artist rendering of Somatic bathroom cleaning robot entering restroom
Somatic offers a robot service that automates many janitorial tasks. Somatic

Despite long hours, physical demands, and often unhygienic working conditions, the average salary of a janitor in the United States is less than $34,000. Somatic, a robotic cleaning service for commercial buildings founded in 2020, offers a robot that can do the job for barely a third of that wage.

“We aim for customers to walk in and not be able to tell if a person or a robot cleaned the bathroom,” Somatic CEO Michael Levy tells PopSci.

In time-lapsed video footage recently highlighted by New Atlas and elsewhere, a Somatic bathroom bot can be seen roaming the halls of a medical facility, entering both single and multi-stall restrooms, as well as attending to tasks like spraying disinfectant and vacuuming the floor. According to Levy, “Less frequent tasks like restocking consumables (paper towels, toilet paper, and soap) and taking the trash out still require local staff input.”

[Related: The germiest places you might not be cleaning.]

But although Levy assures customers—and laborers—that its line of semi-autonomous machines, first unveiled in 2020, is meant as an aid alongside sanitation workers, others aren’t so sure.

For Paris Marx, a longtime tech industry critic and host of the podcast Tech Won’t Save Us, Somatic’s robots are another example of an engineering team believing they can understand and automate tasks when “they really have no idea what a janitor (in this case) really does.”

Marx notes the more pressing concerns in many automation drives are the scare tactics employed by managers, who often use the threat of new technologies to force workers into accepting lower wages, worse conditions, and surveillance technologies on the job.

Robots photo

Meanwhile, those within the janitorial and service industries echoed their concern about the rise of automated products like those offered by Somatic.

“As we have seen during the WGA and SAG-AFTRA strikes this summer, every industry is experiencing technological changes that may impact workers’ lives,” a spokesperson for Service Employees International Union (SEIU) said. “Whether it’s actors, writers, or janitors, what is important is for employers to negotiate with workers through their unions how these technologies will be employed to ensure the best outcomes—for consumers, workers, their families, and our communities.”

[Related: Study shows the impact of automation on worker pay.]

According to Marx, the current onslaught of AI-powered and robotic labor products are reminiscent of tech companies’ and media’s job doomsday prophecies from the mid-2010s. “[T]he reality was that while robots were trialed in everything from elder care to food service, very few of them actually stuck around because they simply didn’t do the job as well as a human,” says Marx. “A decade later, we’re repeating a similar cycle with generative AI and robots like Somatic’s bathroom cleaning robot, but I don’t expect the outcome to be any different than last time.”

“When you watch their demonstration video, the robot is only going over clean bathrooms—it never shows us how it handles a real mess,” says Marx, noting the robot appears slow and doesn’t seem to provide the deep clean most people might expect for a public restroom.

According to Levy, each initial setup of their bathroom bots is done virtually from Somatic’s office. “We ‘play the worst video game ever,’” he says. “We clean the bathroom one time using software we built. That cleaning plan is then pushed to the robot via an [over the air] update.”
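
For a sense of what such a remotely authored plan could contain, here is a hypothetical sketch. Somatic has not published its plan format; the schema, field names, and values below are invented for illustration:

```python
# Hypothetical sketch of a waypoint-style cleaning plan that could be
# authored once at the office and pushed to the robot over the air.
# Every field below is an invented assumption, not Somatic's format.

import json

cleaning_plan = {
    "site": "clinic-main",
    "room": "restroom-2F",
    "steps": [
        {"action": "navigate", "waypoint": [4.2, 1.1]},     # meters from door
        {"action": "spray", "target": "stall-1", "agent": "disinfectant"},
        {"action": "spray", "target": "stall-2", "agent": "disinfectant"},
        {"action": "vacuum", "zone": "floor"},
        {"action": "navigate", "waypoint": [0.0, 0.0]},     # return to dock
    ],
}

# Serialize for the over-the-air update Levy describes.
print(json.dumps(cleaning_plan, indent=2))
```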

For Marx, however, there’s room for improvement. “I also didn’t see it touch the sinks,” they note.

The post These bathroom-cleaning bots won’t replace human janitors any time soon appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Dead whales and dinosaur eggs: 7 fascinating images by researchers https://www.popsci.com/environment/science-images-competition-2023/ Fri, 18 Aug 2023 10:00:00 +0000 https://www.popsci.com/?p=563700
Dead humback whale on beach from aerial view
Researchers from the University of Glasgow’s Scottish Marine Animal Stranding Scheme conduct a necropsy of a stranded humpback whale. Submitted by Professor Paul Thompson, photo captured by James Bunyan from Tracks Ecology

See the world from a scientist's perspective.

The post Dead whales and dinosaur eggs: 7 fascinating images by researchers appeared first on Popular Science.

]]>
Dead humback whale on beach from aerial view
Researchers from the University of Glasgow’s Scottish Marine Animal Stranding Scheme conduct a necropsy of a stranded humpback whale. Submitted by Professor Paul Thompson, photo captured by James Bunyan from Tracks Ecology

Oh, the wonders scientists see in the field. Documenting the encounters can be an integral part of the discovery process, but it can also pull others into the experience. These seven photos and illustrations are the winners of this year’s BMC Ecology and Evolution image competition, which gets submissions from researchers all around the world each year. It includes four categories: “Research in Action,” “Protecting our planet,” “Plants and Fungi,” and “Paleoecology.”

See the full gallery of winners and their stories on the BMC Ecology and Evolution website. And explore last year’s winners here.

Fruiting bodies of small orange fungi
An invasive orange pore fungus poses unknown ecological consequences for Australian ecosystems. Cornelia Sattler
Beekeepers holding honeycomb in Guinea
A sustainable beekeeping project launched by the Chimpanzee Conservation Center in Guinea, in the villages surrounding Faranah, showcases an inspiring solution to combat deforestation caused by traditional honey harvesting from wild bees. By cultivating their own honey, the locals avoid tree felling and increase production. Roberto García-Roa
Marine biologist releasing black-tip reef shark in ocean
A researcher releases a new-born blacktip reef shark in Mo’orea, French Polynesia. Victor Huertas
Hadrosaur egg with embryo. Illustration.
This digital illustration is based on a pair of hadrosauroid dinosaur eggs and embryos from China’s Upper Cretaceous red beds, dating back approximately 72 to 66 million years ago. It depicts an example of a “primitive” hadrosaur developing within the safety of its small egg. Submitted by Jordan Mallon. Restoration by Wenyu Ren.
Brown spider on wood parasitized by fungus
While it is not uncommon to encounter insects parasitised by “zombie” fungi in the wild, it is a rarity to witness large spiders succumbing to these fungal conquerors. In the jungle, near a stream, lies the remains of a conquest shaped by thousands of years of evolution. Roberto García-Roa
Marine biologists steering underwater robot in the ocean
Researchers from the Hoey Reef Ecology Lab deploy an underwater ROV at Diamond Reef within the Coral Sea Marine Park. Victor Huertas

The post Dead whales and dinosaur eggs: 7 fascinating images by researchers appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Origami-inspired robot can gently turn pages and carry objects 16,000 times its weight https://www.popsci.com/technology/robot-gripper-kirigami/ Tue, 15 Aug 2023 18:00:00 +0000 https://www.popsci.com/?p=563193
Soft robot gripper turning a book page
The new gripper is delicate enough to turn individual book pages. NC State

The gripper design finds a balance between 'strength, precision and gentleness.'

The post Origami-inspired robot can gently turn pages and carry objects 16,000 times its weight appeared first on Popular Science.

]]>
Soft robot gripper turning a book page
The new gripper is delicate enough to turn individual book pages. NC State

The Japanese art of paper cutting and folding known as kirigami has provided a wealth of inspiration for ingenious robotic designs, but the latest example might be the most versatile and impressive yet. As first detailed earlier this month in Nature Communications, a team at North Carolina State University recently developed a new soft robot gripper sensitive enough to handle water droplets and turn book pages, but strong enough to achieve a payload-to-weight ratio of 16,000. With additional refinements, engineers believe the gripper could find its way into a wide array of industries—as well as into human prosthetics.

“It is difficult to develop a single, soft gripper that is capable of handling ultrasoft, ultrathin, and heavy objects, due to tradeoffs between strength, precision and gentleness,” study author Jie Yin, an NC State associate professor of mechanical and aerospace engineering, said in a statement. “Our design achieves an excellent balance of these characteristics.”

[Related: Foldable robots with intricate transistors can squeeze into extreme situations.]

While previous soft grippers have been developed using elements of kirigami, the researchers’ tendril-like structures distribute their force in such a way as to be delicate and precise enough to help zip certain zippers and pick up coins. As New Scientist recently also noted, the shape and angling allow the 0.4-gram grippers to hold objects as heavy as 6.4 kilograms—a payload-to-weight ratio 2.5 times higher than the previous industry record.
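
The headline ratio follows directly from those two figures:

```latex
\frac{\text{payload}}{\text{gripper mass}}
  = \frac{6.4\,\text{kg}}{0.4\,\text{g}}
  = \frac{6400\,\text{g}}{0.4\,\text{g}}
  = 16{,}000
```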

Because the grippers’ abilities derive from their design and not the materials themselves, the team also showcased additional potential by building iterations from plant leaves. The potential for biodegradable grippers could prove extremely useful in situations where they are only temporarily necessary, such as handling dangerous medical waste like needles.

Robots photo

If all that weren’t enough, the NC State team went one step further by experimenting with attaching their grippers to a myoelectric prosthetic hand controlled via muscle activity in a user’s forearm. “The new gripper can’t replace all of the functions of existing prosthetic hands, but it could be used to supplement those other functions,” said Helen Huang, paper co-author and NC State’s Jackson Family Distinguished Professor in the Joint Department of Biomedical Engineering. “And one of the advantages of the kirigami grippers is that you would not need to replace or augment the existing motors used in robotic prosthetics. You could simply make use of the existing motor when utilizing the grippers.”

Yin, Huang, and their colleagues hope to eventually collaborate with robotic prosthetic makers, food processing companies, as well as electronics and pharmaceutical businesses to develop additional usages for their soft grippers.

The post Origami-inspired robot can gently turn pages and carry objects 16,000 times its weight appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This waddling robot could guide baby turtles to the sea https://www.popsci.com/technology/baby-sea-turtle-robot/ Tue, 08 Aug 2023 18:00:00 +0000 https://www.popsci.com/?p=561891
Sea turtle robot crawling across sand
A tiny robot mimics the gait and movement of baby sea turtles. University of Notre Dame

Engineers synthesized the gaits and anatomy of multiple sea turtle species to create a helpful turtle bot.

The post This waddling robot could guide baby turtles to the sea appeared first on Popular Science.

]]>
Sea turtle robot crawling across sand
A tiny robot mimics the gait and movement of baby sea turtles. University of Notre Dame

It’s sad but true: only around one in 1,000 baby sea turtles survive the arduous trek from their beach nests to the open ocean. This journey has only grown more fraught thanks to continued seaside development and all manner of human trash obstacles for the tiny creatures. To both better understand their movements, as well as potentially help them out, a team of researchers at the University of Notre Dame recently designed and built their own turtle robot.

Their results? Well, take a look for yourself and try not to say “Awww.”

Animals photo

“The sea turtle’s unique body shape, the morphology of their flippers and their varied gait patterns makes them very adaptable,” explained Yasemin Ozkan-Aydin, an assistant professor of electrical engineering and roboticist at the University of Notre Dame who led the latest biomimicry project.

Along with electrical engineering doctoral student Nnamdi Chikere and undergraduate John Simon McElroy, Ozkan-Aydin broke down sea turtles’ evolutionary design into a few key parts: an oval-shaped frame, four individually operated remote-controlled flippers, a multisensor device, a battery, and an onboard control unit. The trio relied on silicone molds to ensure the flippers’ necessary flexibility, and utilized 3D-printed rigid polymers for both the frame and the flipper connectors.

[Related: Safely share the beach with endangered sea turtles this summer.]

To maximize its overall efficacy, the team’s new turtle-bot isn’t inspired by a single species. Instead, Ozkan-Aydin and her colleagues synthesized the gait patterns, morphology, and flipper anatomy of multiple turtle species to take “the most effective aspects from each,” she said on August 7.

Unlike other animal-inspired robots, however, Ozkan-Aydin’s turtle tech is initially intended solely to help their biological mirrors. “Our hope is to use these baby sea turtle robots to safely guide sea turtle hatchlings to the ocean and minimize the risks they face during this critical period,” she explains.

Judging from recent reports, they could use all the help they can get. According to the Wild Animal Health Fund, 6 out of 7 sea turtle species are currently considered threatened or endangered. The aptly named nonprofit sea turtle organization, See Turtles, lists a number of current threats facing the species, including getting entangled in fishing gear, illegal trade and consumption of eggs and meat, marine pollution, and global warming. 

The post This waddling robot could guide baby turtles to the sea appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Why industrial automation can be so costly https://www.popsci.com/technology/robot-profit-study/ Mon, 07 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=561580
Robotic arms welding car frames on automotive assembling line
Research indicates businesses can't necessarily ease their way into automation. Deposit Photos

A new study tracks robotic labor's potential for profit—and the rat race to maintain it.

The post Why industrial automation can be so costly appeared first on Popular Science.

]]>
Robotic arms welding car frames on automotive assembling line
Research indicates businesses can't necessarily ease their way into automation. Deposit Photos

Companies often invest in automation with the expectation of increased profits and productivity, but that might not always be the case. A recent study indicates businesses are likely to see diminished returns from automation—at least initially. What’s more, becoming too focused on robotic integration could hurt a company’s ability to differentiate itself from its competitors.

According to a new review of European and UK industrial data between 1995 and 2017, researchers at the University of Cambridge determined that many businesses experienced a “U-shaped curve” in profit margins as they moved to adopt robotic tech into their production processes. The findings, published on August 2 in IEEE Transactions on Engineering Management, suggest companies should not necessarily rush towards automation without first considering the wider logistical implications.

[Related: Workplace automation could affect income inequality even more than we thought.]

“Initially, firms are adopting robots to create a competitive advantage by lowering costs,” said Chander Velu, the study’s co-author and a professor of innovation and economics at Cambridge’s Institute for Manufacturing. “But process innovation is cheap to copy, and competitors will also adopt robots if it helps them make their products more cheaply. This then starts to squeeze margins and reduce the profit margin.”

As co-author Philip Chen also notes, researchers “intuitively” believed more robotic tech upgrades would naturally lead to higher profits, “but the fact that we see this U-shaped curve instead was surprising.” Following interviews with a “major American medical manufacturer,” the team also noted that as robotics continue to integrate into production, companies appear to eventually reach a point where their entire process requires a complete redesign. Meanwhile, focusing too much on robotics for too long could allow other businesses time to invest in new products that set them apart for consumers, leading to a further disadvantage.

[Related: Chipotle is testing an avocado-pitting, -cutting, and -scooping robot.]

“When you start bringing more and more robots into your process, eventually you reach a point where your whole process needs to be redesigned from the bottom up,” said Velu. “It’s important that companies develop new processes at the same time as they’re incorporating robots, otherwise they will reach this same pinch point.”

Regardless of profit margins and speed, all of this automation frequently comes at a huge cost to human laborers. Last year, a study from researchers at MIT and Boston University found that the negative effects stemming from robotic integration could be even worse than originally believed. Between 1980 and 2016, researchers estimated that automation reduced the wages of men without high school degrees by nearly nine percent, and those of women without the same degree by around two percent, adjusted for inflation.

The post Why industrial automation can be so costly appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
NASA gears up to send a trio of rovers to the moon in 2024 https://www.popsci.com/technology/nasa-cadre-rovers/ Fri, 04 Aug 2023 13:30:00 +0000 https://www.popsci.com/?p=561096
Two NASA lunar CADRE rovers parked on the ground
Each prototype CADRE rover is roughly the size of a shoe box. NASA/JPL-Caltech

If successful, the CADRE robots could change how future space missions are planned.

The post NASA gears up to send a trio of rovers to the moon in 2024 appeared first on Popular Science.

]]>
Two NASA lunar CADRE rovers parked on the ground
Each prototype CADRE rover is roughly the size of a shoe box. NASA/JPL-Caltech

A team of small, solar-powered rovers is traveling to the moon next year. There, the rovers will attempt to autonomously organize and carry out a mission with next-to-no input from NASA’s human controllers. If successful, similar robotic fleets could one day tackle a multitude of mission tasks, thus allowing their human team members to focus on a host of other responsibilities.

Three robots, each roughly the size of a carry-on suitcase, comprise the Cooperative Autonomous Distributed Robotic Exploration (CADRE) project. The trio will descend onto the lunar surface via tethers deployed by a 13-foot-tall lander. From there, NASA managers back on Earth, such as CADRE principal investigator Jean-Pierre de la Croix, plan to transmit a basic command such as “Go explore this region.”

[Related: Meet the first 4 astronauts of the ‘Artemis Generation’.]

“[T]he rovers figure out everything else: when they’ll do the driving, what path they’ll take, how they’ll maneuver around local hazards,” de la Croix explained in an August 2 announcement via NASA. “You only tell them the high-level goal, and they have to determine how to accomplish it.”

The trio will even elect a “leader” at their mission’s outset to divvy up work responsibilities, which will reportedly include traveling in formation, exploring a roughly 4,300 square foot region of the moon, and creating 3D topographical maps of the area using stereoscopic cameras. The results of CADRE’s roughly 14-day robot excursion will better indicate the feasibility of deploying similar autonomous teams on space missions in the years to come.

Engineer observes a development model rover during a test for NASA’s CADRE technology demonstration in JPL’s Mars Yard
Credit: NASA / JPL-Caltech

As NASA notes, the mission’s robot trifecta requires a careful balance of form and function. Near the moon’s equator—where the CADRE bots will land—temperatures can rise as high as 237 degrees Fahrenheit. Each machine will need to be hardy enough to survive the harsh lunar climate and lightweight enough to get the job done, all while housing the computing power necessary to operate autonomously. To solve for this, NASA engineers believe a 30-minute wake-sleep cycle will allow the robots to sufficiently cool off, assess their respective health, and then elect a new leader to continue organizing the mission as necessary.
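
NASA hasn’t published the election rule, but a minimal sketch of how a health-based leader election could slot into each wake cycle might look like this, with all names and scoring hypothetical:

```python
# Hypothetical sketch of a wake-cycle leader election: after each cool-down
# nap, the rovers compare self-reported health and the fittest takes charge.
# Random numbers stand in for real battery/temperature self-assessments.

import random

ROVERS = ["rover-a", "rover-b", "rover-c"]

def health_check(rover: str) -> float:
    """Placeholder self-assessment (battery, temperature, faults) in [0, 1]."""
    return random.uniform(0.5, 1.0)

def wake_cycle(cycle: int) -> str:
    scores = {r: health_check(r) for r in ROVERS}
    leader = max(scores, key=scores.get)  # healthiest rover leads this cycle
    print(f"cycle {cycle}: {leader} leads ({scores})")
    return leader

for cycle in range(3):  # one election per ~30-minute wake window
    wake_cycle(cycle)
    # ...the elected leader would then divvy up driving and mapping tasks...
```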

“It could change how we do exploration in the future,” explains Subha Comandur, CADRE project manager for NASA’s Jet Propulsion Laboratory. “The question for future missions will become: ‘How many rovers do we send, and what will they do together?’”

The post NASA gears up to send a trio of rovers to the moon in 2024 appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Robots could now understand us better with some help from the web https://www.popsci.com/technology/deepmind-google-robot-model/ Mon, 31 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=559920
a robot starting at toy objects on table
This robot is powered by RT-2. DeepMind

A new type of language model could give robots insights into the human world.

The post Robots could now understand us better with some help from the web appeared first on Popular Science.

]]>
a robot starting at toy objects on table
This robot is powered by RT-2. DeepMind

Tech giant Google and its subsidiary AI research lab, DeepMind, have created a basic human-to-robot translator of sorts. They describe it as a “first-of-its-kind vision-language-action model.” The pair said in two separate announcements Friday that the model, called RT-2, is trained with language and visual inputs and is designed to translate knowledge from the web into instructions that robots can understand and respond to.

In a series of trials, the robot demonstrated that it can recognize and distinguish between the flags of different countries, tell a soccer ball from a basketball, and identify pop icons like Taylor Swift and items like a can of Red Bull.

“The pursuit of helpful robots has always been a herculean effort, because a robot capable of doing general tasks in the world needs to be able to handle complex, abstract tasks in highly variable environments — especially ones it’s never seen before,” Vincent Vanhoucke, head of robotics at Google DeepMind, said in a blog post. “Unlike chatbots, robots need ‘grounding’ in the real world and their abilities… A robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and most importantly, know how to pick it up.”

That means that training robots traditionally required generating billions of data points from scratch, along with specific instructions and commands. A task like telling a bot to throw away a piece of trash involved programmers explicitly training the robot to identify the object that is the trash, the trash can, and what actions to take to pick the object up and throw it away. 

For the last few years, Google has been exploring various avenues of teaching robots to do tasks the way you would teach a human (or a dog). Last year, Google demonstrated a robot that can write its own code based on natural language instructions from humans. Another Google subsidiary called Everyday Robots tried to pair user inputs with a predicted response using a model called SayCan that pulled information from Wikipedia and social media. 

[Related: Google is testing a new robot that can program itself]

AI photo
Some examples of tasks the robot can do. DeepMind

RT-2 builds off a similar precursor model called RT-1 that allows machines to interpret new user commands through a chain of basic reasoning. Additionally, RT-2 possesses skills related to symbol understanding and human recognition—skills that Google thinks will make it adept as a general purpose robot working in a human-centric environment. 
More details on what robots can and can’t do with RT-2 are available in a paper DeepMind and Google put online.

[Related: A simple guide to the expansive world of artificial intelligence]

RT-2 also draws from work done on vision-language models (VLMs), which have been used to caption images, recognize objects in a frame, or answer questions about a certain picture. So, unlike SayCan, this model can actually see the world around it. But for a VLM to control a robot, a component for outputting actions needs to be added. This is done by representing the different actions the robot can perform as tokens in the model’s vocabulary. With this, the model can not only predict what the answer to someone’s query might be, but also generate the action most likely associated with it.
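
A rough sketch of that actions-as-tokens idea: discretize each continuous action dimension into bins and treat the bin indices as extra vocabulary the model can emit. The bin count and encoding below are illustrative assumptions, not Google’s actual tokenizer:

```python
# Sketch of representing robot actions as tokens, so a language model can
# emit motor commands the same way it emits words. Bin counts and the
# encoding are invented for illustration.

N_BINS = 256  # assumed discretization levels per action dimension

def action_to_tokens(dx: float, dy: float, grip: float) -> list[int]:
    """Map continuous end-effector deltas in [-1, 1] to integer tokens."""
    def bin_of(v: float) -> int:
        return min(N_BINS - 1, int((v + 1.0) / 2.0 * N_BINS))
    return [bin_of(dx), bin_of(dy), bin_of(grip)]

def tokens_to_action(tokens: list[int]) -> tuple[float, ...]:
    """Inverse map: decode predicted tokens back into motor commands."""
    return tuple(t / N_BINS * 2.0 - 1.0 for t in tokens)

tokens = action_to_tokens(0.12, -0.4, 1.0)
print(tokens, tokens_to_action(tokens))
```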

DeepMind notes that, for example, if a person says they’re tired and wants a drink, the robot could decide to get them an energy drink.

The post Robots could now understand us better with some help from the web appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
What is DARPA? The rich history of the Pentagon’s secretive tech agency https://www.popsci.com/technology/what-is-darpa/ Sat, 29 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=559956
The U.S. Air Force X-37B Orbital Test Vehicle 4
The U.S. Air Force X-37B Orbital Test Vehicle 4 as seen in 2017. For a time, this vehicle was developed under DARPA. U.S. Air Force

The famous DOD research arm has been working on tech breakthroughs since 1958. Here's how it got started—and what it does now.

The post What is DARPA? The rich history of the Pentagon’s secretive tech agency appeared first on Popular Science.

]]>
The U.S. Air Force X-37B Orbital Test Vehicle 4
The U.S. Air Force X-37B Orbital Test Vehicle 4 as seen in 2017. For a time, this vehicle was developed under DARPA. U.S. Air Force

In 1957, the Soviet Union changed the night sky. Sputnik, the first satellite, broadcast from orbit for just 22 days, but its arrival brought a tremendous set of new implications for nations down on Earth, especially the United States. The USSR was ahead in orbit, and the rocket that launched Sputnik meant that the USSR would likely be able to launch atomic or thermonuclear warheads through space and back down to nations below.

In the defense policy of the United States, Sputnik became an example of “technological surprise,” or when a rival country demonstrates a new and startling tool. To ensure that the United States would always be the nation making the surprises, rather than being surprised, President Dwight D. Eisenhower in 1958 created what we now know as DARPA, the Defense Advanced Research Projects Agency.

Originally called the Advanced Research Projects Agency, or ARPA, ARPA/DARPA has had a tremendous impact on technological development, both for the US military and well beyond it. (Its name became DARPA in 1972, then ARPA again from 1993 to 1996, and it’s been DARPA ever since.) The most monumental achievement of DARPA is the precursor to the service that makes reading this article possible. That would be ARPANET, the immediate predecessor to the internet as we know it, which started as a way to guarantee continuous lines of communication over a distributed network. 

Other projects include the more explicitly military ones, like work on what became the MQ-1 Predator drone, and endeavors that exist in the space between the civilian and military world, like research into self-driving cars.

What is the main purpose of DARPA?

The specific military services have offices that can conduct their own research, designed to bring service-specific technological improvements. Some of these are the Office of Naval Research, the Air Force Research Laboratory, and the Army’s Combat Capabilities Development Command (DEVCOM). DARPA’s mission, from its founding, is to tackle research and development of technologies that do not fall cleanly into any of the services, that are considered worth pursuing on their own merits, and that may end up in the hands of the services later.

How did DARPA start?

Sputnik is foundational to the story of DARPA and ARPA. It’s the event that motivated President Eisenhower to create the agency by executive order. Missiles and rockets at the time were not new, but they were largely secret. During World War II, Nazi Germany had launched rockets carrying explosives against the United Kingdom. These V-2 rockets, complete with some of the engineers who designed and built them, were captured by the United States and the USSR, and each country set to work developing weapons programs from this knowledge.

Rockets on their own are a devastatingly effective way to attack another country, because they can travel beyond the front lines and hit military targets, like ammunition depots, or civilian targets, like neighborhoods and churches, causing disruption, terror, and devastation far behind the fighting. What so frightened the United States about Sputnik was that, instead of a rocket that could travel hundreds of miles within Earth’s atmosphere, this was a rocket that could go into space, demonstrating that the USSR had a rocket that could serve as the basis for an Intercontinental Ballistic Missile, or ICBM.

ICBMs carried with them a special fear, because they could deliver thermonuclear warheads, threatening massive destruction across continents. The US’s creation and use of atomic weapons, and then the development of hydrogen bombs (H-bombs), can also be understood as a kind of technological surprise, though both projects preceded DARPA.

[Related: Why DARPA put AI at the controls of a fighter jet]

Popular Science first covered DARPA in July 1959, with “U.S. ‘Space Fence’ on Alert for Russian Spy-Satellites.” It outlined the new threat posed to the United States from space surveillance and thermonuclear bombs, but did not take a particularly favorable view of ARPA’s work.

“A task force or convoy could no longer cloak itself in radio silence and ocean vastness. Once spotted, it could be wiped out by a single H-bomb,” the story read. “This disquieting new problem was passed to ARPA (Advanced Research Projects Agency), which appointed a committee, naturally.”

That space fence formed an early basis for US surveillance of objects in orbit, a task that now falls to the Space Force and its existing tried-and-true network of sensors.

Did DARPA invent the internet?

Before the internet, electronic communications were routed through telecommunications circuits and switchboards. If a relay between two callers stopped working, the call would end, as there was no other way to sustain the communication link. ARPANET was built as a way to allow computers to share information, but pass it through distributed networks, so that if one node was lost, the chain of communication could continue through another.

“By moving packets of data that dynamically worked their way through a network to the destination where they would reassemble themselves, it became possible to avoid losing data even if one or more nodes went down,” DARPA explains.
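That routing idea is simple to sketch in code. Below is a minimal, illustrative Python simulation (not ARPANET’s actual software) of a message finding a path through a small network, then finding a different one after a node fails. The four-node topology and node names are invented for the example.

```python
from collections import deque

def find_path(links, src, dst, down=frozenset()):
    """Breadth-first search for any surviving route from src to dst.

    links: dict mapping each node to the set of its neighbors.
    down:  nodes that have failed and must be routed around.
    """
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(links[path[-1]] - seen - set(down)):
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # destination unreachable

# A toy network with two independent routes from A to D.
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

print(find_path(links, "A", "D"))              # ['A', 'B', 'D']
print(find_path(links, "A", "D", down={"B"}))  # ['A', 'C', 'D'] -- routed around the failure
```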

The earliest ARPANET, established in 1969 (it started running in October of that year), was a mostly West Coast affair. It connected nodes at University of California, Santa Barbara; University of California, Los Angeles; University of Utah; and Stanford Research Institute. By September 1971 it had reached the East Coast, and was a continent-spanning network connecting military bases, labs, and universities by the late 1970s, all sending communication over telephone lines.

[Related: How a US intelligence program created a team of ‘Superforecasters’]

Two other key innovations made ARPANET a durable template for the internet. The first was commissioning the production of the first traffic routers to serve as relay points for these packets. (Modern wireless routers are a distant descendant of this earlier wired technology.) The second was setting up universal protocols for transmission and function, allowing products and computers made by different companies to share a communication language and form.

The formal ARPANET was decommissioned in 1990, by which point the then-new internet had made it largely redundant. It had demonstrated that computer communications could work across great distances, through distributed networks. This became a template for other communications technologies pursued by the United States, like mesh networks and satellite constellations, all designed to ensure that sending signals is hard to disrupt.

“At a time when computers were still stuffed with vacuum tubes, the Arpanauts understood that these machines were much more than computational devices. They were destined to become the most powerful communications tools in history,” wrote Phil Patton for Popular Science in 1995.

What are key DARPA projects?

For 65 years, DARPA has spurred the development of technologies by funding projects and managing them at the research and development stage, before handing those projects off to other entities, like the services’ labs or private industry, to see them carried to full fruition.

DARPA has had a hand in shaping technology across computers, sensors, robotics, autonomy, uncrewed vehicles, stealth, and even the Moderna COVID-19 vaccine. The list is extensive, and DARPA has ongoing research programs that make a comprehensive picture daunting. Not every one of DARPA’s projects yields success, but the ones that do have had an outsized impact, like the following list of game-changers:

Stealth: Improvements in missile and sensor technology made it risky to fly fighters into combat. During the Vietnam War, the Navy and Air Force adapted with “wild weasel” missions, in which daring pilots would draw fire from anti-air missiles and then attempt to out-maneuver them, allowing others to destroy the radar and missile launch sites. That’s not an ideal approach. Stealth, in which the shape and materials of an aircraft are used to minimize its appearance on enemy sensors, especially radar, was the safer adaptation DARPA pursued to protect aircraft. DARPA’s development of the stealth demonstrator HAVE BLUE (tested at Area 51) paved the way for early stealth aircraft like the F-117 fighter and B-2 bomber, which in turn cleared a path for modern stealth planes like the F-22 and F-35 fighters, and the B-21 stealth bomber.

Vaccines: In 2011, DARPA started its Autonomous Diagnostics to Enable Prevention and Therapeutics (ADEPT) program. Through this, in 2013 Moderna received $25 million from DARPA, funding that helped support its work. It was a bet that paid off tremendously in the COVID-19 pandemic, and was one of many such efforts to fund and support everything from diagnostics to treatments to production technologies.

Secret space plane: The X-37B is a maneuverable, shuttle-like robotic space plane that started as a NASA program, was developed under DARPA for a time, and then became an Air Force project. Today it is operated by the Space Force. This robot can remain in orbit for extraordinarily long stretches, with a recent mission lasting over 900 days. The vehicle serves as a testbed for a range of technologies, including autonomous orbital flight as well as sensors and materials testing. There is some speculation as to what the X-37B will lead to in orbit. For now, observations match its stated testing objectives, but the possibility that a reusable, maneuverable robot could prove useful in attacking satellites is one that many militaries are watching warily.

That list covers only some of DARPA’s greatest hits. In recent years, the agency has announced projects relating to jetpacks, cave cartography, and new orbits for satellites. It even has a project related to scrap wood and paper, cleverly called WUD.

The post What is DARPA? The rich history of the Pentagon’s secretive tech agency appeared first on Popular Science.

A new kind of thermal imaging sees the world in striking colors https://www.popsci.com/technology/hadar-thermal-camera/ Wed, 26 Jul 2023 16:00:00 +0000 https://www.popsci.com/?p=559135
Thermal vision of a home.
Thermal imaging (seen here) has been around for a while, but HADAR could up the game. Deposit Photos

Here's how 'heat-assisted detection and ranging,' aka HADAR, could revolutionize AI visualization systems.

A team of researchers has designed a completely new camera imaging system based on AI interpretations of heat signatures. Once refined, “heat-assisted detection and ranging,” aka HADAR, could one day revolutionize the way autonomous vehicles and robots perceive the world around them.

The image of a robot visualizing its surroundings solely using heat-signature cameras remains in the realm of sci-fi for a reason—basic physics. Although objects constantly emit thermal radiation, that radiation scatters into the surrounding environment, resulting in heat vision’s trademark murky, textureless imagery—an issue understandably referred to as “ghosting.”

[Related: Stanford researchers want to give digital cameras better depth perception.]

Researchers at Purdue University and Michigan State University have remarkably solved this persistent problem using machine learning algorithms, according to their paper published in Nature on July 26. Employing AI trained specifically for the task, the team was able to derive the physical properties of objects and surroundings from information captured by commercial infrared cameras. HADAR cuts through the optical clutter to detect temperature, material composition, and thermal radiation patterns—regardless of visual obstructions like fog, smoke, and darkness. HADAR’s depth and texture renderings thus create incredibly detailed, clear images no matter the time of day or environment.
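The full HADAR decomposition is far more involved, but the underlying physics starts with Planck’s law, which ties an object’s thermal radiance at a given wavelength to its temperature. The Python sketch below inverts that law at a single wavelength to recover temperature, assuming for simplicity that the material’s emissivity is already known (in reality, estimating it is a large part of HADAR’s job). The numbers are illustrative only.

```python
import math

H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s
K = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance in W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    x = H * C / (wavelength_m * K * temp_k)
    return a / math.expm1(x)

def temperature_from_radiance(radiance, wavelength_m, emissivity):
    """Invert Planck's law at one wavelength, given a known emissivity."""
    a = 2.0 * H * C**2 / wavelength_m**5
    x = math.log1p(a * emissivity / radiance)
    return H * C / (wavelength_m * K * x)

# Round trip at 10 micrometers (long-wave infrared), with skin-like values.
wl, true_temp, emissivity = 10e-6, 305.0, 0.98
measured = emissivity * planck_radiance(wl, true_temp)  # ideal, noise-free sensor
print(temperature_from_radiance(measured, wl, emissivity))  # ~305.0 K
```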

HADAR versus ‘ghosted’ thermal imaging. Credit: Nature

“Active modalities like sonar, radar and LiDAR send out signals and detect the reflection to infer the presence/absence of any object and its distance. This gives extra information of the scene in addition to the camera vision, especially when the ambient illumination is poor,” Zubin Jacob, a professor of electrical and computer engineering at Purdue and article co-author, tells PopSci. “HADAR is fundamentally different, it uses invisible infrared radiation to reconstruct a night-time scene with clarity like daytime.”

One look at HADAR’s visual renderings makes it clear (so to speak) that the technology could soon become a vital part of AI systems within self-driving vehicles, autonomous robots, and even touchless security screenings at public events. That said, a few hurdles remain before cars can navigate 24/7 thanks to heat sensors—HADAR is currently expensive, requires real-time calibration, and is still susceptible to environmental barriers that detract from its accuracy. Researchers are confident these barriers can be overcome in the near future, allowing HADAR to find its way into everyday systems. In the meantime, HADAR is already proving beneficial to at least one of its creators.

“To be honest, I am afraid of the dark. Who isn’t?” writes Jacob. “It is great to know that thermal photons carry vibrant information in the night similar to daytime. Someday we will have machine perception using HADAR which is so accurate that it does not distinguish between night and day.”

The post A new kind of thermal imaging sees the world in striking colors appeared first on Popular Science.

Deep underground, robotic teamwork saves the day https://www.popsci.com/technology/search-and-rescue-robots/ Wed, 26 Jul 2023 01:00:00 +0000 https://www.popsci.com/?p=558901
CREDIT: DARPA

Deploying a motley crew of robots that can roll, walk and fly is a smart strategy for search-and-rescue operations.

This article originally appeared in Knowable Magazine.

When a Manhattan parking garage collapsed in April this year, rescuers were reluctant to stay in the damaged building, fearing further danger. So they used a combination of flying drones and a doglike walking robot to inspect the damage, look for survivors and make sure the site was safe for human rescuers to return.

Despite the robot dog falling over onto its side while walking over a pile of rubble — a moment that became internet-famous — New York Mayor Eric Adams called the robots a success, saying they had ensured there were no overlooked survivors while helping keep human rescuers safe.

Soon, rescuers may be able to call on a much more sophisticated robotic search-and-rescue response. Researchers are developing teams of flying, walking and rolling robots that can cooperate to explore areas that no one robot could navigate on its own. And they are giving robots the ability to communicate with one another and make many of their own decisions independent of their human controller.

Such teams of robots could be useful in other challenging environments like caves or mines where it can be difficult for rescuers to find and reach survivors. In cities, collapsed buildings and underground sites such as subways or utility tunnels often have hazardous areas where human rescuers can’t be sure of the dangers.

Operating in such places has proved difficult for robots. “You have mud, rock, rubble, constrained passages, large open areas … Just the range and complexity of these environments present a lot of mobility challenges for robots,” says Viktor Orekhov, a roboticist and a technical advisor to the Defense Advanced Research Projects Agency (DARPA), which has been funding research into the field.

Underground spaces are also dark and can be full of dust or smoke if they are the site of a recent disaster. Even worse, the rock and rubble can block radio signals, so robots tend to lose contact with their human controller the farther they go.

Despite these difficulties, roboticists have made progress, says Orekhov, who coauthored an overview of their efforts in the 2023 Annual Review of Control, Robotics, and Autonomous Systems.

One promising strategy is to use a mix of robots, with some combination of treads, wheels, rotors and legs, to navigate the different spaces. Each type of robot has its own unique set of strengths and weaknesses. Wheeled or treaded robots can carry heavy payloads, and they have big batteries that allow them to operate for a long time. Walking robots can climb stairs or tiptoe over loose rubble. And flying robots are good at mapping out big spaces quickly.

There are also robots that carry other robots. Flying robots tend to have relatively short battery lives, so rescuers can call on “marsupials” — wheeled, treaded or legged robots that carry the flying robots deep into the area to be explored, releasing them when there is a big space that needs to be mapped.

CREDIT: ROBOTIC SYSTEMS LAB: LEGGED ROBOTICS AT ETH ZÜRICH

A team of robots also allows for different instruments to be used. Some robots might carry lights, others radar, sonar or thermal imaging tools. This diversity allows different robots to see under varied conditions of light or dust. All of the robots, working together, provide the humans who deploy them with a constantly growing map of the space they are working in.

Although teams of robots are good for overall mobility, they present a new problem. A human controller can have difficulty coordinating such a team, especially in underground environments, where thick walls block out radio signals.

One solution is to make sure the robots can communicate with one another. That allows a robot that’s gone deeper and lost radio contact with the surface to potentially relay messages through other robots that are still in touch. Robots could also extend the communications range by dropping portable radio relays, sometimes called “bread crumbs,” while on the move, making it easier to stay in contact with the controller and other robots.
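The bread-crumb logic amounts to a simple rule: just before the link back to the last relay (or the base station) would break, drop a new relay. Here is a hedged Python sketch of that rule; real systems judge link quality from radio measurements rather than the straight-line distance and range threshold invented here.

```python
import math

def plan_breadcrumbs(route, base, max_range_m):
    """Decide where a robot should drop radio relays along its route.

    A relay is dropped at the last waypoint that was still in range
    whenever the next waypoint would break the link to the last hop.
    Straight-line distance stands in for a real link-quality estimate.
    """
    drops = []
    last_hop = base   # the base station, then the most recent relay
    prev = base
    for point in route:
        if math.dist(point, last_hop) > max_range_m:
            drops.append(prev)
            last_hop = prev
        prev = point
    return drops

route = [(10, 0), (20, 0), (30, 5), (45, 5), (60, 10)]
print(plan_breadcrumbs(route, base=(0, 0), max_range_m=25))
# [(20, 0), (30, 5), (45, 5)] -- three relays keep the chain connected
```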

Even when communication is maintained, though, the demands of operating several robots at once can overwhelm a single person. To solve that problem, researchers are working on giving the robots autonomy to cooperate with one another.

In 2017, DARPA funded a multiyear challenge to develop technologies for robots deployed underground. Participants, including engineers working at universities and technology companies, had to map and search a complex subterranean space as quickly and efficiently as possible.

Participants in the DARPA challenge used teams of robots to explore a varied underground space that included tunnels, caves and urban spaces such as subway stations.
CREDIT: DARPA

The teams that performed best at this task were those who gave the robots some autonomy, says Orekhov. When robots lost touch with one another and their human operator, they could explore on their own for a certain amount of time, then return to radio range and communicate what they had found.

One team, from Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO), took this further by designing its robots to make decisions cooperatively, says Navinda Kottege, a CSIRO roboticist who led the effort. The robots themselves decided which tasks to undertake — whether to map this room, explore that corridor or drop a communications node in a particular spot.

The robots also decided how to split up the work most effectively. If a rolling robot spotted a corridor that was too narrow to enter, a smaller walking robot could come and take over the job. If one robot needed to upload information to the base station, it might transmit it to a robot that was nearer to the entrance, and ask that robot to walk back to within communications range.

“There were some very interesting emergent behaviors. You could see robots swapping tasks amongst themselves based on some of those factors,” Kottege says.
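A stripped-down version of that task swapping can be written as a greedy allocator in which every task goes to the cheapest robot that has the capability it needs. CSIRO’s real decision-making is far richer, so treat the Python below, with its invented robots, capabilities, and costs, as a sketch of the idea only.

```python
def assign_tasks(robots, tasks):
    """Greedy allocation for a heterogeneous robot team.

    robots: {name: set of capabilities}
    tasks:  list of (task_name, required_capability, {robot: cost})
    """
    plan = {}
    for task, need, costs in tasks:
        able = [name for name, caps in robots.items() if need in caps]
        if able:
            plan[task] = min(able, key=lambda r: costs.get(r, float("inf")))
    return plan

robots = {
    "roller": {"wheels", "camera"},
    "walker": {"legs", "camera"},
    "flyer":  {"rotors", "lidar"},
}

tasks = [
    ("map the atrium",   "lidar",  {"flyer": 3}),
    ("narrow corridor",  "legs",   {"walker": 5}),  # too tight for wheels
    ("survey main hall", "camera", {"roller": 2, "walker": 6}),
]

print(assign_tasks(robots, tasks))
# {'map the atrium': 'flyer', 'narrow corridor': 'walker', 'survey main hall': 'roller'}
```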

In fact, the human operator can become the weak link. In one effort, a CSIRO robot wouldn’t enter a corridor, even though an unexplored area lay beyond it. The human operator took over and steered the robot through — but it turned out that the corridor had an incline that was too steep for the robot to manage. The robot knew that, but the human didn’t.

“So it did a backflip, and it ended up crushing the drone on its back in the process,” Kottege says.

To correct the problem, the team built a control system that lets the human operator decide on overall strategy, such as which parts of the course to prioritize, and then trusts the robots to make the on-the-ground decisions about how to get it done. “The human support could kind of mark out an area in the map, and say, ‘This is a high priority area, you need to go and look in that area,’” Kottege says. “This was very different than them picking up a joystick and trying to control the robots.”

This autonomous team concept broke new ground in robotics, says Kostas Alexis, a roboticist at the Norwegian University of Science and Technology whose team ultimately won the challenge. “The idea that you can do this completely autonomously, with a single human controlling the team of robots, just providing some high-level commands here and there … it had not been done before.”

There are still problems to overcome, Orekhov notes. During the competition, for example, many robots broke down or got stuck and needed to be hauled off the course when the competition was over. After just an hour, most teams had only one or two functioning robots left.

But as robots become better, teams of them may one day be able to go into a hazardous disaster site, locate survivors and report back to their human operators with a minimum of supervision.

“There’s definitely lots more work that can and needs to be done,” Orekhov says. “But at the same time, we’ve seen the ability of the teams advance so rapidly that even now, with their current capabilities, they’re able to make a significant difference in real-life environments.”

10.1146/knowable-072023-2

Kurt Kleiner is a freelance writer living in Toronto.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. Sign up for the newsletter.

Knowable Magazine | Annual Reviews

The post Deep underground, robotic teamwork saves the day appeared first on Popular Science.

Robot limbs could keep satellite plasma thrusters at an arm’s length https://www.popsci.com/technology/esa-onesat-robot-arms/ Tue, 25 Jul 2023 18:00:00 +0000 https://www.popsci.com/?p=559018
onesat robotic arm
This robot arm enables the OneSat to change and control its own orbit while conserving fuel. ESA

The ESA's OneSat is getting a pair of robotic appendages to help improve its fuel efficiency.

Satellites are usually a bespoke endeavor, requiring a range of specific needs and logistical plans to meet project requirements. To simplify some of these challenges, a team of researchers and engineers from the European Space Agency (ESA), France’s CNES, the UK Space Agency, and Airbus unveiled the OneSat a few years ago—a standardized telecommunications satellite capable of adjusting capacity, coverage areas, and frequency “on the fly” while in orbit.

On Tuesday, the ESA announced that a new feature had passed its inspection reviews and is ready to ship with future OneSat launches. The latest addition is a “deployment and pointing system” featuring robotic arms capable of positioning a satellite’s plasma thrusters far away from its body, an addition that will optimize the use of OneSat’s xenon fuel reserves.

[Related: DARPA wants to push the boundaries of where satellites can fly.]

As tech news site The Next Web noted on Tuesday, the announcement means that OneSat is now “fully propelled by European technology.” In its official statement, the ESA explained that, “The deployment and pointing system promotes European autonomy and constitutes an essential feature of the industrial footprint in Europe of OneSat.”

Construction of the OneSat deployment and pointing system was truly a multinational effort within Europe—France’s Airbus designed the system, while Belgian manufacturer Euro Heat Pipes built the devices. A company in Spain provided the rotary actuator, while the booms, harnesses, and plasma thrusters were all developed and assembled by multiple French firms.

[Related: This giant solar power station could beam energy to lunar bases.]

OneSat deployment is meant to have extremely tangible effects across the world, including providing traditional TV broadcasting, boosting in-flight internet connections for air travelers, and helping remote communities gain communications access that was previously unreliable or lacking altogether.

The Next Web also explained that, because of their modular design, OneSats can be built largely from off-the-shelf components, potentially allowing them to enter the market in half the time of other satellite options, and at lower cost. Multiple companies around the world have already placed orders for OneSat, including Japan’s main satellite operator, SKY Perfect JSAT Corporation. According to the ESA, this marks the first time a European satellite has been sold to a Japanese telecom company.

The post Robot limbs could keep satellite plasma thrusters at an arm’s length appeared first on Popular Science.

An electric cow, a robot mailman, and other automatons we overestimated https://www.popsci.com/technology/robot-fails/ Sat, 15 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=557015
Vintage robots
Predicting the future is fraught with peril. Popular Science

A look back at some robotic inventions that didn't quite get there.

In the series I Made a Big Mistake, PopSci explores mishaps and misunderstandings, in all their shame and glory.


In Hollywood, robots have come in many shapes and sizes. There’s the classic, corrugated-tubing-limbed Robot from the television series Lost In Space (1965); the clunky C-3PO and cute R2-D2, the Star Wars (1977) duo; the tough Terminator from The Terminator (1984) played by Arnold Schwarzenegger; the mischievous Johnny 5 from Short Circuit (1986); the kind-hearted, ill-fated Sonny in I, Robot (2004); and WALL-E (2008), the endearing trash-collecting robot. Robot-reality, however, still lags behind robot-fiction by quite a bit. Even Elon Musk’s October 2022 debut of Optimus—a distinctly masculine humanoid-frame robot prototype built by Tesla that, for the first time, wobbled along sans cables—failed to wow critics, who compared it to decades-old Japanese robotics and noted that it lacked any differentiating capabilities. 

And yet, automatons—self-propelled machines—are not new. More than two millennia ago, Archytas, an inventor from ancient Greece, built a pulley-activated wooden dove, capable of flapping its wings and flying a very short distance (a puff of air triggered a counterweight that set the bird in motion). Around the 12th century, Al-Jazari, a prolific Muslim inventor, built a panoply of automatons, including a water-powered mechanical orchestra—a harpist, a flutist, and two drummers—that rowed around a lake by means of mechanical oarsmen. Leonardo Da Vinci’s notebooks are peppered with detailed sketches of various automatons, including a mechanical knight capable of sitting up, waving its arms, and moving its head and purportedly debuted in 1495. But it was Czech playwright Karel Čapek in his 1920 play, R.U.R. (Rossum’s Universal Robots), who first coined the phrase “robot” as a distinct category of automaton. Robot comes from the Czech, robota, which means forced labor. As Popular Science editor, Robert E. Martin, wrote in December 1928, a robot is a “working automaton,” built to serve humans. Isaac Asimov enshrined Čapek’s forced-labor concept in his three laws of robotics, which first appeared in 1942 in his short story “Runaround.”

Predicting the future is fraught with peril, especially for the science writer enthralled by the promise of a new technology. But that hasn’t stopped Popular Science writers and editors from trying. Past issues are peppered with stories of robots ready to take the world by storm. And yet, our domestic lives are still relatively robot free. (Factory automation is another story.) That’s because we underestimate just how sophisticated humans can be, taking on menial tasks with ease, like sorting and folding laundry. Even in the 21st century, service and domestic robots disappoint: design-challenged, single-purpose machines, like the pancake-shaped vacuums that knock about our living rooms. Advances in machine learning may finally add some agility and real-world adaptability to the next generation of robots, but until we get there (if we get there), a look back at some of the stranger robotic inventions, shaped by the miscalculations and misguided visions of their human inventors, might inform the future. 

Robots for hire

Popular Science August 1940 Issue

Looking for “live” entertainment to punctuate a party, banquet, or convention? Renting out robot entertainers may have roots as far back as 1940, according to a Popular Science story that described the star-studded life of Clarence the robot. Clarence, who resembled a supersized Tinman, could walk, talk, gesture with his arms, and “perform other feats.” More than eight decades later, however, robot entertainers are only slightly more sophisticated than their 1940s ancestor, even if they do have sleeker forms. For instance, Disney deploys talking, arm-waving, wing-flapping robots to animate rides, but they’re still pre-programmed to perform a limited range of activities. Chuck E. Cheese, which made a name for itself decades ago by fusing high-tech entertainment with the dining experience, has been phasing out its once-popular animatronics. Pre-programmed, stiff-gestured animal robots seem to have lost their charm for kiddos. They still can’t dance, twirl, or shake their robot booties. Not until Blade Runner-style androids hit the market will robot entertainment be worth the ticket price.

Animatronics that smoke, drink, and—moo

Popular Science May 1933

In May 1933, Popular Science previewed the dawn of animatronics, covering a prototype bound for the 1934 Chicago World’s Fair. The beast in question was not prehistoric, did not stalk its prey, and had no teeth to bare, but it could moo, wink its eyes, chew its cud, and even squirt a glassful of milk. The robotic cow may have been World’s Fair-worthy in 1933, but by 1935, Brooklyn inventor Milton Tenenbaum upped the stakes when he introduced a life-like mechanical dummy that, according to Popular Science, was known for “singing, smoking, drinking, and holding an animated conversation.” Tenenbaum proposed using such robots for “animated movie cartoons.” Although Hollywood was slow to adopt mooing cows and smoking dummies, Tenenbaum may have been crystal-balling the animatronics industry that eventually propelled blockbuster films like Jaws, Jurassic Park, and Aliens. Alas, with the advent of AI-generated movies, like Waymark’s The Frost, released in March 2023, animatronic props may be doomed to extinction.

The robot mailman

Popular Science October 1976 Issue

In October 1976, Popular Science saw the automated future of office mail delivery, declaring that the “Mailmobile is catching on.” Mailmobiles were (past tense) automated office mail carts that followed “a fluorescent chemical that can be sprayed without harm on most floor surfaces.” Later models used laser-guidance systems to navigate office floors. Mailmobiles were likely doomed by the advent of email, not to mention the limitations of their singular purpose. But in their heyday they were loved by their human office workers, who bestowed them with nicknames like Ivan, Igor, and Blue-eyes. A Mailmobile even played a cinematic role in the FX series The Americans. Despite being shuttered in 2016 by their manufacturer, Dematic (the original manufacturer was Lear Siegler, which also made Lear jets), there’s no denying their impressive four-decade run. Of course, the United States Postal Service employs automation to process mail, including computer vision and sophisticated sorting machines, but you’re not likely to see your mail delivered by a self-driving mail mobile anytime soon.

Lawn chair mowers


Suburban homeowners would probably part with a hefty sum for a lawn-mowing robot that really works. Today’s generation of wireless automated grass-cutters may be a bit easier to operate than the tethered type that Popular Science described in April 1954, but they’re still sub-par when it comes to navigating the average lawn, with its steep grades, rough turf, and irregular geometries. In other words, more than a half-century after their debut, the heart-stopping price tags on robot lawn mowers are not likely to appeal to most homeowners. Sorry, suburbanites—lawn-chair mowing is still a thing of the future.

Teaching robots

Popular Science May 1983 Issue

It was in the early 1980s that companies began to roll out what Popular Science dubbed personal robots in the May 1983 issue. With names like B.O.B, HERO, RB5X, and ITSABOX for their nascent machines, the fledgling companies had set their sights on the domestic service market. According to one of the inventors, however, there was a big catch: “Robots can do an enormous number of things. But right now they can’t do things that require a great deal of mechanical or cognitive ability.” That ruled out just about everything on the home front, except, according to the inventors and, by extension, Popular Science, “entertaining guests and teaching children.” Ahem. Teaching children doesn’t require a great deal of cognitive ability? Go tell that to a teacher. Gaffes aside, fast forward four decades and, with the capabilities of large language models demonstrated by applications like OpenAI’s ChatGPT, we might be on the cusp of building robots with just enough cognitive ability to somewhat augment the human learning experience (if they ever learn to get the facts right). As for robots that can reliably fold laundry and cook dinner while you’re at work? Don’t hold your breath.

The post An electric cow, a robot mailman, and other automatons we overestimated appeared first on Popular Science.

Chipotle is testing an avocado-pitting, -cutting, and -scooping robot https://www.popsci.com/technology/chipotle-avocado-robot/ Thu, 13 Jul 2023 19:00:00 +0000 https://www.popsci.com/?p=556746
Chipotle worker removing peeled and sliced avocados from Autocado robot
Autocado halves, peels, and cores avocados in half the time humans can. Chipotle

The prototype machine reportedly helps workers cut the time it takes to make guac by half.

According to Chipotle, it takes approximately 50 minutes for human employees to cut, core, and scoop out enough avocados to make a fresh batch of guacamole. It’s such a labor-intensive process that some locations reportedly have workers wholly “dedicated” to the condiment’s composition. The time it takes to complete the lengthy task could soon be cut in half, however, thanks to a new robotic coworker.

On Wednesday, Chipotle announced its partnership with the food automation company Vebu to roll out the Autocado—an aptly named “avocado processing cobotic prototype” designed specifically to prepare the fruit for human hands to then mash into tasty guac.

[Related: You’re throwing away the healthiest part of the avocado.]

Per the company’s announcement, Chipotle locales throughout the US, Canada, and Europe are estimated to run through 4.5 million cases of avocados in 2023—reportedly over 100 million pounds of fruit. The Autocado is designed to cut down on labor time, as well as to optimize the amount of avocado harvested from each fruit. Doing so would not only save the company money, but also cut down on food waste.

To use the Autocado, employees first dump up to 25 pounds of avocados into a loading area. Artificial intelligence and machine learning then vertically orient each individual fruit before moving it along to a processing station to be halved, cored, and peeled. Employees can then retrieve the prepared fruit from a basin, combine it with the remaining guacamole ingredients, and mash away.

“Our purpose as a robotic company is to leverage automation technology to give workers more flexibility in their day-to-day work,” said Vebu CEO Buck Jordan in yesterday’s announcement.

[Related: Workplace automation could affect income inequality even more than we thought.]

But as Engadget and other automation critics have warned, such robotic rollouts often can result in sacrificing human jobs for businesses’ bottom lines. In one study last year, researchers found that job automation may actually extract an even heavier toll on workers’ livelihoods, job security, and quality of life than previously believed. Chipotle’s Autocado machine may not contribute to any layoffs just yet, but it isn’t the only example of the company’s embrace of similar technology: a tortilla chip making robot rolled out last year as well. 

Automation isn’t only limited to burrito bowls, of course. Wendy’s recently announced plans to test an underground pneumatic tube system to deliver food to parking spots, while Panera is experimenting with AI-assisted coffeemakers. Automation isn’t necessarily a problem if human employees are reassigned or retrained in other areas of service, but it remains to be seen which companies will move in that direction.

Although only one machine is currently being tested at the Chipotle Cultivate Center in Irvine, California, the company hopes Autocado could soon become a staple of many franchise locations.

Correction 7/13/23: A previous version of this article referred to Chipotle’s tortilla chip making robot as a tortilla making robot.

The post Chipotle is testing an avocado-pitting, -cutting, and -scooping robot appeared first on Popular Science.

Four-legged dog robots could one day explore the moon https://www.popsci.com/technology/robot-dog-team-moon/ Wed, 12 Jul 2023 18:00:00 +0000 https://www.popsci.com/?p=556224
Three quadruped robots standing on rocky terrain
Teams of quadruped robots could one day prove vital to lunar mining. ETH ZURICH / TAKAHIRO MIKI

Built-in redundancies may enable teams of quadruped dog bots to explore the lunar surface.

Humans are going to need a lot of supplies if they hope to establish a permanent base on the moon—an incredibly expensive logistical hurdle to clear. While return missions can hypothetically restock a great deal of the astronauts’ needs, it would be a lot cheaper and easier to harvest at least some of the necessary materials right there for base construction and repair projects. Of course, doing so will require serious teamwork to pull off—a team that could one day include packs of four-legged robots.

According to a study published on Wednesday in Science Robotics, researchers at Switzerland’s ETH Zurich university recently oversaw a series of outdoor excursions for a trio of modified quadruped ANYmal robots. Researchers tested their team on a variety of terrains across Switzerland and at the European Space Resources Innovation Centre (ESRIC) in Luxembourg.

[Related: NASA could build a future lunar base from 3D-printed moon-dust bricks.]

Engineers at ETH Zurich worked alongside the Universities of Basel, Bern, and Zurich to program each ANYmal with specific lunar tasks: One was taught to utilize a microscopy camera alongside a spectrometer to identify varieties of rock, while another focused on using cameras and a laser scanner to map and classify its surrounding landscape. Finally, a third robot could both identify rocks and map its surroundings—albeit less precisely for each task than either of its companions.

“Using multiple robots has two advantages,” doctoral student and researcher Philip Arm explains. “The individual robots can take on specialized tasks and perform them simultaneously. Moreover, thanks to its redundancy, a robot team is able to compensate for a teammate’s failure.” Because of their overlaps, a mission could still be completed even if one of the three robots breaks down during its duties.
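That redundancy can be captured in a few lines of code: a team survives any single failure if every required skill is still covered after removing each robot in turn. The Python below is a toy check with made-up robot names and skills, loosely mirroring the two specialists and one generalist described above.

```python
def survives_any_failure(team, required):
    """True if every required skill is still covered after
    losing any single robot from the team."""
    for lost in team:
        remaining_skills = set()
        for name, skills in team.items():
            if name != lost:
                remaining_skills |= skills
        if not required <= remaining_skills:
            return False
    return True

team = {
    "rock_specialist": {"rock_id"},
    "map_specialist":  {"mapping"},
    "generalist":      {"rock_id", "mapping"},  # less precise, covers both
}

print(survives_any_failure(team, {"rock_id", "mapping"}))  # True
```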

The team’s redundancy-focused explorers even won an ESRIC and ESA Space Resources Challenge, which tasked competitors with locating and identifying minerals placed throughout a test area modeled after the lunar surface. After the team took first place, the jury provided another year of funding to expand both the number and variety of its robots. Researchers say future iterations of the lunar exploration team could include both wheeled and flying units. Although all of the robots’ tasks and maneuvers are currently directly controlled by human inputs, the researchers also hope to eventually upgrade their explorers to be semi-autonomous.

The post Four-legged dog robots could one day explore the moon appeared first on Popular Science.

The best robot vacuums of 2024 https://www.popsci.com/story/reviews/best-robot-vacuum/ Mon, 01 Nov 2021 11:00:00 +0000 https://www.popsci.com/uncategorized/best-robot-vaccum/

Here’s what to look for when you’re shopping, plus a few of our favorite models.

We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

Best smart: eufy BoostIQ RoboVac 30C MAX

Connect to the eufyHome app and Alexa or Google Assistant for streamlined cleaning where you can control schedules, notifications, and locate your vacuum.

Best vacuum and mop combo: Roborock S7 MaxV Ultra Robot Vacuum and Mop

This two-in-one pick uses artificial intelligence and 3D scanning to map out your home and provides strong suction and sonic scrubbing power.

Best self-emptying: iRobot Roomba s9+ Robot Vacuum

With powerful suction and a self-emptying function, you can go up to 60 days without emptying the canister. It’s never been easier to maintain a clean home without lifting a finger.

Nothing beats hands-free cleaning—and it truly doesn’t get any better than a robot vacuum. With the push of a button, the best robot vacuums can tackle the largest room in your house without wasting any time. They’re equipped with special features like a quick connection to handheld devices or the ability to remember the overall layout of each room in your home. Stop spending hours panic-vacuuming before guests come over or doing chores on your weekends. Enjoy more free time while these devices take care of the dirty work. All you need to operate a robot vacuum is an open outlet for its charging port and you’re ready to roll. Below are our favorite options and the things you will want to consider in your search for the best robot vacuum cleaner.

How we chose the best robot vacuums

We compared a range of over 50 robot vacuum models for price, brand, added features, mapping technology, reviews, and battery life. No two people’s cleaning needs are the same, which is why we provided a variety of options—from mop-only options from Samsung that can cut through stubborn grime to self-emptying picks for those who don’t want to lift a pinky. Many of the brands we selected have made a name for themselves in tech and vacuums, so we could be sure you’re choosing a robo-vac that will be both reliable and worth the investment.

The best robot vacuums: Reviews & Recommendations

Best smart: eufy BoostIQ RoboVac 30C MAX


Why it made the cut: With a large-capacity dust bin and powerful motor, the eufy is a great pick for just about any home.

Specs

  • Surfaces: Hard floor, carpet
  • Bin size: 0.6 L
  • Run time: Maximum 100 minutes

Pros

  • Strong suction power
  • Voice-control equipped
  • Boundary strips for personalized control

Cons

  • Does not map floor plan

The brand eufy is a branch of Anker Innovations founded by Steven Yang in 2011, following his work with Google. The goal of eufy is to create products that make the “smart home simplified” with a focus on accessibility and convenience. The company’s devices are designed to easily connect with one another, creating cohesion and coherence in the home, from wireless security systems to robot vacuums and light bulbs. And the RoboVac from eufy is as “smart” as it gets when it comes to robot vacuums. It connects to Alexa and Google Assistant, as well as the specifically designed eufyHome app where you can set cleaning schedules, direct the clean with remote control, receive notifications, and locate your robot. You can easily program boundary strips that the RoboVac will identify using 10 built-in sensors as it uses the bounce method to clean.

Best vacuum and mop combo: Roborock S7 MaxV Ultra Robot Vacuum and Mop


Why it made the cut: This pick auto-detects the difference between carpet and hard floors to give you a complete and hassle-free clean.

Specs

  • Surfaces: Hard floor, carpet, tile
  • Bin size: 300 mL water tank
  • Run time: Three hours

Pros

  • AI detects obstacles
  • Long battery life
  • Self-cleaning

Cons

  • Expensive

Our favorite robot vacuum-and-mop hybrid is the Roborock S7 MaxV Ultra Robot Vacuum and Mop. This state-of-the-art device is designed with LiDAR navigation, reactive artificial intelligence, and structured-light 3D scanning to help it map out your home and steer clear of obstacles like shoes and toys. The Robovac also provides powerful suction of 5100 Pa. The mopping function, meanwhile, incorporates sonic vibration technology, which allows it to scrub up to 3,000 times a minute. That’s a lot of scrubbing, since it stays charged for up to three hours. Just tell the Robovac what you want with Alexa or Google Assistant. Oh, and it’s self-cleaning as well.

Best self-emptying: iRobot Roomba s9+ Robot Vacuum


Why it made the cut: The power of a robot vacuum, with a self-emptying feature to eliminate every step of this household task.

Specs

  • Surfaces: Carpet
  • Bin size: Dock holds a reported 60 days of dirt
  • Run time: Maximum 120 minutes

Pros

  • Self-emptying design
  • Three stage cleaning for more thorough vacuum
  • Smart mapping

Cons

  • Some software issues in-app

The Roomba s9+ is iRobot’s most powerful vacuum to date and, boy, does it pack a punch. The iRobot company was founded in 1990 by three MIT roboticists—Colin Angle, Helen Greiner, and Rodney Brooks—with the vision of making practical robots a reality. Their first robot vacuum was released in 2002 and they have been consistently adding to and improving this design ever since. This vacuum self-evacuates after each clean at its docking station, which is equipped with a dirt disposal container that can hold up to 60 days of dust and debris. That means you can vacuum every day for almost two months without being bothered by multiple trips to the trash can.

Best with mapping technology: Neato Robotics Botvac D8 Connected


Why it made the cut: Easily designate which areas you want to be cleaned with the virtual No-Go lines and high-tech features on this Neato Robotics pick.

Specs

  • Surfaces: Hard floor
  • Bin size: 0.7 L
  • Run time: 100 minutes

Pros

  • Gets into hard-to-reach areas
  • HEPA filter
  • Automatic learning

Cons

  • Louder than other options

The Botvac D8 from Neato Robotics is a great go-to vacuum that can map and store the memory of up to three floors in your home for a methodical, planned clean, as well as zone-clean specific messes or spills when you tell it to. You can easily draw no-go lines on your phone’s touchscreen using the Neato app, which the vacuum will automatically learn and follow. It comes equipped with a HEPA filter to capture dust mites and allergens, battery life of up to 100 minutes, a large 0.7-liter dustbin, and a flat-edge design for quick and easy corner cleaning. Additionally, the brush on the D8 is 70 percent larger than those on other leading brands, so this vacuum is especially great at picking up pet hair.

Best for marathon cleaning sessions: Ecovacs Deebot Ozmo T5


Why it made the cut: This mop-meets-vacuum has a long battery life and high-tech features to make your clean as seamless as possible.

Specs

  • Surfaces: Hard floor, carpet
  • Bin size: 430 mL
  • Run time: Over three hours

Pros

  • Long battery life
  • Mopping included
  • Laser-mapping technology for a complete clean

Cons

  • Mop could use more water

Ecovacs was established as a company in 1998 with the official Ecovacs Robotics brand created in 2006. They specialize in spatially aware, mobile robots that clean your home, and the Deebot Ozmo is no exception. The Deebot Ozmo T5 from Ecovacs can run for over three hours, cleaning up to 3,200 square feet in a single session. Along with the impressive battery life, this vacuum is equipped with Smart Navi 3.0 laser-mapping technology to keep track of your home and prevent any missed areas, a high-efficiency filter, and three levels of suction power. It connects to your smartphone for a customized clean, and, did we mention? It’s also a mop. Yep, this vacuum can also simultaneously mop your floors, recognizing and avoiding carpeted areas as it cleans.

Best mop-only robot: SAMSUNG Electronics Jetbot Robotic

Why it made the cut: When it comes to automated mopping, this Samsung pick is designed with squeaky-clean floors in mind.

Specs

  • Surfaces: Tile, vinyl, laminate, and hardwood
  • Run time: 100 minutes

Pros

  • Multiple cleaning pads
  • Eight cleaning modes
  • Dual pads remove grime

Cons

  • No mapping

Whether you’re cleaning your bathroom floors, hardwood in the living room, or laminate in the kitchen, the dual spinning pads on the Samsung Jetbot (you can choose machine-washable Microfiber or Mother Yarn) scrub away grime and dirt without the effort of mopping. The eight cleaning modes (selectable via remote) include hand mode, focus mode, and random mode, among others, allowing you to personalize your clean depending on the room and mess level. A 100-minute battery allows for enough time for the double water tanks to offer edge-to-edge coverage.

What to consider when buying the best robot vacuums

There are five major things you should take into consideration when purchasing a robot vacuum. The best robot vacuums have a long-lasting battery and a large bin capacity so they can work away in your home without needing to be dumped out or recharged before the job is over. You might want to find one that can easily connect to your smartphone for customized or remote control. And if you’re really looking to elevate your floors, consider a robot vacuum with a mopping function to make your surfaces shine. Finally, look for other advanced features like mapping capabilities or smart-timers. We know that’s a lot of information to keep in mind while you shop, so we’ve created a thorough guide to help you better understand these features, as well as some product suggestions to get you started.

How much cleaning time do you want?

A robot vacuum is only as good as its battery life. Fortunately, many robot vacuums have batteries that last at least one hour. If you have a larger living space you might want to look for something that can last between 90 to 120 minutes to make sure the robot can get to every nook and cranny before needing to recharge. Keep in mind, some vacuums have different power settings, like high intensity or turbo that might drain its battery more quickly. Think about how you want to use your vacuum, what your regular time frames for cleaning will look like, and whether or not you need more surface coverage or suction power.

Most robot vacuums will either alert you when the battery is low or they will dock themselves at their charger. They may also do this automatically after every clean, which means you’ll never have to bother with locating a charging cable or deal with the consequences of forgetting to plug it in. A truly smart robot vacuum will take care of itself after taking care of your floors.

Do you want to control the robot vacuum with your phone?

The best robot vacuums pair with your smartphone so you can create customized settings and control your clean remotely. When we say these things can get fancy, we mean fancy. A device compatible robot vacuum might be able to pair with Alexa or Google Assistant, follow invisible boundary lines you create to keep it away from loose rugs or lots of cables, generate statistics based on a recent clean, tell you how much battery life is left, and virtually map your living space. Being able to control a robot vacuum from your phone means going to brunch with friends, running to the grocery store, picking up your kids from school, and coming home to a clean house. Some models even allow you to set a predetermined schedule for cleaning so you won’t even have to pull out your phone to get it going. Keep in mind, it might be a good idea to be home for your robot’s first clean so you can identify any tough spots or issues your little machine might face.

Before purchasing make sure you check each vacuum’s compatibility, especially if you are using an Android or you are looking to connect to a specific virtual assistant. Many of the vacuums are going to work great with any smart device, but we would hate for you to get ready to connect only to end up disappointed.

Do you want it to take out the trash for you?

Not all robot vacuums can collect the same amount of debris and detritus before needing to be emptied out. Think about how frequently you’re hoping to vacuum your home and how much dust, dirt, and pet dander might accumulate in the meantime. If you have a smaller living area, keep things relatively tidy, dust and sweep often, or vacuum regularly, you might be able to survive on a smaller bin. However, if you know you need something more heavy-duty, don’t skimp on bin storage. The average dustbin size is 600 milliliters; some can go up to 700 or 750. These dustbins are easy to remove and don’t require extra work, such as bag or filter replacement. If you have a cat or dog (or a very hairy human) running around the house, consider a vacuum that specifically boasts its ability to pick up hair and dander.

One of the best features a robot vacuum can have is a self-evacuating bin. Instead of emptying a bin after every one or two cleaning sessions, your vacuum will automatically deposit all of its collected dust bunnies, forgotten LEGO pieces, food crumbs, and other artifacts to a larger bin at its docking station. Many of these stations come with allergen filters and other sensors to keep its contents completely sealed. It will let you know when it needs to be emptied so you don’t have to worry about spillage or clogging. Now that’s some seriously futuristic cleaning.

Do you want a mop, too?

We are pleased to inform you that the best robot vacuums can also mop, so not only will you have all the dirt and debris sucked away but you’ll also have sparkling clean floors free of stains and spills. These vacuum-mop hybrids have two compartments: one for collecting the bits and pieces that are suctioned up and another to hold water that will go over hardwood or tile flooring. These hybrids typically come with a sensor that informs the robot where carpeted areas can be found, which the vacuum will avoid when it’s time to mop. That’s one more chore your smart vacuum can take care of and one more episode of TV you get to watch instead!

If the vacuum you are looking at doesn’t have its own mopping function, or maybe a hybrid isn’t in your price range, look for models that are able to pair with a separate robot mopper all together. Many brands create individual vacuums and mops that communicate with one another via smartphone or internal programming to schedule cleanings one right after the other. They can often be stored next to one another and have similar special features and battery life—so you can count on this dynamic duo to get the job done.

Does it know your home?

We touched on special features a little bit when we outlined smartphone compatibility, but we want to dive in further and really explain the kinds of advanced features you might want to prioritize when considering which robot vacuum is right for you. The first thing to look for is a vacuum with obstacle identification features, so your vacuum can identify small barriers like power strips, cables, pet toys, or shoes. There’s nothing worse than coming home and finding your vacuum trapped in an endless battle between your internet router and your kid’s favorite stuffed animal, right?

You can also look for specific mapping capabilities that determine whether or not your robot cleans randomly or methodically. A random robot using a “bounce” cleaning method might have object identification sensors—but it won’t necessarily keep track of where in your house it has been, and will go over areas more than once for a thorough clean. A methodical vacuum has sensors that track where it’s been and what areas of the house it’s covered. This is often faster, but not always the most thorough. However, these methodical cleaners collect data over time to retain a virtual map of your home for a more efficient clean. Just make sure you keep the lights on during a vacuuming session, because these sensors need to quite literally “see” in order to collect information and avoid bumping into things. Once this data has been collected, you might also be able to set up boundaries or no-clean zones from your phone. This tells the robot where to avoid, like a play area or delicate carpet.

You can also look for a vacuum with a camera so you can see where it is or simply check in on your home. There are almost endless advanced features you can choose to prioritize depending on your needs.

FAQs

Q: Do cheap robot vacuums work?

Affordable robot vacuums can still achieve the clean of your dreams but might sacrifice some added features, like self-emptying, smart-home connectivity, or mopping capabilities. That said, even a cheap robot vacuum will still drastically cut down the time you spend on chores—in our book, that’s a win.

Q: Is it worth getting a robot vacuum?

You can spend less time cleaning when you have a robot vacuum in your arsenal. While models are still evolving with better technology, those with families, pets, or simply limited spare time can benefit from investing in a robot vacuum. Regular vacuums—like this one from Dyson—can be quite pricey as well, so why not spend a bit more and relegate the chore to hands-free software?

Q: Can robot vacuums go from hardwood to carpet?

In short, it depends. While some models can auto-detect the transition from carpet to hardwood floors, others will need you to map out different zones. These maps can help your robot vacuum determine what modes it needs to be on for each area to ensure an overall deep clean.

Final thoughts on the best robot vacuums

An amazing, hands-free clean should now be well within reach with a robot vacuum. There are so many options out there and we hope you now know what to look for when venturing out to get your new robotized housekeeper. Keep in mind that the best robot vacuums are worth investing in for an efficient, smart, and clean home with the push of a button.

Why trust us

Popular Science started writing about technology more than 150 years ago. There was no such thing as “gadget writing” when we published our first issue in 1872, but if there had been, our mission to demystify the world of innovation for everyday readers means we would have been all over it. Here in the present, PopSci is fully committed to helping readers navigate the increasingly intimidating array of devices on the market right now.

Our writers and editors have combined decades of experience covering and reviewing consumer electronics. We each have our own obsessive specialties—from high-end audio to video games to cameras and beyond—but when we’re reviewing devices outside of our immediate wheelhouses, we do our best to seek out trustworthy voices and opinions to help guide people to the very best recommendations. We know we don’t know everything, but we’re excited to live through the analysis paralysis that internet shopping can spur so readers don’t have to.

The post The best robot vacuums of 2024 appeared first on Popular Science.

A look at the weird intersection of taxidermy and car design https://www.popsci.com/technology/hyundai-risd-car-design-nature/ Mon, 10 Jul 2023 20:39:58 +0000 https://www.popsci.com/?p=554932
a model of a Kia EV9
A 3D-printed model of a Kia EV9. Kia America

An automaker and a design school have been collaborating on nature-based auto ideas. Here's what's been growing out of the partnership.

In general, we see cars as artificial and inanimate machines made from welded steel and plastic. But what if vehicles could be designed with the evolution of microorganisms in mind, representing a collaboration with nature? Kia, Hyundai, and Genesis are investigating that worldview with a group of young artists and scientists at the renowned Rhode Island School of Design. 

Hyundai Motor Group (HMG), the parent company of all three brands, kicked off the RISD x Hyundai Motor Group Research Collaborative in 2019. Now in its fourth year, the unique partnership is focused on actively exploring the relationship between nature, art, and design for the good of humankind. Using phrases like “biologized skin” for robots and “chemotaxis processes” to describe movement, the team of students, professors, and HMG engineers and designers is challenging traditional ideas about how machines can work.

Here’s what to know about projects that RISD students have created with the future of Hyundai Motor Group in mind. 

Using slime mold to mimic autonomous vehicles

While test-driving a brand-new 2024 Kia Seltos in and around Providence, Rhode Island, with a group of journalists, we made a stop at RISD to hear from students in the program. In an initiative called Future Spaces and Autonomous Vehicles, students examined the future of autonomous vehicles using scientific methodologies combined with design-focused thinking.

The first presenter, 2023 graduate Manini Banerjee, studied at Brown and Harvard before making her way to RISD, and she challenged us to think about how a car might work if it were driven by organisms instead of algorithms. 

In their research, Banerjee and her lab partner, Mehek Vohra, learned that each autonomous vehicle processes 40 terabytes of data per hour; that’s the equivalent of typical iPhone use for 3,000 years, Banerjee says. The problem, she asserts, is that data processing and data storage rely heavily on carbon-emitting data centers, which only accelerates global warming. Vohra and Banerjee set out to determine whether there is an opportunity for organic, sustainable, data-free navigation.

[Related: Inside the lab that’s growing mushroom computers]

Using a slime mold organism as a stand-in for a vehicle, the team observed how the mold grows, learns, and adapts. In a cardboard maze, the slime mold mimicked the movements of autonomous vehicles. During the study, the researchers noticed the slime mold learned how to find the maze’s center by sensing chemicals and light in its environment. Is it possible to replace carbon-heavy data processes with a nature-based solution? Yes, Banerjee says. (According to Texas A&M, slime molds exist in nature as a “blob,” similar to an amoeba, engulfing their food, which is mostly bacteria. And in related work, research out of the University of Chicago involved using slime mold in a smartwatch in 2022.)
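
For the programming-inclined, the navigation idea the slime mold embodies (chemotaxis, or gradient climbing) is strikingly simple to sketch. The toy Python example below is our own illustration, not the researchers’ experiment: the agent stores no map and logs no data, it just compares a “chemical” signal at nearby sensor points and moves uphill. The field, goal, and step sizes are invented.

import math

GOAL = (5.0, 5.0)

def concentration(x, y):
    """Toy chemical field: stronger the closer you get to the maze center."""
    return -math.hypot(x - GOAL[0], y - GOAL[1])

def chemotax(x, y, step=0.1, eps=0.05, iters=500):
    for _ in range(iters):
        # Compare the signal at two nearby 'sensor' points on each axis.
        gx = concentration(x + eps, y) - concentration(x - eps, y)
        gy = concentration(x, y + eps) - concentration(x, y - eps)
        norm = math.hypot(gx, gy) or 1.0
        x += step * gx / norm
        y += step * gy / norm
        if math.hypot(x - GOAL[0], y - GOAL[1]) < step:
            break
    return x, y

print(chemotax(0.0, 0.0))   # ends near (5, 5) with no map or stored data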

“Civilization has been measured by this distance between the natural and the built environment,” she told the group. “I feel that we’ve begun to build that space with technological advancements.”

“Turn away from blindly pursuing innovation”

Today, designers and engineers look to the outside world to better understand physiology, patterns in nature, and beauty. The future of nature and cars as collaborators is front and center for the RISD and HMG partnership. 

There are about 100,000 taxidermied specimens in RISD’s Nature Lab collection; it’s on par with a world-class natural history museum and has been around since 1939. Students can check out a specimen from the lab like one might check out a library book for study. For instance, studying the wings of the kingfisher may spur an idea for not just colors but patterns, textures, and utility. Observing the bone structure of a pelican for strength points or the ways an insect’s wing repels water can advance the way vehicles are made, too. 

The RISD team is also exploring how to embrace entropy, or the degree of disorder or uncertainty in a system, versus strict mechanical processes. Sustainability is also an important element in this research, meaning that researchers should understand how materials break down instead of contributing to waste and climate change. Together, those two concepts inform the idea that engineering and technology can be programmed with built-in degradation (an expiration date, if you will) at the rate of human innovation.

“The intent is to turn away from blindly pursuing innovation and toward creating living machines that may restore our relationship with nature,” Banerjee said during a TedX presentation earlier this year. “If we understand the organisms we’re working with, we won’t have to hurt, edit, or decapitate them. We can move from ‘nature inspired’ to ‘nature collaborated.’”

The post A look at the weird intersection of taxidermy and car design appeared first on Popular Science.

Bee brains could teach robots to make split-second decisions https://www.popsci.com/science/bee-brain-decision-making-robot/ Mon, 10 Jul 2023 16:45:00 +0000 https://www.popsci.com/?p=554670
A honey bee pollinates a yellow flower against a bright blue sky.
Bee brains have evolved over millions of years to become incredibly efficient. Deposit Photos

The power pollinators can make multiple quick decisions with a brain smaller than a sesame seed.

The phrase “busy as a bee” certainly applies to the brains of honey bees. While they fly, the insects have to balance effort, risk, and reward, avoid predators, and accurately assess which flowers are most likely to offer food for their hive. Speed and efficiency are thus critical to their survival, and scientists are looking at bee brains to understand how they pull it off. A study published June 27 in the journal eLife explores how millions of years of evolution engineered honey bee brains to make these lightning-fast decisions while reducing risk.

[Related: What busy bees’ brains can teach us about human evolution.]

“Decision-making is at the core of cognition. It’s the result of an evaluation of possible outcomes, and animal lives are full of decisions,” co-author Andrew Barron, a comparative neurobiologist at Australia’s Macquarie University, said in a statement. “A honey bee has a brain smaller than a sesame seed. And yet she can make decisions faster and more accurately than we can. A robot programmed to do a bee’s job would need the backup of a supercomputer.”

Barron notes that today’s autonomous robots primarily work with the support of remote computing, and that drones have to be in wireless communication with some sort of data center. Studying how bees’ brains work could help engineers design better robots that explore more autonomously.

In the study, the team trained 20 bees to recognize five differently colored “flower disks.” The blue flowers always had sugar syrup, while the green flowers always had tonic water that tasted bitter to the bees. The other colors sometimes had glucose. Then, the team introduced each bee to a makeshift garden where the flowers only had distilled water. Each bee was filmed, and the team watched over 40 hours of footage, tracking the paths the insects took and timing how long they took to make a decision.

“If the bees were confident that a flower would have food, then they quickly decided to land on it, taking an average of 0.6 seconds,” HaDi MaBouDi, co-author and computational neuroethologist from the University of Sheffield in England, said in a statement. “If they were confident that a flower would not have food, they made a decision just as quickly.”

If the bees were unsure, they took significantly more time (1.4 seconds on average), and the delay reflected the probability that a flower contained food.

Next, the team built a computer model that aimed to replicate the bees’ decision-making process. They noticed that the model’s structure looked similar to the physical layout of a bee’s brain, and found that bee brains can carry out complex, autonomous decision-making with minimal neural circuitry.
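
The team’s published model has its own specifics, but the qualitative pattern is easy to reproduce with a classic evidence-accumulation sketch. In the toy Python version below (our illustration, with parameters tuned to the reported times rather than taken from the paper), evidence for or against landing builds up noisily until it crosses a threshold, so clear-cut flowers produce fast choices and ambiguous ones take longer:

import random

def decide(p_reward, threshold=0.6, drift=1.0, noise=0.5, dt=0.005):
    """Accumulate noisy evidence until it crosses +/- threshold."""
    evidence, t = 0.0, 0.0
    bias = drift * (2 * p_reward - 1)   # strong for clear flowers, ~0 for 50/50
    while abs(evidence) < threshold:
        evidence += bias * dt + random.gauss(0, noise) * dt ** 0.5
        t += dt
    return ("land" if evidence > 0 else "skip"), t

random.seed(0)
for p in (0.95, 0.5, 0.05):
    times = [decide(p)[1] for _ in range(500)]
    print(f"P(food)={p:.2f}: mean decision time {sum(times) / len(times):.2f} s")

With these made-up parameters, the confident cases average roughly 0.6 seconds and the 50/50 case roughly 1.4 seconds, mirroring the pattern the researchers measured.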

[Related: A robot inspired by centipedes has no trouble finding its footing.]

“Now we know how bees make such smart decisions, we are studying how they are so fast at gathering and sampling information. We think bees are using their flight movements to enhance their visual system to make them better at detecting the best flowers,” co-author James Marshall, a theoretical and computational biologist at the University of Sheffield, said in a statement.

Marshall also co-founded Opteran, a company that reverse-engineers insect brain algorithms to enable machines to move autonomously. He believes that nature will inspire the future of the AI industry, as millions of years of evolution have produced incredibly efficient insect brains that require minimal power.

The post Bee brains could teach robots to make split-second decisions appeared first on Popular Science.

This magnetic robot could worm its way into human blood vessels https://www.popsci.com/technology/magnet-soft-worm-robot/ Mon, 10 Jul 2023 14:30:00 +0000 https://www.popsci.com/?p=554646
Screenshot of inching magnetic soft robot
Magnetizing strategic portions of this soft robot allows it to move in three dimensions. MIT / Anikeeva et al

Just one magnetic field can create 'a movement-driving profile of magnetic forces.'

Researchers at MIT have created a tiny, cucumber-inspired soft robot capable of scooting around otherwise hard-to-reach, three-dimensional environments using a single, weak magnetic field. As first detailed last month in an open access paper published in Advanced Materials, the inchworm-like mechanism, made from strategically magnetized rubber polymer spirals, shows immense promise in maneuvering through spaces as tiny as human blood vessels.

[Related: Seals provided inspiration for a new waddling robot.]

Before this newest wormbot, locomotive soft bots required moving magnetic fields to control their direction and angle. “[I]f you want your robot to walk, your magnet walks with it. If you want it to rotate, you rotate your magnet,” Polina Anikeeva, the paper’s lead author and a professor of materials science and engineering and brain and cognitive sciences, said in a statement. “If you are trying to operate in a really constrained environment, a moving magnet may not be the safest solution,” Anikeeva added. “You want to be able to have a stationary instrument that just applies [a] magnetic field to the whole sample.”

As such, the MIT research team’s new design isn’t uniformly magnetized like many other soft robots. By only magnetizing select areas and directions, just one magnetic field can create “a movement-driving profile of magnetic forces,” according to MIT’s announcement.
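
For the physics-inclined, the trick is straightforward to sketch: each magnetized segment carries its own magnetic moment m, and a uniform field B exerts a torque m × B on it, so segments magnetized along different directions twist differently under the same stationary field. A toy Python version with made-up moments and field strength (our illustration, not the team’s actual design values):

import numpy as np

B = np.array([0.0, 0.0, 10e-3])            # uniform 10 mT field along z

segments = {                               # magnetic moment per segment (A.m^2)
    "head": np.array([1e-6, 0.0, 0.0]),    # magnetized along +x
    "body": np.array([0.0, 1e-6, 0.0]),    # magnetized along +y
    "tail": np.array([-1e-6, 0.0, 0.0]),   # magnetized along -x
}

for name, m in segments.items():
    torque = np.cross(m, B)                # tau = m x B
    print(f"{name}: torque {torque} N.m")

# Rotating B (not moving it) re-aims every segment's torque at once,
# which is what can drive an inchworm-style gait.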

Interestingly, engineers turned to cucumber vines’ coiled tendrils for inspiration: Two types of rubber are first layered atop one another before being heated and stretched into a thin fiber. As the new thread cools, one rubber contracts while the other retains its form to create a tightly wound spiral, much like a cucumber plant’s thin vines wrapping around nearby structures. Finally, a magnetizable material is threaded through the polymer spiral, then strategically magnetized to allow for a host of movement and directional options.

Because of each robot’s customizable magnetic patterns, multiple soft bots can be individually mapped to move in different directions when exposed to the same uniform, weak magnetic field. Additionally, a subtle field manipulation allows the robots to vibrate, letting the tiny worms carry cargo to a designated location, then shake it loose to deliver a payload. Because of their soft materials and relatively simple manipulation, researchers believe such mechanisms could be used in biomedical situations, such as inching through human blood vessels to deliver a drug at a precise location.

The post This magnetic robot could worm its way into human blood vessels appeared first on Popular Science.

NASA’s quirky new lunar rover will be the first to cruise the moon’s south pole https://www.popsci.com/science/nasa-viper-moon-rover-test/ Sun, 09 Jul 2023 17:00:00 +0000 https://www.popsci.com/?p=554322
VIPER moon rover coming down a ramp during a test at the NASA Ames Research Center
Antoine Tardy, VIPER rover egress driver, adjusts the cables that power and send commands to the VIPER test unit as engineers practice its exit/descent from the model Griffin lunar lander at NASA's Ames Research Center in California's Silicon Valley. NASA/Dominic Hart

Four wheels are better than six for off-roading in craters.

It’s no simple feat to send a rover to space, land it on a celestial body, and get the wheels rolling. NASA has used all kinds of techniques: The Pathfinder rover landed on Mars in 1997 inside a cluster of airbags, then rolled down its landing vehicle’s “petals,” which bloomed open like a flower, to the dusty surface. Cables attached to a rocket-powered “sky crane” spacecraft dropped the Perseverance Mars rover to the Red Planet’s surface in 2021. On the moon, Apollo 15, 16, and 17 astronauts pulled mylar cables to unfold and lower their buggies from the vehicles’ compact stowage compartments on lunar landers. 

But NASA’s first-ever rover mission to the lunar south pole will use a more familiar method of getting moving on Earth’s satellite: a pair of ramps. VIPER, which stands for Volatiles Investigating Polar Exploration Rover, will roll down an offramp to touch the lunar soil, or regolith, when it lands on the moon in late 2024. 

This is familiar technology in an unforgiving location. “We all know how to work with ramps, and we just need to optimize it for the environment we’re going to be in,” says NASA’s VIPER program manager Daniel Andrews.

A VIPER test vehicle recently descended a pair of metal ramps at NASA’s Ames Research Center in California, as seen in the agency’s newly published photos, with one beam for each set of the rover’s wheels. Because the terrain where VIPER will land, the edge of the massive Nobile Crater, is expected to be rough, the engineering team has been testing VIPER’s ability to descend the ramps at extreme angles, altering both the steepness of the ramps relative to the lander and the difference in elevation between the beams for each wheel.

”We have two ramps, not just for the left and right wheels, but a ramp set that goes out the back too,” Andrews says. “So we actually get our pick of the litter, which one looks most safe and best to navigate as we’re at that moment where we have to roll off the lander.” 

[Related: The next generation of lunar rovers might move like flying saucers]

VIPER is a scientific successor to NASA’s Lunar Crater Observation and Sensing Satellite, or LCROSS mission, which in 2009 confirmed the presence of water ice on the lunar south pole. 

“It completely rewrote the books on the moon with respect to water,” says Andrews, who also worked on the LCROSS mission. “That really started the moon rush, commercially, and by state actors like NASA and other space agencies.”

The ice, if abundant, could be mined to create rocket propellant. It could also provide water for other purposes at long-term lunar habitats, which NASA plans to construct in the late 2020s as part of the Artemis moon program.

But LCROSS only confirmed that ice was definitely present in a single crater at the moon’s south pole. VIPER, a mobile rover, will probe the distribution of water ice in greater detail. Drilling beneath the lunar surface is one task. Another is to move into steep, permanently shadowed regions—entering craters that, due to their sharp geometry, and the low angle of the sun at the lunar poles, have not seen sunlight in billions of years. 

The tests demonstrate the rover can navigate a 15-degree slope with ease—enough to explore these hidden dark spots, avoiding the need to make a machine designed for trickier descents. “We think there’s plenty of scientifically relevant opportunities, without having to make a superheroic rover that can do crazy things,” Andrews says.

Developed by NASA Ames and Pittsburgh-based company Astrobotic, VIPER is a square golf-cart-sized vehicle about 5 feet long and wide, and about 8 feet high. Unlike all of NASA’s Mars rovers, VIPER has four wheels, not six. 

”A problem with six wheels is it creates kind of the equivalent of a track, and so you’re forced to drive in a certain way,” Andrews says. VIPER’s four wheels are entirely independent from each other. Not only can they roll in any direction, they can be turned out, using the rover’s shoulder-like joints to crawl out of the soft regolith of the kind scientists believe exists in permanently shadowed moon craters. The wheels themselves are very similar to those on the Mars rovers, but with more paddle-like treads, known as grousers, to carry the robot through fluffy regolith.

“The metaphor I like to use is we have the ability to dip a toe into the [permanently shadowed region],” Andrews says. ”If we find we’re surprised or don’t like what we’re finding, we have the ability to lift that toe out, roll away on three wheels, and then put it back down.”

But VIPER won’t travel very far at all if it can’t get down the ramp from its lander, which is why Andrews and his team have been spending a lot of time testing that procedure. At first, the wheels would skid, just momentarily, as the VIPER test vehicle moved down the ramps. 

”We also found we could drive up and over the walls of the rampway,” Andrews says. “That’s probably not desirable.”

[Related on PopSci+: How Russia’s war in Ukraine almost derailed Europe’s Mars rover]

Together with Astrobotic, Andrews and his team have altered the ramps, and they now include specialized etchings down their lengths. The rover can detect this pattern along the rampway, using cameras in its wheel wells. “By just looking down there,” the robot knows where it is, he says. “That’s a new touch.”
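
NASA hasn’t published the pattern or the matching algorithm, but the underlying idea is a standard one: if the etched code never repeats, the short strip a wheel-well camera sees matches only one spot along the ramp. A minimal Python sketch with invented numbers:

import numpy as np

rng = np.random.default_rng(7)
ramp_code = rng.integers(0, 2, size=400)    # hypothetical non-repeating etch pattern

true_pos = 137
patch = ramp_code[true_pos:true_pos + 25]   # what the wheel-well camera sees

# Slide the patch along the full code and pick the best match.
scores = [np.sum(patch == ramp_code[i:i + 25]) for i in range(400 - 25)]
print("estimated position:", int(np.argmax(scores)), "true:", true_pos)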

Andrews is sure VIPER will be ready for deployment in 2024, however many tweaks are necessary. After all, this method is less complicated than a sky crane, he notes: “Ramps are pretty tried and true.”

The post NASA’s quirky new lunar rover will be the first to cruise the moon’s south pole appeared first on Popular Science.

This robotic leg could give machines krill-like swimming abilities https://www.popsci.com/technology/krill-inspired-robot-leg/ Fri, 30 Jun 2023 20:00:00 +0000 https://www.popsci.com/?p=552598
Robot leg inspired by krill
A new robotic leg was inspired by krill's metachronal swimming. Wilhelmus Lab

It's called the Pleobot, and it was inspired by krill—tiny ocean creatures that are adept swimmers.

Robotics engineers are one step closer to building a swarm of krill bots capable of underwater exploration, and perhaps one day of aiding in search and rescue missions. According to a study published earlier this month in Scientific Reports, a team at Brown University, working alongside researchers at the Universidad Nacional Autónoma de México, recently designed and built a robotic platform dubbed the Pleobot: a “unique krill-inspired robotic swimming appendage” that researchers say is the first mechanism allowing for a comprehensive study of what’s known as metachronal propulsion.

While the robot’s whole assembly is about nine inches long, it’s modeled on krill: tiny, paperclip-sized organisms. Despite their small size, krill regularly travel comparatively massive distances, vertically migrating over 3,200 feet twice a day. One key to these daily journeys is their metachronal swimming, a form of movement often found in multi-legged aquatic creatures including shrimp and copepods, in which the limbs undulate in wavelike patterns to propel the animal through its watery abode.
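
The wave itself is simple to express in code: every leg runs the same stroke cycle, just delayed a fixed phase behind its neighbor. Here’s a minimal Python sketch (our illustration; the amplitude, frequency, and per-leg lag are placeholders, not the Pleobot’s published parameters):

import math

N_LEGS = 5
FREQ = 2.0                          # stroke cycles per second
PHASE_LAG = 2 * math.pi / N_LEGS    # delay of each leg behind its neighbor

def leg_angle(leg, t, amplitude=math.radians(40)):
    """Commanded stroke angle of leg `leg` (0 = rearmost) at time t."""
    return amplitude * math.sin(2 * math.pi * FREQ * t - leg * PHASE_LAG)

# At any instant, the legs sit at staggered points in the cycle,
# so power strokes ripple along the body like a wave.
t = 0.1
print([round(math.degrees(leg_angle(i, t)), 1) for i in range(N_LEGS)])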

[Related: When krill host social gatherings, other ocean animals thrive.]

For years, studying the intricacies of metachronal propulsion has been limited to the observation of live organisms. In a statement published on Monday, however, paper lead author Sara Oliveira Santos explained the Pleobot allows for “unparalleled resolution and control” to examine krill-like swimming, including studying how metachronal propulsion allows the creatures to “excel at maneuvering underwater.”

The Pleobot is constructed mainly from 3D printed parts assembled into three articulated sections. Researchers can actively control the first two portions of the krill-like leg, while the biramous (two-branched) end fins move passively against the water.

“We have snapshots of the mechanisms they use to swim efficiently, but we do not have comprehensive data,” said postdoctoral associate Nils Tack. “We built and programmed a robot that precisely emulates the essential movements of the legs to produce specific motions and change the shape of the appendages. This allows us to study different configurations to take measurements and make comparisons that are otherwise unobtainable with live animals.”

[Related: In constant darkness, Arctic krill migrate by twilight and the Northern Lights.]

According to the study, observing the Pleobot even allowed researchers to determine a previously unknown factor of krill movement—how they generate lift while swimming forward. As the team explained, krill must constantly swim to avoid sinking, which means at least some lift must be produced while moving horizontally within water. As Yunxing Su, another postdoctoral associate involved in the project, explained, “We identified an important effect of a low-pressure region at the back side of the swimming legs that contributes to the lift force enhancement during the power stroke of the moving legs.”

Moving forward, the team hopes to further expand their understanding of agile krill-like swimming and apply it to future Pleobot iterations. After honing this design—which the team has made open-source online—the krill leg mechanisms eventually could find their way onto underwater robots for all sorts of uses, including exploration and even rescue missions.

Take a look at how it moves, below:

[Embedded video]

Correction: An earlier version of this article stated that the Pleobot is about a foot in size. This has been updated for accuracy.

The post This robotic leg could give machines krill-like swimming abilities appeared first on Popular Science.

This AI-powered glove could help stroke patients play the piano again https://www.popsci.com/technology/stroke-piano-smart-glove/ Fri, 30 Jun 2023 12:00:00 +0000 https://www.popsci.com/?p=552404
A hand wearing a smart glove playing keyboard next to computer readings of movements
Wearables like this smart glove could help stroke patients recover their ability to play the piano. Credit: Dr Maohua Lin et al

A prototype of the 3D printed glove uses lights and haptics to guide movement.

A customizable smart glove powered by artificial intelligence shows promise as an easy-to-use, wearable tutoring aide for musicians recovering from strokes. According to a study published in Frontiers in Robotics and AI, a team at Florida Atlantic University has developed a lightweight “smart hand exoskeleton” prototype using 3D printed materials and machine learning. The new smart glove could soon help patients relearn how to play the piano “by ‘feeling’ the difference between correct and incorrect versions of the same song.”

[Related: A tiny patch can take images of muscles and cells underneath your skin.]

In the aftermath of a debilitating stroke, many patients require extensive therapy regimens to relearn certain motor movements and functionalities affected by neurotraumas. Sometimes, this loss of control unfortunately can extend to the patient’s ability to play instruments. And while therapeutic technology exists for other movement recovery, very few options are available to someone such as a pianist hoping to return to music.

The researchers’ new smart glove aims to remedy this issue: it is a 3D-printed wearable with soft pneumatic actuators housed in the fingertips. Each fingertip is equipped with 16 tactile sensors, aka “taxels,” to monitor the wearer’s keystrokes and hand movements. The team also used machine learning to train the glove to differentiate the “feel” of correct versus incorrect renditions of “Mary Had a Little Lamb.” Putting it all together, a user could play the song themselves while receiving real-time feedback in the form of visual indicators, sound, or even touch-sensitive haptic responses.
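
As a rough sketch of how that “feel”-based learning step can work, consider the toy Python pipeline below. It is our stand-in with synthetic data, not the FAU team’s actual code: a window of taxel readings is flattened into one feature vector per passage, and a standard classifier is trained on correct-versus-incorrect labels.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, TAXELS, WINDOW = 200, 16 * 5, 50          # passages, sensors (5 fingertips), timesteps

X = rng.normal(size=(N, TAXELS * WINDOW))    # stand-in taxel recordings
y = rng.integers(0, 2, size=N)               # 1 = correct rendition
X[y == 1] += 0.2                             # give the two classes some separation

clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))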

[Related: These wearable cyborg arms were modeled after Japanese horror fiction and puppets.]

“The glove is designed to assist and enhance their natural hand movements, allowing them to control the flexion and extension of their fingers,” Erik Engeberg, the paper’s senior author and a professor in FAU’s department of ocean and mechanical engineering, said in a statement on Thursday. “The glove supplies hand guidance, providing support and amplifying dexterity.”

Although only one smart glove currently exists, the research team hopes to eventually design a second one to create a full pair. Such devices could even one day be programmed to help with other forms of object manipulation and movement therapy. First, however, the wearable’s tactile sensing, accuracy, and reliability still need improvements, alongside advancing machine learning to better understand human inputs in real time.

The post This AI-powered glove could help stroke patients play the piano again appeared first on Popular Science.

This weird robot uses living bugs as gripping tools https://www.popsci.com/technology/pill-bug-robot/ Wed, 28 Jun 2023 15:00:00 +0000 https://www.popsci.com/?p=551795
Robotic gripper holding pillbug that is gripping piece of cotton
Pill bugs and mollusks were recently shown to be effective grippers for robots. Tadakuma Mechanisms Group, Tohoku University

A recent intersection between biology and robotics is causing some to wonder about the limits and ethics of domestication.

The term “necrobotics” is relatively self-explanatory: using dead source material within robotic designs. Case in point: researchers at Rice University made headlines last year after repurposing a spider’s corpse as part of a “pneumatically actuating gripper tool” capable of grasping asymmetrical objects up to 130 percent of its own mass.

But what if researchers harnessed living creatures as part of robotic devices? That’s the question recently posed by a team collaborating between multiple Japanese universities. In their paper, “Biological Organisms as End Effectors,” published earlier this month on the arXiv preprint server, researchers from Tohoku, Yamagata, and Keio Universities detailed how they developed a way to literally harness living pillbugs and underwater mollusks known as chiton as a robot’s gripping mechanisms without harming the animals.

[Related: Watch this bird-like robot make a graceful landing on its perch.]

In demonstration videos, a 3D-printed harness is essentially lassoed around the pill bug using either one or two flexible threads. In the single-thread configuration, the pill bug is allowed to roll into its closed, defensive shape; with two threads, the insect is prevented from doing so, maintaining its open, walking stance. Attaching a harness to the mollusk required a bit more trial and error, with researchers settling on a removable epoxy glue applied to its external shell. In both experiments, the pill bug and chiton were shown to effectively grasp and maneuver objects, either via the insect’s closing into its defensive stance while holding an object, or via the mollusk’s suctioning ability.

“This approach departs from traditional methodologies by leveraging the structures and movements of specific body parts without disconnecting them from the organism, all the while preserving the life and integrity of the creature,” reads a portion of the team’s paper. The team also notes that for future research, it will be “crucially important to enforce bioethics rules and regulations, especially when dealing with animals that have higher cognition.”  

But researchers such as Kent State University geographer James Tyner aren’t completely sold. “To a degree, this is simply the domestication of species not yet domesticated,” Tyner explains to PopSci. Tyner co-authored an essay last year lambasting Rice University’s recycled arachnid necrobot as an “omen” of potentially even further “subsumption of life and death to circuits of capital.” When it comes to employing living organisms within robotic systems, Tyner also questions their efficacy and purpose.

“I’m hard pressed to think of a situation where I’d feel comfortable deploying biotechnologies solely or even partially dependent on the gripping power of a pillbug,” Tyner adds.

For Josephine Galipon, a molecular biologist at Yamagata University and one of the project’s team members, such situations are easier to envision. “Let’s imagine a robot stuck at the bottom of the ocean that needs to improvise a gripper function to complete a task,” she offers via email to PopSci. “Instead of building a gripper from the ground up, it could borrow help from a chiton, and as a reward, the chiton would be transported to a new place with possibly more food.”

According to Galipon, establishing such mutually beneficial, cooperative, and dynamic interactions between living organisms and machines could offer advancements in both biology and robotic engineering.

[Related: These wearable cyborg arms were modeled after Japanese horror fiction and puppets.]

“‘Locomotion’ can be used for more than just getting around from one spot to another,” Galipon continues. “Surprisingly, it can also be used for tasks like picking up and moving objects, as illustrated [by the pillbug]. We can also learn more about how these organisms perceive the world around them.” Galipon points to previous instances of domestication, such as horses and messenger pigeons, and views their pillbug and chiton trials in a similar vein. 

Tyner, meanwhile, points to the longstanding history of biomimicry within robotics as a promising alternative to domesticating new animal species. They also raise the question of experts’ expanding concepts of sentience, and what that might entail for even creepy-crawler companions. Recent studies, in fact, offer evidence of a wider array of “feelings” in insects, notably the capacity for injury or discomfort, with fruit flies potentially experiencing a form of chronic pain. For critics like Tyner, though, the question stands with or without evidence: “Do we extend moral standing, for example, only to sentient beings?”

In this sense, it’s a thought shared by Galipon and their fellow researchers. “[We] recommend caution when handling any type of animal, and to exercise mindfulness in avoiding their suffering as much as possible and to the best of our knowledge,” they write in their paper.

The post This weird robot uses living bugs as gripping tools appeared first on Popular Science.

These wearable cyborg arms were modeled after Japanese horror fiction and puppets https://www.popsci.com/technology/jizai-arms-cyborg/ Tue, 27 Jun 2023 20:00:00 +0000 https://www.popsci.com/?p=551506
Two dancers wearing Jizai Arms wearable robotic appendagees
Jizai Arms are wearable, swappable, cybernetic arms designed for human expression. Kazuaki Koyama/Jizai Arms

Robot-assisted ballet never looked so good.

Speculative horror fiction, traditional Japanese puppetry, and cultural concepts of autonomy are inspiring a new project aimed at providing humans with sets of detachable cyborg arms. Jizai Arms are sleek, controllable appendages designed to complement users’ movements, expression, and artistry. The University of Tokyo team led by co-creator Masahiko Inami presented its creation for the first time last month at the 2023 CHI Conference on Human Factors in Computing Systems.

Unlike the headline-grabbing worlds of AI and autonomous robot technologies, however, Inami explained to Reuters on Tuesday that Jizai Arms are “absolutely not a rival to human beings.” Instead, the interchangeable limbs are meant to help users “do as we please… it supports us and can unlock creativity,” in accordance with the Japanese concept of “jizai.” The term roughly translates to autonomy or freedom. According to the presentation’s abstract, the project is also intended to explore myriad possibilities between “digital cyborgs in a cyborg society.”

[Related: The EU just took a huge step towards regulating AI.]

To use Jizai Arms, subjects first strap a harness onto their torso. From there, arms can be attached to sockets on the back; they are currently controlled by the wearer or a third party via a miniature model of the same technology.
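
At its core, a puppeteering interface like that reduces to reading joint angles off the miniature and replaying them, within safe limits, on the full-size arms. A minimal Python sketch (the joint names and limits are our invention, not the team’s specification):

SAFE_LIMITS = {"shoulder": (-90, 90), "elbow": (0, 135), "wrist": (-45, 45)}

def mirror(miniature_pose):
    """Map the miniature model's joint angles onto the wearable arm."""
    command = {}
    for joint, angle in miniature_pose.items():
        lo, hi = SAFE_LIMITS[joint]
        command[joint] = max(lo, min(hi, angle))   # clamp before actuating
    return command

print(mirror({"shoulder": 120, "elbow": 40, "wrist": -10}))
# -> the shoulder command is clamped to 90 before being sent to the motors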

The project is partially inspired by centuries’ old “Jizai Okimono” animal puppetry, as well as Nobel Prize-winning author Yasunari Kawabata’s magical realism short story, “One Arm.” In this 1964 tale, a woman lets a man borrow her detached arm for an evening. “Half a century since its writing,” reads the paper’s introduction, “emerging human-machine integration technologies have begun to allow us to physically experience Kawabata’s world.”

Videos provided by the project showcase dancers performing choreography alongside classical music while wearing the accessory arms. The team’s paper describes other experiences such as varying the number and designs of the cybernetic arms, swapping appendages between multiple users, and interacting with each other’s extra limbs. In the proof-of-concept video, for example, the two ballet dancers ultimately embrace one another using both their human and artificial arms.

[Related: Cyborg cockroaches could one day scurry to your rescue.]

According to Inami, users are already forming bonds with their wearables after experiencing the Jizai Arms. “Taking them off after using them for a while feels a little sad,” they relayed to Reuters. “That’s where they’re a little different [from] other tools.” In a similar vein, researchers plan to look into long term usage of such devices, and how that could fundamentally change humans’ daily perceptions of themselves and others. 

The post These wearable cyborg arms were modeled after Japanese horror fiction and puppets appeared first on Popular Science.

This robot used a fake raspberry to practice picking fruit https://www.popsci.com/technology/raspberry-picking-robot/ Sat, 24 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=550940
fake raspberry for testing robot pickers
EPFL CREATE

Soon, it will leave the lab for a real world test.

It’s summer, and raspberries are in season. These soft, tartly sweet fruits are delicious but delicate. Most of the time, they have to be harvested by human hands. To help alleviate labor costs and worker shortages, a team at École polytechnique fédérale de Lausanne’s Computational Robot Design & Fabrication Lab (EPFL CREATE) in Switzerland made a robot that knows how to gently support, grasp, and pluck these berries without bruising or squishing them in the process. Their approach is detailed this week in the journal Communications Engineering

Agriculture, like many other fields that have scaled up dramatically over the last few decades, has become increasingly reliant on complex technology, from sensors to robots and more. A growing number of farmers are interested in using robots for time-intensive tasks such as harvesting strawberries, sweet peppers, apples, lettuce, and tomatoes. But many of these machines are still at an early stage, with the bottleneck being the inefficient and costly field trials companies typically have to undergo to fine-tune the robot.

The EPFL team’s solution was to create a fake berry and stem for the robot to learn on. To familiarize robots with picking raspberries, the engineers made a silicone raspberry with an artificial stem that “can ‘tell’ the robot how much pressure is being applied, both while the fruit is still attached to the receptacle and after it’s been released,” according to a press release. The faux raspberry contains sensors that measure compression force and pressure. Two magnets hold the fruit and the stem together. 
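
The payoff of an instrumented fruit is that it closes the control loop: the robot can servo toward a firm-but-gentle squeeze instead of guessing. A minimal Python sketch of such a loop (the target and bruising thresholds are invented, not EPFL’s measured values):

TARGET_N = 1.0      # hypothetical secure-hold force, newtons
BRUISE_N = 2.0      # hypothetical damage threshold, newtons
GAIN = 0.5

def grip_step(command, sensed_force):
    """One control tick: nudge the grip command toward the target force."""
    if sensed_force >= BRUISE_N:
        return 0.0                          # emergency release
    return command + GAIN * (TARGET_N - sensed_force)

command = 0.0
for sensed in (0.0, 0.4, 0.8, 1.0, 1.1):    # readings from the fake berry
    command = grip_step(command, sensed)
    print(f"sensed {sensed:0.1f} N -> command {command:0.2f}")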

[Related: This lanternfly-egg-hunting robot could mean fewer bugs to squish]

In a small test with real raspberries, the robot was able to harvest 60 percent of the fruits without damaging them. That’s fairly low compared to the 90 percent from human harvesters on average, signaling to the team that there are still kinks to work out. For example, the robot’s range of reach is not great, and it gets confused when the berries are clustered together. 

Making a better fake raspberry could help the robot improve. Moreover, building an extended set that can simulate “environmental conditions such as lighting, temperature, and humidity could further close the Lab2Field reality gap,” the team wrote in the paper.

For now, the next step for the engineers is to modify the controllers and develop a camera system that “will allow robots to not only ‘feel’ raspberries, but also ‘see’ where they’re located and whether they’re ready to be harvested,” Josie Hughes, a professor at EPFL CREATE, noted in the press release.

They plan to put their pre-trained robot in a real field this summer to see how well it performs during the height of the local raspberry season in Switzerland. If the tech works as planned, the team wants to look into expanding its fake fruit repertoire to potentially cover other berries, tomatoes, apricots, or even grapes.

Watch the robot and fake raspberry system at work from a trial run last year: 

[Embedded video]

The post This robot used a fake raspberry to practice picking fruit appeared first on Popular Science.

This pangolin-inspired robot can curl up into a healing ball https://www.popsci.com/technology/pangolin-robot-medicine/ Fri, 23 Jun 2023 15:00:00 +0000 https://www.popsci.com/?p=550767
Hard keratin scales inspired this tiny robot.
Hard keratin scales inspired this tiny robot. Max Planck Institute for Intelligent Systems

Pangolins are the only mammals to sport overlapping scales—a trait that could prove surprisingly useful for internal medicine.

If you don’t know what a pangolin is, then today is your lucky day. Primarily found in tropical regions of Africa and Asia, the tiny, adorable, sadly endangered creature is the only mammal known to be covered completely in overlapping scales composed of durable keratin—the same material that makes up your nails and hair. When needed, the flexible scales’ structure allows a pangolin to curl up into a defensive ball—a novel evolutionary design that recently inspired a team of engineers’ newest invention.

[Related: The Pangolin Finally Made It Onto The List Of The World’s Most Protected Animals.]

As described in a paper published on June 20 in Nature Communications, researchers at the Max Planck Institute for Intelligent Systems in Germany created a robot that mimics a pangolin’s roly-poly resiliency. Instead of doing so for protection, however, the miniature robot uses its scaly design to quickly traverse environments while simultaneously carrying small payloads. With an added ability to heat to over 70 degrees Celsius (roughly 158 degrees Fahrenheit), the team’s barely two-centimeter-long robot shows immense promise for delivering medication within patients, as well as helping in procedures such as mitigating unwanted internal bleeding.

The pangolin-inspired robot features a comparatively simple, two-layer design: a soft polymer layer studded with magnetic particles, and a harder exterior layer of overlapping metal scales. Exposing the robot to a low-frequency magnetic field causes it to roll into a cylindrical shape, and steering that field then directs the robot’s movement. While in this rolled shape, the team showed, the pangolin-bot can house deliverables such as medicine and safely transport them through animal tissues and artificial organs to a desired location for release.

[Related: These 2D machines can shapeshift into moving 3D robots.]

Exposing their robot to a high-frequency magnetic field, however, offers even more avenues for potential medical treatment. In such instances, the pangolin robot’s metals heat up dramatically, providing thermal energy for situations such as treating thrombosis, cauterizing tumor tissues, or even stopping internal bleeding. “Untethered robots that can move freely, even though they are made of hard elements such as metal and can also emit heat, are rare,” reads a statement from the Planck Institute, adding that researchers’ new robot “could one day reach even the narrowest and most sensitive regions in the body in a minimally invasive and gentle way and emit heat as needed.”
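
In control terms, the robot itself carries no electronics; the “mode” lives entirely in the external coil’s drive signal. A minimal Python sketch of that split (the frequencies are our placeholders, not the paper’s values):

LOCOMOTION_HZ = 5        # hypothetical low-frequency rolling/steering drive
HEATING_HZ = 350_000     # hypothetical high-frequency heating drive

def field_command(mode):
    """Pick a drive signal for the external coil based on the desired effect."""
    if mode == "move":
        return {"frequency_hz": LOCOMOTION_HZ, "effect": "roll and steer"}
    if mode == "heat":
        return {"frequency_hz": HEATING_HZ, "effect": "warm the metal scales"}
    raise ValueError(mode)

print(field_command("move"))
print(field_command("heat"))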

The post This pangolin-inspired robot can curl up into a healing ball appeared first on Popular Science.

These 2D machines can shapeshift into moving 3D robots https://www.popsci.com/technology/mori3-robot-space/ Tue, 13 Jun 2023 14:00:00 +0000 https://www.popsci.com/?p=548156
Mori3 robots combined to form 3D walking shape
By joining forces, a team of Mori3 robots can form almost any 3D shape. EPFL

Mori3's triangular, modular design allows it to fuse with its companions, and could one day make it into space.

In order to keep costs low while maximizing utility, engineers are getting creative with their potential robotic cargo. As governments and private companies set their sights on returning to the moon and, eventually, establishing a human presence on Mars, space-friendly robots are all the more crucial. Taking inspiration from biological swarm behaviors and geometrical patterns, researchers at Switzerland’s Ecole Polytechnique Fédérale de Lausanne (EPFL) recently showcased Mori3, a new line of shapeshifting, 2D triangular robots capable of combining to form virtually any 3D shape.

[Related: Foldable robots with intricate transistors can squeeze into extreme situations.]

As detailed in a paper published on Monday with Nature Machine Intelligence, the team’s modular, origami-like Mori3 machines “can be assembled and disassembled at will depending on the environment and task at hand,” Jamie Paik, the paper’s co-author and director of EPFL’s aptly-named Reconfigurable Robotics Lab, said in a statement.

Although prices are steadily falling, space is still at a premium when traveling to, well, space. Reaching low-earth orbit via one of SpaceX’s Falcon 9 rockets, for example, can set you back approximately $1,200 per pound of payload; therefore, the more uses you can pack into a small design, the better. And a Mori3 (or, more accurately, a team of Mori3s) appears up to the challenge.

In the team’s proof of concept, Mori3 robots were able to shuffle around, handle and move objects, and interact with their users in a variety of configurations. Instead of specializing in a single task or function, Mori3 is meant as more of an all-purpose system, forming and reforming for astronauts’ various needs, from external station repairs to simply transporting materials throughout a lunar base or spacecraft.

[Related: The ISS’s latest delivery includes space plants and atmospheric lightning monitors.]

In footage first highlighted by The Daily Beast, multiple Mori3 bots are shown fusing together and altering their overall shape to form a single, walking quadrupedal machine. In another video, an array of flat, triangular Mori3s morphs and assembles itself into the same upright, three-dimensional walking robot.

“We had to rethink the way we understand robotics,” added Christoph Belke, a robotics researcher at EPFL and one of the study’s other co-authors. “These robots can change their own shape, attach to each other, communicate and reconfigure to form functional and articulated structures.”
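
One way to picture the reconfiguration bookkeeping is as a graph of modules joined edge to edge, which a planner can query before commanding a shape change. The minimal Python sketch below is our own illustration, not EPFL’s software:

from collections import defaultdict

class Assembly:
    def __init__(self):
        self.links = defaultdict(dict)   # module -> {edge: (other, other_edge)}

    def dock(self, a, edge_a, b, edge_b):
        """Join edge `edge_a` of module `a` to edge `edge_b` of module `b`."""
        self.links[a][edge_a] = (b, edge_b)
        self.links[b][edge_b] = (a, edge_a)

    def connected(self):
        """Modules reachable from the first one, e.g. to verify one body."""
        if not self.links:
            return set()
        start = next(iter(self.links))
        seen, stack = {start}, [start]
        while stack:
            for other, _ in self.links[stack.pop()].values():
                if other not in seen:
                    seen.add(other)
                    stack.append(other)
        return seen

quad = Assembly()
quad.dock("m0", 1, "m1", 0)   # edges numbered 0-2 on each triangle
quad.dock("m1", 2, "m2", 0)
print(quad.connected())       # all three modules form one connected body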

Check out videos from EPFL showcasing a single Mori3’s 2D movement, as well as a team’s combined 3D capabilities below:

[Embedded videos]

The post These 2D machines can shapeshift into moving 3D robots appeared first on Popular Science.

This lanternfly-egg-hunting robot could mean fewer bugs to squish https://www.popsci.com/technology/robot-kill-lanternfly/ Sat, 10 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=547640
Spotted lanternfly adult on leaf
That pop of color on the adult spotted lanternfly is a warning to predators—and property owners. Stephen Ausmus/USDA

It’s good to get them before they can fly away.

It’s that time of the year again. The invasive, crop-damaging spotted lanternflies are emerging, as they typically do in springtime. You may already start to see some of the polka-dotted nymphs out and about. As with the adult lanternflies, the advice from experts is to kill them on sight

But another way to prevent these pests from spreading is to scrape off and kill the egg masses that these bugs leave on wood, vehicles, and furniture. Inspecting every tree and every surface for lanternfly eggs is no fun task. That’s why a team of undergraduate engineering students at Carnegie Mellon University programmed a robot, called TartanPest, to do it. 

TartanPest was designed as a part of the Farm Robotics Challenge, where teams of students had to design a creative add-on to the preexisting tractor-like farm-ng robot in order to tackle a problem in the food and agriculture industry. 

Engineering photo
TartanPest scrubbing an egg mass off a tree. Carnegie Mellon University

[Related: Taiwan sent mosquito-fighting robots into its sewers]

Since lanternflies voraciously munch on a variety of economically important plants like hardwoods, ornamentals, and grapevines, getting rid of them before they become a problem can save farms from potential damage. The solution from the team at Carnegie Mellon is a robot arm with a machine learning-powered vision system for spotting the egg masses, and an attachment that can brush them off.

Engineering photo
TartanPest in the wild. Carnegie Mellon University

The machine learning model was trained with 700 images of lanternfly egg masses from the platform iNaturalist, where citizen scientists can upload photos of plant or wildlife observations they have made. 
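
Seven hundred images is tiny by deep learning standards, which is why the standard recipe for a project like this is transfer learning: start from a detector pretrained on a large generic dataset and retrain only its final prediction head. The PyTorch sketch below shows that recipe under our own assumptions; the CMU team’s exact model isn’t specified here:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on COCO, then swap in a two-class head.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 2  # background + egg mass
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.005, momentum=0.9
)
# Training then loops over (image, target) pairs from the labeled photos:
# losses = model(images, targets); sum(losses.values()).backward(); optimizer.step()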

Of course, TartanPest is not the first robot that helps humans not get their hands dirty (from murdering bugs). Making robots that can find and kill harmful pests on farms has long been a topic of discussion among engineers, as these could be key to decreasing the amount of pesticides used. Beyond crops, bug-terminating robots could have a place in households, too. 

But ask yourself this if you’re squeamish about robots designed to kill bugs: Would you rather have laser-wielding robots snuff out cockroaches and mosquitoes, or would you prefer to suck it up and squish them yourself?

Watch the robot at work: 

[Embedded video]

The post This lanternfly-egg-hunting robot could mean fewer bugs to squish appeared first on Popular Science.

Robot dog sniffs out fire ants without the painful sting https://www.popsci.com/technology/fire-ant-robot-dog/ Fri, 09 Jun 2023 19:00:00 +0000 https://www.popsci.com/?p=547487
Fire ants moving across ground
The robot dog identified ant hives with a 95 percent accuracy rate. Deposit Photos

Fire ants are a major nuisance, but scientists created a quadrupedal bot that can identify them better than humans.

From an ecological standpoint, most ants are great—they aerate soil, clean up organic matter, and help to spread plant seeds. Fire ants, on the other hand… well, you probably already know. The incredibly painful, invasive pests can cause serious harm to their surroundings by disrupting food chains and causing general chaos. It only takes one accidental encounter with the little monsters to know that you never want a repeat experience, but a new AI-powered robotic system could help reduce the number of painful run-ins by locating hives for eradication—no awful ant bites needed.

[Related: The terrifying way fire ants take advantage of hurricane floods.]

According to a new preprint paper highlighted on Friday by New Scientist, researchers at China’s Lanzhou University recently trained an open-source AI system on images of fire ant nests taken from varying angles and environmental conditions. From there, the engineers installed the program on a quadrupedal Xiaomi CyberDog, then tasked it with surveying 300-square-meter nursery gardens for ant mounds. Once a hive was located, the robot dog “pawed” at it to disturb its residents, after which researchers stepped in to analyze the insects’ numbers and aggression levels to distinguish regular species from the invasive fire ants.

Impressively, the team’s ant-finding robot dog far outperformed three human control surveyors, even after each received an hour of pest identification and management training. Both the robot and its human competitors searched the same nursery fields for 10 minutes, but the AI system detected three times more nests while also identifying them more accurately at a 95 percent precision rate. The search robot reportedly only fell short when it came to identifying smaller nests recently founded by a colony’s queen.
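
For reference, precision is simply the share of flagged mounds that turn out to be real fire ant nests. The counts below are invented to show the arithmetic, not the study’s raw data:

def precision(true_positives, false_positives):
    """Fraction of flagged nests that really are fire ant nests."""
    return true_positives / (true_positives + false_positives)

print(precision(57, 3))   # 57 real nests, 3 false alarms -> 0.95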

[Related: Save caterpillars by turning off your outdoor lights.]

Although the system is in its early stages, researchers say that a version running on a more advanced robot, with longer battery life, better maneuverability, and more speed, could further optimize these fire ant search-and-destroy missions.

But then again, with an estimated 20 quadrillion ants across the world, even the most advanced future ant-identifying robots will likely have their work cut out for them.

The post Robot dog sniffs out fire ants without the painful sting appeared first on Popular Science.
