Andrew Paul | Popular Science https://www.popsci.com/authors/andrew-paul/ Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That's Popular Science, 145 years strong. Tue, 07 May 2024 15:13:18 +0000

Welcome aboard the world’s first hydrogen fuel cell superyacht https://www.popsci.com/environment/hydrogen-fuel-superyacht/ Tue, 07 May 2024 15:13:18 +0000
Project 821 hydrogen fuel superyacht in port
'Project 821' took five years to build, and is currently for sale. Credit: Feadship

'Project 821' is an enticing statement piece for the aspiring, eco-conscious Bond villain.

The post Welcome aboard the world’s first hydrogen fuel cell superyacht appeared first on Popular Science.


Superyachts are notoriously dirty luxury toys, with a single billionaire’s boat emitting as much as 7,020 tons of CO2 per year. And while it’s unlikely uber-wealthy shoppers are going to forgo their statement vessels anytime soon, at the very least there’s now a chance to make superyachts greener. That’s the idea behind the new Project 821, billed as the world’s first hydrogen fuel cell superyacht.

Announced over the weekend by Dutch shipyard cooperative Feadship, Project 821 arrives following five years of design and construction. Measuring a massive 260 feet long, the zero-diesel boat reportedly sails shorter distances than standard superyachts on the market, but still runs its hotel load and amenities entirely on emissionless green hydrogen power.

Project 821 hydrogen superyacht foreshot
The superyacht’s liquid hydrogen must remain in cryogenic tanks cooled to -423.4 degrees Fahrenheit. Credit: Feadship

Hydrogen fuel cells generate power by combining extremely lightweight hydrogen with oxygen to produce electricity, which can then be stored in lithium-ion batteries. But unlike fossil fuel engines, with their noxious smoke and other pollutants, hydrogen cells emit only harmless water vapor. The technology remained cost-prohibitive and logistically challenging for years, but recent advancements have allowed designers to start integrating the green alternative into cars, planes, and boats.

There are still hurdles, however. Although lightweight, liquid hydrogen must be housed in massive, double-walled cryogenic storage tanks kept at -423.4 degrees Fahrenheit within a dedicated section of the vessel. According to Feadship, liquid hydrogen requires 8-10 times more storage space than diesel fuel for the same amount of energy. That—along with 16 fuel cells, a switchboard connection for the DC electrical grid, and water vapor emission vent stacks—necessitated adding an extra 13 feet to the vessel’s original specifications. Ironically, these size requirements arguably make superyachts such as Project 821 ideal candidates for hydrogen fuel cell integration.
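Feadship's storage figure lends itself to a quick back-of-the-envelope calculation. The Python sketch below is purely illustrative: the function name and the 10,000-liter diesel bunker are hypothetical assumptions, with only the 8-10x multiplier taken from Feadship's statement.

```python
# Rough storage-volume comparison based on the stated 8-10x figure.
# The diesel volume used here is an illustrative assumption, not a
# Project 821 specification.

def hydrogen_tank_volume(diesel_volume_liters: float, factor: float = 9.0) -> float:
    """Estimate the liquid-hydrogen tank volume needed to match the
    energy content of a given diesel volume."""
    if not 8.0 <= factor <= 10.0:
        raise ValueError("cited factor lies between 8 and 10")
    return diesel_volume_liters * factor

# A hypothetical 10,000-liter diesel bunker would need roughly
# 80,000-100,000 liters of cryogenic hydrogen storage:
low = hydrogen_tank_volume(10_000, factor=8.0)    # 80,000 L
high = hydrogen_tank_volume(10_000, factor=10.0)  # 100,000 L
```

Volume on that scale has to live somewhere, which is why the extra 13 feet of hull and a dedicated cryogenic compartment fit more naturally on a 260-foot superyacht than on a smaller vessel.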

Hydrogen superyacht aft image
Although emissionless, ‘Project 821’ is still not capable of standard-length voyages. Credit: Feadship

And it certainly sounds like Project 821 fulfills the “superyacht” prerequisites, with five decks above the waterline and two below it. The 14 balconies and seven fold-out platforms also house a pool, Jacuzzi, steam room, two bedrooms, two bathrooms, gym, pantry, fireplace-equipped offices, living room, library, and a full walkaround deck.

Such luxuries, however, will need to remain relatively close to harbor for the time being. Project 821 still isn’t capable of generating and storing enough power to embark on lengthy crossings, but it can handle an “entire week’s worth of silent operation at anchor or [briefly] navigating emission-free at 10 knots while leaving harbors or cruising in protected marine zones,” according to Feadship.

[Related: This liquid hydrogen-powered plane successfully completed its first test flights.]

“We have now shown that cryogenic storage of liquified hydrogen in the interior of a superyacht is a viable solution,” Feadship Director and Royal Van Lent Shipyard CEO Jan-Bart Verkuyl said in the recent announcement, adding that “additional fuel cell innovations… are on the near horizon.”

Of course, the greenest solution remains completely divesting from ostentatious, multimillion-dollar vanity flotillas before rising sea levels (and angry orcas) overwhelm even the wealthiest billionaires’ harbors. But it’s at least somewhat nice to see a new eco-friendly advancement on the market—even if it still looks like a Bond villain’s getaway vehicle.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

SpaceX reveals new sleek spacesuits ahead of upcoming historic mission https://www.popsci.com/science/spacex-eva-suits/ Mon, 06 May 2024 18:11:09 +0000
SpaceX EVA suit helmet close up
The EVA suit helmet is 3D printed from polycarbonate materials. SpaceX

The Extravehicular Activity (EVA) suits will be worn during the Polaris Dawn spacewalk and feature HUD visor displays.


SpaceX has revealed its new Extravehicular Activity (EVA) suits, which could make their low-Earth orbital debut by summer’s end. The new uniform is described as an evolution of the spacesuits currently worn by astronauts aboard Dragon missions, which are designed solely for use within pressurized environments. In contrast, the EVA suits will allow astronauts to work both inside and outside their capsule as needed, thanks to advancements in materials fabrication and joint design, enhanced redundancy safeguards, and an integrated helmet-visor heads-up display (HUD).

Announced over the weekend, the SpaceX EVA suits will be worn by the four crewmembers scheduled to comprise the Polaris Program’s first mission, Polaris Dawn. First announced in 2022, the Polaris Program is a joint venture with SpaceX intended to “rapidly advance human spaceflight capabilities,” according to its website. Targeted for no earlier than summer 2024, Polaris Dawn will mark the first commercial spacewalk, as well as the first spacewalk to simultaneously include four astronauts. While making history outside their Dragon capsule, the crew will also be the first to test Starlink laser-based communications systems that SpaceX believes will be critical to future missions to the moon and, eventually, Mars.

Polaris Dawn astronaut crew wearing EVA suits
Polaris Dawn’s four astronauts will conduct their mission no earlier than summer 2024. SpaceX

Mobility is the central focus of SpaceX’s teaser video posted to X on May 4, with an EVA suit wearer showing off smooth ranges of motion for fingers, shoulders, and elbows. As PCMag also detailed on Monday, SpaceX EVA suits are fabricated from a variety of textile-based thermal materials and include semi-rigid rotator joints that allow work in both pressurized and unpressurized environments. For the boots, designers utilized the same temperature-resilient material found in the Falcon 9 rocket’s interstage and the Dragon capsule’s trunk.

Polaris Dawn astronauts will also sport 3D-printed polycarbonate helmets with visors coated in copper and indium tin oxide alongside anti-glare and anti-fog treatments. During the spacewalk, roughly 435 miles above Earth, each crewmember’s helmet will feature a built-in heads-up display (HUD) providing real-time pressure, temperature, and relative humidity readings.

[Related: Moon-bound Artemis III spacesuits have some functional luxury sewn in.]

Similar to the Prada-designed getups for NASA’s Artemis III astronauts, the SpaceX EVA suit is also meant to illustrate a future in which all kinds of body types can live and work beyond Earth. SpaceX explains that all the EVA upgrades are scalable in design, which will allow customization to accommodate “different body types as SpaceX seeks to create greater accessibility to space for all of humanity.” Its proposed goal of manufacturing “millions” of spacesuits for multiplanetary life may seem far-fetched right now, but it’s got to start somewhere—even if only just four of them at the moment.

Ancient mystery code was probably Sargon II’s name https://www.popsci.com/science/ancient-mystery-code-sargon/ Mon, 06 May 2024 14:49:44 +0000
Assyrian mural image of lion
Late 19th century drawing of an Assyrian lion symbol published by French excavator Victor Place. New York Public Library

A lion, an eagle, a bull, a fig tree, and a plow all came together to point to one of Mesopotamia's greatest rulers.


King Sargon II was a big fan of seeing his name around town—at least, that’s what one expert believes after reviewing a series of repeating mystery images that have confounded researchers for well over a century.

Ruler of the Neo-Assyrian empire from 721 to 705 BCE, Sargon II oversaw huge portions of ancient Mesopotamia, and is considered one of the era’s greatest military strategists. By the time of his death in 705 BCE, the king had either conquered or neutralized all his major political threats, a feat celebrated by his establishment of a new Assyrian capital in present-day Khorsabad, Iraq, called Dūr-Šarrukīn, or “Fort Sargon,” in 706 BCE.

Excavations of the city during the late nineteenth century revealed a sequence of five symbols repeated across multiple temples throughout Dūr-Šarrukīn—a lion, an eagle, a bull, a fig tree, and a plow. In some cases, however, similar art uses just the lion, tree, and plow. Although the images appear similar to Egyptian hieroglyphics, the Assyrian empire during Sargon II’s reign had long used non-pictorial cuneiform for written communication. Because of this, researchers have spent years theorizing about what the five images might represent. Given Sargon II’s regal ego, historians previously surmised the art could represent his name in some form, but weren’t clear how that could be the case.

Eagle and bull Assyrian art
Sargon II’s eagle and bull artwork depicted by French excavator Victor Place. New York Public Library

“The study of ancient languages and cultures is full of puzzles of all shapes and sizes, but it’s not often in the Ancient Near East that one faces mystery symbols on a temple wall,” Martin Worthington, a Trinity College Dublin professor specializing in ancient Mesopotamian languages and civilizations, said in a recent statement.

But according to Worthington, the answer is relatively simple and characteristic of the time. In his new paper published in the Bulletin of the American Schools of Oriental Research, Worthington argues the five images, when sounded out in ancient Assyrian, approximate “šargīnu,” or Sargon. Even when just the trio of pictures appears, their combination phonetically still resembles a shortened form of “Sargon.” Combined with the religious undertones of Assyrian constellations, Worthington contends the king was intent on making sure everyone knew just how great and powerful he was. 

“The effect of the symbols was to assert that Sargon’s name was written in the heavens, for all eternity, and also to associate him with the gods Anu and Enlil, to whom the constellations in question were linked,” he writes in his new paper’s abstract. “It is further suggested that Sargon’s name was elsewhere symbolized by a lion passant (pacing lion), through a bilingual pun.”

[Related: How cryptographers finally cracked one of the Zodiac Killer’s hardest codes.]

“[It was] a clever way to make the king’s name immortal,” Worthington added in Trinity College Dublin’s announcement. “And, of course, the idea of bombastic individuals writing their name on buildings is not unique to ancient Assyria.”

Fig tree and plow Assyrian art
Fig tree and plow depicted by French excavator Victor Place. New York Public Library

Of course, given these are millennia-old metaphors sans concrete language reference points, it’s arguably impossible to state without a doubt these were Sargon’s regal brag banners. Cuneiform used at the time didn’t rely on literal pictures, and no codex is available to match the temple art with any translation. That said, Worthington believes the underlying logic, combined with Assyrian cultural reference points, makes a pretty convincing argument.

“I can’t prove my theory, but the fact it works for both the five-symbol sequence and the three-symbol sequence, and that the symbols can also be understood as culturally appropriate constellations, strikes me as highly suggestive,” Worthington said. “The odds against it all being happenstance are—forgive the pun—astronomical.”

Many rural areas could soon lose cell service https://www.popsci.com/technology/rural-cell-loss/ Fri, 03 May 2024 17:44:33 +0000
Telecom towers in farmland
The FCC says another $3 billion is needed to fully fund 'rip-and-replace' programs. Deposit Photos

States such as Tennessee, Kansas, and Oklahoma could be affected unless 'rip-and-replace' funding is secured.


Rural and Indigenous communities are at risk of losing cell service thanks to a federal mandate to strip US telecom networks of Chinese-made equipment. And while local companies were promised reimbursements as part of the “rip-and-replace” program, many of them have so far seen little of the funding, if any at all.

The federal push to block Chinese telephone and internet hardware has been years in the making, but gained substantial momentum during the Trump administration. In May 2019 an executive order barred American providers from purchasing telecom supplies manufactured by businesses within a “foreign adversary” nation. Industry and government officials have argued China might use products from companies like Huawei and ZTE to tap into US telecom infrastructure. Chinese company representatives have repeatedly pushed back on these claims and it remains unclear how substantiated these fears are.

[Related: 8.3 million places in the US still lack broadband internet access.]

As The Washington Post explained on Thursday, major network providers like Verizon and Sprint have long banned the use of Huawei and ZTE equipment. But for many smaller companies, Chinese products and software are the most cost-effective routes for maintaining their businesses.

Meanwhile, “rip-and-replace” program plans have remained in effect through President Biden’s administration—but little has been done to help smaller US companies handle the intensive transition efforts. In a letter to Congress on Thursday, FCC Chairwoman Jessica Rosenworcel explained an estimated 40 percent of local network operators currently cannot replace their existing Huawei and ZTE equipment without additional federal funding. Although $1.9 billion is currently appropriated, revised FCC estimates say another $3 billion is required to cover nationwide rip-and-replace costs.

Congress directed the FCC to begin a rip-and-replace program through the passage of the 2020 Secure and Trusted Communications Networks Act, but it wasn’t long before officials discovered the $3 billion shortfall. At the time, the FCC promised small businesses 39.5 percent reimbursements for their overhauls. Receiving that money subsequently triggered a completion deadline, but the remaining 60.5 percent of funding has yet to materialize for most providers. Last week, Sen. Maria Cantwell (D-WA) announced the Spectrum and National Security Act, which includes a framework to raise the additional $3 billion needed for program participants.

In her letter to Congress on Thursday, Rosenworcel said providers have between May 29, 2024, and February 4, 2025, to complete their transitions, depending on when they first received the partial funding. Rosenworcel added that at least 52 extensions have already been granted to businesses due in part to funding problems. Earlier this year, the FCC reported only five program participants had fully completed their rip-and-replace plans.

It’s unclear how much of the US would be affected by the potential loss of coverage. To qualify for reimbursement funding, a telecom company must serve fewer than 2 million customers. On Thursday, The Washington Post cited qualifying companies across much of the nation, including in Alaska, Colorado, Michigan, Missouri, New Mexico, Tennessee, Kansas, and Oklahoma.

“The Commission stands ready to assist Congress in any efforts to fully fund the Reimbursement Program,” Rosenworcel said yesterday.

China is en route to collect first-ever samples from the far side of the moon https://www.popsci.com/science/china-moon-launch/ Fri, 03 May 2024 14:20:28 +0000
A Long March 5 rocket, carrying the Chang'e-6 mission lunar probe, lifts off as it rains at the Wenchang Space Launch Centre in southern China's Hainan Province on May 3, 2024. Credit: HECTOR RETAMAL/AFP via Getty Images

Chang'e-6 spacecraft's payoff could be historic.


China launched its uncrewed Chang’e-6 lunar spacecraft at 5:27 PM local time (5:27 AM EDT) on Friday from the southern island province of Hainan, accelerating its ongoing space race with the US. If successful, a lander will detach upon reaching lunar orbit and descend to the surface to scoop up samples from the expansive South Pole-Aitken basin impact crater. Once finished, the lander will launch back up to Chang’e-6, dock, and return to Earth with the first-of-its-kind samples in tow. All told, the mission should take roughly 53 days to complete.

China’s potential return to the moon marks a significant development in international efforts to establish a permanent presence there. As the US moves forward with its Artemis program missions alongside assistance from Japan and commercial partners, China and Russia are also seeking to build their own lunar research station. Whichever nation establishes itself first could shape the future of moon exploration, resource mining, and scientific progress.

[Related: Why do all these countries want to go to the moon right now? ]

The China National Space Administration’s (CNSA) previous Chang’e-5 mission successfully landed a spacecraft at a volcanic plain on the moon’s near side, but Chang’e-6 aims to take things further, both technologically and logistically. To pull off a far side feat, CNSA mission controllers will need to use a satellite already in orbit around the moon to communicate with Chang’e-6 once its direct relay becomes blocked. But if they can manage it, the payoff will be substantial.

As NBC News explained Friday, the moon’s far side is much less volcanically active than its near side. Since all previous lunar samples have come from the near side, experts believe retrieving new samples elsewhere will help increase their understanding of the moon’s history, as well as potential information on the solar system’s origins.

NASA most likely still has an edge when it comes to returning actual humans to the moon, however. Even with recent mission delays, Artemis III astronauts are currently scheduled to reach the likely ice-laden lunar south pole by 2026. China does not expect to send its own taikonauts to the moon until at least 2030, and its joint research station with Russia remains in the conceptual phase.

That same year will also mark the official decommissioning of the International Space Station. After NASA remotely guides it into a fiery re-entry through Earth’s atmosphere, the only remaining orbital station will be China’s three-module Tiangong facility.

In an interview with Yahoo Finance earlier this week, NASA Administrator Bill Nelson didn’t mince words about the potential ramifications of who sets up on the moon first.

“I think it’s not beyond the pale that China would suddenly say, ‘We are here. You stay out,’” Nelson said at the time. “That would be very unfortunate—to take what has gone on on planet Earth for years, grabbing territory, and saying it’s mine and people fighting over it.”

Space Force finds a dead Cold War-era satellite missing for 25 years https://www.popsci.com/science/lost-satellite-found/ Thu, 02 May 2024 18:16:29 +0000
Sun above earth photo taken from ISS
The S73-7 Infra-Red Calibration Balloon was already lost once before since it first launched in 1974. NASA/JSC

It's not the first time the tiny spy balloon has disappeared.


The US Space Force located a tiny experimental satellite after it spent two-and-a-half decades missing in orbit. Hopefully, they’ll be able to keep an eye on it for good—unlike the last time.

The S73-7 Infra-Red Calibration Balloon (IRCB) was dead on arrival after ejecting from one of the Air Force’s largest Cold War orbital spy camera systems. Although it successfully departed the KH-9 Hexagon reconnaissance satellite about 500 miles above Earth in 1974, the S73-7 failed to inflate to its full 26-inch diameter. The malfunction prevented ground-based equipment from using it as a calibration target for remote sensing arrays, rendering it yet another hunk of space junk.

It wasn’t long afterwards that observers lost sight of the IRCB, only to locate the small satellite once again in the early 1990s. Then they managed to lose it again. Now, after another 25 years, the US Space Force’s 18th Space Defense Squadron has rediscovered the experimental device.

Confirmation came through a recent post on X from Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, who offered his “congrats to whichever… analyst made the identification.”

So how does a satellite disappear for years on end not once, but twice? It’s actually much easier than you might think. As Gizmodo explained on May 1, over 27,000 objects are currently in orbit, most of which are spent rocket boosters. These, along with various satellites, don’t transmit any sort of identification back to Earth. Because of this, tracking systems must match a detected object to a satellite’s predictable orbital path in order to ID it.

[Related: Some space junk just got smacked by more space junk, complicating cleanup.]

If you possess relatively up-to-date radar data, and there aren’t many contenders in a similar orbit, it usually isn’t hard to pinpoint satellites. But the more crowded an area, the more difficult matching becomes, especially if you haven’t seen your target in a while—say, a miniature Infra-Red Calibration Balloon from the 1970s.
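That matching process can be pictured as a nearest-neighbor search against a catalog of predicted positions. The Python sketch below is purely illustrative, with made-up coordinates, names, and a hypothetical tolerance; it is not how the 18th Space Defense Squadron's actual software works.

```python
# Illustrative sketch: match a newly detected object to a catalog entry
# by comparing the observation against each satellite's predicted
# position at the observation time. All values here are invented.
import math

def nearest_catalog_match(observed, predictions, tolerance_km=25.0):
    """observed: (x, y, z) in km; predictions: dict of name -> (x, y, z).
    Returns the best-matching name, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, pos in predictions.items():
        dist = math.dist(observed, pos)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= tolerance_km else None

# Two candidates near the detection show why crowded orbits cause
# ambiguity: a small prediction error can flip the identification.
catalog = {"S73-7": (7178.0, 0.0, 0.0), "debris-42": (7178.0, 120.0, 0.0)}
print(nearest_catalog_match((7180.0, 3.0, 0.0), catalog))  # "S73-7"
```

Real systems fold in full orbital elements and observation history rather than a single snapshot, but the core idea is the same: no transponder, so identity comes from agreement between detection and prediction.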

It’s currently unclear exactly what information tipped off the Space Force to match its newly detected object with the S73-7, but regardless, the satellite is now trackable above everyone’s heads again. In all that time, McDowell’s data indicates the balloon has only descended roughly 9 miles from its original 500-mile altitude, so it will be a while before it succumbs to gravity and burns up in the atmosphere. Accounting for everything in orbit may sometimes be taken for granted, but it’s a vital component of humanity’s increasing reliance on satellite arrays, as well as the overall future of space travel.

Orangutan observed using a plant to treat an open wound https://www.popsci.com/environment/ape-treat-wound-plant/ Thu, 02 May 2024 15:00:00 +0000
Close up of orangutan
How the great ape first learned to use the plant is still unclear. Deposit Photos

It's the first time this behavior was observed in the animal world.


Observers have documented multiple animal species using plants for self-medicinal purposes, such as great apes eating plants that treat parasitic infections or rubbing vegetation on sore muscles. But a wild orangutan recently displayed something never observed before—he treated his own open wound by activating a plant’s medicinal properties with his saliva. As detailed in a study published May 2 in Scientific Reports, evolutionary biologists believe the behavior could point toward a common ancestor shared with humans.

The discovery occurred within a protected Indonesian rainforest at the Suaq Balimbing research site. This region, currently home to roughly 150 critically endangered Sumatran orangutans, is utilized by an international team of researchers from the Max Planck Institute of Animal Behavior to monitor the apes’ behavior and wellbeing. During their daily observations, cognitive and evolutionary biologists noticed a sizable injury on the face of one of the local males named Rakus. Such wounds are unsurprising among the primates, since they frequently spar with one another—but then Rakus did something three days later that the team didn’t expect.

After picking leaves from a native plant known as Akar Kuning (Fibraurea tinctoria)—well known for its anti-inflammatory, anti-fungal, and antioxidant properties, as well as its use in traditional malaria medicines—Rakus began to chew them into a paste. He then rubbed it directly onto his facial injury for several minutes before covering the wound entirely with the mixture. Over the next few days, researchers noted the self-applied natural bandage kept the wound from showing signs of infection or worsening. Within five days, the injury scabbed over before healing entirely.

Such striking behavior raises a number of questions, particularly how Rakus first learned to treat his face using the plant. According to study senior author Caroline Schuppli, one possibility is that it simply comes down to “individual innovation.”

“Orangutans at [Suaq] rarely eat the plant,” she said in an announcement. “However, individuals may accidentally touch their wounds while feeding on this plant and thus unintentionally apply the plant’s juice to their wounds. As Fibraurea tinctoria has potent analgesic effects, individuals may feel an immediate pain release, causing them to repeat the behavior several times.”

[Related: Gorillas like to scramble their brains by spinning around really fast.]

If this were the case, it could be that Rakus is one of the few orangutans to have discovered the benefits of Fibraurea tinctoria. At the same time, adult orangutan males never live where they were born—they migrate sizable distances either during or after puberty to establish new homes. So it’s also possible Rakus may have learned this behavior from his relatives, but given observers don’t know where he is originally from, it’s difficult to follow up on that theory just yet.

Still, Schuppli says other “active wound treatment” methods have been noted in other African and Asian great apes, even when they aren’t used to disinfect or help heal an open wound. Knowing that, “it is possible that there exists a common underlying mechanism for the recognition and application of substances with medical or functional properties to wounds and that our last common ancestor already showed similar forms of ointment behavior.”

Given how much humans already have in common with their great ape relatives, it’s easy to see how this could be a likely explanation. But regardless of how Rakus knew how to utilize the medicinal plant, if he ever ends up scrapping with another male orangutan again, he’ll at least know how to fix himself up afterwards.

Watch a tech billionaire talk to his AI-generated clone https://www.popsci.com/technology/ai-clone-interview/ Wed, 01 May 2024 19:12:52 +0000
Side by side of Reid AI deepfake and Reid Hoffman
Both Hoffmans appear to miss the larger point during their lengthy interview. YouTube

The deepfake double picks its nose in a very weird interview.


Billionaire LinkedIn co-founder Reid Hoffman recently released a video ‘interview’ with his new digital avatar, Reid AI. Built on a custom GPT trained on two decades’ worth of Hoffman’s books, articles, speeches, interviews, and podcasts, Reid AI uses speech and video deepfake technology to create a digital clone that approximates its source subject’s mannerisms and conversational tone. For over 14 minutes, you can watch the two Hoffmans gaze lovingly and dead-eyed, respectively, into the tech industry’s uncanny navel. In doing so, viewers aren’t offered a “way to be better, to be more human,” as the real Hoffman argues—but a path toward a misguided, dangerous, unethical, and hollow future.


Many people might shudder at the idea of unleashing a talking, animated AI avatar of themselves into the world, but the tech utopian “city of yesterday” investor sounds absolutely jazzed about it. According to an April 24 blog post, he finds the whole prospect so “interesting and thought-provoking,” in fact, that he recently partnered with generative AI video company Hour One and the AI audio startup ElevenLabs to make it happen. (If that latter name sounds familiar, it’s because ElevenLabs’ product is what scammers misused to create those audio deepfake Biden robocalls earlier this year.)

After teasing a showcase of his digital clone for months, Hoffman finally revealed a (heavily edited) video conversation between himself and “Reid AI” last week. And what does the cutting-edge, deepfake-animated culmination of a custom-built GPT-4 chatbot reportedly trained on all things Hoffman actually have to say? A solid question—and one that isn’t easy to answer after watching the surreal, awkward, and occasionally unhygienic simulated interaction.

“Why would I want to be interviewed by a digital version of myself?” Hoffman posits at the video’s outset. First and foremost, it’s apparently to summarize one of his books for an array of potential audience demographics: the smartest person in the world, 5-year-old children, Seinfeld fans, and Klingons. While Hoffman seems to love each subsequent Blitzscaling encapsulation (particularly the “smartest person” one), each sounds like it came from a ChatGPT prompt—which, technically, it did. The difference here is that, instead of only a text answer, the words get a Hoffman vocal approximation layered atop a (still clearly artificial) video rendering of the man.

Amidst all his excitement, Hoffman—like so many influential tech industry figures—yet again betrays a fundamental misunderstanding of how generative AI works. Technology like OpenAI’s GPT, no matter how gussied up with visual and audio additions, is not capable of comprehension. When an AI responds, “Thank you” or “I think that’s a great point,” it doesn’t actually experience gratitude or think anything. Generative AI treats sentences as sequences of symbols, each letter or space followed by the next, most probable letter or space. This can be adapted into conversational audio and dubbed onto video personas, but that doesn’t change the underlying functionality. The system simply receives new symbolic input that feeds what basically amounts to a superpowered autocorrect—even if its language is set to Klingon, as Reid AI offers at one point.
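To see what “superpowered autocorrect” means in practice, here is a deliberately tiny, hypothetical sketch: a character-level model that only ever emits the statistically most probable next symbol. Real systems operate on tokens with learned probabilities rather than raw character counts, but the generate-the-likely-continuation loop is the same basic idea.

```python
from collections import Counter, defaultdict

# Toy character-level "language model": count which symbol most often
# follows each symbol in a tiny corpus, then extend text by repeatedly
# emitting the most probable successor. No comprehension anywhere --
# just symbol statistics, applied greedily.
corpus = "thank you. i think that is a great point. thank you again."

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_probable_next(symbol):
    # The single most frequent successor of `symbol` in the corpus.
    return follows[symbol].most_common(1)[0][0]

def autocomplete(seed, length=12):
    # Greedily append the likeliest continuation, one symbol at a time.
    text = seed
    for _ in range(length):
        text += most_probable_next(text[-1])
    return text

print(most_probable_next("t"))  # 'h' -- "th" dominates this corpus
```

Scaled up to billions of parameters and trained on vast text corpora, this kind of continuation machinery can sound gracious or thoughtful without ever being either.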

So when Reid AI warns Hoffman a wrong answer may result “because I misinterpreted the information you gave, or I don’t have the full context of your question,” Hoffman doesn’t pause to explain any of the above facts for viewers. He instead moves along to his next conversation point, which usually involves a plug for his books or LinkedIn.

[Related: A deepfake ‘Joe Biden’ robocall told voters to stay home for primary election.]

Meanwhile, Reid AI’s visual component is supposedly meant to simulate many of Hoffman’s conversational mannerisms and cues. Judging from Reid AI’s performance, these largely boil down to stilted attempts at “nodding vigorously,” “emphatically tapping to illustrate a point,” and “picking his nose.” As New Atlas points out, the moment at 10:44 is an odd quirk to include in such a clearly condensed and edited video—perhaps meant to illustrate some of humanity’s more awkward, relatable traits. If so, it does little to distract from the far more absurd and troubling sentiments voiced by both Hoffmans.

Reid AI expounds on boilerplate techno-libertarian talking points for fostering a “framework that fuels innovation.” Hoffman repeatedly opines that anyone worried about bias, privacy, labor, or digital ownership is just “start[ing] with the negative and [not realizing] all the things that are positive.” The digital clone regurgitates bland, uncreative ways to spruce up Hoffman’s LinkedIn page, like adding “personal flair” such as a fun and colorful header image.

Reid AI and Reid Hoffman side by side
Credit: YouTube

But the most worrisome moment arrives when Hoffman contends “Everyone should be asking themselves, ‘What can I do to help?’” make AI like digital avatars more commonplace. He even goes so far as to equate the current technological era to Europe’s adoption of the steam engine, which made it “such a dominant force in the entire world.” (Neither he nor Reid AI acknowledges the other forces behind the industrial revolution, of course—namely a colonialist system built on the labor of millions of exploited and enslaved people.)

Hoffman says future iterations of Reid AI will add “to the range of capabilities, of things that I could do.” It’s an extremely telling sentiment—one implying people like Hoffman have no qualms with erasing any demarcation between their cloned and authentic selves. If nothing else, Hoffman has already found at least one task Reid AI can handle for him.

“I am curious to know what others’ thoughts are on how to mitigate impersonation and all other types of risks stemming from such a use-case and achieve ‘responsible AI,’” one LinkedIn user asked him in his announcement post’s comments.

“Great question… Here is Reid AI’s answer,” Hoffman responded alongside a link to a new avatar clip.


]]>
JWST measures ‘Hot Jupiter,’ a distant exoplanet hot enough to forge iron https://www.popsci.com/science/jwst-wasp-43b/ Wed, 01 May 2024 15:00:48 +0000 https://www.popsci.com/?p=613154
Artist rendering of exoplanet WASP-43b
This artist’s concept shows what the hot gas-giant exoplanet WASP-43 b could look like. A Jupiter-sized planet roughly 280 light-years away, the planet orbits its star at a distance of about 1.3 million miles, completing one circuit in about 19.5 hours. Credit: NASA, ESA, CSA

Blazing temperatures and supersonic winds rule WASP-43b.

The post JWST measures ‘Hot Jupiter,’ a distant exoplanet hot enough to forge iron appeared first on Popular Science.

]]>

NASA’s James Webb Space Telescope isn’t only snapping some of the most detailed images of our cosmos—it’s also helping an international team of astronomers determine the weather on planets trillions of miles away from Earth. Its latest subject, WASP-43b, appears to live up to its extremely heavy metal-sounding name.

Astronomers discovered WASP-43b back in 2011, but initially could only assess some of its potential conditions using the Hubble and now-retired Spitzer space telescopes. That said, it was immediately clear that the gas giant is a scorcher. According to their measurements, the planet orbits its star at just 1.3 million miles away. For comparison, that’s not even 1/25th the distance separating Mercury from the sun. WASP-43b is also tidally locked in its orbit, meaning that one side is always facing its star while the other half is constantly cloaked in darkness.

Chart of WASP-43b phase curve from low-resolution spectroscopy
Data from the Mid-Infrared Instrument on NASA’s Webb telescope shows the changing brightness of the WASP-43 star and planet system. The system appears brightest when the hot dayside of the planet is facing the telescope, and grows dimmer as the planet’s nightside rotates into view. Credit: Taylor J. Bell (BAERI); Joanna Barstow (Open University); Michael Roman (University of Leicester) Graphic Design: NASA, ESA, CSA, Ralf Crawford (STScI)

But at 280 light-years away and practically face-to-face with its star, WASP-43b is difficult to see clearly through telescopes. To get a better look, experts enlisted JWST’s Mid-Infrared Instrument (MIRI) to measure extremely small fluctuations in the brightness emitted by the WASP-43 system every 10 seconds for over 24 hours.

“By observing over an entire orbit, we were able to calculate the temperature of different sides of the planet as they rotate into view. From that, we could construct a rough map of temperature across the planet,” Taylor Bell, a researcher at the Bay Area Environmental Research Institute and the lead author of a study published yesterday in Nature Astronomy, said in Tuesday’s announcement.
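The technique Bell describes—turning a long run of brightness samples into a phase-resolved temperature picture—can be sketched with synthetic numbers. Everything below (the sinusoidal “signal,” the bin count) is a hypothetical stand-in for the real MIRI pipeline; only the 19.5-hour period and 10-second cadence come from the article.

```python
import math

# Toy phase-curve analysis: sample a planet's brightness every 10 seconds
# across one 19.5-hour orbit, bin the samples by orbital phase, and read
# off which phases are brightest (dayside rotating into view) and dimmest
# (nightside in view).
PERIOD_S = 19.5 * 3600   # WASP-43b's orbital period, in seconds
CADENCE_S = 10           # one brightness measurement every 10 seconds

samples = []
t = 0
while t < PERIOD_S:
    phase = t / PERIOD_S
    # Synthetic signal: brightness peaks at phase 0.5, when the hot
    # dayside faces the telescope.
    brightness = 1.0 + 0.1 * math.cos(2 * math.pi * (phase - 0.5))
    samples.append((phase, brightness))
    t += CADENCE_S

# Average the samples into 20 phase bins, then locate the extremes.
NUM_BINS = 20
bins = [[] for _ in range(NUM_BINS)]
for phase, b in samples:
    bins[int(phase * NUM_BINS)].append(b)
means = [sum(b) / len(b) for b in bins]

day_bin = means.index(max(means))     # brightest bin sits near phase 0.5
night_bin = means.index(min(means))   # dimmest bin sits near phase 0.0
```

In the real analysis, each binned brightness is then converted into a temperature for the slice of the planet facing the telescope at that phase, which is what produces the temperature maps shown here.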

[Related: JWST images show off the swirling arms of 19 spiral galaxies.]

Some of those temperatures are blazing enough to forge iron, with WASP-43b’s dayside averaging almost 2,300 degrees Fahrenheit. And while the nightside is a balmier 1,100 degrees Fahrenheit, that’s still only about 120 degrees short of the melting point for aluminum.

MIRI’s broad spectrum mid-infrared light data, paired alongside additional telescope readings and 3D climate modeling, also allowed astronomers to measure water vapor levels around the planet. With this information, the team could better calculate WASP-43b’s cloud properties, including their thickness and height.

Temperature map diagram for WASP-43b
This set of maps shows the temperature of the visible side of the hot gas-giant exoplanet WASP-43 b as it orbits its star. The temperatures were calculated based on more than 8,000 brightness measurements by Webb’s MIRI (the Mid-Infrared Instrument). Credit: Science: Taylor J. Bell (BAERI); Joanna Barstow (Open University); Michael Roman (University of Leicester) Graphic Design: NASA, ESA, CSA, Ralf Crawford (STScI)

The light data also revealed something striking about the gas giant’s atmospheric conditions—a total lack of methane, which astronomers previously hypothesized might be detectable, at least on the nightside. This implies that equatorial winds of nearly 5,000 mph must routinely whip around WASP-43b, fast enough to prevent the chemical reactions necessary to produce detectable levels of methane.

“With Hubble, we could clearly see that there is water vapor on the dayside. Both Hubble and Spitzer suggested there might be clouds on the nightside,” Bell said on Tuesday. “But we needed more precise measurements from Webb to really begin mapping the temperature, cloud cover, winds, and more detailed atmospheric composition all the way around the planet.”


]]>
Boston Dynamics gives Spot bot a furry makeover https://www.popsci.com/technology/furry-boston-dynamics-spot/ Tue, 30 Apr 2024 19:04:16 +0000 https://www.popsci.com/?p=613083
Boston Dynamics Spot robot in puppet dog costume sitting next to regular Spot robot.
That's certainly one way to honor 'International Dance Day.'. Boston Dynamics/YouTube

'Sparkles' shows off the latest in robo-dog choreography.

The post Boston Dynamics gives Spot bot a furry makeover appeared first on Popular Science.

]]>

Boston Dynamics may have relocated the bipedal Atlas to a nice farm upstate, but the company continues to let everyone know its four-legged line of Spot robots has a lot of life left. And after years of obvious dog-bot comparisons, Spot’s makers finally went ahead and commissioned a full cartoon canine getup for their latest video showcase. Sparkles is here, and like the rest of its Boston Dynamics family, it’s perfectly capable of cutting a rug.


Unlike, say, a mini Spot programmed to aid disaster zone search-and-rescue efforts or explore difficult-to-reach areas in nuclear reactors, Sparkles appears designed purely to offer viewers some levity. According to Boston Dynamics, the shimmering, blue, Muppet-like covering is a “custom costume designed just for Spot to explore the intersections of robotics, art, and entertainment” in honor of International Dance Day. In the brief clip, Sparkles can be seen performing a routine alongside a more standardized mini Spot, sans any extra attire.

But Spot bots such as this duo aren’t always programmed to dance for humanity’s applause—their intricate movements highlight the complex software built to take advantage of the machine’s overall maneuverability, balance, and precision. In this case, Sparkles and its partner were trained using Choreographer, a dance-dedicated system made available by Boston Dynamics with entertainment and media industry customers in mind.

[Related: RIP Atlas, the world’s beefiest humanoid robot.]

With Choreographer, Spot owners don’t need a degree in robotics or engineering to get their machines to move in rhythm. They can select from “high-level instruction” options rather than keying in specific joint-angle and torque parameters. Even if one of Boston Dynamics’ robots running Choreographer can’t quite pull off a user’s routine, it is coded to approximate the request as best it can.

“If asked to do something physically impossible, or if faced with an environmental challenge like a slippery floor, Spot will find the possible motion most similar to what was requested and do that instead—analogously to what a human dancer would do,” the company explains.

Choreographer is behind some of Boston Dynamics’ most popular demo showcases, including the BTS dance-off and “Uptown Funk” videos. It’s nice to see the robots’ moves are consistently improving—but maybe nicer still is that, for at least one more video, people don’t need to think about a gun-toting dog bot. Or even what’s in store for humanity after that two-legged successor to Atlas finally hits the market.


]]>
Surprise! That futuristic COVID mask was even sketchier than we thought https://www.popsci.com/health/razer-zephyr-covid-refund/ Tue, 30 Apr 2024 15:53:07 +0000 https://www.popsci.com/?p=612975
Woman wearing Razer Zephyr Mask
The Razer Zephyr base model sold for $99. Credit: Razer

Razer owes $1 million in refunds for false N95 claims about Zephyr.

The post Surprise! That futuristic COVID mask was even sketchier than we thought appeared first on Popular Science.

]]>

The Federal Trade Commission has ordered Razer to issue over $1.1 million in full refunds for its Razer Zephyr facemasks after alleging the PC gaming accessory company falsely billed its futuristic “wearable air purifier” as equivalent to N95-grade respirators. In truth, the FTC says Zephyr’s makers never even submitted their product for testing to either the FDA or the National Institute for Occupational Safety and Health (NIOSH). 

Razer is best known for its sleek, futuristic, luminescent video gaming accessories—but during the height of COVID-19, the company specializing in RGB backlit keyboards and headphones thought it wise to wade into pandemic healthcare. Released in October 2021 following nearly a year of internet hype, the Razer Zephyr looked more like a cyberpunk cosplay accessory than an actual “wearable air purifier.” Still, the transparent, twin-fan mask included three replaceable filters that supposedly functioned together as the equivalent of existing N95-grade products.

Outlets approached the odd healthcare accessory with a mix of anticipation and skepticism after plans were revealed in January 2021, later considered the pandemic’s deadliest month in the US. In the months leading up to its official launch, Razer co-founder and CEO Min-Liang Tan repeatedly posted on social media “linking the mask to the rise of the COVID-19 Delta variant, making explicit health claims, positioning the mask as a reusable N95, and claiming that Razer was seeking certification… [but] knew that they had never sought—and were not seeking—such certification,” according to the FTC’s complaint.

[Related: Calling TurboTax ‘free’ is ‘deceptive advertising,’ says FTC.]

To qualify for N95 certification, filters must guard against at least 95 percent of ambient air particles between 0.1 and 0.3 micrometers in size, while also providing higher filtration rates for larger particulates. Although COVID-19 virus particles measure around just 0.1 micrometers or smaller, they are always bonded to larger bodies such as water droplets and other biological material, and thus are effectively blocked by N95-rated masks and filters.

Razer consulted with a Singapore-based quality assurance company during Zephyr’s development, and in emails wrote they intended to market the wearable as “N95 grade.” Subsequent reviews showed Razer’s design only achieved around 83 percent particulate filtration efficiency (PFE) while its fans were off, with just a three percent improvement with the fans enabled. Even then, FTC documents state the Razer Zephyr “frequently tested much lower” and “did not come close to consistently reaching a PFE of 95 percent.” The quality testing company even went so far as to warn against mentioning N95 ratings “as it is not relevant to this product, and the claim will cause confusion.” 
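Particulate filtration efficiency itself is just a ratio: the share of particles entering a test chamber that never make it through the filter. Here is a minimal sketch of that calculation, with hypothetical particle counts chosen to mirror the figures above (the actual test protocols and counts are not public in this level of detail):

```python
def filtration_efficiency(upstream_count, downstream_count):
    """Percent of particles entering the test chamber that the filter
    stops, i.e. that never appear downstream of it."""
    return (1 - downstream_count / upstream_count) * 100.0

# Hypothetical counts illustrating the gap the FTC described:
print(filtration_efficiency(10_000, 500))    # ~95 percent: meets the N95 bar
print(filtration_efficiency(10_000, 1_700))  # ~83 percent: Zephyr's fans-off showing
```

Framed this way, the roughly 12-point shortfall the testers measured is the difference between a respirator and a face-shaped fan housing.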

Despite this, Razer moved forward with its marketing and released Razer Zephyr in October 2021, amid spiking global COVID-19 rates due to the Delta variant. Masks and filter packs were made available online through limited drop releases, as well as at three physical locations in Seattle, San Francisco, and Las Vegas. A single mask and three sets of filter replacements retailed for $99.99, while a mask alongside 33 filter sets sold for $149.99. A single, 10-set filter pack cost its wearers $29.99. The company even announced plans for a “Pro” version featuring voice amplification in early January 2022.

Razer Zephyr break apart concept art
Credit: Razer

Barely a week later, however, Razer began walking back its N95-grade marketing for the Zephyr amid mounting scrutiny and criticism. The Pro edition never saw the light of day, and federal regulators eventually opened their official investigation into the situation. In addition to the more than $1.1 million in refunds, Razer must pay a $100,000 civil penalty, and is forbidden from making any future “COVID-related health misrepresentations or unsubstantiated health claims about protective health equipment.” All references to the sleek, shoddy masks now appear scrubbed from Razer’s official website.

“Products like the Zephyr invite a lot of scrutiny. Is this an honest, good-faith attempt to create an upgraded device for people who plan to wear masks in public long-term, or is it a cash grab? Does it work at all?” PopSci wrote in its official review from January 2022. “These are all good, fair questions to ask when a company with no history making medical technology quickly develops and launches an expensive piece of kit.”


]]>
China compiled the most detailed moon atlas ever mapped https://www.popsci.com/science/moon-atlas/ Mon, 29 Apr 2024 19:00:00 +0000 https://www.popsci.com/?p=612856
Moon photograph from Artemis 1
On flight day 20 of NASA’s Artemis I mission, Orion’s optical navigation camera looked back at the Moon as the spacecraft began its journey home. NASA/JSC

The Geologic Atlas of the Lunar Globe includes 12,341 craters, 81 basins, and 17 different rock types.

The post China compiled the most detailed moon atlas ever mapped appeared first on Popular Science.

]]>

If we want to establish a permanent human presence on the moon, we need more detailed maps than the existing options, some of which date back to the Apollo missions of the 1960s and 1970s. After more than a decade of collaboration among over 100 researchers at the Chinese Academy of Sciences (CAS), the newest editions of lunar topography are rolling out for astronomers and space agencies around the world.

As highlighted recently by Nature, the Geologic Atlas of the Lunar Globe includes 12,341 craters, 81 basins, and 17 different rock types found across the moon’s surface, doubling previous map resolutions to a scale of 1:2,500,000.

[Related: Why do all these countries want to go to the moon right now?]

Although higher accuracy maps have been available for areas near Apollo mission landing sites, the US Geological Survey’s original lunar maps generally managed a 1:5,000,000 scale. Project co-lead and CAS geochemist Jianzhong Liu explained to Nature that “our knowledge of the Moon has advanced greatly, and those maps could no longer meet the needs for future lunar research and exploration.”
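For readers unfamiliar with map scales: “1:2,500,000” means one unit of distance on the map corresponds to 2.5 million of the same unit on the lunar surface. A quick, self-contained conversion shows what the jump from the old USGS scale buys (the scale denominators come from the article; the helper function name is mine):

```python
# Convert map centimeters into real kilometers at a given scale.
# "1:2,500,000" means 1 map unit = 2,500,000 ground units.
def ground_distance_km(map_cm, scale_denominator):
    return map_cm * scale_denominator / 100_000  # 100,000 cm per km

print(ground_distance_km(1, 2_500_000))  # 25.0 km per map cm on the new atlas
print(ground_distance_km(1, 5_000_000))  # 50.0 km per map cm on the older USGS maps
```

Halving the scale denominator halves the ground distance packed into each centimeter of map, which is what “doubling the resolution” means here.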

Geologic map of the moon
Credit: Chinese Academy of Sciences via Xinhua/Alamy

To guide lunar mapping into the 21st century, CAS relied heavily on China’s ongoing lunar exploration programs, including the Chang’e-1 mission. Beginning in 2007, Chang’e-1’s high-powered cameras surveyed the moon’s surface from orbit for two years, while its interference imaging spectrometer identified various rock types. Additional data compiled by the Chang’e-3 (2013) and Chang’e-4 (2019) lunar landers subsequently helped hone those mapping endeavors. International projects like NASA’s Gravity Recovery and Interior Laboratory (GRAIL) and Lunar Reconnaissance Orbiter, as well as India’s Chandrayaan-1 probe, all provided even more valuable topographical information.

The pivotal topographical milestone wasn’t an entirely altruistic undertaking, however. While CAS geophysicist Ross Mitchell described the maps as “a resource for the whole world,” he added that “contributing to lunar science is a profound way for China to assert its potential role as a scientific powerhouse in the decades to come.” 

[Related: Japan and NASA plan a historic lunar RV road trip together.]

The US is also far from the only nation eager to set up shop on the moon—both China and Russia hope to arrive there by the mid-2030s with the construction of an International Lunar Research Station near the moon’s south pole. Despite the two nations’ prior promise to be “open to all interested countries and international partners,” the US is distinctly not among the 10 other governments currently attached to the project.

China plans to launch its Chang’e-6 robotic spacecraft later this week, which will travel to the far side of the moon as the first of three new missions. In an interview on Monday, NASA Administrator Bill Nelson voiced his concerns about a potential real estate war on the moon.

Lithographic map of the moon
Credit: Chinese Academy of Sciences via Xinhua/Alamy

“I think it’s not beyond the pale that China would suddenly say, ‘We are here. You stay out,’” Nelson told Yahoo Finance. “That would be very unfortunate—to take what has gone on on planet Earth for years, grabbing territory, and saying it’s mine and people fighting over it.”

But if nothing else, at least the new maps will soon be available to virtually everyone. The Geologic Atlas is included in a new book from CAS, Map Quadrangles of the Geologic Atlas of the Moon, which also features an additional 30 sector diagrams offering even closer looks at individual lunar regions. The entire map resource will soon also become available to international researchers online through a cloud platform called Digital Moon.


]]>
Romance scams just ‘happen in life,’ says CEO of biggest dating app company in the US https://www.popsci.com/technology/dating-app-romance-scams/ Mon, 29 Apr 2024 16:00:50 +0000 https://www.popsci.com/?p=612821
Woman's hands typing on laptop
Only an estimated 7 percent of online romance fraud victims report the crime to authorities. Deposit Photos

Dating app users collectively lost $1.1 billion to cons in 2023 alone.

The post Romance scams just ‘happen in life,’ says CEO of biggest dating app company in the US appeared first on Popular Science.

]]>

Online romance scams netted con artists over $1.1 billion in 2023, with an average reported loss of around $2,000 per target. These victims, who span ages, genders, and demographics, often aren’t only out of money—their time, emotions, and sometimes even physical safety can be on the line. And while the CEO of the largest online dating company, Match Group, sympathizes, he contends that sometimes life just gives you lemons, apparently.

“Look, I mean, things happen in life. That’s really difficult,” Match Group CEO Bernard Kim told CBS Reports during an interview over the weekend, before adding they “have a tremendous amount of empathy for things that happen.”

“I mean, our job is to keep people safe on our platforms; that is top foremost, most important thing to us,” Kim continued. Kim’s statements come amid a yearlong CBS investigation series on online romance scammers, and the havoc they continue to inflict on victims. 

Match Group oversees some of the world’s most popular dating platforms, including Match.com, Tinder, Hinge, and OkCupid. According to its 2024 impact report, a combined 15.6 million people worldwide subscribe to at least one of its service’s premium features, with millions more utilizing free tiers. Although the FTC’s count of annual reported romance scams has declined slightly from its pandemic era highs, experts caution that these numbers could be vastly undercounted due to victims’ potential—and unwarranted—embarrassment.

Authorities believe as few as 7 percent of romance scams are actually reported, but while older age groups are frequently targeted, they aren’t alone. In fact, some studies show younger internet users are more likely to fall for online fraud than their elders, given a greater willingness to share personal information. Some of these con campaigns can span multiple years, and drain victims’ entire bank accounts and savings. At least one death has even been potentially tied to such situations.

[Related: Cryptocurrency scammers are mining dating sites for victims.]

Regulators and law enforcement agencies have attempted to hold companies like Match Group accountable as romance scam reports continue to skyrocket—an industry fueled in part by the proliferation of tech-savvy approaches involving chatbots and other AI-based programs. In 2019, for example, the Federal Trade Commission filed an $844 million lawsuit alleging as many as 30 percent of Match.com’s profiles were opened for scamming purposes. A US District judge dismissed the FTC’s lawsuit in 2022, citing Match Group’s immunity from liability for third-party content posted to its platforms.

Match Group says it invested over $125 million last year in its trust and safety strategies, and removes around 96 percent of new scam accounts created on any given day. The company reported a $652 million profit in 2023—up a massive 80 percent year-to-year.

[Related: Don’t fall for these online love scams.]

The FTC advises internet users to never send funds or any gifts to someone they never met in person, as well as keep trusted loved ones or friends informed of new online relations. Experts also caution against anyone who repeatedly claims they cannot meet in real life. Conducting reverse image searches of any photos provided by a new online acquaintance can help confirm fraudulent identities. The FTC also encourages anyone to report suspected frauds and scams here.

In its 2024 impact report, the company touted its inaugural “World Romance Scam Awareness Day” sponsored by Tinder alongside Mean Girls actor Jonathan Bennett, which promoted similar strategies. According to the event’s official website, however, the PSA event is technically called World Romance Scam Prevention Day.


]]>
Makers of the world’s largest 3D printer just beat their own record https://www.popsci.com/technology/worlds-largest-3d-printer/ Fri, 26 Apr 2024 17:43:12 +0000 https://www.popsci.com/?p=612710
Factory of the Future 1.0 3D printer with man standing in front of it
The new industrial-sized 3D printer uses sustainable building materials like biobased polymers. University of Maine

Factory of the Future 1.0 can construct entire homes out of sustainable polymer materials.

The post Makers of the world’s largest 3D printer just beat their own record appeared first on Popular Science.

]]>

After a five-year reign, the world’s largest 3D printer located at the University of Maine has been usurped—by a newer, larger 3D printer developed at the same school.

At a reveal event earlier this week, UMaine designers at the Advanced Structures & Composites Center (ASCC) showed off their “Factory of the Future 1.0,” aka the FoF 1.0. At four times the size of MasterPrint, their previous Guinness World Record holder from 2019, FoF 1.0 is capable of printing structures and objects up to 96 feet long, 32 feet wide, and 18 feet tall. Such sizable creations also require an impressive amount of building material: according to its creators, FoF 1.0 can churn through upwards of 500 pounds of eco-friendly thermoplastic polymers per hour.

[Related: 3D printers just got a big, eco-friendly upgrade (in the lab)]

Global construction projects generate around 37 percent of all greenhouse gas emissions, mostly from the carbon-heavy production of aluminum, steel, and cement. Transitioning to more sustainable architecture and infrastructure projects is a key component of tackling climate change, spurring interest in massive 3D printer endeavors like FoF 1.0.

But just because there’s a new printer on the block doesn’t mean UMaine’s previous record-holder is obsolete. Designers created FoF 1.0 to print in tandem with MasterPrint, with the two machines even capable of working together on the same building components.

ASCC researchers and engineers aim to utilize these industrial-sized 3D printers to help construct some of the estimated 80,000 new homes needed in Maine over the next six years. FoF 1.0’s predecessor, MasterPrint, already helped build the surprisingly stylish, sustainable, 600-square-foot BioHome3D prototype a few years ago.

BioHome3D house
BioHome 3D, built in part using FoF 1.0’s predecessor, MasterPrint. Credit: UMaine

“It’s not about building a cheap house or a biohome,” ASCC director Habib Dagher said at this week’s event. “We wanted to build a house that people would say, ‘Wow, I really want to live there.’”

With FoF 1.0’s help, those plans could potentially expand to encompass whole neighborhoods. According to Engadget’s calculations, the new machine could make “a modest single-story home in around 80 hours.”
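Combining the two figures cited in this piece—FoF 1.0’s 500-pound-per-hour feed rate and Engadget’s roughly 80-hour single-story-home estimate—gives a back-of-envelope ceiling on how much polymer such a print could consume. Treat it as an upper bound, not a spec:

```python
# Back-of-envelope ceiling from the two figures cited in this piece.
FEED_RATE_LB_PER_HR = 500   # FoF 1.0's maximum polymer throughput
PRINT_HOURS = 80            # Engadget's modest single-story-home estimate

max_material_lb = FEED_RATE_LB_PER_HR * PRINT_HOURS
print(max_material_lb)  # 40000 lb, if the printer ran flat out the whole time
```

A real print would spend time on travel moves, layer cooling, and slower detail work, so actual material use would land well under that figure.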

Of course, such biofriendly projects don’t only catch the eye of sustainable architects. Funding for FoF 1.0 came in part from contributors such as the Department of Defense, as well as the Army Corps of Engineers. UMaine’s announcement also notes these backers hope to harness such machines for other projects, including “lightweight rapidly deployable structures and vessel technologies.”

Going forward, ASCC researchers hope to experiment with additional bio-based polymer sources, particularly wood residuals from Maine—which just so happens to be the country’s most forested state.

The post Makers of the world’s largest 3D printer just beat their own record appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

A new button battery dyes kids’ mouths blue if swallowed https://www.popsci.com/health/button-battery-dye/ Fri, 26 Apr 2024 15:42:35 +0000 https://www.popsci.com/?p=612665
Little boy biting his nails on grey background
The number of emergencies involving children ingesting batteries has spiked in recent years. Deposit Photos

The 'color alert technology' could save lives.

The post A new button battery dyes kids’ mouths blue if swallowed appeared first on Popular Science.

Energizer has designed a new lithium coin battery that releases a blue dye immediately upon interacting with moisture such as saliva. The marker offers parents a visible way to determine if their children accidentally swallowed one of these toxic products.

After two decades of steady integration into everything from key fobs and remote controls to cooking thermometers and smartwatches, lithium button batteries are now extremely commonplace household items. Unfortunately, their ubiquity coincides with a major, ongoing spike in the number of children ingesting the small batteries. More than 70,300 emergency department visits for children’s battery-related injuries were reported between 2010 and 2019—and nearly 85 percent of those involved button batteries.

Apart from the choking hazard, the US Consumer Product Safety Commission warns that a battery’s chemicals can cause severe bodily injury, and even death, within a matter of hours if ingested. Additionally, the electric current generated when saliva interacts with a battery can simultaneously burn through body tissue, leading to even more potentially lethal complications.

[Related: What to expect if your child swallows a button battery.]

To help address the continuing public health concern, Energizer recently partnered with the children’s safety nonprofit Reese’s Purpose to design a safer button battery, as well as even stronger childproof packaging.

Apart from a bitter-tasting, nontoxic coating increasingly found on similar products, the company’s newest coin-shaped batteries also come in packaging that requires scissors to open. But even if a child does get their hands on one, parents and caretakers will almost instantly be able to see whether they need to contact emergency medical services.

Described as a “color alert technology,” the battery’s dotted, negative underside releases a nontoxic, food-grade blue dye when it meets moisture, such as spit. According to Energizer’s website, each battery contains about as much dye as an ounce of a flavored sports drink, and the color will disappear after a few rinses with water or a round of toothbrushing.

Reese’s Purpose founder Trista Hamsmith started the nonprofit advocacy group in honor of her 18-month-old daughter, who died in 2020 after swallowing a remote control’s coin battery.

Regardless of childproofing innovations, however, caretakers should immediately take a child to medical professionals if they suspect battery ingestion. The National Capital Poison Center warns against inducing vomiting and instead suggests having any child over 12 months old swallow honey. Doing so can coat the ingested battery, helping delay some of the chemical burning of internal tissue while en route to receiving medical attention. That said, children younger than 1 year old shouldn’t eat honey, so rushing them straight to the emergency room for an X-ray is the best approach.

In the event of suspected emergencies, parents are encouraged to call the National Battery Ingestion Hotline (800-498-8666) or Poison Control Center (800-222-1222).

Rare quadruple solar flare event captured by NASA https://www.popsci.com/science/quadruple-solar-flare/ Thu, 25 Apr 2024 18:18:20 +0000 https://www.popsci.com/?p=612553
Image of sun highlighting four solar events
Similar activity will likely increase as the sun nears its 'solar maximum.' Credit: NASA/SDO/AIA

The 'super-sympathetic flare' might affect satellites and spacecraft near Earth.

The post Rare quadruple solar flare event captured by NASA appeared first on Popular Science.

Earlier this week, NASA’s Solar Dynamics Observatory (SDO) recorded a rarely seen event—four nearly simultaneous flare eruptions involving three separate sunspots, as well as the magnetic filament between them. But as impressive as it is, the event could soon pose problems for some satellites and spacecraft orbiting Earth, as well as electronic systems here on the ground.

It may seem like a massive ball of fiery, thermonuclear chaos, but there’s actually a fairly predictable rhythm to the sun. Similar to Earth’s seasonal changes, the yellow dwarf star’s powerful electromagnetic fluctuations follow a roughly 11-year cycle of ebbs and flows. Although astronomers still aren’t quite sure why this happens, it’s certainly observable—and recent activity definitely indicates the sun is heading towards its next “solar maximum” later this year.

Gif of supersympathetic solar flares
Credit: NASA/SDO/AIA

As Spaceweather.com notes, early Tuesday morning’s “complex quartet” of solar activity was what’s known as a “super-sympathetic flare,” in which multiple events occur at nearly the same time. This happens thanks to the often hard-to-detect magnetic loops spreading across the sun’s corona, which can create explosive chain reactions. In this case, hundreds of thousands of miles separated the individual eruption sites, yet they still erupted within minutes of each other. All told, the super-sympathetic flare encompassed about a third of the sun’s Earth-facing surface.

[Related: Why our tumultuous sun was relatively quiet in the late 1600s]

And that “facing Earth” factor could present an issue. BGR explains “at least some” of the electromagnetic “debris” could be en route towards the planet in the form of a coronal mass ejection (CME). If so, those forces could result in colorful auroras around the Earth’s poles—as well as create potential tech woes for satellite arrays and orbiting spacecraft, not to mention blackouts across some radio and GPS systems. The effects, if there are any, are estimated to occur over the next day or so, but at least they’re predicted to only be temporary inconveniences.

Luckily, multi-flare situations like this week’s aren’t a regular occurrence—the last time something similar happened was back in 2010 in what became known as the Great Eruption.

[Related: Hold onto your satellites: The sun is about to get a lot stormier]

Still, these super-sympathetic flares serve as a solid reminder of just how much of our modern, electronically connected society is at the sun’s mercy. As recently as 2022, for example, a solar storm knocked around 40 Starlink satellites out of orbit. The risk of solar-induced problems will continue to rise as the skies grow increasingly crowded.

While many companies continue to build redundancy programs and backup systems against these potential headaches, astronomers and physicists still can’t predict solar activity very accurately. More research and funding are needed to create early warning and forecasting programs.

This year alone has already seen at least two other solar activity events—and seeing as how we still haven’t passed the solar maximum, more impressive (and maybe damaging) activity is likely on the way.

A ‘bionic eye’ scan of an ancient, scorched scroll points to Plato’s long-lost gravesite https://www.popsci.com/technology/vesuvius-scroll-plato/ Wed, 24 Apr 2024 18:56:18 +0000 https://www.popsci.com/?p=612403
Statue of Plato in Greece
New imaging tools uncovered text that revises the timeline of Plato's life. Deposit Photos

Technology continues to reveal new details written on parchment burned by the Mount Vesuvius eruption of 79 CE.

The post A ‘bionic eye’ scan of an ancient, scorched scroll points to Plato’s long-lost gravesite appeared first on Popular Science.

A research team’s “bionic eye” deciphered thousands of new words hidden within an ancient scroll carbonized during the eruption of Mount Vesuvius—and the new text points to the long-lost, potential final resting place of the philosopher Plato.

The 1,800-scroll collection, housed in the estate now known as the “Villa of the Papyri,” was charred almost instantly during the historic Mount Vesuvius eruption of 79 CE before being buried in layers of pumice and ash. The latest discovery is part of ongoing global efforts focused on the ancient Greek library, believed to have belonged to Julius Caesar’s father-in-law.

Although rediscovered in the 1750s, the trove of text remained almost entirely inaccessible due to the carbonized parchment’s fragility and blackened writing. In recent years, however, contributors to projects like the Vesuvius Challenge have worked to finally reveal the charred artifacts’ potentially invaluable information. In February, the project’s organizers announced that a team successfully translated 2,000 characters within a scroll through a combination of machine learning software and computer vision programming. Now, a separate group’s “bionic eye” has uncovered even more.

[Related: 2,000 new characters from burnt-up ancient Greek scroll deciphered with AI.]

According to the Italian news outlet ANSA, experts utilized infrared hyperspectral imaging alongside a relatively new approach known as optical coherence tomography (OCT) to see through the carbonized material. OCT produces cross-sectional, high-resolution imagery and is most often used by eye doctors to photograph the back of the eye. In this case, however, combining the two tools allowed researchers to read a major portion of the scroll by detecting faint traces of handwriting that human eyes can no longer see.

Now, it appears the team has helped solve a major mystery in the history of philosophy—the location of Plato’s grave. According to the newly translated section, Plato was buried in a garden near a shrine to the Muses at the Platonic Academy in Athens. What’s more, the text details the pivotal philosopher’s last night before he reportedly succumbed to illness. Plato, suffering from a high fever, apparently wasn’t a fan of a nearby musician’s attempt to comfort him by playing “sweet notes” on a flute. According to the scroll, he even went so far as to criticize their “scant sense of rhythm.”

The section also offers a revised timeline of Plato’s life by stating that the philosopher was sold into slavery in either 404 or 399 BCE. Before the new discovery, historians believed he was enslaved in 387 BCE.

Researchers aren’t stopping here, either. As Interesting Engineering notes, the team will use their “bionic eye” for further scans through 2026, while the Vesuvius Challenge will pursue its own methods to discover even more insights into the scrolls.

NASA will unfurl an 860-square-foot solar sail from within a microwave-sized cube https://www.popsci.com/science/nasa-solar-sail/ Wed, 24 Apr 2024 15:53:58 +0000 https://www.popsci.com/?p=612334
ACS3 solar sail concept art above Earth
This artist’s concept shows the Advanced Composite Solar Sail System spacecraft sailing in space using the energy of the sun. NASA/Aero Animation/Ben Schweighart

The highly advanced solar sail boom could one day allow spacecraft to travel without bulky rocket fuel.

The post NASA will unfurl an 860-square-foot solar sail from within a microwave-sized cube appeared first on Popular Science.

NASA hitched a ride aboard Rocket Lab’s Electron rocket in New Zealand yesterday evening, and is preparing to test a new, highly advanced solar sail design. Now in a sun-synchronous orbit roughly 600 miles above Earth, the agency’s Advanced Composite Solar Sail System (ACS3) will, in the coming weeks, deploy and showcase technology that could one day power deep-space missions without the need for any rocket fuel after launch.

The fundamentals behind solar sails aren’t in question. By capturing the gentle but constant pressure exerted by sunlight, thin reflective sheets can propel a spacecraft to immense speeds, much as wind propels a sailboat. Engineers have demonstrated the principle before, but NASA’s new project will specifically showcase a promising boom design constructed of flexible composite polymer materials reinforced with carbon fiber.

Although delivered in a microwave-sized package, ACS3 will take less than 30 minutes to unfurl into an 860-square-foot sheet of ultrathin plastic anchored by its four accompanying 23-foot-long booms. Once deployed, these poles function like a sailboat’s booms, keeping the sheet taut enough to capture solar energy.

[Related: How tiny spacecraft could ‘sail’ to Mars surprisingly quickly.]

But what makes the ACS3 booms so special is how they are stored. Any solar sail’s boom system needs to remain stiff enough to endure harsh temperature fluctuations, and durable enough to last through lengthy missions. Scaled-up solar sails, however, will be pretty massive—NASA is already planning future designs as large as 5,400 square feet, roughly the size of a basketball court. Those sails will need extremely long boom systems that won’t necessarily fit in a rocket’s cargo hold.

To solve this, NASA rolled its new composite booms into a package roughly the size of an envelope. When ready, engineers will use an extraction system similar to a tape spool to uncoil the booms, a design meant to minimize potential jamming. Once in place, the booms will anchor the microscopically thin solar sail as onboard cameras record the entire process.

NASA hopes the project will let it evaluate the new solar sail design while measuring how the resulting thrust influences the tiny spacecraft’s low-Earth orbit. Meanwhile, engineers will assess the resiliency of the novel composite booms, which are 75 percent lighter and designed to exhibit 100 times less shape distortion than any previous solar sail boom prototype.

Don’t expect the ACS3 experiment to go soaring off into deep space, though. After an estimated two-month initial flight and subsystem testing phase, ACS3 will conduct a weeks-long test of its ability to raise and lower the CubeSat’s orbit. It’s a lot of work to harness a solar force NASA says is equivalent to the weight of a paperclip resting in your palm. Still, if ACS3’s sail and boom system is successful, it could lead toward scaling up the design enough to travel across the solar system.
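How small is that force? The standard radiation-pressure formula gives a quick sanity check. A minimal sketch, assuming a perfectly reflective sail facing the sun head-on at Earth’s distance (the solar constant and reflectivity assumptions are ours, not from NASA’s release):

```python
# Estimate the sunlight thrust on an 860-square-foot sail.
# Assumptions (ours): perfect reflection, sail face-on to the sun at 1 AU.

SOLAR_CONSTANT = 1361.0      # W/m^2, sunlight intensity at 1 AU
LIGHT_SPEED = 2.998e8        # m/s
SQFT_TO_SQM = 0.092903

sail_area_m2 = 860 * SQFT_TO_SQM                 # ~80 m^2
pressure = 2 * SOLAR_CONSTANT / LIGHT_SPEED      # N/m^2; factor of 2 for a reflective sail
thrust_n = pressure * sail_area_m2

print(f"Sail area: {sail_area_m2:.1f} m^2")
print(f"Thrust: {thrust_n * 1000:.2f} mN")
```

The result is a fraction of a millinewton. The reason it still matters: unlike rocket fuel, sunlight never runs out, so that whisper of force can accumulate into meaningful speed over months of sailing.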

NASA wants to measure moonquakes with laser-powered fiber optic cables https://www.popsci.com/science/moonquake-laser-fiber-optic/ Mon, 15 Apr 2024 19:57:18 +0000 https://www.popsci.com/?p=611037
Moon surface
Although the moon lacks tectonic plates, it still generates quakes from a variety of other factors. NASA/GSFC/Arizona State University

The moon’s seismic activity requires extremely sensitive tools to cut through the lunar dust.

The post NASA wants to measure moonquakes with laser-powered fiber optic cables appeared first on Popular Science.

Even without any known active tectonic movement, the moon can still rumble. Dramatic thermal changes, minuscule contractions as it cools, and even the pull of Earth’s gravity all contribute to noticeable seismic activity. And just like on Earth, detecting these potentially powerful moonquakes will be important for the safety of any future equipment, buildings, and people atop the lunar surface.

But instead of traditional seismometers, NASA hopes Artemis astronauts will be able to deploy laser-powered fiber optic cables.

In a recent study published in Earth and Planetary Science Letters, researchers at Caltech made the case for the promising capabilities of a new, high-tech seismological tool known as distributed acoustic sensing (DAS). Unlike traditional seismometers, DAS measures the extremely tiny strains a passing tremor imposes on a fiber optic cable, detected through changes in laser light traveling along the fiber. According to a separate paper from last year, a roughly 62-mile DAS cable could hypothetically do the job of 10,000 individual seismometers.
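The density claim follows directly from the geometry: spread 10,000 sensing points along 62 miles of fiber and each “virtual seismometer” sits only about 10 meters from its neighbors. A quick sketch using the two round numbers above (the arithmetic is ours):

```python
# Effective sensor spacing implied by the cited figures: a ~62-mile DAS
# cable acting as ~10,000 virtual seismometers.

MILES_TO_METERS = 1609.344

cable_m = 62 * MILES_TO_METERS     # ~100 km of fiber
virtual_sensors = 10_000           # figure cited from last year's paper

spacing_m = cable_m / virtual_sensors
print(f"One sensing point roughly every {spacing_m:.0f} m of cable")
```

A conventional network with 10-meter station spacing would be wildly impractical to deploy, which is the core of DAS’s appeal.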

[Related: Researchers unlock fiber optic connection 1.2 million times faster than broadband.]

This is particularly crucial given just how difficult it has been to measure lunar seismic activity in the past. Apollo astronauts installed multiple seismometers on the lunar surface in the late 1960s and early 1970s, which managed to record quakes as intense as magnitude 5. But those readings weren’t particularly precise, due to what’s known as scattering—seismic waves get muddied as they pass through layers of extremely fine, powdery regolith dust.

Researchers believe fiber optic DAS setups could solve this problem by averaging readings across thousands of sensing points, and they have the data to back it up. According to a recent Caltech profile, the team of geophysicists deployed a similar cable system near Antarctica’s South Pole, the closest environment on Earth to the lunar surface thanks to its remote, harsh surroundings. Subsequent tests successfully detected subtle seismic activity such as cracking and shifting ice, while the equipment held up against the brutal conditions.

Of course, the moon’s brutal surface makes Antarctica look almost pleasant by comparison. Aside from the dust, surface temperatures routinely swing from around 250 degrees Fahrenheit in daylight to well below -200 at night, while the lack of atmosphere means regular bombardment by solar radiation. All that said, Caltech researchers believe fiber optic cabling could be designed to withstand these factors. With additional work, including further optimizing its energy efficiency, the team believes DAS equipment could arrive alongside Artemis astronauts in the near future, ready to measure any moonquakes that come its way.

Startup pitches a paintball-armed, AI-powered home security camera https://www.popsci.com/technology/paintball-armed-ai-home-security-camera/ Mon, 15 Apr 2024 14:51:01 +0000 https://www.popsci.com/?p=610934
PaintCam Eve shooting paintballs at home
PaintCam Eve supposedly will guard your home using the threat of volatile ammunition. Credit: PaintCam

PaintCam Eve also offers a teargas pellet upgrade.

The post Startup pitches a paintball-armed, AI-powered home security camera appeared first on Popular Science.

It’s a bold pitch for homeowners: What if you let a small tech startup’s crowdfunded AI surveillance system dispense vigilante justice for you?

A Slovenia-based company called OZ-IT recently announced PaintCam Eve, a line of autonomous property monitoring devices that will utilize motion detection and facial recognition to guard against supposed intruders. In the company’s zany promo video, a voiceover promises Eve will protect owners from burglars, unwanted animal guests, and any hapless passersby who fail to heed its “zero compliance, zero tolerance” warning.

The consequences for shrugging off Eve’s threats: Getting blasted with paintballs, or perhaps even teargas pellets.

“Experience ultimate peace of mind,” PaintCam’s website declares, as Eve will offer owners a “perfect fusion of video security and physical presence” thanks to its “unintrusive [sic] design that stands as a beacon of safety.”

And to the naysayers worried Eve could indiscriminately bombard a neighbor’s child with a bruising paintball volley, or accidentally hock riot control chemicals at an unsuspecting Amazon Prime delivery driver? Have no fear—the robot’s “EVA” AI system will offer live video streaming to a user’s app, as well as employ a facial recognition system meant to let designated people pass by unscathed.

In the company’s promotional video, there appears to be a combination of automatic and manual screening capabilities. At one point, Eve is shown issuing a verbal warning to an intruder, offering them a five-second countdown to leave its designated perimeter. When the stranger fails to comply, Eve automatically fires a paintball at his chest. Later, a man watches from his PaintCam app’s livestream as his frantic daughter waves at Eve’s camera to spare her boyfriend, which her father allows.

“If an unknown face appears next to someone known—perhaps your daughter’s new boyfriend—PaintCam defers to your instructions,” reads a portion of product’s website.

Presumably, pre-authorizing visitors would involve storing 3D facial scans in Eve’s system for future reference. (Because facial recognition AI has such an accurate track record devoid of racial bias.) At the very least, the system appears to let owners manually clear each unknown newcomer. Either way, the details are sparse on PaintCam’s website.

Gif of PaintCam scanning boyfriend
What true peace of mind looks like. Credit: PaintCam

But as New Atlas points out, there aren’t exactly a bunch of detailed specs or price ranges available just yet, beyond the allure of suburban crowd control gadgetry. OZ-IT vows Eve will include all the smart home security basics: live monitoring, night vision, object tracking, movement detection, and video storage and playback capabilities.

There are apparently “Standard,” “Advanced,” and “Elite” versions of PaintCam Eve in the works. The basic tier only gets owners “smart security” and “app on/off” capabilities, while the mid-tier Eve+ also offers animal detection. The top-tier Eve Pro is the only one to include facial recognition, which implies the other two models could be a tad more… indiscriminate in their surveillance methodologies. It’s unclear how much extra you’ll need to shell out for the teargas tier, too.

PaintCam’s Kickstarter is set to go live on April 23. No word on release date for now, but whenever it arrives, Eve’s makers promise a “safer, more colorful future” for everyone. That’s certainly one way of describing it.

CT scans look inside a California condor egg https://www.popsci.com/environment/california-condor-ct-scan/ Fri, 12 Apr 2024 19:16:24 +0000 https://www.popsci.com/?p=610773
CT scan of California condor egg
Conservationists initially worried the chick inside Emaay's egg was malpositioned. San Diego Zoo Wildlife Alliance

Emaay is the 250th chick born as part of ongoing California Condor Recovery Program.

The post CT scans look inside a California condor egg appeared first on Popular Science.

For a moment, things weren’t looking great for the newest California condor chick. But thanks to some quick thinking and CT scanning technology, the San Diego Zoo welcomed its 250th hatchling in conservationists’ ongoing species recovery program. To celebrate, the wildlife park has released images and video of the moments leading up to the arrival of Emaay (pronounced “eh-my”), including a fascinating look within the egg itself.

When the California Condor Recovery Program began in 1982, only 22 of the critically endangered birds could be located. Since then, that number has grown to over 560, with more than half of all California condors living in the wild. A big part of that success is thanks to the recovery program’s first adoptee, an abandoned male named Xol-Xol (pronounced “hole-hole”), taken in at three months old. Xol-Xol, now 42, has fathered 41 chicks over his life, but his latest addition needed some extra care.

Zoologists placed the egg of the new chick in an incubator ahead of hatching, but noticed what appeared to be a malposition—a bodily angle that could have produced complications. The condor egg was then moved to the Paul Harter Veterinary Medical Center and placed in a computed tomography (CT) imaging machine.

California condor egg in CT scanner
The CT scanner provided a 3D double-check of Emaay’s egg. Credit: San Diego Wildlife Alliance

CT scanning takes a series of X-ray readings of an object from different angles, then combines them computationally into “slices,” or cross-sectional images. The scans allow for far more detailed results than a basic X-ray image. Thankfully, subsequent CT scans confirmed the suspected malposition was a false alarm, allowing the team to return the egg to its incubator.
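The principle behind those “slices” can be shown with a toy example. The sketch below (purely illustrative; it has nothing to do with the zoo’s actual scanner software) “scans” a tiny grid from just two angles and uses unfiltered back-projection, in which each pixel accumulates every projection passing through it, to recover the location of a dense feature:

```python
# Toy CT illustration: project a 2D grid from two angles (row sums and
# column sums), then back-project by summing the projections that pass
# through each pixel. The dense feature re-emerges as the maximum.

image = [
    [0, 0, 0, 0],
    [0, 9, 0, 0],   # one dense feature, e.g. a bone
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

row_sums = [sum(row) for row in image]        # 0-degree "scan"
col_sums = [sum(col) for col in zip(*image)]  # 90-degree "scan"

# Unfiltered back-projection: smear each projection back across the grid.
recon = [[row_sums[r] + col_sums[c] for c in range(4)] for r in range(4)]

peak = max((recon[r][c], (r, c)) for r in range(4) for c in range(4))
print("Reconstruction peak at (row, col):", peak[1])
```

Real CT machines use hundreds of angles plus a filtering step to sharpen the result, but the accumulate-and-overlap idea is the same.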

[Related: California condor hatches after bird flu deaths.]

Upon pipping (a chick’s initial cracking of its shell), conservationists transferred the egg into the nest of Xol-Xol and his partner, Mexwe, who helped complete the hatching process. On March 16, Emaay greeted the world, with Xol-Xol and Mexwe caring for it ever since.

Emaay is one of roughly 50 California condor chicks now hatched every year—around 12 to 15 of them in the wild. But as the San Diego Zoo’s 250th newcomer, and the offspring of the program’s first adoptee, Emaay is particularly special to the team.

“Reaching this milestone feels incredible,” Nora Willis, senior wildlife care specialist at the San Diego Zoo Wildlife Alliance, said. “There’s still a long way to go but being part of this and helping the species recover is life changing.”

Let this astronaut show you around the International Space Station https://www.popsci.com/science/iss-video-tour/ Fri, 12 Apr 2024 17:00:00 +0000 https://www.popsci.com/?p=610687
Astronaut Andreas Mogensen aboard the ISS
Astronaut Andreas Mogensen spent over six months aboard the ISS. ESA/NASA

Danish astronaut Andreas Mogensen made a ‘keepsake’ tour video before returning to Earth.

The post Let this astronaut show you around the International Space Station appeared first on Popular Science.

Andreas Mogensen returned to Earth in mid-March after a six-and-a-half month stint aboard the International Space Station. To mark his tenure as part of NASA’s Crew-7 mission, the Danish European Space Agency (ESA) astronaut has shared his souvenir from undock day—a guided video tour of the ISS.

“It’s been a month now since I left the [ISS],” Mogensen posted to X early Friday morning. “… It is as much a keepsake for me as it is a way for me to share the wonder of the International Space Station with you. Whenever I will miss my time onboard ISS, and especially my crewmates, I will have this video to look at.”

Mogensen began his show-and-tell at the space station’s front end, above which a docked SpaceX Dragon craft waited to take him home on March 12. On his left is the roughly 22-by-15-foot Columbus module—a science laboratory provided by the ESA back in 2008. Across from the lab is the Japanese Experiment Module (JEM), nicknamed Kibō, which arrived not long after Columbus.

Astronauts waving in ISS
Fellow astronauts wave to Mogensen aboard the ISS. Credit: ESA/NASA

From there, Mogensen provides a first-person look at various other ISS facilities, including workstations, storage units, bathrooms, gym equipment, multiple docking nodes, and even the station kitchen. Of course, given the delicate environment, that module looks more like another lab than an actual place to cook meals—presumably because, well, no one is actually cooking anything up there.

International Space Station orbiting above Earth
The International Space Station is pictured from the SpaceX Crew Dragon Endeavour during a fly around of the orbiting lab that took place following its undocking from the Harmony module’s space-facing port on Nov. 8, 2021. NASA

But the most stunning area in the entire ISS is undoubtedly the cupola, which provides a 360-degree panoramic view of Earth, as well as a decent look at the space station’s overall size.

[Related: What a total eclipse looks like from the ISS.]

Speaking of which, Mogensen’s video also does a great job showcasing just how comparatively small the ISS really is, even after more than 25 years of module and equipment additions. At 356 feet long, it’s just a few feet shy of the length of a football field, end zones included, but any given module or transit space is only a few feet wide. Factor in the copious cargo, equipment, supplies, and experiment materials, as well as the more than 8 miles of cabling that wire its electrical systems, and it makes for pretty tight living conditions. Near the end of Mogensen’s tour, it takes him only a little over a minute to glide through most of the station back to his starting point.

View of Earth from ISS cupola
Andreas Mogensen’s view of Earth from inside the ISS cupola. Credit: ESA/NASA

Of course, none of that undercuts one of humanity’s most monumental achievements in space exploration. Although the ISS is nearing the end of its tenure (it’s scheduled for decommissioning in 2031), Mogensen’s keepsake is a great document of what life is like aboard the orbiting habitat. For those looking for an even more detailed tour, there’s always NASA’s virtual walkthrough.

Watch a tripod robot test its asteroid leaping skills https://www.popsci.com/technology/spacehopper-zero-gravity/ Fri, 12 Apr 2024 13:35:48 +0000 https://www.popsci.com/?p=610621
SpaceHopper robot in midair during parabolic flight test
SpaceHopper is designed to harness an asteroid's microgravity to leap across its surface. Credit: ETH Zurich / Nicolas Courtioux

SpaceHopper maneuvered in zero gravity aboard a parabolic flight.

The post Watch a tripod robot test its asteroid leaping skills appeared first on Popular Science.

]]>
SpaceHopper robot in midair during parabolic flight test
SpaceHopper is designed to harness an asteroid's microgravity to leap across its surface. Credit: ETH Zurich / Nicolas Courtioux

Before astronauts leave Earth’s gravity for days, weeks, or even months at a time, they practice aboard NASA’s famous parabolic flights. During these intense rides in modified passenger jets, trainees experience a series of stomach-churning climbs and dives that briefly create zero-g conditions. Recently, however, a robot received a similar education to its human counterparts—potentially ahead of its own journeys to space.

A couple of years ago, eight students at ETH Zürich in Switzerland helped design SpaceHopper. Engineered specifically to handle low-gravity environments like asteroids, the small, three-legged bot is meant to (you guessed it) hop across its surroundings. Using a neural network trained in simulation with deep reinforcement learning, SpaceHopper is built to jump, coast along on an asteroid’s weak gravity, then orient and stabilize itself midair before safely landing on the ground. From there, it repeats the process to cover large distances efficiently.
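The orient-and-stabilize step hints at what a deep RL training signal for such a robot optimizes. As a purely illustrative sketch (the actual SpaceHopper reward terms and weights aren’t given here, so everything below is an assumption): a shaping reward that penalizes attitude error and residual spin pushes the learned policy toward settling into a commanded orientation.

```python
import math

def attitude_reward(angle_error_rad, angular_rate_rad_s,
                    w_angle=1.0, w_rate=0.1):
    """Toy shaping reward for midair reorientation: penalize deviation
    from the target attitude and any residual spin. Weights are
    illustrative assumptions, not SpaceHopper's actual values."""
    return -(w_angle * angle_error_rad**2 + w_rate * angular_rate_rad_s**2)

# A tumbling robot far from its target attitude scores far worse
# than one that has nearly stabilized:
tumbling = attitude_reward(math.pi / 2, 2.0)
settled = attitude_reward(0.05, 0.01)
```

A policy that maximizes this kind of signal across millions of simulated episodes ends up, in effect, learning to right itself using only its legs.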

But it’s one thing to design a machine that theoretically works in computer simulations—it’s another thing to build and test it in the real-world.


Sending SpaceHopper to the nearest asteroid isn’t exactly a cost-effective or simple way to conduct a trial run. But thanks to the European Space Agency and Novespace, a company specializing in zero-g plane rides, the robot could test out its moves in the next best thing.

Over the course of a recent 30-minute parabolic flight, researchers let SpaceHopper perform in a small enclosure aboard Novespace’s Airbus A310 for upwards of 30 zero-g parabolas, each lasting 20 to 25 seconds. In one experiment, handlers released the robot in midair once the plane hit zero gravity, then watched it settle into specific orientations using only its leg movements. In a second test, the team programmed SpaceHopper to leap off the ground and reorient itself before gently colliding with a nearby safety net.

Because a parabolic flight creates a completely zero-g environment, SpaceHopper actually made its debut in even less gravity than it would experience on a hypothetical asteroid. As a result, the robot couldn’t “land” as it would in a microgravity situation, but demonstrating its ability to orient and adjust in real time was still a major step forward for researchers.

[Related: NASA’s OSIRIS mission delivered asteroid samples to Earth.]

“Until that moment, we had no idea how well this would work, and what the robot would actually do,” SpaceHopper team member Fabio Bühler said in ETH Zürich’s recent highlight video. “That’s why we were so excited when we saw it worked. It was a massive weight off of our shoulders.”

SpaceHopper’s creators believe deploying their jumpy bot to an asteroid one day could help astronomers gain new insights into the universe’s history, as well as into our solar system’s earliest eras. Additionally, many asteroids are rich in valuable rare earth metals—resources that could benefit numerous industries back on Earth.

The post Watch a tripod robot test its asteroid leaping skills appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A new solution proposed for drought-stricken Panama Canal goes around it https://www.popsci.com/environment/panama-canal-drought/ Thu, 11 Apr 2024 17:11:00 +0000 https://www.popsci.com/?p=610512
Cargo Ship in Panama Canal
Several freighters, assisted by tugboats, are entering the Panama Canal at Gatun Locks on the Atlantic side. Deposit Photos

Some trade routes will need to detour over land.

The post A new solution proposed for drought-stricken Panama Canal goes around it appeared first on Popular Science.

]]>
Cargo Ship in Panama Canal
Several freighters, assisted by tugboats, are entering the Panama Canal at Gatun Locks on the Atlantic side. Deposit Photos

As droughts continue to deplete the Panama Canal’s water levels, the maritime trading hub’s operators are planning a workaround. On Wednesday, Panamanian officials announced a new Multimodal Dry Canal project that will begin transporting international cargo across a “special customs jurisdiction” near the 110-year-old waterway.

The Panama Canal, which connects Atlantic and Pacific trading routes, has been in dire straits for some time. To traverse the canal, ocean vessels pass through a series of above-sea-level “locks” filled with freshwater from nearby Lake Gatún and Lake Alajuela. Older Panamax locks require about 50 million gallons of freshwater per ship, while a small number of “Neo-Panamax” locks built in 2016 require only around 30 million gallons.
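Those per-transit figures make the freshwater stakes easy to ballpark. A rough sketch, simplifying by assuming every ship uses an older Panamax lock:

```python
GALLONS_PER_PANAMAX_TRANSIT = 50_000_000      # older locks, per the figures above
GALLONS_PER_NEOPANAMAX_TRANSIT = 30_000_000   # the 2016-era locks

def daily_freshwater_gallons(transits, per_transit=GALLONS_PER_PANAMAX_TRANSIT):
    """Rough daily freshwater demand for a given number of lock transits."""
    return transits * per_transit

pre_drought = daily_freshwater_gallons(38)   # old daily average
restricted = daily_freshwater_gallons(27)    # drought-reduced average
saved = pre_drought - restricted             # roughly 550 million gallons/day
```

Cutting 11 daily transits, in other words, conserves on the order of half a billion gallons of lake water per day under this simplified assumption.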

[Related: When climate change throws the Pacific off balance, the world’s weather follows.]

But the canal’s upgrades can’t keep up with climate change’s cascading effects. Lake Gatún and Lake Alajuela are replenished by rainwater, and a lingering drought compounded by El Niño has produced the second-driest year in the Panama Canal’s existence. To compensate, the daily average number of ships allowed through the lock system has been cut from 38 to 27, and each vessel is now required to carry less cargo. Operators hope to raise that average back to pre-drought levels soon, but likely at a cost to marine ecosystem health and local drinking water supplies. Meanwhile, as the AFP reports, maritime traffic jams routinely see over 100 ships waiting to pass through the 50-mile passage.

The new Multimodal Dry Canal project announced this week will attempt to further alleviate a global trade problem that particularly affects the Panama Canal’s most frequent users—the US, China, Japan, and South Korea.

Ship crews shouldn’t need to wait for a yearslong engineering process before seeing some relief from the passage’s congestion. During a presentation of project plans this week, Panamanian representatives said no additional investment or construction is needed. Instead, the dry thoroughfare will function as a complement to the canal by employing “existing roads, railways, port facilities, airports and duty-free zones,” according to the AFP on Wednesday.

Speaking with the BBC earlier this month (before the dry canal’s reveal), a shipping company general manager said such land-based detour routes could be costly—expenses that are “usually passed onto the consumer.”

The post A new solution proposed for drought-stricken Panama Canal goes around it appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Japan and NASA plan a historic lunar RV roadtrip together https://www.popsci.com/science/japan-lunar-rv/ Thu, 11 Apr 2024 15:00:12 +0000 https://www.popsci.com/?p=610467
Toyota concept art for lunar RV
Japan is working alongside Toyota and Mitsubishi Heavy Industries to develop a massive lunar RV. Toyota / JAXA

It would be the first time a non-American lands on the moon.

The post Japan and NASA plan a historic lunar RV roadtrip together appeared first on Popular Science.

]]>
Toyota concept art for lunar RV
Japan is working alongside Toyota and Mitsubishi Heavy Industries to develop a massive lunar RV. Toyota / JAXA

Japan has offered to provide the United States with a pressurized moon rover—in exchange for a reserved seat on the lunar van. Per NASA, the two nations have themselves a deal. 

According to a new signed agreement between NASA and Japan’s government, the Japan Aerospace Exploration Agency (JAXA) will “design, develop, and operate” a sealed vehicle for both crewed and uncrewed moon excursions. NASA will then oversee the launch and delivery, while Japanese astronauts will join two surface exploration missions in the vehicle.

[ Related: SLIM lives! Japan’s upside-down lander is online after a brutal lunar night ]

‘A mobile habitat’

Japan’s pressurized RV will mark a significant step forward for lunar missions. According to Space.com, the nation has spent the past few years developing such a vehicle alongside Toyota and Mitsubishi Heavy Industries. Toyota offered initial specs for the RV last year—at nearly 20 feet long, 17 feet wide, and 12.5 feet tall, the rover will be about as large as two minibuses parked side by side. The cabin itself will provide “comfortable accommodation” for two astronauts, although four can apparently cram in should an emergency arise.

Like an RV cruising across the country, the rover is meant to provide its inhabitants with everything they could need for as long as 30 days at a time. While inside, astronauts will even be able to remove their bulky (and fashionable) getups and move about normally—albeit in about 16.6 percent of Earth’s gravity. Last week, NASA announced it had narrowed the search for its new Artemis Lunar Terrain Vehicle (LTV) to three companies, but unlike Japan’s vehicle, that one will be unpressurized.
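That 16.6 percent figure is simply the ratio of lunar to terrestrial surface gravity, easy to verify:

```python
MOON_SURFACE_G = 1.625    # m/s^2, lunar surface gravity
EARTH_SURFACE_G = 9.807   # m/s^2, standard Earth surface gravity

# The fraction of Earth gravity an astronaut feels on the moon:
lunar_fraction = MOON_SURFACE_G / EARTH_SURFACE_G   # about 0.166
```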

[Related: It’s on! Three finalists will design a lunar rover for Artemis]

“It’s a mobile habitat,” NASA Administrator Bill Nelson said during yesterday’s press conference alongside Minister Moriyama, describing it as “a lunar lab, a lunar home, and a lunar explorer… a place where astronauts can live, work, and navigate the lunar surface.”


Similar to the forthcoming Lunar Terrain Vehicle, the Japanese RV can be remotely controlled if astronauts aren’t around, and will remain in operation for 10 years following its delivery.

“The quest for the stars is led by nations that explore the cosmos openly, in peace, and together… America no longer will walk on the moon alone,” Nelson added.

A total of 12 astronauts—all American men—have walked across the moon’s surface. When the U.S. returns to the moon with NASA’s Artemis missions, it will also be the first time a woman and a person of color will land on the moon.

After some rescheduling, NASA currently intends to send its Artemis II astronauts on a trip around the moon in late 2025. Artemis III will see the first two humans touch down in over 50 years, in either late 2026 or early 2027. The Artemis IV mission is currently slated for no earlier than 2030. Meanwhile, China is aiming to land its own astronauts on the lunar surface by 2030.

The post Japan and NASA plan a historic lunar RV roadtrip together appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Watch two tiny, AI-powered robots play soccer https://www.popsci.com/technology/deepmind-robot-soccer/ Wed, 10 Apr 2024 18:00:00 +0000 https://www.popsci.com/?p=610317
Two robots playing soccer
Deep reinforcement learning allowed a pair of robots to play against one another. Credit: Google DeepMind / Tuomas Haarnoja

Google DeepMind's bipedal bots go head-to-head after years of prep.

The post Watch two tiny, AI-powered robots play soccer appeared first on Popular Science.

]]>
Two robots playing soccer
Deep reinforcement learning allowed a pair of robots to play against one another. Credit: Google DeepMind / Tuomas Haarnoja

Google DeepMind is now able to train tiny, off-the-shelf robots to square off on the soccer field. In a new paper published today in Science Robotics, researchers detail their recent efforts to adapt a machine learning subset known as deep reinforcement learning (deep RL) to teach bipedal bots a simplified version of the sport. The team notes that while similar experiments have created extremely agile quadrupedal robots (see: Boston Dynamics’ Spot), much less work has been done for two-legged, humanoid machines. But new footage of the bots dribbling, defending, and shooting goals shows off just how good a coach deep reinforcement learning could be for humanoid machines.

Though ultimately meant for massive tasks like climate forecasting and materials engineering, Google DeepMind’s systems can also absolutely obliterate human competitors in games like chess, Go, and even StarCraft II. But all those strategic maneuvers don’t require complex physical movement and coordination. So while DeepMind could study simulated soccer movements, those skills hadn’t translated to a physical playing field—but that’s quickly changing.


To make the miniature Messis, engineers first developed and trained two deep RL skill sets in computer simulations—getting up from the ground, and scoring goals against an untrained opponent. From there, they virtually trained their system to play a full one-on-one soccer match by combining these skill sets, then randomly pairing agents against partially trained copies of themselves.

[Related: Google DeepMind’s AI forecasting is outperforming the ‘gold standard’ model.]

“Thus, in the second stage, the agent learned to combine previously learned skills, refine them to the full soccer task, and predict and anticipate the opponent’s behavior,” researchers wrote in their paper introduction, later noting that, “During play, the agents transitioned between all of these behaviors fluidly.”


Thanks to the deep RL framework, DeepMind-powered agents soon learned to improve on existing abilities, including how to kick and shoot the soccer ball, block shots, and even defend their own goal against an attacking opponent by using its body as a shield.

During a series of one-on-one matches, robots given the deep RL training walked, turned, kicked, and righted themselves faster than if engineers had simply supplied a scripted baseline of skills. These weren’t minuscule improvements, either—compared to a non-adaptive scripted baseline, the robots walked 181 percent faster, turned 302 percent faster, kicked 34 percent faster, and took 63 percent less time to get up after falling. What’s more, the deep RL-trained robots also showed new, emergent behaviors like pivoting on their feet and spinning—actions that would be extremely challenging to pre-script.

Screenshots of robots playing soccer
Credit: Google DeepMind

There’s still some work to do before DeepMind-powered robots make it to the RoboCup. For these initial tests, researchers completely relied on simulation-based deep RL training before transferring that information to physical robots. In the future, engineers want to combine both virtual and real-time reinforcement training for their bots. They also hope to scale up their robots, but that will require much more experimentation and fine-tuning.

The team believes that utilizing similar deep RL approaches for soccer, as well as many other tasks, could further improve bipedal robots’ movements and real-time adaptation capabilities. Still, it’s unlikely you’ll need to worry about DeepMind humanoid robots on full-sized soccer fields—or in the labor market—just yet. Given their continuous improvements, though, it’s probably not a bad idea to get ready to blow the whistle on them.

The post Watch two tiny, AI-powered robots play soccer appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Internet use dipped in the eclipse’s path of totality https://www.popsci.com/technology/eclipse-internet-drop/ Tue, 09 Apr 2024 19:16:12 +0000 https://www.popsci.com/?p=610142
People looking up at eclipse wearing protective glasses
Internet usage dropped as much as 60 percent in some states while people watched the eclipse. Photo by Brad Smith/ISI Photos/USSF/Getty Images for USSF

Data shows a lot of people logged off during the cosmic event.

The post Internet use dipped in the eclipse’s path of totality appeared first on Popular Science.

]]>
People looking up at eclipse wearing protective glasses
Internet usage dropped as much as 60 percent in some states while people watched the eclipse. Photo by Brad Smith/ISI Photos/USSF/Getty Images for USSF

New data indicates a once-in-a-generation eclipse is a pretty surefire way to convince people to finally log off the internet—at least for a few minutes. According to estimates from cloud-computing provider Cloudflare, yesterday’s online traffic dropped between 40 and 60 percent week over week within the April 8 eclipse’s path of totality. In aggregate terms for the US, “bytes delivered traffic dropped by 8 percent and request traffic by 12 percent as compared to the previous week” around 2:00pm EST.

According to NASA, yesterday’s path of totality included a roughly 110-mile-wide stretch of land as it passed across Mazatlán, Mexico, through 13 states within the continental US, and finally over Montreal, Canada. In America alone, an estimated 52 million people lived within the eclipse’s path of totality. And it certainly seems like a lot of them put down their phones and laptops to go outside and have a look.

[Related: What a total eclipse looks like from space.]

As The New York Times highlights, Vermont saw the largest mass log-off, with an estimated 60 percent drop in internet usage compared to the week prior. South Carolinians, meanwhile, appeared the least compelled to take a computer break: their traffic dipped by only around 4 percent.
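These drops are ordinary week-over-week percent changes. A quick sketch with hypothetical request counts (Cloudflare’s raw numbers aren’t published in the article):

```python
def percent_change(current, baseline):
    """Week-over-week change; negative values indicate a drop."""
    return (current - baseline) / baseline * 100

# Hypothetical hourly request volumes, scaled to match the reported drops:
vermont = percent_change(400_000, 1_000_000)         # -60.0
south_carolina = percent_change(960_000, 1_000_000)  # -4.0
```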

Map of solar eclipse internet traffic change in US from Cloudflare
Credit: Cloudflare

Interestingly, you can also glean a bit about weather conditions during the eclipse from Cloudflare’s internet usage map of the US. While most states within the event’s trajectory show pretty sizable downturns, Texas only experienced a 15 percent reduction. But given that a large part of the Lone Star State endured severe weather, it’s likely many people remained inside—maybe even streaming views of the eclipse from elsewhere.

[Related: The full sensory experience of an eclipse totality, from inside a convertible in Texas.]

So what were people doing if they weren’t posting through the eclipse? Well, snapping photos of the moment is always pretty popular, while NASA oversaw multiple volunteer research projects.

Judging from Cloudflare’s data, it didn’t take long for people to log back online once the eclipse ended above them. Usage appeared to spike back to pretty standard levels almost exactly in time with the event’s ending in any given state. No doubt most people rushed to post their reactions, photos, and videos… but maybe yesterday will still serve as a nice reminder that there’s a lot more to see when you take a break and go outside for a bit.

The post Internet use dipped in the eclipse’s path of totality appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Smugglers melted and spray painted $10 million in gold to look like machine parts https://www.popsci.com/technology/gold-smugglers-fake-parts-cargo-plane/ Tue, 09 Apr 2024 15:19:37 +0000 https://www.popsci.com/?p=610082
Smuggled gold disguised as machine parts
Hong Kong Customs on March 27 detected a suspected case of large-scale gold smuggling involving air freight, and seized about 146 kilograms of suspected gold with an estimated market value of about HK$84 million, at Hong Kong International Airport. Photo shows the suspected smuggled gold which was moulded and camouflaged as air compressor parts. Customs and Excise Department Hong Kong

The suspicious plane cargo was flagged by the Hong Kong authorities.

The post Smugglers melted and spray painted $10 million in gold to look like machine parts appeared first on Popular Science.

]]>
Smuggled gold disguised as machine parts
Hong Kong Customs on March 27 detected a suspected case of large-scale gold smuggling involving air freight, and seized about 146 kilograms of suspected gold with an estimated market value of about HK$84 million, at Hong Kong International Airport. Photo shows the suspected smuggled gold which was moulded and camouflaged as air compressor parts. Customs and Excise Department Hong Kong

It could have been the perfect crime, had they used better spray paint.

Authorities recently seized over 320 pounds of suspected smuggled gold during a cargo freight search at Hong Kong International Airport, according to yesterday’s customs announcement. On March 27, investigators recovered the roughly $10.7 million haul—the bureau’s largest ever in terms of overall gold value—from within two actual air compressors bound for Tokyo. But these weren’t gold bricks or stacks of doubloons stashed deep within the machinery—they were hunks of precious metal molded into the shapes of compressor parts, then camouflaged with silver-colored spray paint.

Customs agents first noticed something suspicious after running the 1,708-pound pair of air compressors through a security X-ray late last month during a standard screening. As Business Insider explains, similar air compressors are made from aluminum or iron, and are usually intended for industrial and mining projects, as well as for filling divers’ gas cylinders.

Air compressors seized by customs authorities containing gold parts
The two air compressors seized by authorities. Credit: Customs and Excise Department Hong Kong

Speaking with the South China Morning Post (SCMP) on Monday, the assistant superintendent of Hong Kong International Airport’s customs air cargo division said technicians removed the motor casing and found a rotor “wrapped in a cord wheel which was tied to tape.”

“It was not similar to a normal motor,” he added.

After examining the rotor, authorities found traces of glue at both ends of the part. Using a hammer, they tapped it and “noticed unevenness,” indicating the metal was far more malleable than it should have been. Scraping away an outer layer of silver paint revealed flecks of gold. At that point, the whole situation was pretty clear—these were dummy parts made of precious metal. Authorities believe the air compressor scheme was an attempt to evade Japan’s precious metals tariff, which would have cost the smugglers around $1.07 million had they gone through official channels.
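One reason the ruse was doomed once inspectors got hands-on is density: gold is roughly seven times denser than the aluminum these parts should have been made of, so a simple mass-to-volume check exposes the swap. The figures below are purely illustrative, not from the actual investigation:

```python
# Approximate densities in g/cm^3
DENSITY = {"aluminum": 2.70, "iron": 7.87, "gold": 19.32}

def likely_material(mass_g, volume_cm3, tolerance=0.15):
    """Return the reference material whose density best matches the
    part's implied density, if within a relative tolerance; else None."""
    rho = mass_g / volume_cm3
    name, ref = min(DENSITY.items(), key=lambda kv: abs(kv[1] - rho))
    return name if abs(ref - rho) / ref <= tolerance else None

# A hypothetical 500 cm^3 "rotor" weighing 9.7 kg implies about
# 19.4 g/cm^3 -- far too dense for aluminum or iron:
material = likely_material(9_700, 500)
```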

[Related: Montana traffickers illegally cloned Frankensheep hybrids for sport hunting.]

To create the industrial decoys, authorities believe the smugglers first melted down their gold before pouring it into molds shaped to resemble motor rotors, screw shafts, and a gear piece. This was probably no easy feat, given that gold’s melting point is 1,948 degrees Fahrenheit.

According to Hong Kong Customs, police arrested the director of a local company on April 3 after finding his firm’s name listed as the shipment’s consignor. An initial investigation appears to show the company has no actual business dealings, potentially indicating it’s a shell outlet for smuggling. The investigation is ongoing, and the man has since been released on bail. Under Hong Kong’s Import and Export Ordinance, anyone found guilty of smuggling cargo faces fines of over $255,000 along with a maximum seven-year prison sentence.

The post Smugglers melted and spray painted $10 million in gold to look like machine parts appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
3D printers just got a big, eco-friendly upgrade (in the lab) https://www.popsci.com/technology/3d-printer-eco-materials/ Mon, 08 Apr 2024 18:00:00 +0000 https://www.popsci.com/?p=609817
Researchers developed a 3D printer that can automatically identify the parameters of an unknown material on its own. The advance could help make 3D printing more sustainable, enabling printing with renewable or recyclable materials that are difficult to characterize.
Researchers developed a 3D printer that can automatically identify the parameters of an unknown material on its own. The advance could help make 3D printing more sustainable, enabling printing with renewable or recyclable materials that are difficult to characterize. MIT / Courtesy of researchers

Researchers developed a hack to automatically adjust printer parameters as needed to use algae, wood resins, and more.

The post 3D printers just got a big, eco-friendly upgrade (in the lab) appeared first on Popular Science.

]]>
Researchers developed a 3D printer that can automatically identify the parameters of an unknown material on its own. The advance could help make 3D printing more sustainable, enabling printing with renewable or recyclable materials that are difficult to characterize.
Researchers developed a 3D printer that can automatically identify the parameters of an unknown material on its own. The advance could help make 3D printing more sustainable, enabling printing with renewable or recyclable materials that are difficult to characterize. MIT / Courtesy of researchers

An international team of researchers has developed an adaptation to potentially help with 3D printing’s polymer problem.

For quick prototyping jobs, designers often turn to fused filament fabrication (FFF) 3D printers. In these machines, molten polymers are layered atop one another using a heated nozzle. This process is underpinned by what’s known as slicer software, which informs the device of all the little details—temperature, speed, and flow—necessary to make a specific desired product instead of an amorphous blob of congealed goo. But a slicer only works for a reliably uniform material, which wouldn’t be too much of a problem, except that most such materials are unrecyclable plastics.

But thanks to engineers collaborating between MIT’s Center for Bits and Atoms (CBA), the US National Institute of Standards and Technology (NIST), and the National Center for Scientific Research in Greece, a little computational fine-tuning can now allow an off-the-shelf device to analyze, adjust, and successfully utilize previously unrecognizable printing materials in real-time to create more eco-friendly products.

3D printers often rely on unsustainable materials, and you can’t simply swap out those polymers for more sustainable alternatives. Unlike artificial polymers, eco-friendly options contain a mix of ingredients that result in widely varying physical properties. Plant-based polymers, for example, can change based on what’s available season to season, while recyclable resins fluctuate depending on their source materials. Those can still be used, but a device’s software parameters would need tweaking for each and every batch. And considering that a 3D printer’s programming can contain as many as 100 adjustable parameters, that makes recyclable workarounds a difficult sell.

[Related: A designer 3D printed a working clone of the iconic Mac Plus.]

In a new study published in Integrating Materials and Manufacturing Innovation, engineers detail a newly designed mathematical function that allows an off-the-shelf 3D printer’s extruder software to use multiple materials—including bio-based polymers, plant-derived resins, and other recyclables.

First, researchers took a 3D printer built to provide data feedback while it works, then outfitted it with three new instruments to measure pressure, filament thickness, and speed. Once those were installed, the team created a 20-minute test during which the instruments measured varying flow rates along with their associated temperatures and pressures. After some trial and error, engineers realized the best approach was to set the printer’s nozzle, also known as a “hotend,” to the hottest temperature possible. In this case, the hotend’s maximum temperature lived up to the name—290 degrees Celsius, or about 554 degrees Fahrenheit. They then set it to extrude filament at a steady rate, turned off the heater, and let it run.

“It was really difficult to figure out how to make that test work. Trying to find the limits of the extruder means that you are going to break the extruder pretty often while you are testing it,” CBA graduate student and study first author Jake Read said in a statement on Monday. “The notion of turning the heater off and just passively taking measurements was the ‘aha’ moment.”

Read and their collaborators then fed the information gleaned from the test into a new mathematical function that automatically computed workable printing parameters and machine settings for each material. Once those were available, the team simply entered the parameters into the 3D printer software and let it run normally.

To test their system, researchers used six different materials to 3D print a small toy tugboat. Even with eco-friendly options derived from algae, wood, and sustainable polylactic acid, engineers reported no “failures of any kind” in the model vessels—although from an aesthetic standpoint, the wood and algae resins did make for rather stringy-looking final products.

But while the new alterations may not yet offer a “complete reckoning with all of the phenomenology and modeling associated with FFF printing,” the team believes the system shows that “even simple methods in combination with instrumented hardware and workflows that connect machines to slicers can have promising results.”

Next up, researchers hope to expand their computational modeling efforts, as well as design a way for testing parameters to apply to a 3D printer automatically instead of requiring manual entry. In the meantime, they have made their mechanical and circuit designs, as well as firmware, framework, and experiment source code, available online for others to try for themselves.

The post 3D printers just got a big, eco-friendly upgrade (in the lab) appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Shark skin and owl feathers could inspire quieter underwater sonar https://www.popsci.com/technology/shark-skin-owl-sonar/ Fri, 05 Apr 2024 18:36:20 +0000 https://www.popsci.com/?p=609718
Close up of shark head
The ridges on shark skin help cut down on drag while they swim. Deposit Photos

Here's how ships and submarines could benefit from biomimicry.

The post Shark skin and owl feathers could inspire quieter underwater sonar appeared first on Popular Science.

]]>
Close up of shark head
The ridges on shark skin help cut down on drag while they swim. Deposit Photos

Sharks and owls are evolutionarily optimized in surprisingly similar ways. For the ocean’s apex predator, the textured patterns on its skin, known as riblets, help cut down on drag. For owls, tiny feather ridges called serrations allow them to fly silently while hunting prey.

Although these naturally occurring aids have inspired biomimicry-based aeronautic designs in the past, a collaborative team of researchers from the University of California, Berkeley and MIT Lincoln Laboratory recently investigated whether the same principles could also apply to underwater tools. Their findings, published in a new study in Extreme Mechanics Letters, indicate the designs could be adapted to improve the towed sonar arrays (TSAs) used by ships and submarines.

TSAs are vital for marine vessels engaged in underwater security or exploration projects. But if ships start cruising at decent speeds, the ensuing drag around the equipment can generate extra noise that interferes with sonar capabilities.

[Related: Did sonar finally uncover Amelia Earhart’s missing plane?]

Using computational modeling, the researchers tested various riblet shapes and patterns against simulated water environments. From calm currents to the unpredictable flows more common in the open ocean, the team observed how smooth, triangular, trapezoidal, and scalloped riblets might affect fluid dynamics and acoustics.

Of these variations, the rectangular form showed the most promising results in choppy water, reducing noise by over 14 percent alongside a roughly 5 percent reduction in drag. When the riblets were finer and closer together, drag could be reduced by as much as an additional 25 percent.

These simulations not only showcased potential riblet patterns for sonar casings, but also illuminated new fluid dynamics that underpin noise reduction during turbulent water flows. In a process researchers call “vortex lifting,” flows are elevated and redirected away from the textured surfaces while also lowering their rotational strength.

“This elevation is key to reducing the intense pressure fluctuations that are generated by the interaction between the water flow and the array wall, leading to noise production,” Zixiao Wei, a mechanical engineering graduate student and study first author, said in a recent statement.

The team also noted that adding the animal-inspired textures to TSAs and other underwater vehicles wouldn't just help humans; it could improve habitat conditions for marine wildlife as well. Systems built on riblet patterns could operate more quietly, reducing the chances of artificially disturbing surrounding ecosystems.

That said, it’s one thing to simulate shark skin—actually replicating it has proven extremely difficult. But with additional testing and deployment, Wei believes the new designs will showcase “the vast potential of biomimicry in advancing engineering and technology.”

The post Shark skin and owl feathers could inspire quieter underwater sonar appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Stellarator fusion reactor gets new life thanks to a creative magnet workaround https://www.popsci.com/environment/stellarator-fusion-reactor/ Fri, 05 Apr 2024 15:20:55 +0000 https://www.popsci.com/?p=609632
MUSE stellarator fusion reactor
A photo of MUSE, the first stellarator built at PPPL in 50 years and the first ever to use permanent magnets. Michael Livingston / PPPL Communications Department

Developed over 70 years ago, the stellarator has long been ignored in favor of options like tokamak reactors. It might be time for its 'quasiaxisymmetry' to shine.

The post Stellarator fusion reactor gets new life thanks to a creative magnet workaround appeared first on Popular Science.

]]>
MUSE stellarator fusion reactor
A photo of MUSE, the first stellarator built at PPPL in 50 years and the first ever to use permanent magnets. Michael Livingston / PPPL Communications Department

The quest to harness the holy grail of clean energy is potentially moving a step in the right direction thanks to the same principles behind refrigerator magnets. Earlier this week, researchers at the Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) revealed their new stellarator–a unique fusion reactor that uses off-the-shelf and 3D-printed materials to contain its superheated plasma.

First conceptualized over 70 years ago by PPPL's founder, Lyman Spitzer, a traditional stellarator works by employing electromagnets precisely arranged in complex shapes to generate magnetic fields using electricity. Unlike tokamak reactors, stellarators do not need to run electric current through their plasma to create magnetic forces, a process that can interfere with fusion reactions. That said, tokamaks confine their plasma so effectively that they have long been the preferred reactor choice for researchers, especially when factoring in a stellarator's comparative costs and complexity. Because of all this, Spitzer's design has remained largely unused for decades.

[Related: The world’s largest experimental tokamak nuclear fusion reactor is live.]

Engineers behind the new stellarator, known as MUSE, however, say their workaround could overcome these barriers. Instead of electromagnets, the device uses permanent magnets, albeit ones much more powerful and finely tuned than everyday novelty and souvenir magnets. MUSE requires permanent magnets made with rare-earth metals that can exceed 1.2 tesla, the unit of measurement for magnetic flux density. In comparison, standard ferrite or ceramic permanent magnets usually sit between 0.5 and 1 tesla.

“I realized that even if they were situated alongside other magnets, rare-earth permanent magnets could generate and maintain the magnetic fields necessary to confine the plasma so fusion reactions can occur, and that’s the property that makes this technique work,” Michael Zarnstorff, a PPPL senior research physicist and MUSE principal investigator, said in a statement.

Left: Some of the permanent magnets that make MUSE’s innovative concept possible. Right: A close-up of MUSE’s 3D-printed shell. Credit: Xu Chu / PPPL and Michael Livingston / PPPL Communications Department

Building a stellarator with permanent magnets is a “completely new” approach, PPPL graduate student Tony Qian added. Qian also explained that the stellarator alteration will allow engineers to both test plasma confinement ideas and build new devices far more easily than before.

On top of these promising design alterations, MUSE reportedly manages what's known as "quasisymmetry" better than any previous stellarator, and more specifically, a subtype called "quasiaxisymmetry."

In extremely simplified terms, quasisymmetry means that even though the magnetic field's shape inside a stellarator doesn't match the stellarator's twisted physical shape, the field's overall strength remains uniform, effectively confining the plasma and increasing the chances for fusion reactions. According to Zarnstorff, MUSE pulls off its quasisymmetry "at least 100 times better than any existing stellarator."

From here, the researchers intend to further investigate the nature of MUSE's quasisymmetry while also precisely mapping its magnetic fields, all factors that influence the odds of achieving stable, net-positive fusion reactions.

Whether scientists will make the breakthroughs necessary for green fusion energy anytime soon remains to be seen. But thanks to some creative problem-solving using what are ostensibly very heavy-duty fridge magnets, the long-overlooked stellarator could prove a valuable tool.

The post Stellarator fusion reactor gets new life thanks to a creative magnet workaround appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How cryptographers finally cracked one of the Zodiac Killer’s hardest codes https://www.popsci.com/technology/zodiac-letter-decode/ Thu, 04 Apr 2024 19:32:06 +0000 https://www.popsci.com/?p=609535
Zodiac Killer Z340 coded message
The 'Z340' letter from Zodiac Killer, sent on November 8th 1969. Credit: FBI/Public Domain

A new whitepaper offers a detailed look at how Z340 was decrypted after 51 years.

The post How cryptographers finally cracked one of the Zodiac Killer’s hardest codes appeared first on Popular Science.

]]>
Zodiac Killer Z340 coded message
The 'Z340' letter from Zodiac Killer, sent on November 8th 1969. Credit: FBI/Public Domain

An international team of cryptographers has published a new whitepaper detailing the massive amount of work, crowdsourcing, and computational programming required to translate a notorious serial killer's half-century-old mystery message. Although one of the cryptographers uploaded a video rundown of their methodology to YouTube in 2020, the whitepaper shows in far greater depth just how much effort went into the feat.

Between 1968 and 1969, a man calling himself the Zodiac murdered at least five people in Northern California. During that time, as well as in the years after, the killer mailed a series of letters to local newspapers alongside a total of four ciphers. To this day, authorities have not formally named anyone as the Zodiac Killer, and only two of his cryptograms have been solved.

One of those, however, was long considered the most difficult to parse. First published in newspapers on November 12, 1969, the 340-character cipher (often referred to as “Z340”) baffled amateur and professional cryptographers alike for years. In December 2020, however, an international team announced they believed they finally cracked the Zodiac’s encoded message. A subsequent review by the FBI supported the solution offered by David Oranchak, Sam Blake, and Jarl Van Eycke, putting to rest a 51-year-old enigma.

[Related: Codebreakers have finally deciphered the lost letters of Mary, Queen of Scots.]

“I HOPE YOU ARE HAVING LOTS OF FUN IN TRYING TO CATCH ME,” the Zodiac Killer’s Z340 message begins, before clarifying he did not make the famous A.M. San Francisco television call-in on October 22, 1969.

THAT WASNT ME ON THE TV SHOW 

WHICH BRINGS UP A POINT ABOUT ME 

I AM NOT AFRAID OF THE GAS CHAMBER 

BECAUSE IT WILL SEND ME TO PARADICE ALL THE SOONER 

BECAUSE I NOW HAVE ENOUGH SLAVES TO WORK FOR ME 

WHERE EVERYONE ELSE HAS NOTHING WHEN THEY REACH PARADICE 

SO THEY ARE AFRAID OF DEATH 

I AM NOT AFRAID BECAUSE I KNOW THAT MY NEW LIFE IS

LIFE WILL BE AN EASY ONE IN PARADICE DEATH

Zodiac’s Z340 message, typos included

First spotted this week by 404 Media, the 39-page paper (accompanied by 23 pages of source materials) offers the fascinating and complex history behind Z340. According to the three authors, arriving at their ultimate solution had been preceded by “many years of failed experiments, dead-end ideas, and efforts to summarize what was known about the [Zodiac Killer].”

After countless fruitless attempts, the team felt confident that Z340 included some combination of homophonic substitution (one letter swapped for one or more symbols) and transposition (letters reordered according to a certain systematic logic). Unfortunately, that didn’t exactly narrow down the possibilities. As Discover Magazine explained in a 2021 profile, the cryptographers then faced hundreds of thousands of possible approaches to reading Z340.
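The paper's actual key and transposition scheme are far more elaborate, but the two techniques can be illustrated in miniature. Below is a toy Python sketch with a made-up five-letter key (nothing here reflects the Zodiac's real cipher or AZDecrypt's internals): homophonic substitution flattens letter frequencies by giving common letters several symbols, and columnar transposition then scrambles the symbol order.

```python
import random

# Hypothetical toy key: common letters get more symbols, which
# flattens letter frequencies and defeats simple frequency analysis.
KEY = {
    "E": ["1", "2", "3"],
    "T": ["4", "5"],
    "A": ["6", "7"],
    "O": ["8"],
    "N": ["9"],
}

def homophonic_encipher(plaintext, key, rng):
    """Swap each letter for a randomly chosen symbol from its pool."""
    return [rng.choice(key[ch]) for ch in plaintext if ch in key]

def transpose(symbols, width):
    """Columnar transposition: write row by row, read column by column."""
    rows = [symbols[i:i + width] for i in range(0, len(symbols), width)]
    return [row[c] for c in range(width) for row in rows if c < len(row)]

rng = random.Random(0)
ciphertext = transpose(homophonic_encipher("TEANOTE", KEY, rng), width=3)
print("".join(ciphertext))  # 7 symbols, scrambled in both value and order
```

Even in this tiny example, a solver must guess both the symbol pools and the grid width, which hints at why combining the two techniques multiplied the candidate readings of Z340 into the hundreds of thousands.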


To tackle all those potentials, the team turned to AZDecrypt, a program dedicated to homophonic decryption built by Van Eycke. The mathematical intricacies behind AZDecrypt are intense—but just for reference, the codebreakers say their program can solve up to 200 homophonic substitution ciphers per second with a 99 percent accuracy rate. After augmenting the software a bit to incorporate transposition options, AZDecrypt got to work, and soon yielded its first breakthroughs. Before long, the team finally unraveled Z340.

[Related: This ancient language puzzle was impossible to solve—until a PhD student cracked the code.]

Interestingly, the writers theorize it's entirely possible the Zodiac Killer didn't intend Z340 to be this difficult to decode. Speaking in 2021, Oranchak said he believes the computational power ultimately needed to break Z340 wasn't even available in 1969. The Zodiac's very first cipher, Z408, was decoded just days after being published, so he likely meant to make Z340's enciphering methods harder, but accidentally went too far as "a random unintended result of the encipherment process."

But as they make clear in their whitepaper, it wasn’t just computer software that solved one of the Zodiac Killer’s last mysteries. “The solution of this cipher was the result of a large, multi-decade group effort, and we ultimately stood on the shoulders of many others’ excellent cryptanalytic contributions,” they write.

The post How cryptographers finally cracked one of the Zodiac Killer’s hardest codes appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
It’s on! Three finalists will design a lunar rover for Artemis https://www.popsci.com/science/artemis-moon-rover-finalists/ Thu, 04 Apr 2024 15:06:52 +0000 https://www.popsci.com/?p=609478
NASA Lunar Terrain Vehicle concept art
NASA wants the LTV ready for Artemis V astronauts scheduled to land on the moon in 2030. NASA

The Lunar Terrain Vehicle must be seen in action on the moon before NASA names its winner.

The post It’s on! Three finalists will design a lunar rover for Artemis appeared first on Popular Science.

]]>
NASA Lunar Terrain Vehicle concept art
NASA wants the LTV ready for Artemis V astronauts scheduled to land on the moon in 2030. NASA

NASA has announced the three finalists that will pitch their best moon car ideas, for use on upcoming Artemis lunar missions, by this time next year. During a press conference yesterday afternoon, the agency confirmed Intuitive Machines, Lunar Outpost, and Venturi Astrolab will all spend the next 12 months developing their Lunar Terrain Vehicle (LTV) concepts as part of the "feasibility task order."

According to Vanessa Wyche, director of NASA’s Johnson Space Center in Houston, the final LTV will “greatly increase our astronauts’ ability to explore and conduct science on the lunar surface while also serving as a science platform between crewed missions.”

Intuitive Machines LTV concept art
Credit: Intuitive Machines

While neither Lunar Outpost nor Venturi Astrolab has been to the moon yet, both are planning uncrewed rover missions within the next couple of years. In February, Intuitive Machines became the first privately funded company to successfully land on the lunar surface with its NASA-backed Odysseus spacecraft. Although "Odie" officially returned the US to the moon after a hiatus of more than 50 years, touchdown complications resulted in the craft landing on its side, severely limiting the extent of its mission.

[Related: NASA’s quirky new lunar rover will be the first to cruise the moon’s south pole.]

The last time astronauts zipped around on a moon buggy was back in 1972, during NASA's Apollo 17 mission. The new LTV, like its Apollo predecessor, will only accommodate two people in an unpressurized cockpit, meaning riders are exposed to the harsh lunar environment.

Venturi Astrolab LTV concept next to rocket on moon
Credit: Venturi Astrolab

Once deployed, however, the LTV will differ from the Lunar Roving Vehicle in a few key aspects. Most notably, it won't always need someone at the steering wheel. While astronauts will pilot the LTV during their expeditions, the vehicle will be specifically designed for remote control once the Artemis crew is back home on Earth. In its initial May 2023 proposal call, the agency explained the LTV's uncrewed capabilities will be "similar to NASA's Curiosity and Perseverance Mars rovers." When NASA isn't renting the LTV, the winning company will also be free to contract it out to private ventures in the meantime.

But while a promising lunar rover design is great to see on paper, companies will need to demonstrate their vehicle’s capabilities before NASA makes its final selection—and not just on some desert driving course here on Earth.

Lunar Outpost LTV concept art
Credit: Lunar Outpost

After reviewing the three proposals, NASA will issue a second task order to at least one of the finalists, requesting to see its prototype in action on the moon. That means the company (or companies) will need to plan and execute an independent lunar mission, deliver a working vehicle to the moon, and "validate its performance and safety." Only once that little hurdle is cleared does NASA plan to greenlight one of the rovers.

If everything goes smoothly, NASA’s Artemis V astronauts will use the winning LTV when they arrive near the moon’s south pole in 2030.

The post It’s on! Three finalists will design a lunar rover for Artemis appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Watch this robotic slide whistle quartet belt out Smash Mouth’s ‘All Star’ https://www.popsci.com/technology/slide-whistle-quartet/ Wed, 03 Apr 2024 21:00:00 +0000 https://www.popsci.com/?p=609382
Slide Whistle robot quartet
Somehow, it only took Tim Alex Jacobs two weeks to build. YouTube

Well, the notes start coming and they don't stop coming.

The post Watch this robotic slide whistle quartet belt out Smash Mouth’s ‘All Star’ appeared first on Popular Science.

]]>
Slide Whistle robot quartet
Somehow, it only took Tim Alex Jacobs two weeks to build. YouTube

The slide whistle isn’t known as a particularly difficult instrument to play—there’s a reason they’re usually marketed to children. But designing, programming, and building a robotic slide whistle quartet? That takes a solid background in computer science, a maddening amount of trial-and-error, logistical adjustments to account for “shrinkflation,” and at least two weeks to make it all happen.

That said, if you’re confident in your technical abilities, you too can construct a portable slide-whistle symphony-in-a-box capable of belting out Smash Mouth’s seminal, Billboard-topping masterpiece “All Star.” Fast forward to the 4:47 mark to listen to the tune. 



Despite his initial apology for “crimes against all things musical,” it seems as though Tim Alex Jacobs isn’t feeling too guilty about his ongoing robot slide whistle hobby. Also known online as “mitxela,” Jacobs has documented his DIY musical endeavors on his YouTube channel for years. It appears plans to create MIDI-controlled, automated slide whistle systems have been in the works since at least 2018, but it’s difficult to envision anything much more absurd than Jacob’s latest iteration, which manages to link four separate instruments alongside motorized fans and mechanical controls, all within a latchable carrying case.
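Jacobs' own firmware isn't reproduced here, but the core mapping any MIDI-driven slide whistle needs is easy to sketch: convert a MIDI note number to a frequency, then to a slide position. The sketch below is hypothetical and models the whistle as an idealized stopped pipe; the pipe model, speed-of-sound value, and function names are assumptions for illustration, not Jacobs' actual code.

```python
SPEED_OF_SOUND = 343.0  # m/s at ~20 C; this drifts with temperature,
                        # one of many reasons real calibration is fiddly

def midi_to_freq(note: int) -> float:
    """Equal temperament: A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def slide_position_m(note: int) -> float:
    """Idealized stopped-pipe length for a target pitch (f = c / 4L).
    A real whistle also needs an end correction and a measured
    calibration curve per instrument."""
    return SPEED_OF_SOUND / (4 * midi_to_freq(note))

# A4 wants roughly a 19.5 cm effective tube; an octave up halves it.
for note in (69, 81):
    print(note, round(slide_position_m(note) * 100, 1), "cm")
```

The halving-per-octave relationship is also why small mechanical errors matter more at the short (high-pitched) end of the slide's travel.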

Aside from the overall wonky tones that come from slide whistles in general, Jacobs notes just how difficult it would be to calibrate four of them. What’s more, each whistle’s dedicated fan motor differs slightly from one another, making the resultant pressures unpredictable. To compensate for this, Jacobs drilled holes in the pumps to create intentional air leaks, allowing him to run the motors closer to full power than before without overheating.

[Related: Check out some of the past year’s most innovative musical inventions.]

“If we can run them at a higher power level, then the effects of friction will be less significant,” Jacobs explains. But although this reportedly helped a bit, he admits the results were “far from adequate.” Attaching contact microphones to each slide whistle was also a possibility, but the work involved in calibrating them to properly isolate the whistle tones simply wasn’t worth it.

So what was worth the effort? Well, programming the whistles to play “All Star” in its entirety, of course. The four instruments are in no way tuned to one another, but honestly, it probably wouldn’t be as entertaining if they somehow possessed perfect pitch.

Jacobs appears to have plans for further fine-tuning (so to speak) down the line, but it's unclear whether he'll stick with Smash Mouth or move on to another '90s pop-rock band.

The post Watch this robotic slide whistle quartet belt out Smash Mouth’s ‘All Star’ appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Would you wear this ‘shoe-like vessel’ made from genetically engineered bacteria? https://www.popsci.com/environment/bacteria-cell-shoe/ Wed, 03 Apr 2024 17:16:46 +0000 https://www.popsci.com/?p=609331
Shoe made from bacterial cellulose
The bacterial cellulose is engineered to produce its own dark, leather-like pigment. Imperial College London

Researchers’ new cellulose material could help transition the toxic fashion industry into a greener future.

The post Would you wear this ‘shoe-like vessel’ made from genetically engineered bacteria? appeared first on Popular Science.

]]>
Shoe made from bacterial cellulose
The bacterial cellulose is engineered to produce its own dark, leather-like pigment. Imperial College London

Transitioning toward sustainable clothing practices is a must for combating climate change, so researchers are turning to bacteria for fashion inspiration. As detailed in the journal Nature Biotechnology, a team at Imperial College London has genetically engineered new microbial strains capable of being grown into wearable material that dyes itself in the process. The result is a vegan, plastic-free leather suitable for items such as wallets and shoes, although perhaps not the most fashionable-looking shoes just yet.

As much as 200 million liters of water are consumed by the global textile industry every year, and 85 percent of all used clothing in the US winds up in landfills. Meanwhile, the particulates shed from washing polyester and other polymer-based fabrics already make up between 20 and 35 percent of the oceans' microplastics. Then there are all the pesticides used in industrial cotton farming. And when it comes to animal leather production, the statistics are arguably just as bad. From an ecological standpoint, it costs a lot to dress fashionably.

Sustainable, microbial-based textile alternatives have increasingly shown promise for greener manufacturing, especially those utilizing bacterial cellulose.

[Related: A new color-changing, shape-shifting fabric responds to heat and electricity.]

“Bacterial cellulose is inherently vegan, and its growth requires a tiny fraction of the carbon emissions, water, land use and time of farming cows for leather,” Tom Ellis, a bioengineering professor at Imperial College London and study lead author, said in a statement on Wednesday. “Unlike plastic-based leather alternatives, bacterial cellulose can also be made without petrochemicals, and will biodegrade safely and non-toxically in the environment.”

Unfortunately, synthetically dyeing products like vegan leather remains one of the most toxic stages within the fashion industry. By combining the manufacturing and dyeing processes, researchers believe they can create even more environmentally friendly wearables.

To harness both capabilities, Ellis and his colleagues genetically modified bacteria commonly used to make microbial cellulose so that they self-produce a black pigment known as eumelanin. Over a two-week period, the team then allowed the new material to grow over a "bespoke, shoe-shaped vessel." Once complete, the leather-like cellulose was loaded into a machine that gently shook it for about 48 hours at roughly 86 degrees Fahrenheit, stimulating the bacteria to darken the material from the inside out. Finally, the material was attached to a pre-made sole to reveal... well, if not a "shoe," then certainly a "shoe-shaped vessel." Beauty is in the eye of the beholder, of course. But if the bulbous clogs aren't your style, maybe the team's other example, a simple bifold wallet, makes more sense for your daily outfit.

Wallet made from bacterial cellulose
Credit: Imperial College London

In their study, the team notes they still want to cut the cellulose's water consumption even further, as well as engineer their bacterial cellulose to allow for additional colors, materials, and even patterns.

The post Would you wear this ‘shoe-like vessel’ made from genetically engineered bacteria? appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
NASA is designing a time zone just for the moon https://www.popsci.com/science/coordinated-lunar-time/ Wed, 03 Apr 2024 14:57:29 +0000 https://www.popsci.com/?p=609290
Buzz Aldrin on the moon next to American flag.
The White House has instructed the agency to begin looking into Coordinated Lunar Time ahead of our return to the moon—something Buzz Aldrin never had. NASA

Timekeeping works differently up there.

The post NASA is designing a time zone just for the moon appeared first on Popular Science.

]]>
Buzz Aldrin on the moon next to American flag.
The White House has instructed the agency to begin looking into Coordinated Lunar Time ahead of our return to the moon—something Buzz Aldrin never had. NASA

What time is it on the moon?

Well, right now, that's somewhat a matter of interpretation. But humanity is going to need to get a lot more specific if it intends to permanently set up shop there. Ahead of the upcoming Artemis missions, NASA is getting its clocks aligned. On Tuesday, the White House issued a memo directing the agency to establish a Coordinated Lunar Time (LTC), which will help guide humanity's potentially permanent presence on the moon. Like the internationally recognized UTC standard, LTC will lack time zones, as well as daylight saving time.

It's not quite a time zone like those on Earth, but rather an entire reference frame for timekeeping on the moon.

As Einstein famously noted, time is very much relative. Most timekeeping on Earth is tied to Coordinated Universal Time (UTC), which relies on an international array of atomic clocks designed to determine the most precise time possible. This works just fine in relation to our planet’s gravitational forces, but thanks to physics, things are observed differently elsewhere in space, including on the moon.

“Due to general and special relativity, the length of a second defined on Earth will appear distorted to an observer under different gravitational conditions, or to an observer moving at a high relative velocity,” Arati Prabhakar, Assistant to the President for Science and Technology and Director of the Office of Science and Technology Policy (OSTP), explained in yesterday’s official memorandum. 

Because of this, an Earth-based clock observed by a lunar astronaut would appear to lose an average of 58.7 microseconds per Earth day, on top of various other periodic variations. That might not seem like much, but it would pose major issues for future lunar spacecraft and satellites that require extremely precise timekeeping, synchronization, and logistics.
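A back-of-the-envelope calculation shows how that tiny daily offset compounds. This sketch simply scales the memo's 58.7-microsecond figure over longer intervals; it is arithmetic on the quoted average, not a relativity calculation:

```python
SECONDS_PER_DAY = 86_400

# The memo's figure: an Earth clock appears to lose 58.7 microseconds
# per Earth day as seen from the moon (treated here as a flat average).
drift_per_day_s = 58.7e-6

fractional_rate = drift_per_day_s / SECONDS_PER_DAY     # dimensionless
drift_per_year_ms = drift_per_day_s * 365.25 * 1e3      # milliseconds

print(f"fractional rate offset: {fractional_rate:.2e}")      # ~6.8e-10
print(f"accumulated over a year: {drift_per_year_ms:.1f} ms")  # ~21.4
```

A fractional offset near 7 parts in 10 billion sounds negligible, but navigation signals travel about 30 centimeters per nanosecond, so even microsecond-level disagreement between clocks translates into hundreds of meters of positioning error.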

[Related: How to photograph the eclipse, according to NASA.]

“A consistent definition of time among operators in space is critical to successful space situational awareness capabilities, navigation, and communications, all of which are foundational to enable interoperability across the U.S. government and with international partners,” Steve Welby, OSTP Deputy Director for National Security, said in Tuesday’s announcement.

NASA's new task is about more than just literal timing; it's symbolic, as well. Although the US aims to send the first humans back to the lunar surface since the 1970s, it isn't alone in that goal. As Reuters noted yesterday, China wants to put astronauts on the moon by 2030, while both Japan and India have successfully landed uncrewed spacecraft there in the past year. In moving to establish an international LTC, the US is making its lunar leadership ambitions known to everyone.

[Related: Why do all these countries want to go to the moon right now?]

But it's going to take a lot of global discussion, and, yes, time, to solidify all the calculations needed to make LTC happen. In its memo, the White House acknowledged that putting Coordinated Lunar Time into practice will require international agreements made with the help of "existing [timekeeping] standards bodies," such as the International Telecommunication Union, a United Nations agency. Officials will also need to discuss matters with the 35 other countries that signed the Artemis Accords, a pact concerning international relations in space and on the moon. Things could also get tricky, given that Russia and China never agreed to those accords.

“Think of the atomic clocks at the US Naval Observatory. They’re the heartbeat of the nation, synchronizing everything,” Kevin Coggins, NASA’s space communications and navigation chief, told Reuters on Tuesday. “You’re going to want a heartbeat on the moon.”

NASA has until the end of 2026 to deliver its standardization plan to the White House. If all goes according to plan, there might be actual heartbeats on the moon by that point—the Artemis III crewed lunar mission is scheduled to launch “no earlier than September 2026.”

The post NASA is designing a time zone just for the moon appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A 3,200-megapixel digital camera is ready for its cosmic photoshoot https://www.popsci.com/science/largest-digital-camera/ Wed, 03 Apr 2024 13:00:00 +0000 https://www.popsci.com/?p=609139
LSST Camera Deputy Project Manager Travis Lange shines a flashlight into the LSST Camera.
The LSST Camera took two decades to build, and will embark on a 10-year-long cosmic imaging project. Credit: Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory

The Legacy Survey of Space and Time (LSST) Camera is the size of a small car—and the biggest digital camera ever built for astronomy.

The post A 3,200-megapixel digital camera is ready for its cosmic photoshoot appeared first on Popular Science.

]]>
LSST Camera Deputy Project Manager Travis Lange shines a flashlight into the LSST Camera.
The LSST Camera took two decades to build, and will embark on a 10-year-long cosmic imaging project. Credit: Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory

The world’s largest digital camera is officially ready to begin filming “the greatest movie of all time,” according to its makers. This morning, engineers and scientists at the Department of Energy’s SLAC National Accelerator Laboratory announced the completion of the Legacy Survey of Space and Time (LSST) Camera, a roughly 6,610-pound, car-sized tool designed to capture new information about the nature of dark matter and dark energy.

Following a two-decade construction process, the 3,200-megapixel LSST Camera will now travel to the Vera C. Rubin Observatory, located 8,900 feet up atop Chile's Cerro Pachón. Once attached to the facility's Simonyi Survey Telescope later this year, its five-foot and three-foot-wide lenses will aim skyward for a 10-year survey of the solar system, the Milky Way galaxy, and beyond.

Just how much detail can you get from a focal plane leveled to within a tenth of the width of a human hair, packed with 10-micron-wide pixels? Aaron Roodman, SLAC professor and Rubin Observatory Deputy Director and Camera Program Lead, likens its ability to capturing the details of a golf ball from 15 miles away "while covering a swath of the sky seven times wider than the full moon." The resultant images will include billions of stars and galaxies, and with them, new insights into the universe's structure.
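Those figures can be roughly sanity-checked with a little geometry. In the sketch below, the golf ball diameter and the Simonyi telescope's roughly 10.3-meter focal length are assumed values that do not appear in the article:

```python
import math

RAD_TO_ARCSEC = 180 / math.pi * 3600   # radians to arcseconds

# Assumed values (not from the article): a regulation golf ball is
# about 42.7 mm across; the telescope focal length is taken as ~10.3 m.
golf_ball_m = 0.0427
distance_m = 15 * 1609.34              # 15 miles in meters

angle_arcsec = golf_ball_m / distance_m * RAD_TO_ARCSEC
pixel_scale = 10e-6 / 10.3 * RAD_TO_ARCSEC   # one 10-micron pixel on sky

# 3.2 gigapixels of 10-micron pixels is roughly a third of a square
# meter of silicon on the focal plane.
focal_plane_m2 = 3.2e9 * (10e-6) ** 2

print(f"golf ball at 15 miles: ~{angle_arcsec:.2f} arcsec")  # ~0.36
print(f"one pixel on the sky:  ~{pixel_scale:.2f} arcsec")   # ~0.20
```

Under these assumptions, the golf ball would span almost two pixels, which is consistent with the claim that the camera can just resolve it at that distance.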

[Related: JWST takes a jab at the mystery of the universe’s expansion rate.]

Among its many duties, the LSST Camera will search for evidence of weak gravitational lensing, which occurs when a gigantic galaxy’s gravitational mass bends light pathways from the galaxies behind it. Analyzing this data can offer researchers a better look at how mass is distributed throughout the universe, as well as how that distribution changed over time. In turn, this could help provide astronomers new ways to explore how dark energy influences the universe’s expansion.

Illustration breakdown of LSST Camera components
An artist’s rendering of the LSST Camera showing its major components including lenses, sensor array, and utility trunk. Credit: Chris Smith/SLAC National Accelerator Laboratory

To achieve these impressive goals, the LSST Camera needed to be much more than a scaled-up point-and-shoot digital camera. While lenses like those in your smartphone often lack physical shutters, SLR cameras still usually include them. Even so, their shutter speeds aren’t nearly as slow as the LSST Camera’s.

“The [LSST] sensors are read out much more slowly and deliberately… ” Andy Rasmussen, SLAC staff physicist and LSST Camera Integration and Testing Scientist, tells PopSci. “… the shutter is open for 15 seconds (for the exposure) followed by 2 seconds to read (with shutter closed).” This snail’s pace keeps readout noise low—only around 6 or 7 electrons—allowing the camera to capture much darker skies.

“We need quiet sensors so that we can tell that the dark sky is actually dark and also so that we can measure very dim objects in the sky,” Rasmussen continues. “During this 2 second readout period, we need to block any more light from entering the Camera, so that’s why we have a shutter (one of several mechanisms inside the Camera).”
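
Taken together, Rasmussen’s numbers imply the camera spends most of every cycle actually gathering light. A minimal sketch of that arithmetic, using only the 15-second exposure and 2-second readout quoted above:

```python
exposure_s = 15   # shutter open, collecting light
readout_s = 2     # shutter closed, sensors being read out

cycle_s = exposure_s + readout_s
duty_cycle = exposure_s / cycle_s

print(f"{duty_cycle:.1%} of each {cycle_s}-second cycle spent exposing")
# 88.2% of each 17-second cycle spent exposing
```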

To further ensure operators can capture the measurements of dim objects, they also slow atomic activity near the LSST Camera’s focal plane by lowering surrounding temperatures to as low as -100 C (about 173 Kelvin).
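
The Celsius and Kelvin figures above are the same temperature on two scales; converting between them is a single fixed offset:

```python
def celsius_to_kelvin(celsius: float) -> float:
    """Kelvin is the Celsius value shifted by 273.15."""
    return celsius + 273.15

print(f"{celsius_to_kelvin(-100):.2f} K")  # 173.15 K, the ~173 Kelvin cited above
```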

Beyond dark matter and dark energy research, cosmologists intend to use the LSST Camera to conduct a new, detailed census of the solar system. Researchers estimate new imagery could increase the number of known objects by a factor of 10, and thus provide additional insight into how the solar system formed, as well as keep track of any errant asteroids that may speed by Earth a little too close for comfort.

“More than ever before, expanding our understanding of fundamental physics requires looking farther out into the universe,” Kathy Turner, the Department of Energy’s Cosmic Frontier Program manager, said in today’s announcement. With LSST Camera’s installation, Turner believes researchers will be on the path to “answer some of the hardest, most important questions in physics today.”

The post A 3,200-megapixel digital camera is ready for its cosmic photoshoot appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Melting ice makes Arctic a target for a new deep sea internet cable https://www.popsci.com/technology/arctic-cable-project/ Tue, 02 Apr 2024 20:30:00 +0000 https://www.popsci.com/?p=609190
Arctic ice flow
The 9,000-mile deep sea fiber optic cable could be completed by the end of 2026. Deposit Photos

The Far North Fiber project would connect Europe to Japan, but is only possible because of climate change.

The post Melting ice makes Arctic a target for a new deep sea internet cable appeared first on Popular Science.

]]>
Arctic ice flow
The 9,000-mile deep sea fiber optic cable could be completed by the end of 2026. Deposit Photos

Each day an estimated 95 percent of the world’s data travels across the roughly 900,000 miles of submarine fiber optic cables criss-crossing the ocean floor. Modern life as we know it—from internet communications to video calls to streaming services—would look significantly different without this massive infrastructure. To keep up with the world’s insatiable data needs, construction could soon begin on a new cable located within a once-inaccessible environment.

Politico reports that a consortium of companies intends to move forward with the Far North Fiber project—a deep sea cable that would stretch over 9,000 miles through the Northwest Passage, connecting Europe to Japan, with additional landing sites in Alaska, Canada, Norway, Finland, and Ireland. Ironically, the potential endeavor is only possible due to one of the most pressing threats facing humanity.

As our digital lives travel along these submarine cables, they devour gigantic amounts of energy and further exacerbate climate change. The Arctic, for example, is currently warming almost four times faster than the rest of the planet, causing its sea ice to shrink by roughly 13 percent per decade. According to one Far North Fiber developer, however, all that terrifying environmental decimation creates a new business opportunity.

[Related: A 10-million-pound undersea cable just broke an internet speed record.]

The Arctic’s previously unthinkable thaws will present a “sweet spot where it’s now accessible and allows us a time window when we can get the cable safely installed,” Ik Icard, chief strategy officer at Far North Digital, told Politico.

Far North Fiber’s backers claim that, once constructed, their cable would also be better protected compared to similar lines elsewhere in the world. An estimated 100 to 150 lines are damaged every year globally, be it from accidental encounters with boat anchors and fishing equipment, or due to intentional sabotage.

The threat of sabotage is an increasing concern to the telecom companies overseeing deep sea cable systems. More than 90 percent of all Europe-Asia data traffic travels along cables within the Red Sea trading corridor. Thanks to a recent increase in the region’s geopolitical unrest and violence, cable lines face greater risk of damage. Just last month, three such lines were cut during ongoing Houthi rebel attacks on nearby shipping vessels.

Company representatives believe establishing a new route through the Northwest Passage could avoid similar issues in the future—at an estimated cost of €1 billion ($1.08 billion). That’s about four times the cost of laying a cable across the Atlantic Ocean, and around three times as much to do so in the Pacific. But despite the steep price tag, the European Union has signaled its interest with a €23 million investment in Far North Fiber. The project’s developers also hope to convince the US and Canada to get involved.

“Nobody wants to cut a cable under the ice, it’s really hard to do,” Far North Digital co-founder Ethan Berkowitz said.

A study published in Nature Reviews Earth & Environment estimates the Arctic could experience seasonally ice-free waters as soon as 2035—less than a decade removed from Far North Fiber’s proposed 2026 launch date.

The post Melting ice makes Arctic a target for a new deep sea internet cable appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Spider conversations decoded with the help of machine learning and contact microphones https://www.popsci.com/technology/wolf-spider-vibration-research/ Tue, 02 Apr 2024 14:51:17 +0000 https://www.popsci.com/?p=609092
Close up of wolf spider resting on web
Spiders communicate using complex movement and vibration patterns. Deposit Photos

A new approach to monitoring arachnid behavior could help understand their social dynamics, as well as their habitat’s health.

The post Spider conversations decoded with the help of machine learning and contact microphones appeared first on Popular Science.

]]>
Close up of wolf spider resting on web
Spiders communicate using complex movement and vibration patterns. Deposit Photos

Arachnids are born dancers. After millions of years of evolution, many species rely on fancy footwork to communicate everything from courtship rituals, to territorial disputes, to hunting strategies. Researchers usually observe these movements in lab settings using what are known as laser vibrometers. After aiming the tool’s light beam at a target, the vibrometer uses the Doppler shift of the reflected light to measure minuscule vibration frequencies and amplitudes. Unfortunately, such systems’ cost and sensitivity often limit their field deployment.

To find a solution to this long-standing problem, a University of Nebraska-Lincoln PhD student recently combined an array of tiny, cheap contact microphones with a sound-processing machine learning program. Then, once packed up, he headed into the forests of north Mississippi to test out his new system.

Noori Choi’s results, recently published in Communications Biology, highlight a never-before-seen approach to collecting spiders’ extremely hard-to-detect movements across woodland substrates. Choi spent two sweltering summer months placing 25 microphones and pitfall traps across 1,000-square-foot sections of forest floor, then waited for the local wildlife to make its vibratory moves. In the end, Choi left the Magnolia State with 39,000 hours of data including over 17,000 series of vibrations.

[Related: Meet the first electric blue tarantula known to science.]

Not all those murmurings were the wolf spiders Choi wanted, of course. Forests are loud places filled with active insects, chatty birds, rustling tree branches, as well as the invasive sounds of human life like overhead plane engines. These sound waves are also absorbed into the ground as vibrations, and needed to be sifted out to isolate the arachnid signals Choi was after.

“The vibroscape is a busier signaling space than we expected, because it includes both airborne and substrate-borne vibrations,” Choi said in a recent university profile.

In the past, this analysis process was a frustratingly tedious, manual endeavor that could severely limit research and dataset scopes. But instead of poring over roughly 1,625 days’ worth of recordings, Choi designed a machine learning program capable of filtering out unwanted sounds while isolating the vibrations from three separate wolf spider species: Schizocosa stridulans, S. uetzi, and S. duplex.
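
The 1,625-day figure is simply the raw recording total converted from hours, as a one-liner confirms:

```python
hours_recorded = 39_000            # total audio collected across the 25 microphones
days_equivalent = hours_recorded / 24

print(f"{days_equivalent:,.0f} days of recordings")  # 1,625 days
```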

Further analysis yielded fascinating new insights into arachnid behaviors, particularly an overlap of acoustic frequency, time, and signaling space between the S. stridulans and S. uetzi sibling species. Choi determined that both wolf spider variants usually restricted their signaling to times when they were atop leaf litter, not pine debris. According to Choi, this implies that real estate is at a premium for the spiders.

“[They] may have limited options to choose from, because if they choose to signal in different places, on different substrates, they may just disrupt the whole communication and not achieve their goal, like attracting mates,” Choi, now a postdoctoral researcher at Germany’s Max Planck Institute of Animal Behavior, said on Monday.

What’s more, S. stridulans and S. uetzi appear to adapt their communication methods depending on how crowded they are at any given time, and who is crowding them. S. stridulans, for example, tended to lengthen their vibration-intense courtship dances when they detected nearby, same-species males. When they sensed nearby S. uetzi, however, they often varied their movements slightly to differentiate themselves from the other species, thus reducing potential courtship confusion.

In addition to opening up entirely new methods of observing arachnid behavior, Choi’s combination of contact microphones and machine learning analysis could also help others one day monitor an ecosystem’s overall health by keeping an ear on spider populations.

“Even though everyone agrees that arthropods are very important for ecosystem functioning… if they collapse, the whole community can collapse,” Choi said. “Nobody knows how to monitor changes in arthropods.”

Now, however, Choi’s new methodology could serve as a non-invasive, accurate, and highly effective way to stay atop spiders’ daily movements.

The post Spider conversations decoded with the help of machine learning and contact microphones appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This cap is a big step towards universal, noninvasive brain-computer interfaces https://www.popsci.com/technology/bci-wearable-cap/ Mon, 01 Apr 2024 18:48:27 +0000 https://www.popsci.com/?p=608932
Users wearing BCI cap to play video game
Machine learning programming enables a much more universal training process for wearers. University of Texas at Austin

Users controlled a car racing video game with the device, no surgery needed.

The post This cap is a big step towards universal, noninvasive brain-computer interfaces appeared first on Popular Science.

]]>
Users wearing BCI cap to play video game
Machine learning programming enables a much more universal training process for wearers. University of Texas at Austin

Multiple brain-computer interface (BCI) devices now allow users to do everything from controlling computer cursors, to translating neural activity into words, to converting handwriting into text. While one of the latest BCI examples accomplishes very similar tasks, it does so without the need for time-consuming, personalized calibration or high-stakes neurosurgery.

As recently detailed in a study published in PNAS Nexus, University of Texas at Austin researchers have developed a wearable cap that allows a user to accomplish complex computer tasks by interpreting brain activity into actionable commands. But instead of needing to tailor each device to a specific user’s neural activity, an accompanying machine learning program offers a new, “one-size-fits-all” approach that dramatically reduces training time.

“Training a BCI subject customarily starts with an offline calibration session to collect data to build an individual decoder,” the team explains in their paper’s abstract. “Apart from being time-consuming, this initial decoder might be inefficient as subjects do not receive feedback that helps them to elicit proper [sensorimotor rhythms] during calibration.”

To solve this, researchers developed a new machine learning program that identifies an individual’s specific needs and adjusts its repetition-based training as needed. Because of this interoperable self-calibration, trainees don’t need the research team’s guidance, or complex medical procedures to install an implant.

[Related: Neuralink shows first human patient using brain implant to play online chess.]

“When we think about this in a clinical setting, this technology will make it so we won’t need a specialized team to do this calibration process, which is long and tedious,” Satyam Kumar, a graduate student involved in the project, said in a recent statement. “It will be much faster to move from patient to patient.”

To prepare, all a user needs to do is don one of the extremely red, electrode-dotted devices resembling a swimmer’s cap. From there, the electrodes gather and transmit neural activity to the research team’s newly created decoding software during training. Thanks to the program’s machine learning capabilities, developers avoided the time-intensive, personalized training usually required for other BCI tech to calibrate for each individual user.

Over a five-day period, 18 test subjects effectively learned to mentally envision playing both a car racing game and a simpler bar-balancing program using the new training method. The decoder was so effective, in fact, that wearers could train on both the bar and racing games simultaneously, instead of one at a time. At the annual South by Southwest Conference last month, the UT Austin team took things a step further. During a demonstration, volunteers put on the wearable BCI, then learned to control a pair of hand and arm rehabilitation robots within just a few minutes.

So far, the team has only tested their BCI cap on subjects without motor impairments, but they plan to expand their decoder’s abilities to encompass users with disabilities.

“On the one hand, we want to translate the BCI to the clinical realm to help people with disabilities,” said José del R. Millán, study co-author and UT professor of electrical and computer engineering. “On the other, we need to improve our technology to make it easier to use so that the impact for these people with disabilities is stronger.” Millán’s team is also working to incorporate similar BCI technology into a wheelchair.

The post This cap is a big step towards universal, noninvasive brain-computer interfaces appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Gmail debuted on April Fool’s Day 20 years ago. The joke is still on us. https://www.popsci.com/technology/gmail-20-year-anniversary/ Mon, 01 Apr 2024 15:29:33 +0000 https://www.popsci.com/?p=608872
Close-up of Gmail homepage on a monitor screen.
Gmail's features were so impressive at the time that many people thought it was an April Fool's prank. Deposit Photos

Google's new email service offered astounding features—at a cost.

The post Gmail debuted on April Fool’s Day 20 years ago. The joke is still on us. appeared first on Popular Science.

]]>
Close-up of Gmail homepage on a monitor screen.
Gmail's features were so impressive at the time that many people thought it was an April Fool's prank. Deposit Photos

A completely free email service offering 1 GB of storage, integrated search capabilities, and automatic message threading? Too good to be true.

At least, that’s what many people thought 20 years ago today, when Google announced Gmail’s debut. To be fair, it’s easy to see why some AP News readers wrote letters claiming the outlet’s reporters had unwittingly fallen for Google’s latest April Fool’s Day prank. Given the state of email in 2004, the prospect of roughly 250-500 times greater storage capability than the likes of Yahoo! Mail and Hotmail sounded far-fetched enough—offering all that for free felt absurd.  But there was something else even more absurd than Gmail’s technological capabilities.
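
That “250-500 times” comparison tracks with the mailbox quotas commonly cited for 2004—about 2 MB for Hotmail and 4 MB for Yahoo! Mail. Treat those rival quotas as approximations; the ratio arithmetic is simply:

```python
gmail_mb = 1024                                      # Gmail's launch offer: 1 GB
rival_quotas_mb = {"Hotmail": 2, "Yahoo! Mail": 4}   # approximate 2004 quotas (assumption)

for service, quota in rival_quotas_mb.items():
    print(f"{gmail_mb // quota}x the storage of {service}")
# 512x the storage of Hotmail
# 256x the storage of Yahoo! Mail
```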

It’s hard to imagine now, but there was a time when forking over all your data to a private company in exchange for its product wasn’t the default practice. Gmail marked a major shift in strategy (and ethics) for Google—in order to take advantage of all those free, novel webmail features, new users first consented to letting the company vacuum up all their communications and associated data. This lucrative information would then be utilized to offer personalized advertising alongside sponsored ads embedded in the margins of Gmail’s browser.

“Depending on your take, Gmail is either too good to be true, or it’s the height of corporate arrogance, especially coming from a company whose house motto is ‘Don’t Be Evil,’” Slate tech journalist Paul Boutin wrote on April 15, 2004.

The stipulations buried within Gmail’s terms of use quickly earned the ire of watchdogs. Within a week of its announcement (and subsequent confirmation that it wasn’t an April Fool’s prank), tech critics and privacy advocates published a co-signed open letter to Google’s co-founders, Sergey Brin and Larry Page, urging them to reconsider Gmail’s underlying principles.

“Scanning personal communications in the way Google is proposing is letting the proverbial genie out of the bottle,” they cautioned. “Today, Google wants to make a profit from selling ads. But tomorrow, another company may have completely different ideas about how to use such an infrastructure and the data it captures.”

But the worries didn’t faze Google. Gmail’s features were truly unheard-of for the time, and a yearslong, invite-only rollout continued to build hype while establishing it as an ultra-exclusive service. The buzz was so strong that some people shelled out as much as $250 on eBay for invite codes.

As Engadget noted earlier today, Google would continue its ad-centric email scans for more than a decade. Gmail opened to the general public on Valentine’s Day, 2007; by 2012, its over 425 million active users officially made it the world’s most popular email service–and one of the most desirable online data vaults.

It would take another five years before Google finally acquiesced to intensified criticism, agreeing to end its ad-based email scanning tactics in 2017. By then, however, the damage was done—trading “free” services for personal data is basically the norm for Big Tech companies like Meta and Amazon. Not only that, but Google still manages to find plenty of ways to harvest data across its many other services—including allowing third-party app developers to pony up for peeks into Gmail inboxes. And with 1.5 billion active accounts these days, that’s a lot of very profitable information to possess.

In the meantime, Google’s ongoing push to shove AI into its product suite has opened an entirely new chapter in its long-running online privacy debate—one that began two decades ago with Gmail’s reveal. Although it debuted on April 1, 2004, Gmail’s joke is still on us all these years later.

The post Gmail debuted on April Fool’s Day 20 years ago. The joke is still on us. appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Researchers unlock fiber optic connection 1.2 million times faster than broadband https://www.popsci.com/technology/fiber-optic-wavelength-record/ Fri, 29 Mar 2024 20:35:04 +0000 https://www.popsci.com/?p=608782
Dr Ian Phillips with the wavelength management device
Dr. Ian Phillips with the wavelength management device. Aston University

Using an optical processor to operate in the E- and S-band ranges, UK researchers hit a transfer rate of 301 terabits per second.

The post Researchers unlock fiber optic connection 1.2 million times faster than broadband appeared first on Popular Science.

]]>
Dr Ian Phillips with the wavelength management device
Dr. Ian Phillips with the wavelength management device. Aston University

In the average American house, any download rate above roughly 242 Mbps is considered a solidly speedy broadband internet connection. That’s pretty decent, but across the Atlantic, researchers at the UK’s Aston University recently managed to coax about 1.2 million times that rate using a single fiber optic cable—a new record for specific wavelength bands.

As spotted earlier today by Gizmodo, the international team achieved a data transfer rate of 301 terabits, or 301,000,000 megabits per second, by accessing new wavelength bands normally unreachable in existing optical fibers—the hair-thin glass strands that carry data through beams of light. According to Aston University’s recent profile, you can think of these different wavelength bands as different colors of light shooting through a (largely) standard cable.
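
The “1.2 million times” claim in the lede follows directly from the two rates quoted here:

```python
record_bps = 301e12      # 301 terabits per second (new record)
broadband_bps = 242e6    # ~242 megabits per second typical US download rate

speedup = record_bps / broadband_bps
print(f"{speedup:,.0f}x faster")  # 1,243,802x -- roughly 1.2 million times
```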

[Related: No, ‘10G internet’ is not a thing.]

Commercially available fiber cabling utilizes what are known as C- and L-bands to transmit data. By constructing a device called an optical processor, however, researchers could access the never-before-used E- and S-bands.

“Over the last few years Aston University has been developing optical amplifiers that operate in the E-band, which sits adjacent to the C-band in the electromagnetic spectrum but is about three times wider,” Ian Phillips, the optical processor’s creator, said in a statement. “Before the development of our device, no one had been able to properly emulate the E-band channels in a controlled way.”

But in terms of new tech, the processor was basically it for the team’s experiment. “Broadly speaking, data was sent via an optical fiber like a home or office internet connection,” Phillips added. 

What’s particularly impressive and promising about the team’s achievement is that they didn’t need new, high-tech fiber optic lines to reach such blindingly fast speeds. Most existing optical cables have always technically been capable of reaching E- and S-bands, but lacked the equipment infrastructure to do so. With further refinement and scaling, internet providers could ramp up standard speeds without overhauling current fiber optic infrastructures.

[Related: An inside look at how fiber optic glass is made.]

“[It] makes greater use of the existing deployed fiber network, increasing its capacity to carry data and prolonging its useful life & commercial value,” said Wladek Forysiak, a professor at the Aston Institute of Photonic Technologies. In doing so, Forysiak believes, their approach may also offer a much greener answer to the world’s rapidly increasing data demands.

The post Researchers unlock fiber optic connection 1.2 million times faster than broadband appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A robot named ‘Emo’ can out-smile you by 840 milliseconds https://www.popsci.com/technology/emo-smile-robot-head/ Fri, 29 Mar 2024 14:00:00 +0000 https://www.popsci.com/?p=608662
Yuhang Hu working on Emo robot head
Emo contains 26 actuators to help mimic human smiles. John Abbott/Columbia Engineering

The bot's head and face are designed to simulate facial interactions in conversation with humans.

The post A robot named ‘Emo’ can out-smile you by 840 milliseconds appeared first on Popular Science.

]]>
Yuhang Hu working on Emo robot head
Emo contains 26 actuators to help mimic human smiles. John Abbott/Columbia Engineering

If you want your humanoid robot to realistically simulate facial expressions, it’s all about timing. And for the past five years, engineers at Columbia University’s Creative Machines Lab have been honing their robot’s reflexes down to the millisecond. Their results, detailed in a new study published in Science Robotics, are now available to see for yourself.

Meet Emo, the robot head capable of anticipating and mirroring human facial expressions, including smiles, within 840 milliseconds. But whether or not you’ll be left smiling at the end of the demonstration video remains to be seen.

AI is getting pretty good at mimicking human conversations—heavy emphasis on “mimicking.” But when it comes to visibly approximating emotions, their physical robot counterparts still have a lot of catching up to do. A machine misjudging when to smile isn’t just awkward–it draws attention to its artificiality.

Human brains, in comparison, are incredibly adept at interpreting huge amounts of visual cues in real-time, and then responding accordingly with various facial movements. Apart from making it extremely difficult to teach AI-powered robots the nuances of expression, it’s also hard to build a mechanical face capable of realistic muscle movements that don’t veer into the uncanny.

[Related: Please think twice before letting AI scan your penis for STIs.]

Emo’s creators attempt to solve some of these issues, or at the very least, help narrow the gap between human and robot expressivity. To construct their new bot, a team led by AI and robotics expert Hod Lipson first designed a realistic robotic human head that includes 26 separate actuators to enable tiny facial expression features. Each of Emo’s pupils also contained high-resolution cameras to follow the eyes of its human conversation partner—another important, nonverbal visual cue for people. Finally, Lipson’s team layered a silicone “skin” over Emo’s mechanical parts to make it all a little less… you know, creepy.

From there, researchers built two separate AI models to work in tandem—one to predict human expressions through a target face’s minuscule expressions, and another to quickly issue motor responses for a robot face. Using sample videos of human facial expressions, Emo’s AI then learned emotional intricacies frame-by-frame. Within just a few hours, Emo was capable of observing, interpreting, and responding to the little facial shifts people tend to make as they begin to smile. What’s more, it can now do so within about 840 milliseconds.

“I think predicting human facial expressions accurately is a revolution in [human-robot interactions],” Yuhang Hu, Columbia Engineering PhD student and study lead author, said earlier this week. “Traditionally, robots have not been designed to consider humans’ expressions during interactions. Now, the robot can integrate human facial expressions as feedback.”

Right now, Emo lacks any verbal interpretation skills, so it can only interact by analyzing human facial expressions. Lipson, Hu, and the rest of their collaborators hope to soon combine the physical abilities with a large language model system such as ChatGPT. If they can accomplish this, then Emo will be even closer to natural(ish) human interactions. Of course, there’s a lot more to relatability than smiles, smirks, and grins, which the scientists appear to be focusing on. (“The mimicking of expressions such as pouting or frowning should be approached with caution because these could potentially be misconstrued as mockery or convey unintended sentiments.”) However, at some point, the future robot overlords may need to know what to do with our grimaces and scowls.

The post A robot named ‘Emo’ can out-smile you by 840 milliseconds appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Please think twice before letting AI scan your penis for STIs https://www.popsci.com/health/calmara-ai-sti/ Thu, 28 Mar 2024 18:45:00 +0000 https://www.popsci.com/?p=608402
person taking photos of themselves in the dark
Calmara offers a QR code taking you to its AI photo scanner. DepositPhotos

Awkward Gen Z buzzwords, troubling tech, and outdated sex ed: Calmara is not your 'intimacy bestie.'

The post Please think twice before letting AI scan your penis for STIs appeared first on Popular Science.

]]>
person taking photos of themselves in the dark
Calmara offers a QR code taking you to its AI photo scanner. DepositPhotos

A website promising its AI service can accurately scan pictures of penises for signs of sexually transmitted infections is earning the ire of healthcare advocates and digital privacy experts, among many other critics. But while the internet (and Jimmy Fallon) have taken the makers of Calmara to task over the past week, it actually took two years to get here.

Where did the AI ‘intimacy bestie’ come from?

Back in 2022, the company HeHealth debuted as an online way to “get answers about your penis health in minutes.” To receive this information, the website uses a combination of questionnaires and what the company claims is a “65-96 percent accurate” AI screening tool allegedly trained on proprietary datasets to flag photographic evidence of various STIs, including genital warts, herpes eruptions, and syphilis. “Cancer” is also included in the list of scannable signs. If the results come back “positive”, HeHealth can then refer users to healthcare professionals for actual physical screenings, diagnoses, and treatment options. It’s largely flown under the radar since then, with only around 31,000 people reportedly using its allegedly anonymized, encrypted services over the last two years. And then came Calmara.

Calmara website screenshot
Credit: Calmara

With a website overloaded with Gen Z-centric buzzwords, Calmara sells itself as women’s new “intimacy bestie,” offering to scan pictures of their potential sexual partners’ penises for indications of STIs. According to HeHealth CEO’s latest LinkedIn post, HeHealth and Calmara “are totally different products.” However, according to Calmara’s website, HeHealth’s owners are running Calmara, and it utilizes the same AI. Calmara also markets itself as (currently) free and “really in its element when focused on the D.”

In a March 19 reveal announcement, one “anonymous user” claimed Calmara is already “changing the conversation around sexual health.” Calmara certainly sparked a conversation over the last week—just not the one its makers likely intended.

A novelty app 

Both Calmara’s and HeHealth’s fine print concede their STI judgments “should not be used as substitutes for professional medical advice, diagnosis, treatment, or management of any disease or condition.” There’s an obvious reason why this is not actually a real medical diagnosis tool, despite its advertising. 

It doesn’t take an AI “so sharp you’d swear it aced its SATs” to remember that the majority of STIs are asymptomatic. In those cases, they definitely wouldn’t be visible in a photograph. What’s more, a preprint, typo-laden paper explaining Calmara’s AI indicates it was trained on an extremely limited image database that included “synthetic” photos of penises, i.e. computer-generated images. Meanwhile, pinning down its actual accuracy is difficult—Calmara’s preprint paper says its AI is around 94.4-percent accurate, while the homepage says 95 percent. Scroll down a little further, and the FAQ section offers 65-to-90 percent reliability. Not a very encouraging approach to helping foster safe sex practices that would, presumably, require mutual, trustworthy statements about sexual health.

Calmara website screenshot
Credit: Calmara

“On its face, the service is so misguided that it’s easy to dismiss it as satire,” sex and culture critic Ella Dawson wrote in a viral blog post last week. Calmara’s central conceit—that new intimate partners would be comfortable enough to snap genital photos for an AI service to “scan”—is hard to imagine actually playing out in real life. “… This is not how human beings interact with each other. This is not how to normalize conversations about sexual health. And this is not how to promote safer sex practices.”

No age verification

Given its specific targeting of younger demographics, Dawson told PopSci she believes “it’s easy to see how a minor could find Calmara in a moment of panic and use it to self-diagnose,” which would raise obvious legal issues as well as ethical ones. For one, explicit images of minors could constitute child sexual abuse material, or CSAM. While Calmara expressly states its program shouldn’t be used by minors, it still lacks even the most basic of age verification protocols at the time of writing.

“Calmara’s lack of any age verification, or even a checkbox asking users to confirm that they are eighteen years of age or older, is not just lazy, it’s irresponsible,” Dawson concludes.

Side by side of age verification and consent pages for Calmara
Credit: Calmara / PopSci

Dubious privacy practices 

More to the point, simply slapping caveats across your “wellness” websites could amount to the “legal equivalent of magic pixie dust,” according to digital privacy expert Carey Lening’s rundown. While Calmara’s FAQ section is much vaguer on technical details, HeHealth’s FAQ page does state their services are HIPAA compliant because they utilize Amazon Web Services (AWS) “to collect, process, maintain, and store” data—which is technically true.

On its page dedicated to HIPAA regulations, AWS makes clear that there is no such thing as “HIPAA certification” for cloud service providers. Instead, AWS “aligns our HIPAA risk management program” to meet requirements “applicable to our operating model.” According to AWS, it utilizes “higher security standards that map to the HIPAA Security Rule” which enables “covered entities and their business associates” subject to HIPAA to use AWS for processing, maintaining, and storing protected health information. Basically, if you consent to use Calmara or HeHealth, you are consenting to AWS handling penis pictures—be they yours or someone else’s.

[Related: A once-forgotten antibiotic could be a new weapon against drug-resistant infections.]

That said, Lening says Calmara’s makers may have failed to consider newer state laws, such as Washington’s My Health My Data Act, with its “extremely broad and expansive view of consumer health data” set to go into effect in late June. The first of its kind in the US, the My Health My Data Act is designed specifically to protect personal health data that may fall outside HIPAA qualifications. 

“In short, they didn’t do their legal due diligence,” Lening contends.

“What’s frustrating from the perspective of privacy advocates and practitioners is not that they were ‘embracing health innovation‘ and ‘making a difference‘, but rather that they took a characteristic ‘Move Fast, Break Things’ kind of approach to the problem,” she continues. “The simple fact is, the [online] outrage is entirely predictable, because the Calmara folks did not, in my opinion, adequately assess the risk of harm their app can cause.”

Keep Calmara and carry on

When asked about these issues directly, Calmara and HeHealth’s founders appeared unfazed.

“Most of the criticism is based on wrong information and misinformation,” HeHealth CEO and Calmara co-founder Yudara Kularathne wrote to PopSci last Friday, pointing to an earlier LinkedIn statement about its privacy policies. Kularathne added that “concerns about potential for anonymized data to be re-identified” are being considered.

On Monday, Kularathne published another public LinkedIn post, claiming to be at work addressing “Health data and Personally Identifiable Information (PHI) related issues,” “CSAM related issues,” “communication related issues,” and “synthetic data related issues.”

“We are addressing most of the concerns raised, and many changes have been implemented immediately,” Kularathne wrote.

Calmara QR code page screenshot
Credit: Calmara

When reached for additional details, Calmara CEO Mei-Ling Lu declined to directly address the criticisms over email, and instead offered PopSci an audio file from “one of our female users” recounting how the user, identified only as Stacy, and her partner employed HeHealth’s (and now Calmara’s) AI to help determine they had herpes.

“[W]hile they were about to start, she realized something ‘not right’ on her partner’s penis, but he said: ‘you know how much I sweat, this is heat bubbles,’” writes Lu. After noticing similar “heat bubbles… a few days later,” Stacy and her partner consulted HeHealth’s AI scanner, which flagged the uploaded photos and directed them to healthcare professionals who confirmed they both had herpes.

To be clear, medical organizations such as the Mayo Clinic freely offer concise, accurate information on herpes symptoms, which can include pain or itching alongside bumps or blisters around the genitals, anus or mouth, painful urination, and discharge from the urethra or vagina. Symptoms generally occur 2-12 days after infection, and although many people infected with the virus display either mild or no symptoms, they can still spread the disease to others. 

Meanwhile, Calmara’s glossy, double entendre-laden (and NSFW) promotional video promises that it is “The PERFECT WEBSITE for HOOKING UP,” but no matter how many bananas are depicted, using AI to give penises a once-over doesn’t seem particularly reliable, enjoyable, or even natural.

The post Please think twice before letting AI scan your penis for STIs appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Japan’s SLIM moon lander survives a second brutal lunar night https://www.popsci.com/science/slim-reboot-again/ Thu, 28 Mar 2024 14:00:00 +0000 https://www.popsci.com/?p=608358
Image taken of JAXA SLIM lunar lander on moon upside down
SLIM lived through another two weeks of -200 degree temperatures. JAXA/Takara Tomy/Sony Group Corporation/Doshisha University

It's still upside down, but it's showing signs of life.

The post Japan’s SLIM moon lander survives a second brutal lunar night appeared first on Popular Science.

]]>
Image taken of JAXA SLIM lunar lander on moon upside down
SLIM lived through another two weeks of -200 degree temperatures. JAXA/Takara Tomy/Sony Group Corporation/Doshisha University

SLIM, Japan’s first successful lunar lander, isn’t going down without a fight. After making history—albeit upside down—in January, the Smart Lander for Investigating Moon continues to surprise mission control at Japan Aerospace Exploration Agency (JAXA) by surviving not one, but now two brutally frigid lunar nights.

“Last night, we received a response from #SLIM, confirming that the spacecraft made it through the lunar night for the second time!” JAXA posted to X on Wednesday alongside a new image of its likely permanent, inverted vantage point near the Shioli crater. JAXA also noted that, because the sun is currently high above the lunar horizon, SLIM’s equipment is extremely hot (212 degrees Fahrenheit or so), so only the navigation camera can be used for the time being.

Based on their newly acquired data, however, it appears that some of the lander’s temperature sensors and unused battery cells are beginning to malfunction. Even so, JAXA says “the majority of functions that survived the first lunar night” are still going strong after yet another two-week stretch of darkness that sees temperatures drop to -208 degrees Fahrenheit.

It’s been quite the multi-month journey for SLIM. After launching last September, SLIM entered lunar orbit in early October, where it spent weeks circling the moon. On January 19, JAXA initiated SLIM’s landing procedures, with early indications pointing towards a successful touchdown. After reviewing lander data, JAXA confirmed the spacecraft stuck the landing roughly 180 feet from an already extremely narrow 330-foot-wide target site—thus living up to SLIM’s “Moon Sniper” nickname.

[Related: SLIM lives! Japan’s upside-down lander is online after a brutal lunar night.]

The historic moment wasn’t a flawless mission, however. In the same update, JAXA explained that one of its lander’s main engines malfunctioned as it neared the surface, causing SLIM to tumble over, ostensibly on its head. In doing so, the craft’s solar panels now can’t work at their full potential, thus limiting battery life and making basic functions much more difficult for the lander.

JAXA still managed to make the most of its situation by using SLIM’s sensors to gather a ton of data on the surrounding lunar environment, as well as deploy a pair of tiny autonomous robots to survey the lunar landscape. On January 31, mission control released what it cautioned could very well be SLIM’s last postcard image from the moon ahead of an upcoming lunar night. The lander wasn’t designed for a lengthy life even in the best of circumstances, but its prospects appeared even dimmer given its accidental positioning.

Roughly two weeks later, however, SLIM proved it could endure in spite of the odds by booting back up and offering JAXA another opportunity to gather additional lunar information. A repeat of JAXA’s same warning came a few days later—and yet here things stand, with SLIM still chugging along. From the start, researchers have employed the lander’s multiple tools, including a Multi-Band Camera, to analyze the moon’s chemical composition, particularly the amounts of olivine, which “will help solve the mystery of the origin of the moon,” says JAXA.

At this point, it’s anyone’s guess how much longer the lander has in it. Perhaps it’s taking a cue from NASA’s only-recently-retired Mars Ingenuity helicopter, which lasted around three years longer than intended.

The post Japan’s SLIM moon lander survives a second brutal lunar night appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
New material neutralizes 96-percent of virus cells using nanospikes https://www.popsci.com/technology/silicon-virus-spikes/ Wed, 27 Mar 2024 20:00:00 +0000 https://www.popsci.com/?p=608272
Microscopic image of virus cell impaled on silicon wafer needles
A virus cell on the nano spiked silicon surface, magnified 65,000 times. After 1 hour it has already begun to leak material. RMIT

This 'smooth' silicon wafer is actually covered in very tiny, virus-slaying needles.

The post New material neutralizes 96-percent of virus cells using nanospikes appeared first on Popular Science.

]]>
Microscopic image of virus cell impaled on silicon wafer needles
A virus cell on the nano spiked silicon surface, magnified 65,000 times. After 1 hour it has already begun to leak material. RMIT

Researchers at Australia’s Royal Melbourne Institute of Technology (RMIT) have combined brute force with high-tech manufacturing to create a new silicon material for hospitals, laboratories, and other potentially sensitive environments. And although it might look and feel like a flat, black mirror to humans, the thin layering actually functions as a thorny deathtrap for pathogens.

As recently detailed in the journal ACS Nano, the interdisciplinary team spent over two years developing the novel material, which is smooth to the human touch. At a microscopic level, however, the silicon surface is covered in “nanospikes” so small and sharp that they can impale individual cells. In lab tests, 96 percent of all hPIV-3 virus cells that came into contact with the material’s minuscule needles either tore apart or came away so badly damaged that they couldn’t replicate and cause their usual infections, like pneumonia, croup, and bronchitis. With no external assistance, these eradication levels were reached within six hours.

A virus cell on the nano spiked silicon surface, magnified 65,000 times. After 6 hours it has been completely destroyed.
A virus cell on the nano spiked silicon surface, magnified 65,000 times. After 6 hours it has been completely destroyed. Credit: RMIT

Interestingly, inspiration came not from vampire hunters, but from insects. Prior to designing the spiky silicon, researchers studied the structural composition of cicada and dragonfly wings, which have evolved to feature similarly sharp nanostructures capable of skewering fungal spores and bacterial cells. Viruses are far more microscopic than even bacteria, however, which meant effective spikes needed to be comparably smaller.

[Related: A once-forgotten antibiotic could be a new weapon against drug-resistant infections.]

To make such a virus-slaying surface, its designers subjected a silicon wafer to ionic bombardment using specialized equipment at the Melbourne Center for Nanofabrication. During this process, the team directed the ions to chip away at specific areas of the wafer, creating countless 2-nanometer-thick, 290-nanometer-tall spires. For perspective, a single spike is about 30,000 times thinner than a human hair.
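
That hair comparison checks out as simple arithmetic. A minimal sketch, assuming a typical human hair diameter of roughly 60 micrometers (my assumption; the article gives only the spike width):

```python
# Quick arithmetic check of the "30,000 times thinner than a human hair" comparison.
# The ~60-micrometer hair diameter is an assumed typical value, not from the article.

spike_width_m = 2e-9   # 2-nanometer-thick nanospike, per RMIT's description
hair_width_m = 60e-6   # assumed typical human hair diameter

ratio = hair_width_m / spike_width_m
print(f"A hair is roughly {ratio:,.0f} times wider than one nanospike")
```

Human hair varies from about 50 to 100 micrometers, so the "30,000 times" figure sits comfortably within that range.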

Researchers believe their new silicon material could one day be applied atop commonly touched surfaces in often pathogenic-laden settings.

“Implementing this cutting-edge technology in high-risk environments like laboratories or healthcare facilities, where exposure to hazardous biological materials is a concern, could significantly bolster containment measures against infectious diseases,” Samson Mah, study first author and PhD researcher, said on Wednesday. “By doing so, we aim to create safer environments for researchers, healthcare professionals, and patients alike.”

By relying on the material’s simple, mechanical methods to effectively clean spaces (i.e., stabbing virus cells like they’re shish kabobs), the designers believe overall chemical disinfectant usage could also decrease—a major concern as society contends with the continued rise of increasingly resilient “superbugs.”

The post New material neutralizes 96-percent of virus cells using nanospikes appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Vinyl records outsold CDs for the second year running https://www.popsci.com/technology/vinyl-sales-cds-2023/ Wed, 27 Mar 2024 15:00:00 +0000 https://www.popsci.com/?p=608132
Hand flipping through vinyl records at store
Taylor Swift unsurprisingly made up a solid chunk of those sales. Credit: Peter Nicholls/Getty Images

Taylor Swift had a lot to do with it, from the looks of things.

The post Vinyl records outsold CDs for the second year running appeared first on Popular Science.

]]>
Hand flipping through vinyl records at store
Taylor Swift unsurprisingly made up a solid chunk of those sales. Credit: Peter Nicholls/Getty Images

Somehow, someway, vinyl records keep defying the odds. Despite falling firmly behind compact disc sales for decades, the vintage physical music medium returned to the top spot in 2022 for the first time since 1987. Now, new numbers released by the Recording Industry Association of America (RIAA) indicate that wasn’t just a random fluke—yet again, vinyl outsold CDs for a second year running in 2023. This time, however, LPs managed to widen the lead even more.

As noted by The Verge on Tuesday, US music fans purchased around 43 million vinyl records in 2023, about 6 million more than total CD sales last year. What’s more, LP and EP purchases actually increased year over year by nearly 3 million sales—for the record (sorry), “long playing” vinyl records are generally 12-inch discs containing full albums, usually played at 33 1⁄3 RPM, while “extended play” records are usually shorter, 7-inch releases spinning at 45 RPM. At the same time, people bought fewer CDs in 2023 than in 2022. All told, records generated more than triple the revenue of their successor format—about $1.4 billion versus $437 million.

For reference, here’s a list of last year’s bestselling vinyl releases:

US Top Vinyl Album Sales of 2023
Credit: Luminate

Unsurprisingly, it was a lot of Taylor Swift. But what’s more impressive to consider is that new vinyl is usually much more expensive to manufacture than compact discs.

[Related: Vinyl is back. But until now, record-making has been stuck in the ’80s.]

After spending decades as listeners’ go-to physical music medium, vinyl records finally passed the torch over to CDs back in 1987. For the next 35 years, that dynamic remained the same, with CDs’ comparative portability, durability, and overall audio quality making them the preferred method of enjoying music.

Sorry record purists, but it’s science. While transferring sound waves’ electrical signals into etched grooves on vinyl can offer “lossless” audio, that’s only under perfect conditions. And, as any vinyl owner knows, keeping a record in “perfect” condition is no easy task. This means that it doesn’t take much, or long, for an LP’s quality to degrade at least somewhat during playbacks.

[Related: This DVD-sized disk can store a massive 125,000 gigabytes of data.]

Contrast that to CDs, which digitally encode audio files onto optical discs that are translated back into sound via laser scanning. This can also sometimes result in diminished sound quality, but it’s overall a more reliable and standardized way to play music. Also (and perhaps most importantly) it’s generally less of a hassle to keep CDs in good shape than to maintain vinyl, not to mention easier to travel with them.

Given the pros and cons of both options, the choice has always really come down to how and where people want to enjoy their music. But for the second year in a row, it’s clear audiophiles are choosing vinyl over its once-unequivocal heir.

The post Vinyl records outsold CDs for the second year running appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How to photograph the eclipse, according to NASA https://www.popsci.com/science/nasa-eclipse-photo-tips/ Tue, 26 Mar 2024 15:00:00 +0000 https://www.popsci.com/?p=607943
2017 Total Solar Eclipse timelapse
This composite image shows the progression of a partial solar eclipse over Ross Lake, in Northern Cascades National Park, Washington on Monday, Aug. 21, 2017. A total solar eclipse swept across a narrow portion of the contiguous United States from Lincoln Beach, Oregon to Charleston, South Carolina. A partial solar eclipse was visible across the entire North American continent along with parts of South America, Africa, and Europe. NASA/Bill Ingalls

You're gonna need some protection for your smartphone and camera lenses.

The post How to photograph the eclipse, according to NASA appeared first on Popular Science.

]]>
2017 Total Solar Eclipse timelapse
This composite image shows the progression of a partial solar eclipse over Ross Lake, in Northern Cascades National Park, Washington on Monday, Aug. 21, 2017. A total solar eclipse swept across a narrow portion of the contiguous United States from Lincoln Beach, Oregon to Charleston, South Carolina. A partial solar eclipse was visible across the entire North American continent along with parts of South America, Africa, and Europe. NASA/Bill Ingalls

It’s hard to think of anyone as excited about the upcoming North American total solar eclipse as NASA. From citizen research projects to hosted events within the path of totality, the agency is ready to make the most of next month’s cosmic event—and they want to help you enjoy it, too. Earlier this month, NASA offered a series of tips on how to safely and effectively photograph the eclipse come April 8. Certain precautions are a must, but with a little bit of planning, you should be able to capture some great images of the moon’s journey across the sun, as well as its effects on everything beneath it.

First and foremost is protection. Just as you wouldn’t stare directly at the eclipse with your own eyes, NASA recommends you place specialized filters in front of your camera or smartphone’s lens to avoid damage. The easiest way to do this is simply to use an extra pair of eclipse viewing glasses, but there are also a number of products specifically designed for cameras. It’s also important to remember to remove the filter while the moon is completely in front of the sun—that way you’ll be able to snap pictures of the impressive coronal effects.

[Related: How to photograph solar eclipse: The only guide you need]

And while you’re welcome to use any super-fancy, standalone camera at your disposal, NASA reminds everyone that it’s not necessary to shell out a bunch of money ahead of time. Given how powerful most smartphone cameras are these days, you should be able to achieve some stunning photographs with what’s already in your pocket. That said, there are still some accessories that could make snapping pictures a bit easier, such as a tripod for stabilization.

Next: practice makes perfect, as they say. Even though you can’t simulate the eclipse ahead of time, you can still test DSLR and smartphone camera settings on the sun whenever it’s out and shining (with the proper vision protection, of course). For DSLR cameras, NASA recommends using a fixed aperture of f/8 to f/16, alongside shutter speeds somewhere between 1/1000 and 1/4 of a second. These variations can be used during the many stages of the partial eclipse as it heads into totality. Once that happens, the corona’s brightness will vary greatly, “so it’s best to use a fixed aperture and a range of exposures from approximately 1/1000 to 1 second,” according to the agency. Most smartphone cameras offer similar fine-tuning, so experiment with those as needed, too.
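
If you want to plan that totality bracketing ahead of time, the range NASA suggests can be sketched as a simple list of one-stop steps. This is a minimal illustration, not NASA code; the helper name and the doubling scheme are my own assumptions:

```python
# Sketch of one-stop exposure bracketing across NASA's suggested totality range
# (~1/1000 s to 1 s at a fixed aperture). Each step doubles the shutter time,
# i.e. adds one stop of exposure.

def exposure_brackets(fastest=1 / 1000, slowest=1.0):
    """Return shutter speeds, doubling from `fastest` until `slowest` is exceeded."""
    speeds = []
    t = fastest
    while t <= slowest:
        speeds.append(t)
        t *= 2
    return speeds

for t in exposure_brackets():
    # Print in the familiar "1/N s" style for sub-second speeds.
    print(f"1/{round(1 / t)} s" if t < 1 else f"{t:.0f} s")
```

Working through the brackets in order during totality gives you a spread of exposures, from ones that capture the bright inner corona to longer ones that pull out the faint outer streamers.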

[Related: NASA needs your smartphone during April’s solar eclipse.]

A few other things to keep in mind: Make sure you turn off the flash, and opt for a wide-angle or portrait framing. For smartphones during totality, be sure to lock the camera’s focus feature, as well as enable the burst mode to capture a bunch of potentially great images. Shooting in the RAW image format is a favorite for astrophotographers, so that’s an option for those who want to go above and beyond during the eclipse. While Google Pixel cameras can enable RAW files by themselves, most other smartphones will require a third-party app download to do so, such as Yamera and Halide.

But regardless of your camera (and/or app) choice, it’s not just the sun and moon you should be striving to capture. NASA makes a great point that eclipses affect everything beneath them, from the ambient light around you, to the “Wow” factor on the faces of nearby friends and family members. Be sure to grab some shots of what’s happening around you in addition to what’s going on above.

For more detailed info on your best eclipse photography options, head over to NASA.

The post How to photograph the eclipse, according to NASA appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This implant will tell a smartphone app when you need to pee https://www.popsci.com/health/bladder-sensor-implant/ Mon, 25 Mar 2024 19:00:00 +0000 https://www.popsci.com/?p=607873
Bladder sensor next to smartphone displaying its app
The sensor responds to a bladder's natural expansions and contractions throughout the day. Northwestern University

The stretchy, wireless sensor could keep patients with bladder issues informed in real-time.

The post This implant will tell a smartphone app when you need to pee appeared first on Popular Science.

]]>
Bladder sensor next to smartphone displaying its app
The sensor responds to a bladder's natural expansions and contractions throughout the day. Northwestern University

For people dealing with spina bifida, paralysis, and various bladder diseases, determining when to take a bathroom break can be an issue. To help ease the frequent stress, researchers at Northwestern University have designed a sensor array that attaches to the bladder’s exterior wall, detecting the organ’s fullness in real time. Using embedded Bluetooth technology, the device then transmits its data to a smartphone app, allowing users to monitor their bodily functions with far less discomfort and guesswork.

The new tool, detailed in a study published today in the Proceedings of the National Academy of Sciences (PNAS), isn’t only meant to prevent incontinence issues. Lacking the ability to feel bladder fullness extends far beyond the obvious inconveniences—for millions of Americans dealing with bladder dysfunction, not knowing when to go to the bathroom can cause additional harm, such as recurring infections and kidney damage. To combat these issues, the new medical device mirrors the bladder’s own elasticity.

[Related: This drug-delivery soft robot may help solve medical implants’ scar tissue problem.]

“The key advance here is in the development of super soft, ultrathin, stretchable strain gauges that can gently wrap the outside surface of the bladder, without imposing any mechanical constraints on the natural filling and voiding behaviors,” John Rogers, study co-lead and professor of material sciences and biomedical engineering at Northwestern University, said in a statement.

As a bladder fills with urine, its expansion stretches out the sensor material, which in turn wirelessly sends data to a patient’s smartphone app. This also works as the organ contracts after urination, providing users with real-time data throughout the day’s ebbs and flows. In small animal lab tests, the battery-free device could accurately monitor a bladder for 30 days, while the implant lasted as long as eight weeks in non-human primates.

“Depending on the use case, we can design the technology to reside permanently inside the body or to harmlessly dissolve after the patient has made a full recovery,” regenerative engineer and study co-lead Guillermo Ameer said on Monday.

Researchers believe their device could reduce the need for uncomfortable, infection-prone catheters, as well as limit the use of more invasive, in-patient bladder monitoring procedures. But why stop there?

The team is also testing a separate, biodegradable “patch” using a patient’s own stem cells. Called a pro-regenerative scaffold (PRS), the new material also expands and contracts alongside the bladder’s movements while encouraging the growth of new organ cells. New tissue remains in place as the patch dissolves, allowing for faster, more effective healing possibilities. Researchers hope to one day combine their PRS work alongside their wireless monitoring sensors.

“This work brings us closer to the reality of smart regenerative systems, which are implantable pro-regenerative devices capable of probing their microenvironment, wirelessly reporting those findings outside the body… and enabling on-demand or programmed responses to change course and improve device performance or safety,” said Ameer.

For even more restored functionality, the team believes their sensors could eventually incorporate additional technology to stimulate urination on demand using the smartphone app. Taken as a whole, the trio of medical advances could one day offer a far less invasive, comfortable, and effective therapy for patients dealing with bladder issues. 

The post This implant will tell a smartphone app when you need to pee appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Boom Supersonic’s prototype jet sets off on first flight https://www.popsci.com/technology/boom-xb1-test-flight/ Mon, 25 Mar 2024 15:30:00 +0000 https://www.popsci.com/?p=607813
Boom Supersonic's XB-1 test plane taking off
The XB-1 is one-third the size of Boom Supersonic's proposed Overture aircraft. Boom Supersonic

The XB-1 finally took to the sky, but don’t expect its supersonic sibling anytime soon.

The post Boom Supersonic’s prototype jet sets off on first flight appeared first on Popular Science.

]]>
Boom Supersonic's XB-1 test plane taking off
The XB-1 is one-third the size of Boom Supersonic's proposed Overture aircraft. Boom Supersonic

The race to reboot commercial supersonic travel is well underway, and one company just took a major step forward. On Friday, Boom Supersonic announced the successful first flight of its XB-1, a prototype jet built to test the plane’s construction materials and aerodynamic designs meant for the company’s eventual full-size passenger aircraft, Overture.

XB-1 took off from Mojave Air & Space Port in Mojave, California on March 22—near the site where the Bell X-1 became the first plane to break the sound barrier in 1947. Boom’s test craft flew for about 12 minutes at a maximum altitude of 7,120 feet, achieving a top speed of 238 knots (273 mph) with 12,300 pounds of thrust in the process.

Interestingly, XB-1 is powered by three GE J85-15 turbojet engines, a design that has been around for more than half a century. That means XB-1 is far slower than the roughly 741 mph required to achieve Mach 1, but breaking the sound barrier was never the goal for Friday’s takeoff. Instead, Boom’s engineers intended the flight to showcase technology such as the cockpit’s augmented reality vision system, as well as a frame almost entirely built from carbon fiber composite materials. The company is separately developing Symphony, a 35,000-pound-thrust jet engine designed to run on sustainable aviation fuel, for the final Overture plane.

[Related: This test plane could be a big step towards supersonic commercial flights.]

“I’ve been looking forward to this flight since founding Boom in 2014, and it marks the most significant milestone yet on our path to bring supersonic travel to passengers worldwide,” Boom Supersonic founder and CEO Blake Scholl said on Friday.

It was a long road to this weekend’s milestone, however. Boom Supersonic first unveiled the XB-1 prototype back in late 2020, with an eye to begin test flights the following year. That development phase was ultimately delayed until Friday’s event, but such pushbacks are commonplace in the aviation industry, especially when attempting to revitalize supersonic travel.

[Related: All your burning questions about sustainable aviation fuel, answered.]

The nearly 63-foot-long XB-1 is just one-third the size of Overture, the company’s proposed commercial supersonic jet. If completed, Overture will zip 64 to 80 passengers around the world at speeds as fast as Mach 1.7 (about 1,260 mph), around twice the speed of current subsonic planes. That’s still a big “if,” of course, given that all the public has seen of the Symphony engine is a one-third scale model revealed last year at the Paris Air Show. And given the time it took to get XB-1 off the ground, Boom Supersonic’s proposed 2029 debut for Overture seems a bit optimistic.

[Related: NASA plans to unveil experimental X-59 supersonic jet.]

Still, plenty of people seem pretty confident about Boom Supersonic’s chances of making Overture a reality. The company reports it has already received 130 orders and pre-orders from American Airlines, United Airlines, and Japan Airlines. It also previously received a $60 million influx of cash from a partnership with the US Air Force—a reminder of the military’s own interest in expanding supersonic air travel.

The post Boom Supersonic’s prototype jet sets off on first flight appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Drones offer a glimpse inside Fukushima nuclear reactor 13 years after disaster https://www.popsci.com/environment/fukushima-reactor-drones/ Fri, 22 Mar 2024 18:00:00 +0000 https://www.popsci.com/?p=607517
Aerial view of Fukushima nuclear reactor meltdown
In this satellite view, the Fukushima Dai-ichi Nuclear Power plant after a massive earthquake and subsequent tsunami on March 14, 2011 in Futaba, Japan. DigitalGlobe via Getty Images via Getty Images

The tiny robots could only explore a small portion of No. 1 reactor’s main structural support, showing the cleanup challenges ahead.

The post Drones offer a glimpse inside Fukushima nuclear reactor 13 years after disaster appeared first on Popular Science.

A team of miniature drones recently entered the radioactive ruins of one of Fukushima’s nuclear reactors in an attempt to help Japanese officials continue planning their decades-long cleanup effort. But if the images released earlier this week didn’t fully underscore just how much work is still needed, new footage from the tiny robots’ excursion certainly highlights the many challenges ahead.

On Thursday, Tokyo Electric Power Company Holdings (TEPCO), the Japanese utility organization that oversees the Fukushima Daiichi plant reclamation project, revealed three minutes of video recorded by a flying drone the size of a slice of bread, alongside a snake-like bot that provided its light. Obtained during TEPCO’s two-day probe, the new clip offers viewers some of the best looks yet at what remains of portions of the Fukushima Daiichi nuclear facility—specifically, the main structural support in its No. 1 reactor’s primary containment vessel.

The Fukushima plant suffered a catastrophic meltdown on March 11, 2011, after a magnitude 9.0 earthquake off the Japanese coast produced a tsunami as tall as 130 feet that subsequently bore down on the region. Of the three reactors damaged during the disaster, No. 1 is considered the most severely impacted. A total of 880 tons of molten radioactive fuel debris is believed to remain within those reactors, with No. 1 thought to contain the largest amount. An estimated 160,000 people were evacuated from the surrounding areas, with only limited returns allowed the following year. Around 20,000 people are believed to have been killed by the tsunami itself.

Last week’s drone-gathered images and video show the remains of the No. 1 reactor’s control-rod drive mechanism, alongside other equipment attached to the core, which indicate the parts were dislodged during the meltdown. According to NHK World, “agglomerated or icicle-shaped objects” seen in certain areas could be nuclear fuel debris composed of “a mixture of molten nuclear fuel and surrounding devices.”

[Related: Japan begins releasing treated Fukushima waste water into the Pacific Ocean.]

Experts say only a fraction of the damage could be accessed by the drones due to logistical difficulties, and that the robots couldn’t reach the core bottom because of poor visibility. Similarly, radiation levels could not be ascertained during this mission, since the drones did not include instruments such as dosimeters so as to remain light enough to maneuver through the plant.


TEPCO now plans to analyze the drone data to better establish a plan of action to collect and remove the radioactive debris within Fukushima. In August 2023, officials began a multiphase project to release treated radioactive wastewater from the plant into the Pacific Ocean. While deemed safe by multiple agencies and watchdogs, the ongoing endeavor has received strong pushback from neighboring countries, including China.

The Japanese government and TEPCO have previously estimated cleanup will take 30-40 years, although critics believe the timeline to be extremely optimistic.

A designer 3D printed a working clone of the iconic Mac Plus https://www.popsci.com/technology/mac-plus-diy-clone/ Fri, 22 Mar 2024 16:00:00 +0000 https://www.popsci.com/?p=607446
Brewintosh Plus next to original Mac Plus
Kevin Noki painstakingly built his own Mac Plus to the exact specs as the original. Kevin Noki

Kevin Noki created his 'Brewintosh Plus' using a 3D printer, retrofitted electronics, and a lot of patience.

The post A designer 3D printed a working clone of the iconic Mac Plus appeared first on Popular Science.

Got around 40 free weekends in the near future? Possess a 3D printer, extensive knowledge of vintage computer coding, soldering techniques, and near-superhuman patience? Then you, too, could be the proud owner of a “Brewintosh Plus,” a maddeningly accurate, completely working clone of Apple’s iconic Macintosh Plus computer system.

It might be hard to imagine, but there was a time when 1MB of RAM was a big deal—and in 1986, the Mac Plus shipped with that then-impressive amount of memory. To call Apple’s third Macintosh release a success is arguably an understatement. Until 2018, it remained the company’s longest-produced Macintosh model, with operating system updates made regularly until 1996.


It’s a pivotal piece of tech history, but finding one in decent condition, let alone complete working order, can be difficult nearly four decades after its debut. For some collectors like Kevin Noki, however, the allure of tinkering with the iconic, retro hardware is too strong to resist. Unfortunately, it can be even harder to obtain a Mac Plus in places like Germany—where Noki happens to live.

But after scouring eBay for some time, Noki finally found and purchased an original, worse-for-wear 1MB Macintosh Plus. The machine arrived with a broken power supply and a missing floppy disk drive. One could technically have emulated the original computer system simply by installing a Raspberry Pi and calling it a day—but that wouldn’t be much of a challenge, would it?

[Related: Macs are better at video gaming (emulators) than PCs. Here’s how to set up yours.]

Instead, Noki decided to use his vintage piece of tech history as a template for something much more accurate, if a bit more complicated: He built his own Mac Plus computer from the literal ground up.

“We are talking a properly sized, colored, and textured box, which takes wall power, swallows 3.5-inch disks, works with both telephone-cord and ADB Apple keyboards and mice, has a screen dimmer, and makes the startup sound (the beep, not the chord),” Ars Technica summarized earlier this week.

But even that laundry list of features doesn’t properly do Noki’s journey justice. At least 40 individual parts were measured, rendered into production specs in Autodesk Fusion 360, and 3D-printed to create exact clones of the desktop’s many components. Then there was augmenting a USB floppy drive reader with an Arduino-controlled motor that Noki coded himself, installing said floppy drive, soldering and wiring internal speakers, dyeing external parts to match the exact Mac Plus case color scheme, and even creating replicas of all its original labels, stickers, and raised text. When it comes to picking the most difficult aspect of the entire saga, though, Noki doesn’t mince words.

“Honestly, everything was somewhat tough,” he tells PopSci, although he would wager that figuring out how to use an emulator to communicate with the rebuilt hardware was his biggest hurdle. “For instance, determining when to eject the floppy disk was particularly tricky, especially given my limited programming skills,” Noki says.

“Limited programming skills” is honestly pretty humble after watching Noki’s nearly hour-long YouTube rundown, which is genuinely worth watching in its entirety. Now that the job is done, the designer tells PopSci he’s gained an even greater respect for emulator programmers, “particularly the team responsible for the Mini vMac,” which emulates multiple classic Macintosh OS versions.

“Their dedication not only preserves computing history but also ensures its accessibility for generations to come, and for that, I’m incredibly thankful,” he says.

That thanks can certainly be extended to Noki, whose Brewintosh Plus and accompanying step-by-step guide now offers its own unique contribution to computing history preservation and accessibility.

Vernor Vinge, influential sci-fi author who warned of AI ‘Singularity,’ has died https://www.popsci.com/science/vernor-vinge-obit/ Thu, 21 Mar 2024 18:09:37 +0000 https://www.popsci.com/?p=607369
Vernor Vinge
Vernor Vinge is one of the first thinkers to popularize a technological Singularity. Lisa Brewster / Wikipedia Commons

Vinge’s visions of the future enthralled and influenced generations of writers and tech industry leaders. He was 79.

The post Vernor Vinge, influential sci-fi author who warned of AI ‘Singularity,’ has died appeared first on Popular Science.

Vernor Vinge, prolific science-fiction writer, professor, and one of the first prominent thinkers to popularize the concepts of a technological “Singularity” and cyberspace, has died at the age of 79. News of his passing on March 20 was confirmed through a Facebook post from author and friend David Brin, citing complications from Parkinson’s disease.

“Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters, and the implications of science,” Brin writes.

The Hugo Award-winning author of sci-fi classics like A Fire Upon the Deep and Rainbows End, Vinge also taught mathematics and computer science at San Diego State University before retiring in 2000 to focus on his writing. In his famous 1983 op-ed, Vinge adapted the physics concept of a “singularity” to describe the moment in humanity’s technological progress marking “an intellectual transition as impenetrable as the knotted space-time at the center of a black hole” when “the world will pass far beyond our understanding.” The Singularity, Vinge hypothesized, would likely stem from the creation of artificial intelligence systems that surpassed humanity’s evolutionary capabilities. How life on Earth progressed from there was anyone’s guess—something plenty of Vinge-inspired writers have since attempted to imagine.

[Related: What happens if AI grows smarter than humans? The answer worries scientists.]

John Scalzi, bestselling sci-fi author of the Old Man’s War series, wrote in a blog post on Thursday that Vinge’s singularity theory is now so ubiquitous within science fiction and the tech industry that “it doesn’t feel like it has a progenitor, and that it just existed ambiently.”

“That’s a hell of a thing to have contributed to the world,” he continued.

In many ways, Vinge’s visions have arguably borne out almost to the exact year, as evidenced by the recent, rapid advances within an AI industry whose leaders are openly indebted to his work. In a 1993 essay further expounding on the Singularity concept, Vinge predicted that, “Within thirty years, we will have the technological means to create superhuman intelligence,” likening the moment to the “rise of human life on Earth.”

“Shortly after, the human era will be ended,” Vinge dramatically hypothesized at the time.

Many critics have since (often convincingly) argued that creating a true artificial general intelligence remains out of reach, if not completely impossible. Even then, however, Vinge appeared perfectly capable of envisioning a dizzying, non-Singularity future—humanity may never square off against sentient AI, but it’s certainly already contending with “a glut of technical riches never properly absorbed.”

Neuralink shows first human patient using brain implant to play online chess https://www.popsci.com/technology/neuralink-first-human-video/ Thu, 21 Mar 2024 16:30:00 +0000 https://www.popsci.com/?p=607341
Neuralink logo on smartphone
Neuralink's first human patient is a 29-year-old quadriplegic man from Texas. Deposit Photos

Elon Musk previously said his brain-computer interface company implanted the device in January.

The post Neuralink shows first human patient using brain implant to play online chess appeared first on Popular Science.

The first human patient to reportedly receive Neuralink’s wireless brain-computer interface (BCI) implant appeared to demonstrate the device’s early capabilities during a company livestream on X on Wednesday night. In late January, Elon Musk publicly stated that the experimental medical procedure was completed, but neither he nor his controversial medical startup had offered evidence of the results until the nine-minute video released that evening.

Neuralink’s first volunteer is 29-year-old Noland Arbaugh from Texas, who dislocated his C4 and C5 vertebrae during a diving accident in 2016, permanently paralyzing him below the shoulders. During the livestream, Arbaugh appears to be playing online chess as a Neuralink BCI implant translates his brain activity into actionable computer inputs.

“If y’all can see the cursor moving around the screen, that’s all me,” he says at one point while highlighting a chess piece. “It’s pretty cool, huh?” According to Arbaugh, the key to successfully employing his new implant involved learning how to mentally differentiate between intentional and attempted movement—i.e., brain activity expressing the desire to move as opposed to activity which literally controls motor functions. “From there, I think it just became intuitive to me to start imagining the cursor moving,” Arbaugh continued, likening the feeling to “using the Force” from Star Wars.

Neuralink’s BCI is implanted by a surgical robot that subcutaneously connects the device’s microscopic wiring to a patient’s brain. Once installed, the hardware supposedly cannot be seen externally, and recharges wirelessly from “outside via a compact, inductive charger,” according to Neuralink’s website. Musk has repeatedly stated his hopes Neuralink will ultimately allow users to connect to the internet, smartphones, and computers through a line of upgradable, reversible BCI implants—and that he intends to one day receive the procedure himself.

“Long-term, it is possible to shunt the signals from the brain motor cortex past the damaged part of the spine to enable people to walk again and use their arms normally,” Musk claimed in a reply to Neuralink’s Wednesday evening post.

During the livestream hosted by Neuralink engineer Bliss Chapman, Arbaugh also described independently playing video games like the turn-based strategy game Civilization 6, which often entails more complex user inputs. Before the implant, Arbaugh says he frequently required assistance from his parents or a friend to play such games. Now, however, he says he has been able to do so for as long as eight hours and that the biggest impediment is simply waiting for the Neuralink’s battery to recharge.

[Related: Elon Musk alleges Neuralink completed its first human trial implant.]

“It’s not perfect. We have run into some issues,” Arbaugh concedes at one point, although he does not elaborate on the hurdles. “I don’t want people to think this is the end of the journey. There’s still a lot of work to be done.”

“We have more work to do. We have a lot to learn about the brain here,” agreed Chapman.

Neuralink is far from the first company to develop and install BCI implants, with the first successful commercial procedure dating back to 2010. Similar devices have since converted imagined handwriting into text, as well as thoughts into words. One competitor’s implant has also enabled users to browse the web as well as conduct online shopping and banking since 2019.

But it’s unclear if the reveal of Neuralink’s first human participant will assuage critics’ concerns regarding the company’s research record. Less than a day after Neuralink announced it would begin screening human volunteers for its multiyear Precise Robotically Implanted Brain-Computer Interface (PRIME) Study last September, Wired released a damning exposé detailing graphic accounts of lab animal abuse during research. At the time, Wired’s coverage was one in a string of similar investigations into Musk’s company, including internal complaints of “hack job” surgical procedures resulting in over 1,500 animal deaths since 2018. The reports have since prompted multiple federal regulatory reviews and human trial delays.

As Wired also noted this week, Neuralink has not registered its PRIME Study on ClinicalTrials.gov, the federal documentation site for human medical studies, so information like how many human subjects Neuralink is seeking, where its procedures are taking place, or how its results will be assessed are not publicly available.

Still, Arbaugh encouraged people to consider applying to Neuralink’s ongoing PRIME Study, saying “there’s nothing to be afraid of” about the “super easy” procedure which he says has resulted in no cognitive impairments for him. Chapman, meanwhile, stated last night that additional updates on both Neuralink’s and Arbaugh’s progress will be released in the coming days.

NASA needs your smartphone during April’s solar eclipse https://www.popsci.com/science/nasa-smartphone-eclipse-app/ Thu, 21 Mar 2024 14:00:00 +0000 https://www.popsci.com/?p=607305
Timelapse of total solar eclipse showcasing Baily's beads
This image highlights Baily's beads, a feature of total solar eclipses that are visible at the very beginning and the very end of totality. It's composed of a series of images taken during a total solar eclipse visible from ESO's La Silla Observatory on 2 July 2019. Baily's Beads are caused by the Moon's mountains, valleys, and craters. These surface features create an uneven edge of the Moon, where small "beads" of sunlight still shine through the lowest parts for a few moments after the rest of the Sun is covered. P. Horálek/European Southern Observatory

The free SunSketcher app will use your phone’s camera to record the event and help study the sun’s ‘oblateness.’

The post NASA needs your smartphone during April’s solar eclipse appeared first on Popular Science.

Listening for crickets isn’t the only way you can help NASA conduct research during the total solar eclipse passing across much of North America on April 8—you can also lend your smartphone camera to the cause. The agency is calling on anyone within the upcoming eclipse’s path of totality to participate in its SunSketcher program, which will amass volunteer-gathered data to better understand the star’s shape. To participate, all you need is NASA’s free app, which uses a smartphone’s camera coupled with its GPS coordinates to record the eclipse. But why?

The sun looks simply spherical in many photographs and renderings, and in the sky if you happen to briefly glance at it during the day—an emphasis on “briefly,” of course. But thanks to what’s known as oblateness, this isn’t ever really the case. A rotating spheroid becomes oblate when the centrifugal effect of its spin pushes material outward at its equator, slightly flattening it into a wider, elliptical shape. Within the solar system, Earth, Jupiter, and Saturn all also display oblateness, but the sun has some unique characteristics affecting how it oblates in particular.
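
Oblateness is commonly quantified as “flattening”: the fractional difference between a body’s equatorial and polar radii. As a rough, hypothetical illustration—the radii below are approximate textbook values, not figures from NASA or the SunSketcher program—a few lines of Python show why the sun’s flattening is so much harder to measure than Earth’s:

```python
def flattening(equatorial_km: float, polar_km: float) -> float:
    """Flattening f = (a - b) / a of a rotating spheroid."""
    return (equatorial_km - polar_km) / equatorial_km

# Approximate radii in kilometers (illustrative values only).
earth_f = flattening(6378.137, 6356.752)  # roughly 1/298: a ~21 km equatorial bulge
sun_f = flattening(696_010.0, 696_000.0)  # assumes a ~10 km bulge: on the order of 1e-5

print(f"Earth flattening: {earth_f:.6f}")
print(f"Sun flattening:   {sun_f:.2e}")
```

At that scale, a bulge of roughly 10 kilometers on a disk nearly 1.4 million kilometers across is invisible to casual observation, which is why precisely timed flashes like Baily’s beads make such a useful yardstick.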

Total solar eclipse showcasing Baily's beads
Baily’s Beads as seen during the 2017 total eclipse. CREDIT: NASA/Aubrey Gemignani

According to NASA, the sun’s oblateness “depends upon the interior structure of the rotation, which we know from sunspot motions to be latitude-dependent at least.” Astronomers also think gas flows accompanying the sun’s magnetic activity and convection can create “transient distortions at a smaller level.” The upcoming total solar eclipse will give astronomers an opportunity to measure all of this more precisely, but to make that happen, NASA wants you to harness the moon.

Earth’s natural satellite can serve as a valuable research partner in measuring the sun’s oblateness. This is due to a phenomenon known as “Baily’s beads,” which are the tiny flashes of light during an eclipse that occur as solar light passes over the moon’s rugged terrain of craters, hills, and valleys. Since satellite imagery has helped produce extremely detailed mappings of lunar topography, experts can match Baily’s beads to the moon’s features as it passes in front of the sun.

[Related: New evidence suggests dogs may ‘picture’ objects in their minds, similarly to people.]

These flashes will vary depending on where an observer is located within the path of totality. If you could amass data from a vast number of observer locales, however, you could better understand the sun’s surface variations due to its oblateness. And there are potentially millions of individual locales directly underneath the April 8 eclipse. Enter: SunSketcher.

“With your help, we hope to create a massive hour-long database of observations, more than we could ever make on our own,” NASA says.

All volunteers need to do is angle their phones up to capture the big event and let SunSketcher record the rest. Once all those videos are collected, NASA says the solar disk’s size and shape can be calculated to within a few kilometers, “an accuracy that is far better than currently known.” The reliable, detailed information on solar oblateness captured during SunSketcher can also be used to study how solar gravity affects the motions of inner planets, as well as help test various gravitational theories.

It’s worth noting that serving as an official SunSketcher volunteer will sacrifice the ability to use your smartphone to snap videos or pictures for yourself—but that’s arguably a small price to pay for helping conduct valuable scientific research.

EPA says over half of all new cars must be EVs or hybrids by 2032 https://www.popsci.com/environment/epa-car-pollution-standards/ Wed, 20 Mar 2024 17:30:00 +0000 https://www.popsci.com/?p=607265
High traffic road with signs and light trails on sunset
Transportation pollution is the single largest greenhouse gas contributor in the US. Deposit Photos

The Biden Administration’s new policies are the strictest auto pollution regulations yet.

The post EPA says over half of all new cars must be EVs or hybrids by 2032 appeared first on Popular Science.

The Biden administration has announced some of the biggest pollution regulations in US history. On Wednesday, the Environmental Protection Agency revealed the finalization of new, enforceable standards meant to ensure electric and hybrid vehicles make up at least 56 percent of all passenger car and light truck sales by 2032.

To meet this goal, automotive manufacturers will face increasing tailpipe pollution limits over the next few years. This gradual shift essentially means over half of all car companies’ sales will need to be zero-emission models to meet the new federal benchmarks.

According to the EPA, this unprecedented industry transition could cut an estimated 7 billion tons of emissions over the next three decades. Regulators believe this will also offer nearly $100 billion in annual net benefits for the nation, including $13 billion of annual public health benefits from improved air quality alongside $62 billion in reduced annual fuel, maintenance, and repair costs for everyday drivers.

[Related: EPA rule finally bans the most common form of asbestos.]

Transportation annually generates 29 percent of all US carbon emissions, making it the country’s largest single climate change contributor. Aggressively pursuing a nationwide shift towards EV adoption was a cornerstone of Biden’s 2020 presidential campaign platform. While in office, Donald Trump rolled back the Obama administration’s previous automotive pollution standards applicable to vehicles manufactured through 2025. He has promised to enact similar orders if re-elected during this year’s presidential election.

The EPA’s new standards are actually a slightly relaxed version of a previous proposal put forth last year. To address concerns of both manufacturers and the industry’s largest union, United Auto Workers, the Biden administration agreed to slow the rise of tailpipe standards over the next few years. By 2030, however, limits will increase substantially to make up for the lost time. The EPA claims today’s finalized policy will still reduce emissions by the same amount over the next three decades.

The new rules are by no means an “EPA car ban” on gas-powered vehicles, as lobbyists with the American Fuel & Petrochemical Manufacturers continue to falsely claim. The guidelines go into effect in 2027, and only pertain to new cars and light trucks over the coming years. The stipulations also cover companies’ entire product lines, so it’s up to manufacturers to determine how their fleets as a whole meet the EPA benchmarks.

Still, fossil fuel companies and Republican authorities are extremely likely to file legal challenges over today’s announcement—challenges that could easily arrive in front of the Supreme Court in the coming years. Earlier today, the vice president of federal policy for the League of Conservation Voters said during a press call that they already discussed such possibilities with the Biden administration, and “they are crystal clear about the importance of getting rules out to make sure that they withstand both legal challenges from the fossil fuel industry and any congressional attacks should Republicans take over the Senate and the White House.”

‘Cyberflasher’ sent to prison for the first time in England https://www.popsci.com/technology/cyberflash-england-prison/ Wed, 20 Mar 2024 16:05:00 +0000 https://www.popsci.com/?p=607238
Female hand with mobile phone on blurred night lights background
The convicted offender received 66 weeks in prison. Deposit Photos

While legislation similar to the country's Online Safety Act exists worldwide, it is inconsistent.

The post ‘Cyberflasher’ sent to prison for the first time in England appeared first on Popular Science.

England’s court system has sentenced a “cyberflasher” to over a year in prison—a first for the country after its Online Safety Act went into effect on January 31. The 39-year-old culprit—already a registered sex offender—recently admitted in court to sending explicit photos of himself in February to both an adult woman and a teenage girl via the messaging platform WhatsApp. The woman subsequently took a screenshot of the interaction and reported it to police the same day.

Passed by UK legislators last year, the new laws are designed to protect children and adults from exposure to unwanted imagery. They also place additional “legal responsibility on tech companies to prevent and rapidly remove illegal content, like terrorism and revenge pornography.”

A 2020 academic study determined roughly 76 percent of girls between 12 and 18 have received unsolicited sexual images from boys and men, often at random in the form of cyberflashing. This form of harassment is defined as sending unsolicited sexual imagery to targets via social media, text messages, or dating apps “for the purpose of their own sexual gratification or to cause the victim humiliation, alarm or distress,” and was added to the UK’s Online Safety Bill in March 2023 ahead of its formal passage the following October. Offenders can face a maximum of two years in prison if convicted.

[Related: How can you safely send nudes?]

“Just as those who commit indecent exposure in the physical world can expect to face the consequences, so too should offenders who commit their crimes online,” Hannah von Dadelzsen, Deputy Chief Crown Prosecutor for CPS East of England, said in an official statement on March 19.

As Engadget notes, similar digital legislative actions exist around the world, although they vary in scope and penalty. Scotland and Northern Ireland banned cyberflashing in 2010 and 2011, respectively, while both Australia and Singapore also enforce criminal charges for cyberflashing.

Here in the US, regulations continue on a more piecemeal basis. In 2022, California became the third state (after Texas and Virginia) to enact laws protecting against cyberflashing harassment. Dating app companies like Bumble have also voiced support for new laws to better prosecute cyberflashing. According to Bumble’s own internal surveying, half of the women using the app, which bills itself as “women-first,” have received such images on the platform. Attempts to address these issues at a federal level have yet to materialize in actual legislation.

Meanwhile, some lawmakers are attempting to leverage these legitimate concerns into wider-reaching censorship campaigns. In Oklahoma, for example, Republican state senators put forth a bill last month that seeks to ban exchanging all explicit content, even if solicited, for anyone except married couples as part of a broader anti-pornography push.

Following Tuesday’s conviction announcement, Deputy Chief Crown Prosecutor von Dadelzsen vowed “it will not be the last” of such prosecutions, and urged additional victims to come forward “knowing you have the right to lifelong anonymity” through England’s legal protections.

Silicon Valley wants to deploy AI nursebots to handle your care https://www.popsci.com/technology/ai-nurse-chatbots-nvidia/ Tue, 19 Mar 2024 18:30:00 +0000 https://www.popsci.com/?p=607152
Woman talking with nurse chatbot on iPad
Hippocratic AI is using Nvidia GPUs to power its nurse chatbot avatars. Nvidia / Hippocratic AI / YouTube

Medical startup Hippocratic AI and Nvidia say it's all about the chatbots' 'empathy inference.'

The post Silicon Valley wants to deploy AI nursebots to handle your care appeared first on Popular Science.

]]>

The medical startup Hippocratic AI and Nvidia have announced plans to deploy voice-based “AI healthcare agents.” In demonstration videos provided Monday, at-home patients are depicted conversing with animated human avatar chatbots on tablet and smartphone screens. Examples include a post-op appendectomy screening, as well as a chatbot instructing someone on how to inject penicillin. Hippocratic’s web page suggests providers could soon simply purchase its nursebots for less than $9 an hour to handle such tasks, instead of paying what the company claims is $90 an hour for an actual registered nurse. (The average pay for a registered nurse in the US is $38.74 an hour, according to the US Bureau of Labor Statistics’ 2022 occupational employment statistics survey.)

A patient’s trust in AI apparently is all about a program’s “seamless, personalized, and conversational” tone, said Munjal Shah, Hippocratic AI co-founder and CEO, in the company’s March 18 statement. Based on internal research, people’s ability to “emotionally connect” with an AI healthcare agent reportedly increases “by 5-10% or more” for every half-second improvement in conversational response speed, the metric behind what Hippocratic dubs its “empathy inference” engine. But quickly simulating all that worthwhile humanity requires a lot of computing power, hence Hippocratic’s investment in countless Nvidia H100 Tensor Core GPUs.


“Voice-based digital agents powered by generative AI can usher in an age of abundance in healthcare, but only if the technology responds to patients as a human would,” Kimberly Powell, Nvidia’s VP of Healthcare, said on Monday.

[Related: Will we ever be able to trust health advice from an AI?]

But an H100 GPU-fueled nurse-droid’s capacity to spew medical advice nearly as fast as an overworked healthcare worker is only as good as its accuracy and bedside manner. Hippocratic says it’s got that covered too, of course, citing as proof internal surveys and beta testing in which over 5,500 nurses and doctors voiced overwhelming satisfaction with the AI. When it comes to its ability to avoid AI’s (well-documented) racial, gendered, and age-based biases, however, testing is apparently still underway. And in terms of where Hippocratic’s LLM derived its diagnostic and conversational information, the company is even vaguer than it is about its mostly anonymous polled humans.

In the company’s white paper detailing Polaris, its “Safety-focused LLM Constellation Architecture for Healthcare,” Hippocratic AI researchers say their model is trained “on a massive collection of proprietary data including clinical care plans, healthcare regulatory documents, medical manuals, drug databases, and other high-quality medical reasoning documents.” And that’s about it for any info on that front. PopSci has reached out to Hippocratic for more specifics, as well as whether or not patient medical info will be used in future training.

In the meantime, it’s currently unclear when healthcare companies (or, say, Amazon, for that matter) can “augment their human staff” with “empathy inference” AI nurses, as Hippocratic advertises. The company did note it’s already working with over 40 “beta partners” to test AI healthcare agents on a wide gamut of responsibilities, including chronic care management, wellness coaching, health risk assessments, pre-op outreach, and post-discharge follow-ups.

It’s hard to envision a majority of people ever preferring to talk with uncanny chat avatars instead of trained, emotionally invested, properly compensated healthcare workers. But that’s not necessarily the point here. The global nursing shortage remains dire, with recent estimates pointing to a shortage of 15 million health workers by 2030. Instead of addressing the working conditions and wage concerns that led unions representing roughly 32,000 nurses to strike in 2023, Hippocratic claims its supposed cost-effective AI solution is the “only scalable way” to close the shortfall gap—a scalability reliant on Nvidia’s H100 GPU.

The H100 is what helped make Nvidia one of the world’s most lucrative, multitrillion-dollar companies, and the chips still support many large language model (LLM) AI supercomputer systems. That said, it’s now technically Nvidia’s third most-powerful offering, following last year’s GH200 Grace Hopper Superchip, as well as yesterday’s reveal of the forthcoming Blackwell B200 GPU. Still, at roughly $30,000 to $40,000 per chip, the H100’s price tag is reserved for the sorts of projects valued at half a billion dollars, projects like Hippocratic AI.

But before jumping at the potential savings that an AI labor workaround could provide the healthcare industry, it’s worth considering these bots’ energy costs. For reference, a single H100 GPU requires as much power per day as the average American household.
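A quick back-of-envelope comparison shows how that claim pencils out. The 700-watt figure (the published maximum draw of the H100 in its SXM form factor) and the roughly 29 kWh/day household average are our assumptions for this sketch, not numbers from Hippocratic or Nvidia; real workloads and data-center cooling overhead shift both:

```python
# Back-of-envelope: daily energy use of one H100 GPU vs. a US household.
# Assumptions (ours): ~700 W maximum GPU draw, continuous operation, and
# ~29 kWh/day for an average US household (~10,800 kWh/year).

GPU_WATTS = 700
HOUSEHOLD_KWH_PER_DAY = 29

gpu_kwh_per_day = GPU_WATTS / 1000 * 24  # watts -> kW, times 24 hours
ratio = gpu_kwh_per_day / HOUSEHOLD_KWH_PER_DAY

print(f"One H100 running flat-out: {gpu_kwh_per_day:.1f} kWh/day")
print(f"That is about {ratio:.0%} of a typical household's daily use")
```

Under these assumptions a single chip lands in the same ballpark as household consumption, before counting the rest of the server or the cooling it needs.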

]]>
Flexible, resilient origami-inspired bridges could help navigate disaster zones https://www.popsci.com/technology/origami-engineering-modules/ Mon, 18 Mar 2024 19:30:00 +0000 https://www.popsci.com/?p=606972
Engineers constructing a structure using their origami module materials
From left, Yi Zhu, a Research Fellow in Mechanical Engineering, and Evgueni Filipov, an associate professor in both Civil and Environmental Engineering and Mechanical Engineering, working in his lab in the George G. Brown Laboratories Building on the North Campus of the University of Michigan in Ann Arbor. Brenda Ahearn/University of Michigan, College of Engineering, Communications and Marketing

The folding art form may help develop a new generation of sturdy buildings, and even lunar habitats.

The post Flexible, resilient origami-inspired bridges could help navigate disaster zones appeared first on Popular Science.

]]>

Origami traditionally involves the creation of extremely delicate paper structures, but the art form’s underlying principles could soon be adapted to help navigate tough construction situations. That’s the theory behind a new series of collapsible components designed by a team of University of Michigan engineering professors. When unfolded and assembled using hinges and locks, the researchers’ pieces combine to become extremely sturdy modular structures. Given their design’s impressive durability and spatial economy, the new origami-inspired constructions could be deployed across natural disaster zones, or even in outer space.


The collaborators have detailed their work in a new study published on March 15 in Nature Communications. While the creators used medium-density fiberboard frames alongside aluminum hinges and locking mechanisms in their first tests, they believe materials such as plastic, encased glass, or metal could all work in future iterations.

In one lab example, engineers utilized a single square foot’s worth of their lattice-like, repeating triangular fiberboard pieces and metal hinges. Despite altogether weighing barely 16 pounds, the parts could combine into a 3.3-foot-tall column capable of supporting over 2 tons of weight. In another scenario, a 1.6-foot-wide cube’s worth of the origami parts could unfold and assemble into multiple structures, such as a 6.5-foot-tall “bus stop,” a 13-foot-tall vertical building column, or a walking bridge of the same length.
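To put that column in perspective, a bit of arithmetic on the figures above (we take the “over 2 tons” as 4,000 pounds, assuming US short tons):

```python
# Strength-to-weight of the origami column described above:
# ~16 lb of parts supporting "over 2 tons" of load.
parts_weight_lb = 16
supported_lb = 2 * 2000  # 2 US short tons = 4,000 lb

ratio = supported_lb / parts_weight_lb
print(f"The column holds roughly {ratio:.0f}x its own weight")
```

That works out to the assembled parts carrying on the order of 250 times their own weight, which is why the team pitches the system for disaster zones where every pound shipped in matters.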

To pull off their improved construction design, the engineers realized that uniformity beat out selective reinforcements.

[Related: Microflier robots use the science of origami to fall like leaves.]

While other engineers have previously attempted to strategically thicken certain regions of their origami building materials, the researchers created their components with a standardized thickness to allow for more evenly distributed weight loads. The result, the Modular and Uniformly Thick Origami-Inspired Structure (MUTOIS) system, not only solves this long-standing stress distribution problem, but allows for immense customizability depending on a user’s needs, such as size, purpose, and materials.

Certain parts can either be completely solid, or contain partial openings within the repeating triangular framework. The pedestrian bridge, for example, employed solid panels for its base alongside trussed panels on either side for “efficient load-carrying,” according to the team’s paper. These modules also allow individual pieces to be replaced and repaired as needed.

The MUTOIS system currently relies on simple connectors instead of more specialized, self-latching designs. As such, the structures require people to manually construct their intended projects, as opposed to robotic-assisted or factory assembly. That said, however, the team believes further research could continue to expand the MUTOIS system’s potential utility to help build “aerospace systems, extra-terrestrial habitats, robotics, mechanical devices, and more.”

]]>
For nearly $500,000, you too can have dinner in the ‘SpaceBalloon’ above Earth https://www.popsci.com/science/spacevip-balloon-trip/ Mon, 18 Mar 2024 15:30:00 +0000 https://www.popsci.com/?p=606914
SpaceVIP SpaceBalloon above Earth concept art
The high-priced trips are scheduled to start in late 2025. SpaceVIP

'Space is for everybody.'

The post For nearly $500,000, you too can have dinner in the ‘SpaceBalloon’ above Earth appeared first on Popular Science.

]]>

A luxury space tourism company called SpaceVIP is currently taking reservations for its Stratospheric Dining Experience. For $495,000, six participants will enjoy a Michelin Star restaurant-catered jaunt into suborbit, sans rockets or zero gravity. Scheduled to launch as early as 2025 from Florida’s Space Coast, the travelers will “gently lift” into the sky aboard the pressurized cabin of Spaceship Neptune, a supposedly carbon neutral “SpaceBalloon” designed by another elite getaway startup called Space Perspective. Over the course of six hours, travelers will be wined and dined by Rasmus Munk, Head Chef at Alchemist, a 2 Michelin Star “Holistic Cuisine” restaurant. 

What is “Holistic Cuisine”? According to a joint announcement, it’s apparently a meal that doubles as “an intentional story… that will inspire thought and discussion on the role of humanity in protecting our planet” while “challenging the diner to reexamine our relationship with Earth and those who inhabit it.” The diners can ponder this while watching the sunrise over Earth’s curvature from approximately 100,000 feet above sea level.

“Embarking on this unprecedented culinary odyssey to the cosmos marks a pivotal moment in human history,” Roman Chiporukha, founder of SpaceVIP, said in a statement. “This inaugural voyage is but the first chapter in SpaceVIP’s mission to harness the transformative power of space travel to elevate human consciousness and shape the course of our collective evolution.”

Concept art of SpaceBalloon cabin interior
Concept art depicting the SpaceBalloon’s interior. Credit: SpaceVIP

Space Perspective representatives also said they believe such a trip will spur what’s known as the “Overview Effect” within their “Explorers,” referring to the feeling of awe many astronauts have described upon seeing the Earth from the heavens. If it doesn’t, at least a portion of the ticket proceeds reportedly will go to the Space Prize Foundation, a nonprofit dedicated to advancing women within the space industry.

Those astronauts, however, felt their Overview Effect after years of physical, mental, and technological training. With a pressurized cabin, stable gravity, and a Space Spa (the name for the bathroom), Stratospheric Dining Experience attendees can simply bypass all of that by ponying up 12 times the annual salary of a first-year public school teacher in the US. For a more global overview, one ticket costs about 2,640 percent more than the global average yearly wage.
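For the curious, those comparisons can be unwound into the underlying figures. This is just arithmetic on the numbers in this article, with rounding; the implied salaries are our inference, not reported data:

```python
# Working backward from the ticket-price comparisons in this article.
TICKET = 495_000  # USD per seat

# "12 times the annual salary of a first-year public school teacher"
implied_teacher_salary = TICKET / 12

# "about 2,640 percent more than the global average yearly wage"
# i.e. ticket = wage * (1 + 26.40)
implied_global_wage = TICKET / (1 + 26.40)

print(f"Implied first-year teacher salary: ${implied_teacher_salary:,.0f}")
print(f"Implied global average yearly wage: ${implied_global_wage:,.0f}")
```

The arithmetic implies a teacher salary of roughly $41,000 and a global average wage of roughly $18,000 a year.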

Test flights will commence later this year ahead of the 2025 launch window, when SpaceVIP’s Explorers “will be making history by enjoying the meal of a lifetime above 99-percent of Earth’s atmosphere.”

Despite being replete with mentions of “space” throughout the press materials, the meal won’t technically be in outer space. At its apex, SpaceVIP and Space Perspective’s “SpaceBalloon” will be about 43 miles below the Kármán line. For an actual, albeit brief, trip to space, Blue Origin spots are reportedly going for about $250,000 a seat.

]]>
Crypto scammers flooded YouTube with sham SpaceX Starship livestreams https://www.popsci.com/technology/crypto-scam-starship-launch-livestream/ Thu, 14 Mar 2024 15:26:22 +0000 https://www.popsci.com/?p=606533
Starship rocket launching during third test
The SpaceX Starship Flight 3 Rocket launches at the Starbase facility on March 14, 2024 in Brownsville, Texas. The operation is SpaceX's third attempt at launching this rocket into space. The Starship Flight 3 rocket becomes the world's largest rocket launched into space and is vital to NASA's plans for landing astronauts on the Moon and Elon Musk's hopes of eventually colonizing Mars. Photo by Brandon Bell/Getty Images

A fake Elon Musk hawked an ‘amazing opportunity’ during this morning’s big launch.

The post Crypto scammers flooded YouTube with sham SpaceX Starship livestreams appeared first on Popular Science.

]]>

YouTube is flooded with fake livestream accounts airing looped videos of “Elon Musk” supposedly promoting crypto schemes. Although this isn’t the first time it has happened, the website’s layout, verification qualifications, and search results page continue to make it difficult to separate legitimate sources from the con artists attempting to leverage today’s Starship test launch, the rocket’s most successful to date, although ground control eventually lost contact with it yet again.

After entering search queries such as “Starship Launch Livestream,” users will find at least one supposedly verified account within the top ten results that takes them to a video of Elon Musk standing in front of the roughly 400-foot-tall rocket’s launchpad in Boca Chica, Texas. Multiple other accounts airing the same clip can be found further down the search results.


“Don’t miss your chance to change your financial life,” a voice similar to Musk’s tells attendees over footage of him attending a previous, actual Starship event. “This initiative symbolizes our commitment to making space exploration accessible to all, while also highlighting the potential of financial innovations represented by cryptocurrencies.”

“…to send either 0.1 Bitcoin or one Ethereum or Dogecoin to the specified address. After completing the transaction within a minute, twice as much Bitcoin or Ethereum will be returned to your address. …It is very important to use reliable and verified sources to scan the QR code and visit the promotion website. This will help avoid possible fraudulent schemes. Please remember administration is not responsible for loss due to not following the rules of our giveaway due to incorrect transactions or the use of unreliable sources. Don’t miss your chance to change your financial life. Connect Cryptocurrency wallet right now and become part of this amazing opportunity. You will receive double the amount reflected in your Bitcoin wallet. This initiative symbolizes our commitment to making space exploration accessible to all while also highlighting the potential of financial innovations are represented by cryptocurrencies. So let us embark on this remarkable journey to financial independence and cosmic discoveries…”

Fake Elon Musk

It’s unclear whether the audio is an AI vocal clone or simply a human impersonation, but either way it is oddly stilted and filled with glitches. A QR code displayed at the bottom of the screen (which PopSci cropped out of the video above) takes viewers to a website falsely advertising an “Official event from SpaceX Company” offering an “opportunity to take a share of 2,000 BTC,” among other massive cryptocurrency hauls.

There are currently multiple accounts mirroring the official SpaceX YouTube page airing simultaneous livestreams of the same scam clip. One of those accounts has been active since May 16, 2022, and has over 2.3 million subscribers—roughly one-third that of SpaceX’s actual, verified profile. Unlike the real company’s locale, however, the fake profile is listed as residing in Venezuela.

[Related: Another SpaceX Starship blew up.]

Scammers have long leveraged Musk’s public image for similar con campaigns. The SpaceX, Tesla, and X CEO is a longtime pusher of various cryptocurrency ventures, and is one of the world’s wealthiest men. Likewise, YouTube is a particularly popular venue for crypto grifters. In June 2020, for example, bad actors made off with $150,000 through nearly identical SpaceX YouTube channels. Almost exactly two years later, the BBC noted dozens of fake Musk videos advertising crypto scams, earning a public rebuke from the actual Musk himself. The crypto enthusiast outlet BitOK revealed a nearly identical campaign around the time of the November 2023 Starship event.

Update 3/15/24 12:40pm: A YouTube spokesperson confirmed that the company has “terminated four channels in line with our policies which prohibit cryptocurrency phishing schemes.” According to YouTube, video uploads are monitored by a combination of machine learning and human reviewers.

]]>
A cargo ship’s ‘WindWing’ sails saved it up to 12 tons of fuel per day https://www.popsci.com/technology/windwing-ship-sails/ Thu, 14 Mar 2024 14:00:00 +0000 https://www.popsci.com/?p=606516
Pyxis Ocean shipping vessel with two WindWing sails
Pyxis Ocean sailing through the English Channel from Spain to Amsterdam, March 2024. Business Wire / Cargill

After six months sailing around the world, the numbers are in for the retrofitted ‘Pyxis Ocean.’

The post A cargo ship’s ‘WindWing’ sails saved it up to 12 tons of fuel per day appeared first on Popular Science.

]]>

A shipping vessel left China for Brazil while sporting some new improvements last August—a pair of 123-foot-tall, solid “wings” retrofitted atop its deck to harness wind power for propulsion assistance. But after its six-week maiden voyage testing the green energy tech, the Pyxis Ocean MC Shipping Kamsarmax vessel apparently had many more trips ahead of it. Six months later, its owners at the shipping company Cargill shared the results of those journeys this week, and it sounds like the vertical WindWing sails could offer a promising way to reduce existing vessels’ emissions.

Using the wind force captured by its two giant, controllable sails to boost its speed, Pyxis Ocean reportedly saved an average of 3.3 tons of fuel each day. And in optimal weather conditions, its trips through portions of the Indian, Pacific, and Atlantic Oceans reduced fuel consumption by over 12 tons a day. According to Cargill’s math, that’s an average of 14 percent less greenhouse gas emissions from the ship. On its best days, Pyxis Ocean could cut that down by 37 percent. In all, the WindWing’s average performance fell within 10 percent of its designers’ computational fluid dynamics simulation predictions.
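Those figures can be cross-checked against one another. The sketch below infers the ship’s implied baseline fuel burn from Cargill’s reported savings and percentages; the baselines are our arithmetic, not numbers the company published:

```python
# If 3.3 tons/day saved corresponds to a 14% cut, and 12 tons/day to a
# 37% cut, the implied fuel burn without the sails falls out directly.
avg_saved_tons, avg_pct = 3.3, 0.14
best_saved_tons, best_pct = 12.0, 0.37

baseline_avg = avg_saved_tons / avg_pct    # implied average daily burn
baseline_best = best_saved_tons / best_pct # implied burn on the best days

print(f"Implied average daily burn without sails: {baseline_avg:.1f} tons")
print(f"Implied best-day burn without sails: {baseline_best:.1f} tons")
```

The two baselines (roughly 24 and 32 tons a day) differ because a ship’s consumption varies with speed and sea state, but both sit in a plausible range for a Kamsarmax-class vessel, so the reported percentages hang together.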

[Related: A cargo ship with 123-foot ‘WindWing’ sails has just departed on its maiden voyage.]

In total, an equally sized ship outfitted with two WindWings could annually save the same amount of emissions as removing 480 cars from roads—but that could even be a relatively conservative estimate, according to WindWing’s makers at BAR Technologies.

“[W]hile the Pyxis Ocean has two WindWings, we anticipate the majority of Kamsarmax vessels will carry three wings, further increasing the fuel savings and emissions reductions by a factor of 1.5,” BAR Technologies CEO John Cooper said in a statement on Tuesday.

The individual success of Pyxis Ocean is encouraging news, but that’s just one of the 110,000-or-so merchant ships in the world. On top of that, ports are currently designed to accommodate shipping vessels’ traditional proportions; the 123 feet of height added by WindWings could potentially complicate docking in many locations. According to Jan Dieleman, president of Cargill’s Ocean Transportation business, the company is already working to address such issues.

“Cargill is creating ways for all [wind assisted propulsion] vessels—not just the Pyxis Ocean—to operate on global trade routes,” Dieleman said in this week’s announcement, adding that the company has begun talking to over 250 ports to figure out the logistics needed to accommodate such ships.

]]>
Montana traffickers illegally cloned Frankensheep hybrids for sport hunting https://www.popsci.com/environment/sheep-hybrid-hunting/ Wed, 13 Mar 2024 17:08:37 +0000 https://www.popsci.com/?p=606435
Group of Marco Polo Sheep on a snowy mountainside.
Genetic material harvested from Marco Polo argali sheep like those pictured above were used to illegally breed hybrids. Deposit Photos

Conspirators used the genetic material of Marco Polo argali sheep from Kyrgyzstan to breed entirely new animals.

The post Montana traffickers illegally cloned Frankensheep hybrids for sport hunting appeared first on Popular Science.

]]>

Please do not spend nearly a decade working to secretly clone endangered sheep in a bid to create giant Frankensheep hybrids for wealthy people to hunt for sport. It is very illegal, and the US government will make an example out of you.

Case in point: Arthur “Jack” Schubarth, the 80-year-old owner of a 215-acre “alternative livestock” ranch in Montana, who the Justice Department reports pleaded guilty on Tuesday to two felony wildlife crimes: conspiracy to violate, as well as “substantively violating,” the Lacey Act, a law enacted in 1900 to combat illegal animal trafficking.

Located in Vaughn, Montana, Schubarth Ranch is what’s known as a shooting preserve or game ranch, where people pay exorbitant amounts to hunt captive, often exotic animals like mountain goats. Or, in this case, extremely large, never-before-seen hybrid supersheep derived from Central Asia’s Ovis ammon polii, or the Marco Polo argali.

With a shoulder height as tall as 49 inches and horns over five feet wide, the 300-pound Marco Polo argali is unequivocally the world’s largest sheep species. They are also heavily protected, falling under the jurisdiction of both the Convention on International Trade in Endangered Species and the US Endangered Species Act. On top of that, they’re prohibited in the state of Montana in an effort to protect native species against disease and hybridization. Despite all this, Schubarth and at least five associates thought it wise to try breeding new hybrid sheep species using Marco Polo argali DNA in the hopes of jacking up hunting rates.

[Related: How hunting deer became a battle cry in conservation.]

Pulling it off apparently required serious scientific and international scheming. According to Justice Department officials, Schubarth secretly purchased “parts” of Marco Polo argali sheep from Kyrgyzstan in 2013, then arranged transportation of the biological samples to the US. Once here, Schubarth then tasked a lab to create embryo clones from the Marco Polo argali genetic material. These embryos were then implanted in ewes of a different sheep species on his farm, which eventually produced a pure male Marco Polo argali Schubarth crowned the “Montana Mountain King,” aka MMK.

From there, “other unnamed co-conspirators” alongside Schubarth artificially inseminated other ewes (also apparently of sheep species illegal in Montana) using MMK semen. All the while, the sheep scandal grew to include forged vet inspection certificates claiming the legality of their livestock, and even the sale of MMK’s semen to breeders in other states. According to court documents, sheep containing 25 percent Montana Mountain King genetics fetched as much as $15,000 per head. A son of MMK, dubbed Montana Black Magic, helped produce sheep worth around $10,000 each.

The genetic thievery wasn’t limited to Marco Polo argali, either. Court filings also show Schubarth pursued similar endeavors to amass genetic material harvested from Rocky Mountain bighorn sheep, which he then also sold through interstate deals. All of this, perhaps unsurprisingly, also violated state laws prohibiting the sale of game animal parts and the use of game animals on alternative livestock ranches.

The crimes unfortunately go far beyond simple greed. These animal trafficking laws are not simply meant to protect conservation efforts—they’re in place to maintain the health of local ecosystems.

“In pursuit of this scheme, Schubarth violated international law and the Lacey Act, both of which protect the viability and health of native populations of animals,”  Todd Kim, Assistant Attorney General of the Justice Department Environment and Natural Resources Division (ENRD), said on Tuesday, with Montana Fish, Wildlife & Parks Chief of Enforcement Ron Howell adding, “The kind of crime we uncovered here could threaten the integrity of our wildlife species in Montana.”

It’s unclear how many hybrid sheep Schubarth and his colleagues successfully bred, as well as how many were ultimately sold and potentially hunted. PopSci has reached out to the Justice Department’s Environment and Natural Resources Division for clarification.

In the meantime, Schubarth now faces upwards of five years in prison per felony count, a maximum $250,000 fine, and three years of supervised release. He’s scheduled to be sentenced on July 11.

]]>
Researchers propose fourth traffic signal light for hypothetical self-driving car future https://www.popsci.com/technology/fourth-traffic-light-self-driving-cars/ Wed, 13 Mar 2024 16:00:00 +0000 https://www.popsci.com/?p=606404
Traffic light flashing yellow signal
The classic traffic signal design was internationally recognized in 1931. Deposit Photos

It's called 'white' for now, until a color that 'does not create confusion' is picked.

The post Researchers propose fourth traffic signal light for hypothetical self-driving car future appeared first on Popular Science.

]]>

Fully self-driving cars, despite the claims of some companies, aren’t exactly ready to hit the roads anytime soon. There’s even a solid case to be made that completely autonomous vehicles (AVs) will never take over everyday travel. Regardless, some urban planners are already looking into how such a future could be made as safe and efficient as possible. According to a team at North Carolina State University, one solution may be upending the more-than-century-old design of traffic signals.

The ubiquity of stop lights’ Red-Yellow-Green phases isn’t just coincidence; it’s actually codified in an international accord dating back to 1931. This arrangement has served drivers pretty well since then, but the NC State team argues AVs could eventually create the opportunity for better road conditions. Or, at the very least, could benefit from some infrastructure adjustments.

Last year, researchers led by civil, construction, and environmental engineering associate professor Ali Hajbabaie created a computer model of city commuting patterns which indicated everyday driving could one day actually improve with a sizable influx of AVs. By sharing their copious amounts of real-time sensor information with one another, Hajbabaie and colleagues believe, these vehicles could hypothetically coordinate far beyond simple intersection changes to adjust variables like speed and brake timing.

To further harness these benefits, they proposed the introduction of a fourth, “white” light to traffic signals. In this scenario, the “white” phase activates whenever enough interconnected AVs approach an intersection. Once lit, the phase indicates nearby drivers should simply follow the car (AV or human) in front of them, instead of trying to anticipate something like a yellow light’s transition time to red. Additionally, such interconnectivity could communicate with traffic signal systems to determine the best times to display “Walk” and “Do-Not-Walk” pedestrian signals. Based on their modeling, such a change could reduce intersection congestion by at least 40 percent compared to current traffic system optimization software. In doing so, this could improve overall travel times, fuel efficiency, and safety.
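At its core, the trigger for the new phase is a simple threshold check. The toy sketch below illustrates the idea only; the 70 percent activation threshold, the function name, and the phase labels are our invented assumptions, not values from the NC State model:

```python
# Toy controller: activate the "white" (follow-the-leader) phase when
# enough of the vehicles approaching an intersection are connected AVs.
# The 70% threshold here is purely illustrative.

AV_THRESHOLD = 0.70

def pick_phase(approaching_vehicles: int, connected_avs: int,
               normal_phase: str) -> str:
    """Return the active signal phase for one intersection approach."""
    if approaching_vehicles == 0:
        return normal_phase
    av_fraction = connected_avs / approaching_vehicles
    if av_fraction >= AV_THRESHOLD:
        # AVs coordinate among themselves; human drivers simply
        # follow the car ahead instead of reading the light.
        return "white"
    return normal_phase  # fall back to ordinary red/yellow/green timing

print(pick_phase(10, 8, "green"))  # "white": 80% of traffic is connected
print(pick_phase(10, 3, "red"))    # "red": too few AVs to coordinate
```

The real research optimizes the threshold and the phase timing jointly across a whole network, but the same binary decision, enough AVs or not, sits underneath it.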

[Related: What can ‘smart intersections’ do for a city? Chattanooga aims to find out.]

But for those concerned about the stressful idea of confusing, colorless lights atop existing signals, don’t worry—the “white” is just a theoretical stand-in until regulators decide on something clearer.

“Research needs to be done to find the best color/indication,” Hajbabaie writes in an email to PopSci. “Any indication/color could be used as long as it does not associate with any existing message and does not create confusion.”

This initial model had a pretty glaring limitation, however—it did not really take pedestrians into consideration. In the year since, Hajbabaie’s team has updated their four-phase traffic light computer model to account for this crucial factor in urban traffic. According to their new results published in Computer-Aided Civil and Infrastructure Engineering, the NC State researchers determined that even with humans commuting by foot, an additional fourth light could reduce delays at intersections by as much as 25 percent from current levels.

Granted, this massive reduction is dependent on an “almost universal adoption of AVs,” Hajbabaie said in a separate announcement this week. Given the current state of the industry, such a future seems much further down the road than many have hoped. But even if that future never fully arrives, the team believes a modest increase in AVs on roads—coupled with something like this fourth “white” phase—could still improve conditions in an extremely meaningful way. What’s more, Hajbabaie says that waiting for fully autonomous cars may not be necessary.

“We think that this concept would [also] work with vehicles that have adaptive cruise control and some sort of lateral movement controller such as lane keeping feature,” he tells PopSci. “Having said that, we think we would require more sensors in the intersection vicinity to be able to observe the location of vehicles if they are not equipped with all the sensors that smart cars will be equipped with.”

But regardless of whether cities ever reach a driverless car future, it’s probably best to just keep investing in green urban planning projects like cycling lanes, protected walkways, and even e-bikes. They’re simpler and more eco-friendly.

The post Researchers propose fourth traffic signal light for hypothetical self-driving car future appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Huge 60-foot-tall buoy uses ocean waves to create clean energy https://www.popsci.com/technology/buoy-wave-generator/ Tue, 12 Mar 2024 14:20:00 +0000 https://www.popsci.com/?p=606198
CorPower C4 buoy turbine in ocean
The buoy shifts into a passive 'transparent' mode when the waters get too choppy. CorPower

CorPower’s C4 prototype just completed a successful six-month test run off the coast of Portugal. Here are the results.

The post Huge 60-foot-tall buoy uses ocean waves to create clean energy appeared first on Popular Science.


Giant buoys over 60 feet tall may one day generate clean energy to feed into local power grids—but making that a reality isn’t as simple as going with the ocean’s flow. To successfully keep the idea afloat, it’s all about timing.

Swedish company CorPower recently announced the completion of its first commercial-scale buoy generator demonstration program off the coast of northern Portugal. Over the course of a six-month test run, CorPower’s three-story C4 Wave Energy Converter (WEC) endured four major Atlantic storms and adapted to constantly shifting wave heights. Although final analysis is still ongoing, CorPower believes the technology offers a promising new way to transition towards a sustainable future.

As New Atlas explains, the basic theory behind CorPower’s C4 is relatively straightforward. As its air-filled chassis bobs along the rolling waves, an internal system converts the up-and-down movement into rotational power for energy generation. At the same time, a tensioned, internal pneumatic cylinder reacts in real time to wave phases, slightly delaying the buoy’s motion relative to the waves; that phase lag amplifies the bobbing, and thus the energy production. According to CorPower, this system can boost power generation by as much as 300 percent.

But what about when the sea inevitably gets choppier, as was the case during storms that produced waves nearly as high as the C4 itself? When this happens, the pneumatic cylinder switches off its active control to allow the machine to enter “transparent” mode, during which time it simply rides out the adverse ocean conditions until it’s time to spring back into action. CorPower compares this “tuning and detuning” feature to similar systems in wind turbines, which adjust the pitch of their blades in response to surrounding weather conditions.

[Related: Huge underwater ‘kite’ turbine powered 1,000 homes in the Faroe Islands.]

CorPower says its team recorded as much as 600 kW of peak power production during the C4 trial, although it believes the buoy’s current version could ramp that up to around 850 kW. While that by itself isn’t much compared to a single offshore wind turbine’s multi-megawatt range, CorPower’s plan is to eventually deploy thousands of more efficient WEC machines to create a much more powerful generator network. If it can scale a farm up to 20 gigawatts of capacity, it estimates the buoys could deliver energy at between $33 and $44 per megawatt-hour. That’s pretty attractive to investors, especially given that C4’s aquatic power source operates virtually 24/7, unlike wind or solar generators.

Right now, however, 20 gigawatts would require over 20,000 buoys, so a more economical and efficient buoy system is definitely needed before anyone starts seeing fleets of these canary yellow contraptions floating out there on the open oceans. CorPower seems confident it can get there, and is next planning a new trial phase that will see multiple C4 buoys in action.
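The arithmetic behind that fleet estimate is easy to check. A back-of-envelope sketch (the per-buoy rating is CorPower’s projected 850 kW figure from above; real-world capacity factors would push the count higher):

```python
# Back-of-envelope check of CorPower's farm-scale figures (illustrative only).
rated_potential_kw = 850   # projected output of the current C4 design
farm_target_gw = 20        # hypothetical farm size cited by the company

# Convert gigawatts to kilowatts (1 GW = 1e6 kW), then divide by per-buoy rating.
buoys_needed = (farm_target_gw * 1e6) / rated_potential_kw
print(f"Buoys needed at {rated_potential_kw} kW each: {buoys_needed:,.0f}")
```

That works out to roughly 23,500 buoys, consistent with the article’s “over 20,000” figure.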

Airbnb finally bans all indoor security cameras https://www.popsci.com/technology/airbnb-camera-ban/ Mon, 11 Mar 2024 18:00:00 +0000 https://www.popsci.com/?p=606098
CCTV security camera operating in home.
Airbnb previously allowed visible security cameras in common spaces like living rooms and hallways. Deposit Photos

Even when restricted to ‘common spaces,’ the cameras made many renters uncomfortable.

The post Airbnb finally bans all indoor security cameras appeared first on Popular Science.


Certain Airbnb hosts will need to make a few adjustments to their properties. On Monday, the short-term rental service announced it is finally prohibiting the use of all indoor security cameras, regardless of room location. For years, hosts could install video cameras in “common areas” such as living rooms, kitchens, and hallways, so long as they were both clearly visible and disclosed in the listings. Beginning April 30, however, no such devices will be permitted at any Airbnb location around the world.

Airbnb’s head of community policy and partnerships announced that the policy shift is intended to offer “new, clear rules” for both hosts and guests while providing “greater clarity about what to expect on Airbnb.” Privacy advocates have previously voiced concerns about what footage could be captured even in Airbnb “common spaces,” and are celebrating the news.

“No one should have to worry about being recorded in a rental,” Albert Fox Cahn, executive director of the civil rights watchdog nonprofit Surveillance Technology Oversight Project (STOP), said in a public statement. STOP has pressed Airbnb for this specific policy change since 2022. Cahn also called the policy reversal “a clear win for privacy and safety,” citing how easily recording devices can allegedly be exploited.

[Related: How to rent out your spare room and be an excellent host.]

According to the company, most Airbnb locales do not report indoor security cameras, so the upcoming policy revision is likely to impact only a small portion of rentals. And while indoor video cameras are soon-to-be banned, Airbnb will continue allowing other monitoring devices in rental locations under certain circumstances. Both doorbell and outdoor cameras, for example, are still permitted, so long as they are disclosed to guests and are not angled to see inside a residence. Cameras also remain prohibited in outdoor spots with “a greater expectation of privacy,” such as saunas or pool showers.

Other devices that remain available to hosts include decibel monitors, which measure a common space’s noise levels—an increasingly popular tool meant to dissuade unauthorized parties. That said, the equipment must be designed only to assess sound volume, and can’t actually record or transmit audio.

After April 30, guests can report any hosts that do not adhere to the new regulations, with penalties including listing or account bans as a result.

Hat-wearing cyborg jellyfish could one day explore the ocean depths https://www.popsci.com/technology/cyborg-jellyfish-biorobot/ Mon, 11 Mar 2024 16:30:00 +0000 https://www.popsci.com/?p=606077
Concept art of cyborg jellyfish with forebody attachments
An artist's rendering of jellyfish donning Caltech's sensor hat. Credit: Caltech/Rebecca Konte

A cheap pair of accessories may transform some of the Earth’s oldest creatures into high-tech, deep sea researchers.

The post Hat-wearing cyborg jellyfish could one day explore the ocean depths appeared first on Popular Science.


To better understand the ocean’s overall health, researchers hope to harness some of evolution’s simplest creatures as tools to assess aquatic ecosystems. All they need is $20 worth of materials, a 3D printer, and some jellyfish hats.

Jellyfish first began bobbing through Earth’s ancient oceans at least half a billion years ago, making them some of the planet’s oldest creatures. In all that time, however, their biology has remained pretty consistent—a bell-shaped, brainless head attached to a mass of tentacles, all of which is composed of around 95 percent water. Unfortunately, that same steady state can’t be said of their habitat, thanks to humanity’s ongoing environmental impacts.

Although it’s notoriously dangerous, technologically challenging, and expensive for humans to reach the ocean’s deepest regions, jellyfish do it all the time. Knowing this, a team of Caltech researchers, led by aeronautics and mechanical engineering professor John Dabiri, first created a jellyfish-inspired robot to explore the abyss. While the bot’s natural source material is Earth’s most energy efficient swimmer, the mechanical imitation couldn’t quite match the real thing. Dabiri and colleagues soon realized another option: bringing the robotics to actual jellyfish.

“Since they don’t have a brain or the ability to sense pain, we’ve been able to collaborate with bioethicists to develop this biohybrid robotic application in a way that’s ethically principled,” Dabiri said in a recent profile.

First up was a pacemaker-like implant capable of controlling the animal’s swimming speed. Thanks to the jellyfish’s natural efficiency, an implanted jelly could swim three times as fast as normal while expending only double the energy. After some additional tinkering, the team then designed a “forebody” that also harmlessly attaches to a jelly’s bell.

This 3D-printed, hat-like addition not only houses electronics and sensors, but makes its wearer even faster. Its sleek shape is “much like the pointed end of an arrow,” described Simon Anuszczyk, the Caltech graduate student and study lead author who came up with the forebody design. In a specially built, three-story vertical aquarium, the cyborg hat-sporting jellyfish could swim 4.5 times faster than its regular counterparts.

[Related: Even without brains, jellyfish learn from their mistakes.]

By controlling their jellies’ vertical ascent and descent, Dabiri’s team believes the biohybrids could one day help gather deep ocean data previously obtainable only by using extremely costly research vessels and equipment. Although handlers can only control the up-and-down movement of their cyborg animals at the moment, researchers believe additional work could make them fully steerable in any direction. They’ll also need to develop a sensor array capable of withstanding the deep sea’s crushing pressures, but the team is confident they are up to the challenge.

“It’s well known that the ocean is critical for determining our present and future climate on land, and yet, we still know surprisingly little about the ocean, especially away from the surface,” Dabiri said. “Our goal is to finally move that needle by taking an unconventional approach inspired by one of the few animals that already successfully explores the entire ocean.”

‘Alien’ signal was likely a very big truck https://www.popsci.com/science/uap-seismic-data-truck/ Fri, 08 Mar 2024 18:15:00 +0000 https://www.popsci.com/?p=605936
Google Earth Image of seismic center and truck road
The area near the seismic station in Manus Island, based on satellite images acquired on March 23, 2023. CREDIT: ROBERTO MOLAR CANDANOSA AND BENJAMIN FERNANDO/JOHNS HOPKINS UNIVERSITY, WITH IMAGERY FROM CNES/AIRBUS VIA GOOGLE

Researchers took a deeper look at seismic data taken during the 2014 fireball landing near Papua New Guinea.

The post ‘Alien’ signal was likely a very big truck appeared first on Popular Science.


There’s no doubt an extremely bright fireball careened through the atmosphere north of Papua New Guinea on January 8, 2014. It’s also true that divers recovered materials at the bottom of the ocean last year near where many experts believed the object landed—and that prominent Harvard astrophysicist Avi Loeb theorized some of these metallic spherules were possibly of “extraterrestrial technological” origin. But as to the ground vibrations recorded at a seismic station on Manus Island during the same atmospheric event? The explanation is likely much more mundane.

“[T]hey have all the characteristics we’d expect from a truck and none of the characteristics we’d expect from a meteor,” Johns Hopkins planetary seismologist Benjamin Fernando said on Thursday.

Fernando and his colleagues will present their findings on March 12 during the annual Lunar and Planetary Science Conference in Houston, Texas.

Although Fernando’s team concedes it’s difficult to prove what something isn’t from signal data alone, it’s much easier to show which characteristics a signal shares with known, explainable seismic sources.

“The signal changed directions over time, exactly matching a road that runs past the seismometer,” said Fernando.

[Related: How scientists decide if they’ve actually found signals of alien life.]

To further bolster the much more mundane explanation, the researchers also drew on data collected during the 2014 event by facilities in Australia and Palau originally built to detect the sound waves of nuclear tests. After factoring in those recordings, Fernando’s team revised the earlier estimate of the fireball’s location to a spot roughly 100 miles away from the original region.

“The fireball location was actually very far away from where the oceanographic expedition went to retrieve these meteor fragments,” Fernando said of the 2023 recovery trip. “Not only did they use the wrong signal, they were looking in the wrong place.”

The team also doesn’t mince words in their new paper, “Probably Not Aliens: Seismic Data Analysis from the 2014 ‘Interstellar Meteor.’” Of the alien theory, the researchers “consider it to be at best highly overstated and at worst entirely erroneous.” And of the material recovered last year, “poor localisation implies that any material recovered is far less likely to be from the meteor, let alone of interstellar or even extraterrestrial origin.”

[Related: How lightning on exoplanets could make it harder to find alien life.]

Given NASA’s estimate that around 50 tons of meteoritic material bombards Earth every day, Fernando’s team says it’s definitely possible some of those fragments retrieved from the ocean floor may indeed be from some other meteorite. Regardless, they “strongly suspect that it wasn’t aliens.”

Disappointing? Perhaps. But there’ll probably be plenty of new UAP sightings to parse in the future—especially if people take up the government’s offer to submit their own inexplicable events.

For more detailed debunking, tune into a livestream of next week’s findings here.

VR and electric brain stimulation show promise for treating PTSD https://www.popsci.com/health/vr-ptsd-treatment/ Thu, 07 Mar 2024 21:30:00 +0000 https://www.popsci.com/?p=605809
Concept of human intelligence with human brain on blue background
Patients reported improvements after only three sessions. Deposit Photos

A new study involving military veterans reported ‘meaningful’ improvements after only a couple weeks.

The post VR and electric brain stimulation show promise for treating PTSD appeared first on Popular Science.


Although it can sound cliché, there’s a lot of truth in the old axiom “face your fears.” Exposure therapy essentially puts that adage into practice: for many people, reprocessing their trauma with the help of trained professionals can allow their brains to relearn the important differences between an actual traumatic event and its harmless memories.

Unfortunately, post-traumatic stress disorder often reworks the brain by limiting the ventromedial prefrontal cortex’s ability to control regions like the amygdala. This can lead to memory and safety learning issues that limit exposure therapy’s efficacy.

To potentially solve this issue, researchers wondered whether combining the treatment with another popular trauma therapy might compensate for this brain barrier. Their results, published this week in JAMA Psychiatry, indicate a workaround may be found through a trio of tools: exposure therapy, transcranial direct current stimulation (tDCS), and virtual reality.

[Related: PTSD patients’ brains work differently when recalling traumatic experiences.]

In their recent study, a collaborative team from Brown University and the Providence V.A. Center for Neurorestoration and Neurotechnology asked 54 military veterans to participate in a new, double-blind study. Every volunteer agreed to six VR exposure therapy sessions over two to three weeks that depicted generalized warzone situations.

“It can be difficult for patients to talk about their personal trauma over and over, and that’s one common reason that participants drop out of psychotherapy,” Noah Philip, the study’s author and Brown University psychiatry professor, said in a statement. “This VR exposure tends to be much easier for people to handle.”

During these 25-minute sessions, half of the veterans simultaneously received painless, 2 milliamp tDCS stimulations directed at their ventromedial prefrontal cortex. The other participants, meanwhile, served as controls, and only felt a small sensation meant to mimic tDCS treatment.

According to researchers, veterans who received both therapies reported “meaningful” improvement in their PTSD symptoms after just three sessions, with a “significantly greater” reduction in issues reported during their one month follow-up interviews.

What’s more, the results arrived much faster than for volunteers who only underwent VR exposure therapy. In just two weeks, the tDCS/VR approach produced results normally seen only after about 12 weeks of exposure therapy alone.

It’s important to note that this initial participant sample size is relatively small, and researchers need to continue studying the results to better grasp how the treatment works over time. Still, the team hopes to conduct similar experiments on larger populations in the future, potentially alongside additional treatment sessions with longer follow-up times.

TSA is testing a self-screening security checkpoint in Vegas https://www.popsci.com/technology/tsa-vegas-self-screening/ Thu, 07 Mar 2024 16:37:31 +0000 https://www.popsci.com/?p=605766
Passenger staying at self-scan TSA station
The prototype is meant to resemble a grocery store's self checkout kiosk. Credit: TSA at Harry Reid International Airport at Las Vegas

The new prototype station is largely automated, and transfers much of the work onto passengers.

The post TSA is testing a self-screening security checkpoint in Vegas appeared first on Popular Science.


The Transportation Security Administration is launching the pilot phase of an autonomous self-screening checkpoint system. Unveiled earlier this week and scheduled to officially open on March 11 at Harry Reid International Airport in Las Vegas, the station resembles grocery store self-checkout kiosks—but instead of scanning milk and eggs, you’re expected to…scan yourself to ensure you aren’t a threat. Or at least that’s how it looks.

“We are constantly looking at innovative ways to enhance the passenger experience, while also improving security,” TSA Administrator David Pekoske said on Wednesday, claiming “trusted travelers” will be able to complete screenings “at their own pace.”

For now, the prototype station is only available to TSA PreCheck travelers, although similar self-scan options could open to additional passengers in the future, depending on the prototype’s success. Upon reaching the Las Vegas airport’s “TSA Innovation Checkpoint,” users will see something similar to the standard security checks, with the addition of a camera-enabled video screen. TSA agents are still nearby, but they won’t directly interact with passengers unless assistance is requested, which may also take the form of a virtual agent popping up on the video screen.

Woman standing in TSA self scan booth at airport
A woman standing in the TSA’s self-screening security checkpoint in Las Vegas. Credit: TSA at Harry Reid International Airport at Las Vegas

The new self-guided station’s X-ray machines function similarly to standard checkpoints, while its automated conveyor belts feed all luggage into a more sensitive detection system. That latter tech, however, sounds a little overly cautious at the moment. In a recent CBS News video segment, items as small as a passenger’s hair clips triggered the alarm. That said, the station is designed to allow “self-resolution” in such situations to “reduce instances where a pat-down or secondary screening procedure would be necessary,” according to the TSA.

[Related: The post-9/11 flight security changes you don’t see.]

The TSA’s proposed solution to one of airports’ most notorious bottlenecks comes at a tricky moment for both the travel and automation industries. A string of recent, high-profile technological and manufacturing snafus have, at best, severely inconvenienced passengers and, at worst, absolutely terrified them. Meanwhile, businesses’ aggressive implementation of self-checkout systems has backfired in certain markets as consumers increasingly voice frustrations with the often finicky tech. Critics, for their part, contend that automation “solutions” like the TSA’s new security checkpoint project are simply ways to employ fewer human workers, who often ask for pesky things like living wages and health insurance.

Whether or not self-scanning checkpoints become an airport staple won’t be certain for a few years. The TSA cautioned as much in this week’s announcement, going so far as to say some of these technologies may simply find their way into existing security lines. Until then, the agency says its new prototype at least “gives us an opportunity to collect valuable user data and insights.”

And if there’s anything surveillance organizations love, it’s all that “valuable user data.”

NASA’s astronaut applications are open again. Do you have what it takes? https://www.popsci.com/science/nasa-astronaut-application-open/ Wed, 06 Mar 2024 17:00:00 +0000 https://www.popsci.com/?p=605607
NASA astronaut training
You can apply for the 24th astronaut candidate class until April 6. NASA / YouTube

If you missed out on space camp, it's time to see if you qualify for the real thing.

The post NASA’s astronaut applications are open again. Do you have what it takes? appeared first on Popular Science.


NASA is wasting no time after yesterday’s 23rd astronaut class graduation ceremony. After congratulating the 10 newest people now eligible for flight assignments, the agency has opened the application portal for its next pool of potentially spacebound voyagers. And to celebrate the occasion, NASA enlisted the legend Morgan Freeman to narrate its announcement video.

A total of 360 candidates have taken part in the demanding, two-year training school since 1959, and only three did not finish the program. Just 48 Astronaut Office members are currently active. NASA picked its latest 10 graduates from over 12,000 applicants; they are now qualified for future assignments aboard the International Space Station, Artemis program missions, and even future commercial space station projects. As Space.com notes, however, their newbie status will more likely place them first in technical roles supporting flights, such as serving as Mission Control capsule communicators (capcoms) and overseeing rocket and spacecraft preparations. Before long, though, they could find themselves in line to board those very same vehicles for missions to the ISS or the moon, based on their backgrounds and career experience.

“Picture yourself in space, contributing to a new chapter of human exploration as a NASA astronaut,” Freeman says during the one-minute spot—okay, easy to do, but what about the reality of what’s required to apply?

The astronaut application checklist

According to NASA’s current astronaut candidate application page, you need at least a master’s degree or international equivalent in a STEM-related field such as “engineering, biological science, physical science, computer science, or mathematics.” A minimum of two years of current enrollment in a PhD or similar program can also qualify you, as can an advanced medical degree or completion of a stateside or international Test Pilot School program. Either two years of related STEM professional experience or a minimum of 1,000 hours of pilot-in-command time in jet aircraft is also required. There aren’t any age restrictions, but every astronaut candidate so far has been somewhere between 26 and 46 years old, with a median age of 34.

[Related: How to apply for NASA’s next Mars habitat simulation.]

Unsurprisingly, there’s also a lengthy list of physical assessments and medical requirements, including preliminary and random drug testing for illegal substances, psychiatric evaluations, swimming tests, and the Agency Physical Fitness Test. Your sitting blood pressure can’t exceed 140/90, and you need 20/20 vision, although LASIK surgery or eyeglasses are fine these days. On the shorter or taller side? Sorry—the height window is only 65 to 75 inches, in order to fit into NASA’s (increasingly trendy) spacesuits.

If all those hurdles sound relatively feasible to clear, feel free to head over to the USAJOBS page to fire off an application by April 6. That said, if you’re looking to start a bit closer to Earth, there’s always NASA’s Mars habitat simulation project to consider.

Oh good, the humanoid robots are running even faster now https://www.popsci.com/technology/fastest-humanoid-robot/ Tue, 05 Mar 2024 17:05:32 +0000 https://www.popsci.com/?p=605431
H1 V3.0 can also handle stairs, tight turns, and getting kicked by its designers.
H1 V3.0 can also handle stairs, tight turns, and getting kicked by its designers. YouTube

Shanghai's Unitree Robotics says its H1 robot trots at 7.38 mph—nearly two miles per hour faster than Boston Dynamics' Atlas.

The post Oh good, the humanoid robots are running even faster now appeared first on Popular Science.


Step aside, Atlas: A new bipedal bot has reportedly laid claim to the title of world’s fastest full-sized humanoid machine. According to the Shanghai-based startup Unitree Robotics, its H1 V3.0 now clocks in at 7.38 mph while gingerly walking along a flat surface. With the previous Guinness World Record set at 5.59 mph by the Boston Dynamics robot, H1’s new self-reported achievement would be a pretty massive improvement. If that weren’t enough, it pulled off its new feat while apparently wearing pants. (Or, more specifically, chaps.)

[Related: OpenAI wants to make a walking, talking humanoid robot smarter.]

In a new video, Unitree’s H1 can also be seen trotting across a park courtyard, lifting and transporting a small crate, jumping, and ascending and descending stairs. It can even perform a choreographed, TikTok-esque dance troupe routine—basically an industry requirement at this point.

At 71 inches tall, H1 is about as tall as an average human, although considerably lighter at just 100 pounds. According to Unitree, the robot utilizes both a 3D LiDAR sensor and a depth camera to supply 360-degree visual information. Another interesting feature of H1’s overall design is its hollow torso and limbs, which house all of the bot’s electrical routing. Although it doesn’t currently include articulated hands (they sort of look like wiffle balls at the moment), Unitree is reportedly developing the appendages to integrate into future versions.

Alongside its quadrupedal B1 robot, Unitree aims to take on existing competitors like Boston Dynamics by offering potentially more affordable products. H1’s current estimated price tag is somewhere between $90,000 and $150,000—that’s likely more than most people are willing to shell out for a robot (even a world record-holder) but with Atlas rumored to cost $150,000 minimum, it might prove attractive to researchers and other companies.

Major companies like Hyundai and Amazon (not to mention the military) are extremely interested in these two- and four-legged robots—either through integrating them into increasingly automated workplaces, or… strapping guns to them, apparently. In the meantime, startups including OpenAI are aiming to make these machines “smarter” and more responsive to real-time human interactions.

But while H1 is allegedly the fastest humanoid robot for the time being, it still doesn’t appear to be nearly as agile as the parkouring Atlas… or, it should be noted, as egg-friendly as Tesla’s latest Optimus prototype. And although both H1 and Atlas can walk faster than a lot of humans and keep pace with most joggers, their biological inspirations can still break away at a full sprint. For now, at least…

Oh, wait.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Watch the plasma fly in space capsule’s dramatic fall to Earth https://www.popsci.com/science/space-capsule-reentry-video/ Thu, 29 Feb 2024 21:45:00 +0000 https://www.popsci.com/?p=605067
Varda W-1 capsule reentry video screenshot
After 8 months in orbit, Varda's first reusable capsule made a safe return to Earth on Feb. 21. Varda / YouTube

Varda's W-1 spent 8 months in orbit before recording its entire trip home.

The post Watch the plasma fly in space capsule’s dramatic fall to Earth appeared first on Popular Science.

]]>

It took less than 30 minutes for Varda Space Industries’ W-1 capsule to leave its orbital home of eight months and plummet back to Earth. Such a short travel time not only required serious speed (around 25 times the speed of sound), but also the engineering wherewithal to endure “sustained plasma conditions” while careening through the atmosphere. In spite of these challenges, Varda’s first-of-its-kind reentry mission was a success, landing back on the ground on February 21. To celebrate, the company has released video footage of the capsule’s entire descent home.

Check out W-1’s fiery return below—available as both abbreviated and extended cuts:

Installed on a Rocket Lab Photon satellite bus, Varda’s W-1 capsule launched aboard a SpaceX Falcon 9 rocket on June 12, 2023. Once in low-Earth orbit, its mini-lab autonomously grew crystals of the common HIV treatment drug ritonavir. Manufacturing anything in space, let alone pharmaceuticals, may seem like overcomplicating things, but there’s actually a solid reason for it. As Varda explains on its website, processing materials in microgravity may benefit from a “lack of convection and sedimentation forces, as well as the ability to form more perfect structures due to the absence of gravitational stresses.”

In other words, medication crystals like those in ritonavir can be grown larger and more structurally sound than is typically possible here on Earth.

Although the experiment wrapped up in just three weeks, Varda needed to delay reentry plans multiple times due to issues securing FAA approval. After finally getting the go-ahead, the W-1 readied for its return earlier this month. All the while, it contained a video camera ready to capture its dramatic fall.

After ejecting from its satellite host, W-1 begins a slightly dizzying spin that provides some incredible shots from hundreds of miles above Earth. At about the 12-minute mark, the planet’s gravitational pull really takes hold—that’s when things begin to heat up for Varda’s experimental capsule.

[Related: First remote, zero-gravity surgery performed on the ISS from Earth (on rubber)]

At Mach 25 (around 17,500 mph), the craft compresses and shears the surrounding air so intensely that the resulting heat literally splits the chemical bonds of nearby air molecules. This results in a dazzling show of sparks and plasma before W-1’s parachute deploys to slow and stabilize its final descent. Finally, the capsule can be seen touching down in a remote region of Utah, where it was recovered by the Varda crew.
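That 17,500 mph figure is consistent with the speed of any object falling out of low-Earth orbit. As a rough sanity check (our arithmetic, not Varda’s), circular orbital velocity follows v = sqrt(GM/r); the 200 km altitude below is an assumed value for illustration:

```python
import math

# Back-of-the-envelope check: circular orbital velocity v = sqrt(GM/r)
# for an assumed ~200 km low-Earth orbit, converted to mph.
GM_EARTH = 3.986e14       # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_M = 6.371e6  # mean Earth radius, m
ALTITUDE_M = 200e3        # assumed orbital altitude, m

v_ms = math.sqrt(GM_EARTH / (EARTH_RADIUS_M + ALTITUDE_M))
v_mph = v_ms * 2.23694    # meters per second to miles per hour

print(f"{v_ms:.0f} m/s ≈ {v_mph:,.0f} mph")  # roughly 7,800 m/s, or about 17,400 mph
```

That lines up with the approximately 17,500 mph reentry speed cited above.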

Next up will be an assessment of the space-grown drug ingredients, and additional launches of capsules for more manufacturing experiments. While they might not all include onboard cameras to document their returns, W-1’s is plenty mesmerizing enough.


]]>
OpenAI wants to devour a huge chunk of the internet. Who’s going to stop them? https://www.popsci.com/technology/openai-wordpress-tumblr/ Thu, 29 Feb 2024 15:43:16 +0000 https://www.popsci.com/?p=604994
Vacuum moving towards two blocks with Wordpress and Tumblr logos
WordPress supports around 43 percent of the internet you're most likely to see. Deposit Photos

The AI giant plans to buy WordPress and Tumblr data to train ChatGPT. What could go wrong?

The post OpenAI wants to devour a huge chunk of the internet. Who’s going to stop them? appeared first on Popular Science.

]]>

You probably don’t know about Automattic, but they know you.

As the parent company of WordPress, Automattic runs content management systems that host around 43 percent of the internet’s 10 million most popular websites. Meanwhile, it also owns a vast suite of mega-platforms including Tumblr, where a massive number of embarrassing personal posts live. All this is to say that, through all those countless Terms & Conditions and third-party consent forms, Automattic potentially has access to a huge chunk of the internet’s content and data.

[Related: OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss.]

According to a report from 404 Media earlier this week, Automattic is finalizing deals with OpenAI and Midjourney to provide a ton of that information for their ongoing artificial intelligence training pursuits. Most people see the results in chatbots, since tech companies need the text within millions of websites to train large language models’ conversational abilities. But this can also take the form of training facial recognition algorithms using your selfies, or improving image and video generation capabilities by analyzing original artwork you uploaded online. It’s hard to know exactly what and how much data is used, however, since companies like Midjourney and OpenAI maintain black box tech products—as is the case in this imminent business deal.

So, what if you wanna opt out of ChatGPT devouring your confessional microblog entries or daily workflows? Good luck with that.

When asked to comment, a spokesperson for Automattic directed PopSci to its “Protecting User Choice” page, published Tuesday afternoon after 404 Media’s report. The page offers a number of assurances. There’s now a privacy setting to “discourage” search engines from indexing sites on WordPress.com and Tumblr, and Automattic promises to “share only public content” hosted on those platforms. Additional opt-out settings will also “discourage” AI companies from trawling data, and Automattic plans to regularly update its partners on which users “newly opt out,” so that their content can be removed from future training and past source sets.

There is, however, one little caveat to all this:

“Currently, no law exists that requires crawlers to follow these preferences,” says Automattic.

“From what I have seen, I’m not exactly sure what could be shared with AI,” says Erin Coyle, an associate professor of media and communication at Temple University. “We do have a confusing landscape right now, in terms of what data privacy rights people have.”

To Coyle, such nebulous access to copious amounts of online user information “absolutely speaks” to an absence of cohesive privacy legislation in the US. One of the biggest challenges impeding progress is the fact that laws, by and large, are reactive rather than preventative.

“There is no data privacy in general.”

“It’s really hard for legislators to get ahead of the developments, especially in technology,” she adds. “While there are arguments to be made for them to be really careful and cautious… it’s also very challenging in times like this, when the technology is developing so rapidly.”

As companies like OpenAI, Google, and Meta continue their AI arms race, it’s the everyday people providing the bulk of the internet’s content—both public and private—who are caught in the middle. Clicking “Yes” to the manifesto-length terms and conditions prefacing almost every app, site, or social media platform is often the only way to access those services.

“Everything is about terms of service, no matter what website we’re talking about,” says Christopher Terry, a University of Minnesota journalism professor focused on regulatory and legal analysis of media ownership, internet policy, and political advertising.

Speaking to PopSci, Terry explains that basically every single terms of service agreement you have signed online is a legal contractual obligation with whoever is running a website. Delve deep enough into the legalese, and “you’re gonna see you agreed to give them, and allow them to use, the data that you generate… you allowed them to monetize that.”

Of course, when was the last time you actually read any of those annoying pop-ups?

“There is no data privacy in general,” Terry says. “With the digital lives that we have been living for decades, people have been sharing so much information… without really knowing what happens to that information,” Coyle continues. “A lot of us signed those agreements without any idea of where AI would be today.”

And all it takes to sign away your data for potential AI training is a simple Terms of Service update notification—another pop-up that, most likely, you didn’t read before clicking “Agree.”

You either opt out, or you’re in

Should Automattic complete its deal with OpenAI, Midjourney, or any other AI company, some of those very same update alerts will likely pop-up across millions of email inboxes and websites—and most people will reflexively shoo them away. But according to some researchers, even offering voluntary opt-outs in such situations isn’t enough.

“It is highly probable that the majority of users will have no idea that this is an option and/or that the partnership with OpenAI/Midjourney is happening,” Alexis Shore, a Boston University researcher focused on technology policy and communication studies, writes to PopSci. “In that sense, giving users this opt-out option, when the default settings allow for AI crawling, is rather pointless.”

“They’re going all in on it right now while they still can.”

Experts like Shore and Coyle think one potential solution is a reversal in approach—changing voluntary opt-outs to opt-ins, as is increasingly the case for internet users in the EU thanks to its General Data Protection Regulation (GDPR). Unfortunately, US lawmakers have yet to make much progress on anything approaching that level of oversight.

The next option, should you have enough evidence to make your case, is legal action. And while copyright infringement lawsuits continue to mount against companies like OpenAI, it will be years before their legal precedents are established. By then, it’s anyone’s guess what the AI industry will have done to the digital landscape, and your privacy. Terry compares the moment to a 19th-century gold rush.

“They’re going all in on it right now while they still can,” he says. “You’re going out there to stake out your claim right now, and you’re pouring everything you can into that machine so that later, when that’s a [legal] problem, it’s already done.”

Neither OpenAI nor Midjourney responded to multiple requests for comment at the time of writing.


]]>
Odie the lunar lander is not dead yet https://www.popsci.com/science/odysseus-lunar-lander-mission/ Wed, 28 Feb 2024 19:35:57 +0000 https://www.popsci.com/?p=604519
On Feb. 22, 2024, Intuitive Machines’ Odysseus lunar lander captures a wide field of view image of Schomberger crater on the Moon approximately 125 miles (200 km) uprange from the intended landing site, at approximately 6 miles (10 km) altitude. Intuitive Machines

Despite toppling on its side during landing, Odysseus is outliving its 10-20 hour prognosis.

The post Odie the lunar lander is not dead yet appeared first on Popular Science.

]]>

Despite landing on its side and struggling to maintain power, Odysseus, the first US spacecraft to land on the moon in over half a century, is still somewhat operational. Built by Houston-based company Intuitive Machines, “Odie” marked a historic return to the lunar surface and became the first privately funded venture ever to successfully reach the moon.

On Tuesday morning, Intuitive Machines predicted that the spacecraft “may continue up to an additional 10-20 hours.” Yet mission control plans to put the lander to sleep later tonight. Odie “continues to generate solar power,” said Intuitive Machines co-founder and president Steve Altemus during today’s mission update. Altemus also confirmed that engineers will attempt to revive Odysseus in two to three weeks, following the upcoming lunar night’s conclusion.

“We’ve gotten over 15 megabytes of data,” said CLPS project scientist Sue Lederer when discussing the data the team is retrieving from Odysseus on Wednesday. “We went from basically a cocktail straw of data coming back to a boba tea size straw of data coming back.”

An image of Odysseus touching down on the surface of the moon with its engine firing. Pieces of landing gear are broken off. Credit: Intuitive Machines

Launched from NASA’s Kennedy Space Center on February 15 aboard a SpaceX Falcon 9 rocket, Odysseus spent the next week traveling 230,000 miles toward the moon—and even documented its journey in the process.

[Related: ‘Odie’ makes space history with successful moon landing.]

For a moment, it seemed as though Odysseus might meet a recent predecessor’s similar fate. Less than a week before the Odysseus launch, the Peregrine lunar lander built by Astrobotics experienced a “critical loss of propellant” on its way to the moon, forcing the private company to abandon its mission.

NASA’s Lunar Reconnaissance Orbiter captured this image of Intuitive Machines’ Nova-C lander, called Odysseus, on the Moon’s surface on Feb. 24, 2024, at 1:57 p.m. EST. Odysseus landed at 80.13 degrees south latitude, 1.44 degrees east longitude, at an elevation of 8,461 feet (2,579 meters). The image is 3,192 feet (973 meters) wide, and lunar north is up. (LROC NAC frame M1463440322L) Credit: NASA/Goddard/Arizona State University

While circling the moon ahead of last week’s descent, ground engineers discovered they had failed to turn on the spacecraft’s laser navigation system. As luck would have it, Odysseus housed an experimental NASA laser navigation device intended for testing once it reached its final destination. Mission controllers managed to boot up the laser, which allowed the lander to finish its trip. On February 22, Odysseus arrived close to the Malapert A crater within a mile of its target, approximately 185 miles from the moon’s south pole—but not without a debilitating setback.

While landing, a faster-than-intended descent caused one of its six legs to malfunction and tip Odysseus onto its side. According to mission representatives, the resulting position blocked a number of Odie’s antennas and angled its solar panels in a way that limited their ability to draw power. A similar issue plagued another recent historic lunar landing mission, when Japan’s SLIM spacecraft arrived at the moon last month intact, if upside down.

[ Related: SLIM lives! Japan’s upside-down lander is online after a brutal lunar night ]

But even if it had perfectly stuck the landing, Odysseus would still only have had another two to three days of life before powering down as the moon entered its next lunar night. Designers did not intend for Odie to survive the harsh, 14.5-day phase, which sees temperatures plummet as low as -208 degrees Fahrenheit.

During a February 28 mission update, representatives said NASA Administrator Bill Nelson considers Odie’s landing a “success” despite the setbacks.

Odysseus contained six NASA experiments (including that aforementioned laser nav system) intended to help plan for future Artemis program missions, a camera designed by university students, a lunar telescope prototype, as well as an art project containing 125 steel sculptures by Jeff Koons. According to Intuitive Machines CEO Steve Altemus, Odysseus tipped so that only the Koons cargo faces downward into the lunar dirt.

This story is developing. We will update this article with more details.


]]>
The Apple Car is dead https://www.popsci.com/technology/apple-car-dead/ Wed, 28 Feb 2024 17:00:00 +0000 https://www.popsci.com/?p=604807
Apple logo in store
Plans for an Apple car date as far back as 2014, but the project is no more. Deposit Photos

Apple has officially scrapped its multibillion dollar autonomous EV plans to focus on AI.

The post The Apple Car is dead appeared first on Popular Science.

]]>

It turns out that last month’s report on Apple kicking its tortured, multibillion-dollar electric vehicle project down the road another few years was a bit conservative. During an internal meeting on Tuesday, company representatives informed employees that all EV plans are officially scrapped. After at least a decade of rumors, research, and arguably unrealistic goals, it would seem that CarPlay is about as much as you’re gonna get from Apple while on the road. RIP, “iCar.”

The major strategic decision, first reported by Bloomberg, also appears to reaffirm Apple’s continuing shift towards artificial intelligence. Close to 2,000 Special Projects Group employees worked on car initiatives, many of whom will now be folded into various generative AI divisions. The hundreds of vehicle designers and hardware engineers formerly focused on the Apple car can apply to other positions, although yesterday’s report makes clear that layoffs are imminent.

[Related: Don’t worry, that Tesla driver only wore the Apple Vision Pro for ’30-40 seconds’]

Previously referred to as Project Titan or T172, Apple’s intentions to break into the automotive market date back to at least 2014. It was clear from the start that Apple executives such as CEO Tim Cook wanted an industry-changing product akin to the iPod or iPhone—an electric vehicle with fully autonomous driving capabilities, voice-guided navigation software, no steering wheel or even pedals, and a “limousine-like interior.”

As time progressed, however, it became clear—both internally and vicariously through competitors like Tesla—that such goals were lofty, to say the least. Throughout multiple leadership shakeups, reorganizations, and reality checks, an Apple car began to sound much more like existing EVs already on the road. Basic driver components returned to the design, and AI navigation plans downgraded from fully autonomous to current technology such as acceleration assist, brake controls, and adaptive steering. Even then, recent rumors pointed towards the finalized car still costing as much as $100,000, which reportedly concerned company leaders for the hyper-luxury price point.

This isn’t the first time Apple has pulled the plug on a major project—2014, for example, saw the abandonment of a 4K Apple smart TV. But the company has rarely, if ever, spent as much time and money on a product that never even officially debuted, much less made it to market.

Fare thee well, Apple Car. You sounded pretty cool, but it’s clear Tim Cook believes the company’s future profits reside in $3,500 “spatial computing” headsets and attempts to integrate generative AI into everything. For now, the closest anyone will get to an iCar is wearing an Apple Vision Pro while seated in a Tesla… something literally no one recommends.


]]>
Jellyfish-inspired glowing dye can glom onto fingerprints at crime scenes https://www.popsci.com/environment/jellyfish-fingerprint-fluorescent-dye/ Tue, 27 Feb 2024 20:17:31 +0000 https://www.popsci.com/?p=604630
Jellyfish glowing green underwater
Green Fluorescent Protein can be found in jellyfish, and might provide a new way to lift fingerprints. Deposit Photos

Forensic science might get a boost from an unlikely source.

The post Jellyfish-inspired glowing dye can glom onto fingerprints at crime scenes appeared first on Popular Science.

]]>

Imagine a crime scene. Chances are, you’re also imagining someone dusting for fingerprints. Despite recent debates over whether fingerprint evidence is accurate and reliable, it can still prove extremely useful in certain situations, such as narrowing down potential suspect lists. Unfortunately, the technique often employs toxic powders, including environmentally harmful petrochemicals that can damage DNA evidence.

[Related: The racist history behind using biology in criminology.]

Thanks to a collaboration between scientists from the UK’s University of Bath and China’s Shanghai Normal University, this may change in the future. In a new study published in the Journal of the American Chemical Society, researchers laid out their case for a novel method of lifting latent fingerprints—a water soluble spray that is not only safer and faster, but easier to examine thanks to its ability to glow in the dark.

It all started with a tip-off from jellyfish.

For millions of years, many of these ocean invertebrates have contained Green Fluorescent Protein (GFP), which glows under certain lighting conditions. Knowing this, the team created two different dyes, LFP-Yellow and LFP-Red, based on the jellyfish protein. Named for “latent fingerprints,” LFP-Yellow and LFP-Red are applied using a simple spray bottle and selectively bind to negatively charged molecules within fingerprints. Once stuck to the residual prints, the dyes begin to glow under blue light in just 10 seconds.

Interestingly, the solution is only “weakly fluorescent” before being applied to LFPs, according to University of Bath researcher Luling Wu in a recent profile. It’s only once the dyes interact with a fingerprint’s fatty acids and amino acids, deposited by skin oil and sweat, that they glow brighter.

Because it is applied as a fine mist, forensics examiners don’t need to worry about splashes that could potentially disturb prints. It also avoids the mess that often accompanies dusting with frequently toxic powders, and is even effective on rougher surfaces like concrete or brick.

Going forward, researchers hope to make their less harmful solution available commercially, as well as expand on the number of fluorescent colors to ensure use across a wider array of surfaces. Forensic analysts may not consider fingerprint evidence as ironclad as before, but with alternative methods of detection, they could soon lift them more accurately and safely. What’s more, doing so won’t risk damaging any nearby, much more sought after DNA clues.


]]>
NASA and Google Earth Engine team up with researchers to help save tigers https://www.popsci.com/environment/tiger-conservation-nasa-google/ Tue, 27 Feb 2024 15:37:58 +0000 https://www.popsci.com/?p=604513
Tiger walking across snow
A female tiger in the Sikhote-Alin Biosphere Reserve, a UNESCO site, in Russia. ANO WCS and Sikhote-Alin Biosphere Reserve

Here’s how a new real-time data system could improve wild tiger habitats—and the health of our planet.

The post NASA and Google Earth Engine team up with researchers to help save tigers appeared first on Popular Science.

]]>

Fewer than 4,500 tigers remain in the world, according to the International Union for Conservation of Nature (IUCN). Habitat loss continues to pose an immense existential threat to the planet’s largest cat species—a problem compounded by the fact that the animals reside in some of Earth’s most ecologically at-risk regions and landscapes.

To better monitor the situation in real time, NASA, Google Earth Engine, and over 30 researcher collaborators are announcing TCL 3.0 today, a new program that combines satellite imagery and powerful computer processing to keep an eye on tigers’ existing and reemerging ecosystems.

“The ultimate goal is to monitor changes in real time to help stabilize tiger populations across the range,” explained Eric W. Sanderson, VP for Urban Conservation at the New York Botanical Garden and first author of a recent foundational study published in Frontiers in Conservation Science.

[Related: A new algorithm could help detect landslides in minutes.]

“Tiger Conservation Landscapes,” or TCLs, refer to the planet’s distinct locales where Panthera tigris still roam in the wild. Because of their size, diet, and social habits, tigers require comparatively large areas to not only survive, but flourish.

According to researchers, stable tiger populations “are more likely to retain higher levels of biodiversity, sequester more carbon, and mitigate the impacts of climate change, at the same time providing ecosystem services to millions of humans in surrounding areas.” In doing so, TCLs can serve as a reliable, informative indicator of overall environmental health markers.

Unfortunately, the total area of Tiger Conservation Landscapes declined around 11 percent between 2001 and 2020. Meanwhile, potential restored habitats have only plateaued near 16 percent of their original scope—if such spaces were properly monitored and protected, however, tigers could see a 50 percent increase in available living space. 

Using this new analytical computing system based on Google Earth Engine data, NASA Earth satellite observations, biological info, and conservation modeling, TCL 3.0 will offer environmentalist groups and national leaders critical, near-real time tools for tiger conservation efforts.

“Analysis of ecological data often relies on models that can be difficult and slow to implement, leading to gaps in time between data collection and actionable science,” Charles Tackulic, a research statistician with the US Geological Survey, said in today’s announcement. “The beauty of this project is that we were able to minimize the time required for analysis while also creating a reproducible and transferable approach.”

Researchers say government and watchdog users of TCL 3.0 will be able to pinpoint tiger habitat loss as it happens, and hopefully respond accordingly. National summaries of initial available data can be found through the Wildlife Conservation Society, with more information to come.

TCL 3.0 provides an unprecedentedly complex and advanced monitoring system for one of the planet’s most threatened creatures, but as researchers note in their new study, the solution is arguably extremely simple.

“What have we learned about tiger conservation over the last two decades? Conservation works when we choose to make it so,” the authors conclude in their recent report. “Conservation is straightforward. Don’t cut down their habitat. Don’t stalk them, harass them, or kill them or their prey. Control poaching and extinguish the illegal trade in tiger bones and parts. Prevent conflicts with people and livestock wherever possible, and where and when not, then mitigate losses to forestall retaliation.”

Correction 2/27/24 5:53PM: This article has been updated to more accurately reflect the world’s remaining tiger population. PopSci regrets the error.


]]>
A 3D-printed titanium ‘metamaterial’ design solved a longtime engineering issue https://www.popsci.com/technology/titanium-lattice-metamaterial/ Mon, 26 Feb 2024 19:00:00 +0000 https://www.popsci.com/?p=604270
Hand holding cube of 3D printed titanium allow metamaterial
Engineers used a process in which a laser instantly flashes metal powder into a fused solid. Credit: RMIT

These lattice structures could one day strengthen bone implants and rocket parts.

The post A 3D-printed titanium ‘metamaterial’ design solved a longtime engineering issue appeared first on Popular Science.

]]>

Cellular structures made from metal alloys could strengthen everything from bone implants to rocket parts—if they didn’t keep cracking under pressure. Researchers have spent years attempting to solve the uneven weight distribution issues across these artificial “metamaterials,” with little success. As detailed in a recent study published in Advanced Materials, however, a team at Australia’s RMIT University appears to have finally figured out a solution after drawing inspiration from plants and coral, with some help from a cutting-edge 3D-printing tool.

Using a common titanium alloy, engineers manufactured latticelike structures composed of hollow struts—each imbued with an additional, thin band running throughout it. As Ma Qian, an RMIT Distinguished Professor of advanced manufacturing and study co-author, explained, by combining “two complementary lattice structures to evenly distribute stress, we avoid the weak points where stress normally concentrates.”

Close up stress test looks at titanium alloy design
Compression testing shows (left) stress concentrations in red and yellow on the hollow strut lattice, while (right) the double lattice structure spreads stress more evenly to avoid hot spots. Credit: RMIT

“These two elements together show strength and lightness never before seen together in nature,” Qian continued in a university profile published on Monday.

To construct their lattice metamaterials, researchers utilized a highly advanced manufacturing process known as laser powder bed fusion, in which a powerful laser beam flash-melts layered titanium granules directly into place. In subsequent stress tests, a cube made from the new, hollow latticework withstood 50 percent more weight than a similarly dense cast of WE54, a magnesium alloy commonly used in aerospace engineering.

Although the resilient metamaterial can already withstand temperatures up to 350 degrees Celsius (662 Fahrenheit), its makers believe that utilizing more heat-resistant titanium alloys could raise that threshold to 600 degrees Celsius (1,112 Fahrenheit). If so, the metalwork could find more uses in rocketry manufacturing, and even firefighting drones.
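Those temperature figures check out; converting Celsius to Fahrenheit is simple arithmetic, as this quick sketch shows:

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

print(c_to_f(350))  # 662.0, the metamaterial's current heat tolerance
print(c_to_f(600))  # 1112.0, the projected ceiling with more heat-resistant alloys
```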

[Related: Titanium-fused bone tissue connects this bionic hand to a patient’s nerves.]

Meanwhile, the team thinks these lattice structures could also prove useful in human bone implants, since their hollowness may allow for bone cell regrowth as the equipment fuses with a patient’s body.

That said, it might be a little while before the titanium metamaterial becomes commonplace. As study lead author and PhD candidate Jordan Noronha explained in RMIT’s feature, “Not everyone has a laser powder bed fusion machine in their warehouse.”

Still, Noronha, Qian, and their colleagues believe technological advances and increased equipment accessibility will eventually make it easier for others to also harness their metamaterial design.

The post A 3D-printed titanium ‘metamaterial’ design solved a longtime engineering issue appeared first on Popular Science.


]]>
SLIM lives! Japan’s upside-down lander is online after a brutal lunar night https://www.popsci.com/science/slim-moon-lander-reboot/ Mon, 26 Feb 2024 16:00:00 +0000 https://www.popsci.com/?p=604194
Image taken of JAXA SLIM lunar lander on moon upside down
SLIM is defying the odds yet again after a two-week lunar night. JAXA/Takara Tomy/Sony Group Corporation/Doshisha University

The historic moon lander beat the odds.

The post SLIM lives! Japan’s upside-down lander is online after a brutal lunar night appeared first on Popular Science.

]]>

Japan Aerospace Exploration Agency (JAXA) announced on Monday that its historic Smart Lander for Investigating Moon (SLIM) has defied the odds—after surviving a brutal, two-week lunar night while upside down, SLIM’s solar cells gathered enough energy to restart the spacecraft over the weekend. In an early morning post to X, JAXA reported it briefly established a communication relay with its lunar lander on Sunday, but the moon’s extremely high surface temperature currently prevents engineers from doing much else. Once SLIM’s instrument temperatures cool off in a few days’ time, however, JAXA intends to “resume operations” and continue scientific observations for as long as possible.

[Related: This may be SLIM’s farewell transmission from the moon.]

SLIM arrived near the moon’s Shioli crater on January 19, making Japan the fifth nation to ever reach the lunar surface. Although JAXA’s lander successfully pulled off an extremely precise touchdown, it did so upside down after its main engines malfunctioned about 162-feet above the ground. The resulting nose-down angle meant SLIM’s solar cell arrays now face westward, thereby severely hindering its ability to gather power. Despite these problems, the craft’s two tiny robots still deployed and carried out their reconnaissance duties as hoped and snapped some images of the inverted lander. Meanwhile, SLIM transmitted its own geological survey data back to Earth for a few precious hours before shutting down.

Although JAXA officials cautioned that might be it for their lander, SLIM defied the odds and rebooted 10 days later with enough juice to continue surveying its lunar surroundings, such as identifying and measuring nearby rock formations.

“Based on the large amount of data obtained, analysis is now underway to identify rocks and estimate the chemical composition of minerals, which will help to solve the mysteries surrounding the origin of the Moon. The scientific results will be announced as soon as they are obtained,” JAXA said at the time.

But by February 1, the moon’s roughly 14.5-day lunar night was setting in, plunging temperatures down to a potentially SLIM-killing -208 Fahrenheit. Once again, JAXA bid a preemptive farewell to their plucky, inverted technological achievement—only to be surprised yet again over the weekend.

Figure 2: The rocks on which a detailed 10-band observation was performed. Due to different solar illumination conditions, a few of the rocks selected for observation were changed and additions added. CREDIT: JAXA, RITSUMEIKAN UNIVERSITY, THE UNIVERSITY OF AIZU

In the few days since the most recent lunar night’s conclusion, SLIM apparently recharged its solar cells enough to come back online. But as frigid as the moon’s night phases are, its daytime temperatures can be just as brutal. According to JAXA, some of the lander’s equipment initially warmed up to over 212 degrees Fahrenheit. To play it safe, mission control is giving things a little time to cool off before tasking SLIM with additional scans, such as using its Multi-Band Camera to assess the chemical compositions of nearby regolith formations.

JAXA has a few more days before the moon enters another two-week night, during which SLIM will go into yet another hibernation. While it could easily succumb to the lunar elements this next time, it’s already proven far more resilient than its designers thought possible. It may not have surpassed expectations as dramatically as NASA’s Ingenuity Mars helicopter (RIP), but the fact that SLIM made it this long is cause enough for celebration.

The post SLIM lives! Japan’s upside-down lander is online after a brutal lunar night appeared first on Popular Science.


]]>
Gene-edited pigs immune to deadly virus could arrive on farms by next year https://www.popsci.com/environment/gene-edited-pigs/ Fri, 23 Feb 2024 19:00:00 +0000 https://www.popsci.com/?p=604074
Pigs in sty at factory farm
Animal rights groups say the solution remains factory farming reforms, not genetic editing. Deposit Photos

A company used CRISPR to make the animals resistant to deadly diseases, but watchdogs say viruses are not the problem.

The post Gene-edited pigs immune to deadly virus could arrive on farms by next year appeared first on Popular Science.

]]>

US farmers are closer than ever to raising genetically edited pigs immune to one of the animal’s deadliest diseases. But while millions of dollars could be saved with livestock impervious to highly virulent, diverse strains of porcine reproductive and respiratory syndrome (PRRS), animal rights groups maintain the cutting-edge idea isn’t an ethical solution, but yet another industrial farming stopgap.

PRRS is a dynamic, often fatal virus that affects millions of pigs around the world and costs farmers as much as $2.7 billion each year. Current vaccines only reduce symptom severity, and the antibiotics used to treat an infected pig’s weakened immune system can exacerbate the development of other resistant bacterial diseases.

[Related: Scientists swear their lab-grown ‘beef rice’ tastes ‘pleasant’]

Genus, an international breeding company, believes the best way to solve this major issue is to engineer pigs that are incapable of contracting the virus. As highlighted in a recent New Scientist profile, Genus researchers succeeded through CRISPR gene editing technology. Removing a portion of a protein called CD163 prevents the virus from infecting a pig’s cells and allows the animal to remain “healthy and indistinguishable in appearance and behavior,” according to a Genus research study recently published in The CRISPR Journal.

Doing so isn’t an easy task. Just one-fifth of Genus-bred piglets possessed the desired gene—and even then, only within certain body cells due to a biological condition known as mosaicism. Meanwhile, some lab livestock may have lacked CD163, but at the cost of other unwanted genome changes.

Because of such issues, experts have spent years attempting to create a healthy, gene-edited animal. Genus says it has so far bred hundreds of PRRS-immune pigs, and expects to receive approval from the US Food and Drug Administration to begin public sales as soon as next year. Meanwhile, regulatory approvals are also being pursued globally in countries including China and Mexico—both of which import large amounts of US pork.

But according to factory farming critics and animal rights advocates, the real issue isn’t livestock disease susceptibility—it’s the livestock’s living conditions. The international welfare nonprofit World Animal Protection estimates that farm stock receives three-quarters of global antibiotic supplies each year in an attempt to stave off disease, treat infections, and promote faster growth rates. This practice is directly linked to the rise in treatment-resistant superbugs, which are more likely to leap from animals to humans within a factory farm’s cramped, poorly ventilated environments.

“Crowding animals into stressful, unhealthy conditions has led to the emergence of new virulent pathogens and diseases,” Gene Baur, President and Co-Founder of the animal rights group Farm Sanctuary, said in an email to PopSci. “Rather than developing genetically engineered animals who can survive the horrific cruelties of factory farming, agribusiness should focus instead on addressing the conditions that create these diseases in the first place.”

The post Gene-edited pigs immune to deadly virus could arrive on farms by next year appeared first on Popular Science.


]]>
NASA wants you to record crickets during April’s solar eclipse https://www.popsci.com/science/nasa-eclipse-study-soundscapes/ Fri, 23 Feb 2024 18:00:00 +0000 https://www.popsci.com/?p=604045
Colorful cricket on green leaf
The behaviors of animals such as birds and crickets can be affected when they see a solar eclipse. Credit: Moment Open / Getty

Here's how to capture nature for the Eclipse Soundscapes Project.

The post NASA wants you to record crickets during April’s solar eclipse appeared first on Popular Science.

]]>

American scientist William Wheeler not only looked to the sky during a total solar eclipse; he also made sure to pay attention to everything around him. On August 31, 1932, Wheeler and fellow collaborators located throughout the northeastern regions of the US and Canada took part in one of the earliest eclipse-related participatory studies to document the celestial event’s effects on wildlife. Volunteers made nearly 500 records of animal and insect reactions that day—nearly a century later, NASA hopes to honor those contributions, as well as exponentially expand on them.

On April 8, the agency is calling for citizen scientist volunteers along the upcoming total solar eclipse’s path to help in its ongoing Eclipse Soundscapes Project. Through a combination of visual, audio, and written recordings, NASA aims to help further researchers’ understanding of the occurrence’s influence on various ecosystems across the country.


As the moon passes in front of the sun, ambient light dims, temperatures fall, and even some stars begin to appear. These sudden environmental shifts have been known to fool animals into behaving as they would at dusk or dawn. According to NASA, the agency is specifically interested in better understanding the behavior of crickets, as well as observing the differences between how nocturnal and diurnal animals may respond.

“The more audio data and observations we have, the better we can answer these questions,” Kelsey Perrett, Communications Coordinator with the Eclipse Soundscapes Project, said in an announcement earlier this month. “Contributions from participatory scientists will allow us to drill down into specific ecosystems and determine how the eclipse may have impacted each of them.”

[Related: Delta’s solar eclipse flight sold out, but your best bet to see it is still down here.]

There are multiple ways any of the roughly 30 million people within the eclipse’s path can participate on April 8. People on or close to the path of totality can act as designated “Data Collectors” by purchasing a relatively low-cost audio recorder called an AudioMoth alongside a micro-SD card to capture surrounding sounds. Meanwhile, “Observers” can write down what they see and hear, then submit them through the project’s website, while “Apprentices” and “Data Analysts” can take quick, free online courses to help assess the incoming data. There are also plenty of options for anyone with sensory accessibility issues, and NASA made sure to include resources for facilitating large groups of volunteers through local schools, libraries, parks, and community centers.

The post NASA wants you to record crickets during April’s solar eclipse appeared first on Popular Science.


]]>
Delta’s solar eclipse flight sold out, but your best bet to see it is still down here https://www.popsci.com/science/delta-solar-eclipse-flight/ Thu, 22 Feb 2024 20:07:58 +0000 https://www.popsci.com/?p=603866
A total solar eclipse is seen on Monday, Aug. 21, 2017, from onboard a NASA Armstrong Flight Research Center’s Gulfstream III 25,000 feet above the Oregon coast. A total solar eclipse swept across a narrow portion of the contiguous United States from Lincoln Beach, Oregon to Charleston, South Carolina.
A total solar eclipse is seen on Monday, August 21, 2017 from onboard a NASA Armstrong Flight Research Center’s Gulfstream III 25,000 feet above the Oregon coast. A total solar eclipse swept across a narrow portion of the contiguous United States from Lincoln Beach, Oregon to Charleston, South Carolina. NASA/Carla Thomas

Don’t worry, there are plenty of places to still catch the April 8 event on the ground.

The post Delta’s solar eclipse flight sold out, but your best bet to see it is still down here appeared first on Popular Science.

]]>

Earlier this week, Delta Air Lines announced an extra flight for its April 8 schedule, timed specifically to provide passengers an aerial view of the total solar eclipse. But if you were still hoping to snag a ticket for the afternoon jaunt alongside the path of totality, you’re already out of luck—seats aboard the Airbus A220-300 sold out within 24 hours.

According to Delta’s original announcement, DL Flight 1218, with service from Austin to Detroit, will depart at 12:15 PM CT for its roughly 1,380-mile, 3-hour-long trip. Once at a cruising altitude of 30,000 feet, passengers will be able to view the celestial event through the plane’s “extra-large windows,” which the official Airbus specs manual says measure in at 11×16 inches. For comparison, a Boeing 777 offers 10×15-inch glimpses of the outside world. Everyone on the plane will receive special glasses to safely watch the eclipse (which is nice to hear, given how few free amenities remain on most commercial flights).
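For what it’s worth, the cited window dimensions work out to roughly 17 percent more viewing area per window, a quick back-of-the-envelope check:

```python
# Window area comparison using the dimensions cited above (inches)
a220_area = 11 * 16   # Airbus A220-300: 176 square inches
b777_area = 10 * 15   # Boeing 777: 150 square inches
pct_larger = (a220_area / b777_area - 1) * 100
print(round(pct_larger))  # about 17 percent more area per window
```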

[Related: We can predict solar eclipses to the second. Here’s how.]

While the solar eclipse will last several minutes for anyone on the ground, Flight 1218’s timing and route should grant a longer spectacle.

As cool as a first-class seat to the eclipse would be, there are plenty of (likely cheaper) locations across the US to consider visiting on April 8. After traveling across Central America and Mexico, the path of totality will pass across large portions of Texas, Oklahoma, Arkansas, Missouri, Illinois, Kentucky, Indiana, Ohio, Pennsylvania, New York, Vermont, New Hampshire, and Maine.

If you’re truly determined to head to the skies, NPR notes that there are other flight options scheduled to pass by at least some part of the eclipse, including from Delta, as well as several from Southwest.

But keep in mind: A plane’s altitude doesn’t necessarily guarantee a picture-perfect view of the eclipse—if anything, there’s a chance that cloud coverage could impede an onlooker’s vantage. There’s also the possibility of weather or air traffic control delays, which… well, this country has a history of such headaches.

So despite the multiple jetset options, your best bet to see April’s eclipse is simply making sure you’re within its route, firmly on the ground, and equipped with proper eyewear. Seriously, take it from NASA: “Viewing any part of the bright Sun through a camera lens, binoculars, or a telescope without a special-purpose solar filter secured over the front of the optics will instantly cause severe eye injury.”

The post Delta’s solar eclipse flight sold out, but your best bet to see it is still down here appeared first on Popular Science.


]]>
This DVD-sized disk can store a massive 125,000 gigabytes of data https://www.popsci.com/technology/optical-disk-petabit/ Thu, 22 Feb 2024 16:00:00 +0000 https://www.popsci.com/?p=603799
Close up of laser etching optical disk
Researchers encoded the equivalent of 10,000 Blu-rays onto a standard-sized optical disk. Credit: University of Shanghai for Science and Technology

It can hold the same amount of information as 10,000 Blu-rays.

The post This DVD-sized disk can store a massive 125,000 gigabytes of data appeared first on Popular Science.

]]>

Even in a digital-first world, optical disks like DVDs and Blu-rays still have their many uses. But despite being cheap, sturdy, and small, they can’t keep up with today’s storage needs. This is because, spatially speaking, optical disks almost always offer just a single, 2D layer–that reflective, silver underside–for data encoding. If you could boost a disk’s number of available, encodable layers, however, you could hypothetically gain a massive amount of extra space.

Researchers at the University of Shanghai for Science and Technology recently set out to do just that, and published the results earlier this week in the journal Nature. Using a laser writing process with features as small as 54 nanometers, the team managed to record 100 layers of data onto an optical disk, with each tier separated by just 1 micrometer. The final result is an optical disk with a three-dimensional stack of data layers capable of holding a whopping 1 petabit (Pb) of information—that’s equivalent to 125,000 gigabytes of data.
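That equivalence is a straightforward unit conversion: 1 petabit is 10^15 bits, which at 8 bits per byte comes out to 125,000 decimal gigabytes:

```python
# Convert 1 petabit to decimal gigabytes
bits_per_petabit = 10**15
bytes_per_petabit = bits_per_petabit / 8   # 8 bits per byte
gigabytes = bytes_per_petabit / 10**9      # bytes -> decimal GB
print(gigabytes)  # 125000.0
```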

[Related: Inside the search for the best way to save humanity’s data.]

This is a bonkers amount of data compared to what can currently reside on even the most high-end flash or hybrid hard drives (HHDs). As Gizmodo offers for reference, that same petabit of information would require roughly a six-and-a-half-foot-tall stack of HHDs—if you tried to encode the same amount of data onto Blu-rays, you’d need around 10,000 blank ones to complete your (extremely inefficient) challenge.

To pull off their accomplishment, engineers needed to create an entirely new material for their optical disk’s film, known as (take a big breath here) “dye-doped photoresist with aggregation-induced emission luminogens.” For brevity’s sake, AIE-DDPR is apparently just fine, too. AIE-DDPR film utilizes a combination of specialized, photosensitive molecules capable of absorbing photonic data at a nanoscale level, which is then encoded using a high-tech dual-laser array.

Because AIE-DDPR is so incredibly transparent, designers could apply layer-upon-layer to an optical disk without worrying about degrading the overall data. This basically generated a 3D “box” for digitized information, thus exponentially raising the normal-sized disk’s capacity.

But how much is a petabit, really? According to ZME Science, datasets used to train generative AI can include roughly 5.8 billion indexed webpages, totaling about 56 Pb of data. So, hypothetically, instead of relying on unsustainably energy-hungry data centers, one could conceivably fit all of ChatGPT’s training material in one of those retro CD album trapper keepers from the 2000s.

Unfortunately, a CD folder containing enough data to train your own AI program isn’t likely to arrive anytime soon. Creating the cutting-edge optical disk reportedly takes quite a while, and is still comparatively energy inefficient. Still, researchers believe they can overcome both hindrances with further experimentation and innovation. If so, some of the biggest issues in modern data management could be tackled by literally building upon a decades-old physical format.

The post This DVD-sized disk can store a massive 125,000 gigabytes of data appeared first on Popular Science.


]]>
Do not put your wet iPhone in rice, warns Apple https://www.popsci.com/technology/apple-iphone-rice-disproven/ Tue, 20 Feb 2024 19:00:00 +0000 https://www.popsci.com/?p=603411
iPhone submerged in white rice
Save the rice for cooking, says Apple. Deposit Photos

The company has finally acknowledged the longstanding 'hack' could do more damage.

The post Do not put your wet iPhone in rice, warns Apple appeared first on Popular Science.

]]>

You know the nightmare situation: You dropped your iPhone in water—be it pool, ocean, or toilet. Although the iPhone 12 and later models are designed to survive 30 minutes of aquatic submersion as deep as 20 feet, your worries get the best of you. In a frantic bid to save your expensive device from potential damage or even demise, you remember your friend’s suggestion to throw it in a bag of rice overnight. Supposedly, the grain draws out any remaining water droplets from the smartphone’s tiny crevices, saving its precious circuitry in the process. They swore by it, after all. What is there to lose?

[Related: Apple’s newest gadgets include titanium iPhones with USB-C ports.]

Well, whatever the supposed results (and despite a fair amount of longstanding contradictory evidence), the DIY repair has officially attained “urban myth” status. As Macworld spotted earlier today, a recently updated Apple support document states in no uncertain terms that the ol’ bag of rice trick is bogus. What’s more, it could actually cause further damage to your iPhone.

“Don’t put your iPhone in a bag of rice,” Apple warns in the revised article on its dreaded Liquid Detection Alert. “Doing so could allow small particles of rice to damage your iPhone,” while the rice starch can gunk up the innards after making its way through the device’s small crevices. Besides all that, rice simply isn’t as effective a desiccant as other materials, such as those silica packets you already should be recycling, anyway.

Among the other rumored solutions to avoid, the company advises iPhone owners not to use an “external heat source” such as a blow dryer, and to leave the compressed air can in the utility closet. Similarly, trying to stuff cotton swabs, napkins, paper towels, or any other “foreign object” into charging ports could make things worse.

So, what should you do if your iPhone takes a plunge? Apple advises a gentle approach: simply tap the device against your hand “with the connector facing down” to dislodge liquid, then leave it in an open, dry space with decent airflow for at least 30 minutes. From there, try charging it with a cable.

If the Liquid Detection Alert proves persistent, Apple suggests allowing up to 24 hours to fully dry. And if even that doesn’t work? Well…

“If your phone has dried out but still isn’t charging, unplug the cable from the adapter and unplug the adapter from the wall (if possible) and then connect them again,” says Apple.

Yes. They really did pull out the trusty “Have you tried turning it on and off again?” line for this one.

The post Do not put your wet iPhone in rice, warns Apple appeared first on Popular Science.


]]>
Dead satellite hurtles towards Earth in new grainy images https://www.popsci.com/science/ers-2-deorbit-photos/ Tue, 20 Feb 2024 16:00:00 +0000 https://www.popsci.com/?p=603399
Satellite image of ERS-2 deorbiting in Earth's atmosphere
ERS-2 launched in 1995, and surveyed Earth's topography and natural events for the ESA. ESA / HEO Space

After 29 years in orbit, ERS-2 is en route for a fiery demise tomorrow.

The post Dead satellite hurtles towards Earth in new grainy images appeared first on Popular Science.

]]>

A 5,000-pound dead satellite resembling a spaceship from Star Wars is hurtling towards Earth, but don’t worry—experts say situations like this happen “every week or two.”

Launched in 1995 by the European Space Agency from Kourou, French Guiana, the European Remote Sensing 2 (ERS-2) array spent over a decade-and-a-half observing the planet’s topography and weather events, including natural disasters in remote, hard-to-document regions. Alongside its sibling, ERS-1, the pair were considered the “most sophisticated Earth observation spacecraft” ever developed at the time of their deployment.

In July 2011, however, the ESA decided to retire its “nominally” functioning ERS-2 and begin a scheduled deorbiting process. The satellite underwent 66 maneuvers over the ensuing month, using up its remaining fuel to descend from an altitude of roughly 487 miles to 356 miles above the Earth’s surface. Since then, ERS-2’s orbit has slowly decayed to its current point—caught in the planet’s gravitational pull, and picking up speed as it falls into the atmosphere.

On Sunday, the ESA posted grainy, black-and-white images to X taken last month by the Australian commercial imaging company HEO, which show ERS-2 (then about 150 miles high) spiraling downwards during its final journey. From the camera’s vantage, the satellite certainly looks a lot like an incoming TIE Fighter from Star Wars.

But no need to evade Imperial scrutiny—or even fiery orbital debris, for that matter. ERS-2 is currently falling at a rate of over 6.2 miles per day, a speed expected to accelerate as atmospheric drag takes an even greater hold. As of February 20, ERS-2 has 120 or so miles left to go, and will start breaking up and bursting into flames once it is about 50 miles high. Most, if not all, of the subsequent detritus will then immolate to harmless dust and ash, posing an extremely low damage risk for anything or anyone below it.

[Related: Some space junk just got smacked by more space junk, complicating cleanup.]

The ESA estimates ERS-2 will burn away around 3:53PM EST on Wednesday, although trackers offer as much as a 7-hour window on either side to account for “unpredictable solar activity” that could influence its descent speed. As to where in the world the satellite will fall apart—well, that part is a little more difficult to predict at the moment, although more accurate geolocation estimates are expected over the next day.

Deorbiting satellites is vital to ensuring enough room is kept for the thousands upon thousands of other human-made objects orbiting Earth. Increasingly crowded skies are a major concern for space agencies, private companies, and watchdog groups—an issue that isn’t likely to diminish anytime soon. Back in October, for example, a space junk cleanup mission proved more complicated when another piece of debris smacked into the satellite targeted for decommissioning. In the meantime, regulators like the FCC are fining companies for failing to do their part in accounting for their dead satellites.

After all, while a single satellite burning up during deorbit isn’t cause for concern—a “Kessler cascade” most certainly is. 

The post Dead satellite hurtles towards Earth in new grainy images appeared first on Popular Science.


]]>
How to apply for NASA’s next Mars habitat simulation https://www.popsci.com/science/nasa-mars-habitat-chapea-volunteers/ Fri, 16 Feb 2024 21:00:00 +0000 https://www.popsci.com/?p=603220
Concept art of NASA Mars habitat
Three, one-year-long stints in a Mars habitat simulation are meant to pave the way for the real thing. NASA

See if you qualify to be a volunteer for a yearlong stint.

The post How to apply for NASA’s next Mars habitat simulation appeared first on Popular Science.

]]>

Looking for a change of pace from your day-to-day routine? Life on Earth feeling a bit overwhelming at the moment? How about a one-year residency alongside three strangers at a 3D-printed Mars habitat simulation?

On Friday, NASA announced it is now accepting applications for the second of three missions in its ongoing Crew Health and Performance Analog (CHAPEA) experiment. For 12 months, a quartet of volunteers will reside within Mars Dune Alpha, a 1,700-square-foot residence based at the Johnson Space Center in Houston, Texas, where they can expect to experience “resource limitations, equipment failures, communication delays, and other environmental stressors.” 

[Related: To create a small Mars colony, leave the jerks on Earth.]

When not pretending to fight for your survival on a harsh, barren Martian landscape, CHAPEA team members will also conduct virtual reality spacewalk simulations, perform routine maintenance on the Mars Dune Alpha structure itself, oversee robotic operations, and grow their own crops, all while staying in shape through regular exercise regimens.

But if the thought of pretending to reside 300 million miles away from your current home sounds appealing, well… cool your jets. NASA makes it clear that there are a few requirements applicants must meet before being considered for the job—such as a master’s degree in a STEM field like engineering, computer science, or mathematics. You’ll also need either two years of professional experience in a related field, or a minimum of 1,000 hours spent piloting aircraft. Only non-smokers between 30 and 55 years old will be considered, and military experience certainly sounds like a plus.

Oh, and you’ll also need to fill out NASA’s lengthy questionnaire, which includes entries like, “Are you willing to have no communication outside of your crew without a minimum time delay of 20 minutes for extended periods (up to one year)?” and, “Are you willing to consume processed, shelf-stable spaceflight foods for a year with no input into the menu?”

It’s certainly a lot to consider. But as tough as it might be, simulations like CHAPEA are vital for NASA’s Artemis plans to establish a permanent human presence on both the moon and Mars. The truly intrepid and accomplished among you have until April 2 to fill out the official application. Seeing as how CHAPEA’s inaugural class is currently about halfway through their one-year stint, this second round of volunteers won’t need to report for duty until sometime in 2025. 

The post How to apply for NASA’s next Mars habitat simulation appeared first on Popular Science.


]]>
OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss https://www.popsci.com/technology/openai-sora-generative-video/ Fri, 16 Feb 2024 18:15:23 +0000 https://www.popsci.com/?p=603154
Sora AI generated video still of woolly mammoth herd in tundra
A screenshot from one of the many hyperrealistic videos generated by OpenAI's Sora program. OpenAI

Generative AI videos advanced from comical to photorealistic within a single year. This is uncharted, dangerous territory.

The post OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss appeared first on Popular Science.

]]>

It’s hard to write about Sora without feeling like your mind is melting. But after OpenAI’s surprise artificial intelligence announcement yesterday afternoon, we have our best evidence yet of what an as-yet-unregulated, consequence-free tech industry wants to sell you: a suite of energy-hungry, black-box AI products capable of producing photorealistic media that pushes the boundaries of legality, privacy, and objective reality.

Barring decisive, thoughtful, and comprehensive regulation, the online landscape could very well become virtually unrecognizable, and somehow even more untrustworthy, than ever before. Once the understandable “wow” factor of hyperreal woolly mammoths and paper art ocean scapes wears off, CEO Sam Altman’s newest distortion project remains concerning.

The concept behind Sora (Japanese for “sky”) is nothing particularly new: It apparently is an AI program capable of generating high-definition video based solely on a user’s descriptive text inputs. To put it simply: Sora reportedly combines the text-to-image diffusion model powering DALL-E with a neural network system known as a transformer. While generally used to parse massive data sequences such as text, OpenAI allegedly adapted the transformer tech to handle video frames in a similar fashion.

“Apparently,” “reportedly,” “allegedly.” All these caveats are required when describing Sora, because as MIT Technology Review explains, OpenAI only granted access to yesterday’s example clips after media outlets agreed to wait until after the company’s official announcement to “seek the opinion of outside experts.” And even when OpenAI did preview their newest experiment, they did so without releasing a technical report or a backend demonstration of the model “actually working.”

This means that, for the foreseeable future, not a single outside regulatory body, elected official, industry watchdog, or lowly tech reporter will know how Sora is rendering the most uncanny media ever produced by AI, what data Altman’s company scraped to train its new program, or how much energy is required to fuel these one-minute video renderings. You are at the mercy of what OpenAI chooses to share with the public—a company whose CEO has repeatedly warned that the extinction risk from AI is on par with nuclear war, while insisting that only men like him can be trusted with the funds and resources to prevent that from happening.

The speed at which we got here is as dizzying as the videos themselves. New Atlas offered a solid encapsulation of the situation yesterday—OpenAI’s sample clips are by no means perfect, but in just nine months, we’ve gone from the “comedic horror” of AI Will Smith eating spaghetti, to near-photorealistic, high-definition videos depicting crowded city streets, extinct animals, and imaginary children’s fantasy characters. What will similar technology look like nine months from now, on the eve of what is potentially one of the most consequential US presidential elections in modern history?

Once you get over Sora’s parlor trick impressions, it’s hard to ignore the troubling implications. Sure, the videos are technological marvels. Sure, Sora could yield innovative, fun, even useful results. But what if someone used it to yield, well, anything other than “innovative,” “fun,” or “useful?” Humans are far more ingenious than any generative AI programs. So far, jailbreaking these things has only required some dedication, patience, and a desire to bend the technology for bad faith gains.

Companies like OpenAI promise they are currently developing security protocols and industry standards to prevent bad actors from exploiting our new technological world—an uncharted territory they continue to blaze recklessly into with projects like Sora. And yet they have failed miserably in implementing even the most basic safeguards: Deepfakes abuse human bodies, school districts harness ChatGPT to acquiesce to fascist book bans, and the lines between fact and fiction continue to smear.

[Related: Generative AI could face its biggest legal tests in 2024.]

OpenAI says there are no immediate plans for Sora’s public release, and that they are conducting red team tests to “assess critical areas for harms or risks.” But barring any kind of regulatory pushback, it’s possible OpenAI will unleash Sora as soon as possible.

“Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving [Artificial General Intelligence],” OpenAI said in yesterday’s announcement, once again explicitly referring to the company’s goal to create AI that is all-but-indistinguishable from humans.

Sora, a model to understand and simulate the real world—what’s left of it, at least.

The post OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss appeared first on Popular Science.


]]>
First remote, zero-gravity surgery performed on the ISS from Earth (on rubber) https://www.popsci.com/technology/remote-surgery-robot-iss/ Fri, 16 Feb 2024 15:00:00 +0000 https://www.popsci.com/?p=602988
Surgeon using spaceMIRA remote surgery tool on ISS
A team of surgeons used rubber bands to represent human tissue aboard the ISS. Credit: Virtual Incision

Surgeons in Nebraska controlled spaceMIRA from 250 miles below the ISS as it cut through simulated human tissue.

The post First remote, zero-gravity surgery performed on the ISS from Earth (on rubber) appeared first on Popular Science.

]]>

Researchers successfully completed the first remote, zero-gravity “surgery” procedure aboard the International Space Station. Over the weekend, surgeons based at the University of Nebraska spent two hours testing out a small robotic arm dubbed the Miniaturized In Vivo Robotic Assistant, or spaceMIRA, aboard the ISS as it orbited roughly 250 miles above their heads. 

But don’t worry—no ISS astronauts were in need of desperate medical attention. Instead, the experiment utilized rubber bands to simulate human skin during its proof-of-concept demonstration on Saturday.

[Related: ‘Odie’ is en route for its potentially historic moon landing.]

Injuries are inevitable, but that little fact of life gets complicated when the nearest hospital is a seven-month, 300-million-mile journey away. And even if an incredibly skilled doctor is among the first people to set foot on Mars, they can’t be trained to handle every possible emergency. Certain issues, such as invasive surgeries, will likely require backup help. In some of those situations, remote-controlled operations could offer a solution.

Designed by Virtual Incision, a startup developing remote-controlled medical tools for the world’s most isolated regions, spaceMIRA weighs only two pounds and takes up about as much shelf space as a toaster oven. One end of its wandlike body is topped with a pair of pronglike arms—a left one to grip, and a right one to cut.

[Related: 5 space robots that could heal human bodies—or even grow new ones ]

Speaking with CNN on Wednesday, Virtual Incision cofounder and chief technology officer Shane Farritor explained that spaceMIRA’s engineering could offer Earthbound surgeons the hands and eyes needed to perform “a lot of procedures minimally invasively.”

On February 10, a six-surgeon team in Lincoln, Nebraska, took spaceMIRA (recently arrived aboard the ISS via a SpaceX Falcon 9 rocket) for its inaugural test drive. One arm gripped a mock tissue sample, and the other used scissors to dissect specific portions of the elastic rubber bands.

spaceMIRA prototype on desk
A version of the spaceMIRA (seen above) traveled to the ISS earlier this month. Credit: Virtual Incision

While researchers deemed the experiment a success, surgeons noted the difficulty in accounting for lag time. Communications between Earth and the ISS are delayed about 0.85 seconds—while a minor inconvenience in most circumstances, even milliseconds can mean a matter of life or death during certain medical emergencies. Once on the moon, Artemis astronauts and NASA headquarters will deal with a full 1.3 seconds of delay between both sending and receiving data. On Mars, the first human explorers will face a one-way delay of as much as 22 minutes, meaning more than 40 minutes can pass between firing off a message and receiving a response. Even taking recent laser communications breakthroughs into consideration, patience will remain a virtue for everyone involved in future lunar and Mars expeditions.
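Those delays have a hard physical floor: the one-way light-travel time over the distance involved. A minimal sketch of that floor, assuming rough illustrative distances (the real ISS delay runs higher than its light-travel time because signals hop through relay satellites and ground processing):

```python
# One-way light-travel time as a lower bound on communication delay.
# Distances below are rough illustrative figures, not mission specifications.

SPEED_OF_LIGHT_MILES_PER_SEC = 186_282

def one_way_delay_seconds(distance_miles: float) -> float:
    """Minimum one-way signal delay across a given distance."""
    return distance_miles / SPEED_OF_LIGHT_MILES_PER_SEC

for label, miles in [
    ("ISS, ~250 mi overhead", 250),
    ("Moon, ~238,900 mi", 238_900),
    ("Mars at its closest, ~34 million mi", 34_000_000),
    ("Mars at its farthest, ~250 million mi", 250_000_000),
]:
    print(f"{label}: {one_way_delay_seconds(miles):,.3f} s one way")
```

The lunar figure this yields (about 1.28 seconds) lines up with the 1.3-second Artemis delay cited above, while the Mars figures span a few minutes to over 20 minutes each way depending on orbital positions.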

This means that, for the time being, devices like spaceMIRA are unlikely to help with split-second medical decisions. But for smaller issues, like stitching up a lunar resident after a tumble, such medical tools could prove invaluable for everyone involved. In the meantime, Virtual Incision’s remote-controlled equipment could still find plenty of uses here on Earth.

The post First remote, zero-gravity surgery performed on the ISS from Earth (on rubber) appeared first on Popular Science.


]]>
This edible, wriggling robot mimics experience of eating moving food https://www.popsci.com/technology/edible-moving-soft-robot-japan/ Thu, 15 Feb 2024 22:00:00 +0000 https://www.popsci.com/?p=603044
Edible soft robot on table
The gelatin gummy component wriggles when inflated with air. Osaka University

In Japanese ‘odorigui’ cuisine, food is still alive. This gyrating robot is not.

The post This edible, wriggling robot mimics experience of eating moving food appeared first on Popular Science.

]]>

Remember the old reality show competition stunt of getting contestants to eat live bugs on primetime television? Consuming “food” while it’s still alive spans numerous cultures around the world. In Japan, for example, odorigui (or “dance-eating”) is a centuries-old tradition often involving squid, octopus, and tiny translucent fish known as ice gobies. Diners pop these still-living creatures into their mouths, as the wriggling is part of the overall meal experience.

To potentially better understand the psychology and emotional responses associated with consuming odorigui dishes, researchers designed their own stand-in—a moving gelatin robo-food combining 3D-printing, kitchen cooking, and air pumps. The results appear not only tastier than your average reality show shock snack, but a potential step towards creative culinary and medical applications.

… And yet, judging from this video, it’s undeniably still a little odd.


Detailed in a study published earlier this month in PLOS One, a team at Japan’s University of Electro-Communications and Osaka University recently devised a pneumatically-driven handheld device to investigate what they dub “human-edible robot interaction,” or HERI. For the “edible” portion of HERI, researchers cooked up a gummy candy-like mixture using a little extra sugar and apple juice for flavor. 

After letting the liquid cure in molds that included two hollow airways, the team then attached the snack to a coffee mug-like holder. The design allowed researchers to inject air through the gelatin in different combinations—alternating airflow between each tube produced a side-to-side wagging motion, while simultaneous inflation offered a (slightly unnerving) pulsating movement.

And then, the taste tests.

The team directed 16 Osaka University students to grab the device holding their designated, writhing soft robot morsel, place the edible portion in their mouth, allow it to move about for 10 seconds, then chomp. Another (possibly relieved) group of control students also ate a normal, immobile gelatin gummy. Following their meals, each volunteer answered a survey including questions such as:

– Did you think what you just ate had animateness?

– Did you feel an emotion in what you just ate?

– Did you think what you just ate had intelligence?

– Did you feel guilty about what you just ate?

Perhaps unsurprisingly, it seems that a meal’s experience can be influenced by whether or not the thing you just put in your mouth is also moving around in your mouth. Students described this sensation using the Japanese onomatopoeic terms gabu, or “grappling,” and kori-kori, meaning “crisp.” Movement also more frequently caused volunteers to feel a bit of guilt at eating a “still living” dish, as well as attach a sense of intelligence to it.

[Related: Scientists swear their lab-grown ‘beef rice’ tastes ‘pleasant’]

While only an early attempt at looking into some of the dynamics in odorigui, researchers believe more intricate soft robot designs can allow for more accurate experiments. Meanwhile, such research could lead to a “deepening understanding of ethical, social, and philosophical implications of eating,” as well as potential uses in medical studies involving oral and psychological connections. There’s also a possibility for “innovative culinary” experiences down the line, so who knows what might be coming to high-brow restaurants in the future—perhaps gyrating gyros, or wobbly waffles. Hopefully, nothing too macabre will wind up on menus. It’s certainly something researchers took into consideration during their tests.

“NOTE: During the experiment, we did not draw a face on the edible robot,” reads the fine print at the bottom of the demonstration video, presumably meaning they were just having a bit of fun with the project.

Which is good to hear. Otherwise, this whole thing might have come across as weird.

The post This edible, wriggling robot mimics experience of eating moving food appeared first on Popular Science.


]]>
Scientists swear their lab-grown ‘beef rice’ tastes ‘pleasant’ https://www.popsci.com/environment/hybrid-beef-rice-food/ Wed, 14 Feb 2024 22:00:00 +0000 https://www.popsci.com/?p=602780
Pink lab-grown beef rice in white bowl
It might not be the most appetizing, but it is definitely more eco-friendly. Yonsei University

Anyone hungry for a 'novel flavor experience?'

The post Scientists swear their lab-grown ‘beef rice’ tastes ‘pleasant’ appeared first on Popular Science.

]]>

The whole point of lab-grown meat, by and large, is to create a sustainable product capable of… you know, replacing meat. Researchers at universities and startup companies across the world have spent years and a lot of money on attempts to accurately imitate chicken, beef, fish, and even extinct woolly mammoths.

It’s an uphill battle, but convincing a substantial portion of the population to reduce, if not entirely cut, animal meat from their diets is widely considered a key way to combat industrial farming’s massive global carbon emissions. But instead of trying to replicate the minutiae of a burger’s mouthfeel and flavor, one group of scientists decided to sidestep those goals entirely for a new dish: “beef rice” grown from lab-cultured cow fat cells.

Beef rice lab culture on table next to equipment
Looks delicious. Credit: Yonsei University

But if you are skeptical at the thought of spoonfuls of synthetic meat-grain meals, fear not: Its makers swear their pinkish globules offer consumers a “unique blend of aromas,” including that “slight nuttiness and umami” usually associated with meat… or, at least, that’s what research lead Jinkee Hong swears.

“We tried it with various accompaniments and it pairs well with a range of dishes,” he relayed in a Wednesday profile at The Guardian.

Hong and his collaborators detailed their process in a new paper published in Matter. Before unleashing their Frankenstein concoction on the world, the team first slathered regular rice grains with fish gelatin and injected them with lab-grown muscle and fat stem cells. The resultant hodgepodge was then cultured for anywhere from nine to 11 days before being steamed for dinnertime.

[Related: Scientists made a woolly mammoth meatball.]

Depending on the meat-to-fat cell ratios, Hong’s taste tests reportedly yielded different scent and flavor profiles. Higher muscle content predictably gave hints of meat and almond, while fattier variants offered notes of cream, butter, and coconut oil. Due to the altered chemical composition, however, the rice generally proved firmer and more brittle than standard grains. The new dish also contains 8 percent more protein and 7 percent more fat than its naturally grown source rice.

Of course, rice isn’t exactly known for its high amounts of protein or fat, so those numbers aren’t going to factor into anyone’s pre-workout meal prep anytime soon. The real benefits of such a food alternative, the researchers argue, are its sustainability and cost-saving potential.

By their calculations, beef rice “has a significantly smaller carbon footprint at a fraction of a price.” Real beef farming releases nearly 50 kg (110 lbs) of CO2 emissions per 100 g of protein—the hybrid grain, meanwhile, releases less than 6.27 kg (14.8 lbs) for the same amount. And while beef costs around $14.90 per kg (2.2 lbs), the equivalent rice might only set you back $2.23.
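Taking the study’s reported numbers at face value (they are treated here as given, not independently verified), the relative savings pencil out like this:

```python
# Figures as reported by the researchers, treated as given.
beef_co2_per_100g_protein = 50.0    # kg CO2, conventional beef ("nearly 50")
rice_co2_per_100g_protein = 6.27    # kg CO2, hybrid beef rice
beef_cost_per_kg = 14.90            # USD
rice_cost_per_kg = 2.23             # USD

co2_cut = 1 - rice_co2_per_100g_protein / beef_co2_per_100g_protein
cost_cut = 1 - rice_cost_per_kg / beef_cost_per_kg
print(f"CO2 reduction: {co2_cut:.0%}, cost reduction: {cost_cut:.0%}")
# → CO2 reduction: 87%, cost reduction: 85%
```

In other words, by the paper’s own accounting, the hybrid grain cuts both emissions per unit of protein and cost per kilogram by roughly 85 percent or more relative to beef.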

For what it’s worth, it doesn’t sound like the mad scientists behind beef rice expect their pink granules to replace your next hot pot’s bottom layer anytime soon. Instead, such a creation could find its way into emergency food supplies in regions struck by famine or natural disaster, as well as potentially within astronaut and military rations.

“While it does not exactly replicate the taste of beef, it offers a pleasant and novel flavor experience,” Hong said. Hungry yet?

The post Scientists swear their lab-grown ‘beef rice’ tastes ‘pleasant’ appeared first on Popular Science.


]]>
A new AI-powered satellite will create Google Maps for methane pollution https://www.popsci.com/technology/methanesat-edf-google-satellite/ Wed, 14 Feb 2024 16:00:00 +0000 https://www.popsci.com/?p=602657
MethaneSAT concept art above Earth
Methane is very hard to track around the world, but a new satellite project could help address the issue. MethaneSAT LLC

Google and the Environmental Defense Fund have teamed up to track the elusive emissions—from space.

The post A new AI-powered satellite will create Google Maps for methane pollution appeared first on Popular Science.

]]>

Methane emissions, whether from industrial cattle farming or fossil fuel extraction, are responsible for roughly 30 percent of the global warming behind present-day climate change. But despite the massive amounts of methane released into the atmosphere every year, the pollutant is often difficult to track—apart from being invisible to the human eye and to satellites’ multispectral near-infrared sensors, methane is also hard to assess due to spectral noise in the atmosphere.

To help tackle this immediate crisis, Google and the Environmental Defense Fund are teaming up on a new project with lofty goals. Announced in a blog post earlier today, MethaneSAT is a new, AI-enhanced satellite project to better track and quantify the dangerous emissions, with the aim of offering the data to researchers around the world.

Google Earth Image screenshot displaying methane geodata map
EDF’s aerial data, available in Earth Engine, shows both high-emitting point sources as yellow dots, and diffuse area sources as a purple and yellow heat map. MethaneSAT will collect this data with the same technology, at a global scale and with more frequency. Credit: Google

“MethaneSAT is highly sophisticated; it has a unique ability to monitor both high-emitting methane sources and small sources spread over a wide area,” Yael Maguire, Google’s VP and General Manager of Geo Developer & Sustainability, said in a February 14 statement.

[Related: How AI could help scientists spot ‘ultra-emission’ methane plumes faster—from space.]

To handle such a massive endeavor, the EDF developed new algorithmic software with researchers at the Smithsonian Astrophysical Observatory and Harvard University’s School of Engineering and Applied Science and its Center for Astrophysics. Their new supercomputer-powered AI system can calculate methane emissions in specific locations, and subsequently track those pollutants as they spread in the atmosphere.

MethaneSAT is scheduled to launch aboard a SpaceX Falcon 9 rocket in early March. Once deployed at an altitude of over 350 miles, the satellite will circle the Earth 15 times per day at roughly 16,600 mph. Aside from emission-detection duties, Google and EDF intend to harness their AI programs to compile a worldwide map of oil and gas infrastructure, to home in on which facilities rank as the worst offenders. According to Google, this will function much like how its AI programs interpret satellite imagery for Google Maps. Instead of road names, street signs, and sidewalk markers, however, MethaneSAT will help tag points like oil storage containers.

Google satellite imagery displaying oil wells
The top satellite image shows a map of dots, which are correctly identified as oil well pads. Using our satellite and aerial imagery, we applied AI to detect infrastructure components. Well pads are shown in yellow, oil pump jacks are shown in red, and storage tanks are shown in blue. Credit: Google

“Once we have this complete infrastructure map, we can overlay the MethaneSAT data that shows where methane is coming from,” Maguire said on Wednesday. “When the two maps are lined up, we can see how emissions correspond to specific infrastructure and obtain a far better understanding of the types of sources that generally contribute most to methane leaks.” Datasets like these could prove valuable to watchdogs and experts attempting to rein in the oil and gas sites most prone to leaks.
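The overlay step Maguire describes can be sketched as a simple attribution pass: match each detected plume to the nearest mapped facility. Everything below (site names, coordinates, emission rates) is invented for illustration; the actual MethaneSAT pipeline has not been published.

```python
import math

# Hypothetical mapped infrastructure: (name, lat, lon).
infrastructure = [
    ("well_pad_A", 31.90, -102.10),
    ("storage_tank_B", 31.95, -102.30),
    ("pump_jack_C", 32.10, -102.05),
]

# Hypothetical detected plumes with emission rates.
plumes = [
    {"lat": 31.91, "lon": -102.11, "kg_per_hr": 540.0},
    {"lat": 32.09, "lon": -102.06, "kg_per_hr": 120.0},
]

def nearest_site(lat: float, lon: float) -> str:
    # Euclidean distance in degrees is fine for a toy at this small scale.
    return min(infrastructure,
               key=lambda s: math.hypot(s[1] - lat, s[2] - lon))[0]

for plume in plumes:
    plume["source"] = nearest_site(plume["lat"], plume["lon"])
    print(f'{plume["source"]}: {plume["kg_per_hr"]} kg/hr')
```

A production system would work with polygons, projected coordinates, and plume transport models rather than nearest-point matching, but the lining-up of the two maps is the same idea.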

All of this much-needed information is intended to become available later this year through the official MethaneSAT website, as well as Google Earth Engine, the company’s open-source global environmental monitoring platform. The new emissions data can then be combined with datasets covering factors like waterways, land cover, and regional borders to better assess where we stand as a global community, and what needs to be done to stave off climate change’s worst outcomes.

The post A new AI-powered satellite will create Google Maps for methane pollution appeared first on Popular Science.


]]>
Huge underwater ‘kite’ turbine powered 1,000 homes in the Faroe Islands https://www.popsci.com/environment/minesto-dragon-kite-turbine/ Tue, 13 Feb 2024 18:30:00 +0000 https://www.popsci.com/?p=602566
Minesto Dragon 4 undersea kite turbine traveling atop water
Kite turbines like the Dragon 4 and Dragon 12 could soon provide tidal power to nearby homes. Minesto

Minesto’s Dragon 12 can create 1.2 megawatts of power by swimming against the ocean currents.

The post Huge underwater ‘kite’ turbine powered 1,000 homes in the Faroe Islands appeared first on Popular Science.

]]>

It’s been over a decade since PopSci last checked in on Minesto’s underwater “kite” turbine technology. Since then, the Swedish green energy startup has made some big strides in their creative approach to generating clean electricity from swimming against the ocean currents. 

Last week, Minesto announced a major moment for their largest creation. A nearly 40-foot-wide, 30-ton, highlighter yellow Dragon 12 “tidal power plant” delivered its first 1.2 megawatts (MW) of energy to the Faroe Islands’ national grid. That’s enough power to sustain a small town of 1,000 homes.

[Related: Tidal turbines put a new spin on the power of the ocean.]

Although referred to as a “kite,” Dragon 12 arguably more resembles a biplane, and remains almost entirely below the ocean surface. Minesto’s video montage celebrating the inaugural voyage shows their tidal energy system leashed to a tugboat as it travels across an inland bay for installation.


Once installed, the Dragon 12 uses an onboard control system to steer its rudders. This lets it continuously fly a predetermined figure-8 pattern against the current, moving faster than the surrounding water to spin its turbine. The resulting energy then travels down a subsea cable tether to an onshore power facility through an umbilical line installed on the ocean floor.
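Flying the kite rather than anchoring it is the core trick: swept-area turbine power scales with the cube of flow speed, so a kite crossing the current several times faster than the water harvests disproportionately more energy. A minimal sketch under that standard relation, with numbers that are illustrative assumptions rather than Dragon 12 specifications:

```python
# Standard swept-area turbine power: P = 0.5 * rho * A * Cp * v**3.
# All parameter values are illustrative assumptions, not Minesto specs.

RHO_SEAWATER = 1025.0  # kg/m^3

def turbine_power_watts(rotor_area_m2: float, speed_m_s: float,
                        power_coeff: float = 0.35) -> float:
    return 0.5 * RHO_SEAWATER * rotor_area_m2 * power_coeff * speed_m_s ** 3

current = 1.5              # m/s ambient tidal current (assumed)
kite_speed = 5 * current   # the figure-8 sweeps far faster than the flow

anchored = turbine_power_watts(10.0, current)
flying = turbine_power_watts(10.0, kite_speed)
print(f"anchored: {anchored / 1e3:.1f} kW, flying: {flying / 1e3:.1f} kW")
# a 5x speed gain yields a 5**3 = 125x power gain
```

That cubic payoff is why a relatively compact, 30-ton kite can reach megawatt-class output in moderate currents.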

The idea behind tidal green energy plants isn’t new, but for years the underlying technology has proven cost prohibitive and logistically difficult. Other designs are frequently massive endeavors. Scotland-based Orbital Marine Power’s 232-feet-long O2 turbine “superstructure,” for example, weighs in at nearly 700 tons while generating about 4 MW of power—a little more than four-times what Dragon 12 accomplished this month. Both approaches likely have their uses, but Minesto’s latest milestone indicates smaller, more modular, interlocked options could soon become available to energy providers.

And linking up multiple Dragon turbines is exactly what Minesto hopes to do next. According to The Next Web, the company intends to partner with a local Faroe Islands utility company to construct a 120MW system comprising around 100 tidal kite turbines. If successful, such a project could provide as much as 40 percent of the island archipelago’s entire electricity needs.

For microgrid plans, Minesto also offers a smaller sibling to the Dragon 12. Dubbed the Dragon 4, this kite turbine system can generate 100 kW of energy and, at just 13 x 16 x 9 ft, can fit inside a standard shipping container for easy transport.

The post Huge underwater ‘kite’ turbine powered 1,000 homes in the Faroe Islands appeared first on Popular Science.


]]>
A Martian solar eclipse turns the sun into a giant googly eye https://www.popsci.com/science/phobos-mars-solar-eclipse/ Mon, 12 Feb 2024 19:11:26 +0000 https://www.popsci.com/?p=602387
Phobos creating partial solar eclipse on Mars, image taken by Perseverance rover
A Phobos eclipse will only grow larger over the next 50 million years as it continues to descend towards Mars. NASA/JPL-Caltech/ASU

NASA's Perseverance rover captured Phobos as it crossed in front of the sun last week.

The post A Martian solar eclipse turns the sun into a giant googly eye appeared first on Popular Science.

]]>

The next solar eclipse to cross North America is fast approaching, but over on Mars, the Red Planet already experienced one of its own celestial shadow events this year.

On February 8, the asteroid-sized Martian moon Phobos crossed in front of the sun above Jezero Crater—the area just so happening to host NASA’s Perseverance rover. As Phobos continued across the sky, Percy’s left Mastcam-Z camera angled away from its usual landscape vista subject matter towards the satellite, snapping a few dozen photos for project coordinators back at NASA’s Jet Propulsion Laboratory (JPL).

Gallery of Phobos solar eclipse thumbnails
Credit: NASA/JPL/ASU

The images showcase a markedly different eclipse than the total solar eclipses Earth experiences every 18 months or so. Given both Phobos’ size and shape, the moon doesn’t fully cover the sun—instead, the 17 x 14 x 11-mile misshapen hunk of rock blocks only a small portion of the star as it continues along its path. The result arguably resembles a googly eye more than an awe-inspiring cosmic calendar occurrence, but it’s still a pretty impressive vantage point.

Phobos and its smaller sibling moon Deimos were discovered in 1877 by US astronomer Asaph Hall, and are respectively named after the Greek words for “fear” and “dread.” The origins of both satellites aren’t wholly understood, although astronomers theorize they are either captured asteroids or debris left over from the solar system’s formation around 4.5 billion years ago.

[Related: The Mars Express just got up close and personal with Phobos.]

While Earth’s moon continues to inch away from its planet at a rate of roughly 1.5 inches per year, Phobos is actually being drawn towards Mars—about six feet closer every century. While that makes for a comparatively slow descent, it does still mean the moon will eventually either crash into Mars, or break up into thousands of fragments to form a planetary ring like Saturn’s. No need to worry, though, since that grand finale isn’t expected for another 50 million years. In the meantime, Phobos will continue orbiting Mars three times per day, while the slower Deimos completes its journey every 30 hours.

Perseverance’s solar eclipse footage, while incredible on its own, naturally fails to capture much detail of the moon’s pockmarked surface. Luckily, the European Space Agency’s Mars Express caught a closer look back in 2022, when the satellite came within just 52 miles of the moon to snap its own photos.

The post A Martian solar eclipse turns the sun into a giant googly eye appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

A crowd torched a Waymo robotaxi in San Francisco https://www.popsci.com/technology/waymo-torched-vandals/ Mon, 12 Feb 2024 17:00:00 +0000 https://www.popsci.com/?p=602323
Destroyed Waymo on after attacked by vandals in San Francisco
The vehicle appeared 'decapitated' by the time first responders arrived, but no one was injured. Credit: San Francisco Fire Dept. Media / Séraphine Hossenlopp

No injuries were reported after the fire department extinguished Saturday evening's blaze.

The post A crowd torched a Waymo robotaxi in San Francisco appeared first on Popular Science.

Vandals thoroughly obliterated a Waymo autonomous taxi in San Francisco’s Chinatown on Saturday evening to the cheers of onlookers. In an emailed statement provided to PopSci, a Waymo spokesperson confirmed the vehicle was empty when the February 10 incident began just before 9PM, and no injuries were reported. Waymo says it is also “working closely with local safety officials to respond to the situation.”

A San Francisco Fire Department (SFFD) representative also told PopSci that responders arrived on the scene at 9:03PM to a “reported electric autonomous vehicle on fire” in the 700 block of Jackson St., which includes a family-owned musical instrument store and a pastry shop.

“SFFD responded to this like any other vehicle fire with 1 engine, 1 truck, and for this particular incident the battalion chief was on scene as well,” the representative added in their email.

Multiple social media posts over the weekend depict roughly a dozen people smashing the Waymo Jaguar I-Pace’s windows, covering it in spray paint, and eventually tossing a firework inside that set the EV ablaze—all to the enthusiastic encouragement of bystanders. One onlooker, after posting their own video recordings to X, told Reuters that someone wearing a white hoodie “jumped on the hood of the car and literally WWE style K/O’ed the windshield & broke it.” Additional footage uploaded by street reporter “Franky Frisco” to their YouTube channel shows emergency responders dousing the flames. Chinatown’s streets were already crowded with visitors attending Lunar New Year celebrations.

Speaking to The Autopian, Frisco said they have covered similar autonomous vehicle incidents in the past, but this weekend’s drama left the Waymo vehicle looking “completely ‘decapitated.’” Upon arrival, emergency responders reportedly had difficulty discerning whether it was a Waymo or a Zoox car. Although both companies (owned by Alphabet and Amazon, respectively) offer driverless taxi services, their fleets look nothing alike—at least when the vehicles are in better condition.

[Related: Self-driving taxis blocked an ambulance and the patient died, says SFFD.]

Motive for Saturday night’s incident remains unclear. The event took place as locals continue to push back against autonomous taxi operations in the area. Since the companies received a regulatory greenlight for 24/7 service in August 2023, numerous reports detail cars from Waymo, Zoox, and Cruise creating traffic jams, running stop signs, and blocking emergency responders. In October 2023, a Cruise driverless taxi allegedly hit a pedestrian and dragged her 20 feet down the road. Cruise’s CEO stepped down the following month, and the General Motors-owned company subsequently halted operations, first in San Francisco and then nationwide.

Not only is this weekend’s autonomous taxi butchering aggressive, dangerous, and illegal—it’s also apparently a bit of overkill. According to previous reports, driverless car protestors around San Francisco have found that simply stacking orange traffic cones atop a taxi’s hood renders its camera navigation system useless until the obstruction is removed.

A sea creature extinct for half a billion years inspired a new soft robot https://www.popsci.com/technology/extinct-sea-creature-soft-robot/ Sat, 10 Feb 2024 13:00:00 +0000 https://www.popsci.com/?p=602170
pleurocystitid soft robot
Pleurocystitid-inspired soft robot on a rocky beach. Desatnick et al. / Carnegie Mellon

Pleurocystitids arrived in the oceans alongside jellyfish. Although long gone, they may help guide the future of 'paleobionics.'

The post A sea creature extinct for half a billion years inspired a new soft robot appeared first on Popular Science.

Plenty of robots are inspired by existing animals, but not as many take their cue from extinct creatures. To design their own new machine, Carnegie Mellon University researchers looked more than 500 million years back in time for guidance. Their result, presented during the 68th Biophysical Society Annual Meeting, is an underwater soft robot modeled after one of the sea urchin’s oldest ancestors.

[Related: Watch robot dogs train on obstacle courses to avoid tripping.]

Pleurocystitids swam the oceans around half a billion years ago—about the same time experts now believe jellyfish first appeared. An ancient precursor to echinoderms such as sea stars, pleurocystitids featured a muscular, tail-like structure that likely allowed them to better maneuver underwater. After studying CT scans of the animal’s fossilized remains, researchers fed the data into a computer program to analyze the anatomy and simulate how it might have moved.

While no one knows for sure exactly how pleurocystitids moseyed around, the team determined the most logical possibility likely involved side-to-side sweeping tail motions that allowed the animal to propel itself across the ocean floor. This theory is also reinforced by fossil records, which indicate the animals’ tails lengthened over time, making them faster without much additional energy expenditure. From there, engineers built their own tail-toting, soft robot pleurocystitid.

To the casual viewer, footage of the mechanical monster clumsily inching across the ground may seem to hint at why the pleurocystitid is long gone. But according to Richard Desatnick, a Carnegie Mellon PhD student under the direction of mechanical engineering faculty Phil LeDuc and Carmel Majidi, the ancient animal likely deserves more credit.

“There are animals that were very successful for millions of years and the reason they died out wasn’t from a lack of success from their biology—there may have been a massive environmental change or extinction event,” Desatnick said in a recent profile.

Geologic records certainly reinforce such an argument. What’s more, given that today’s animal world barely accounts for one percent of all creatures to ever roam, swim, or soar above the planet, there is a wealth of potential biomechanical inspirations left to explore. Desatnick and his colleagues hope that their proof-of-concept pleurocystitid will help inspire new entries into a field they call paleobionics—the study of Earth’s animal past to guide some of tomorrow’s robotic creations.

The Carnegie Mellon team believes future iterations of their soft robot could serve a variety of uses—including surveying dangerous geological locations and helping out with underwater machine repairs. More agile robo-pleurocystitids may one day glide through the waters. Even if nearby sea stars and urchins don’t recognize the robot, neither would exist without their shared source of inspiration.

Aging reactor sets new fusion energy record in last hurrah https://www.popsci.com/technology/jet-fusion-reactor-record/ Fri, 09 Feb 2024 20:00:00 +0000 https://www.popsci.com/?p=602165
Interior of JET fusion reactor with plasma superimposed
The historic nuclear fusion facility generated over 69 megajoules of energy in just 5 seconds. EUROfusion

The Joint European Torus (JET) facility retired after four decades of service, but not without achieving one final milestone.

The post Aging reactor sets new fusion energy record in last hurrah appeared first on Popular Science.

After 40 years of major nuclear fusion milestones, the Joint European Torus (JET) facility finally shut down in December 2023—but not without one final record-shattering achievement. On Thursday, representatives for the groundbreaking tokamak reactor confirmed its final experiment generated 69.26 megajoules of energy in only five seconds. That’s over 10 megajoules more than JET’s previous world record, and more than triple its very first record of 22 megajoules, set back in 1997.

[Related: The world’s largest experimental tokamak nuclear fusion reactor is up and running.]

Located in Oxfordshire, UK, the JET facility began operations in 1983 in the hopes of edging the world closer to sustainable, economically viable fusion power. While fission releases massive amounts of energy by splitting atoms, fusion involves smashing hydrogen isotopes such as tritium and deuterium together at temperatures over 150 million degrees Celsius, producing helium, a free neutron, and tremendous amounts of energy. The sun—and every other star, by extension—is essentially a gigantic celestial nuclear fusion reactor, so mimicking even a fraction of that kind of power here on Earth could revolutionize the energy industry.

The first tokamak reactor—the name is a Russian acronym for “toroidal chamber with magnetic coils”—came online in the USSR in 1958. A tokamak resembles a huge, extremely high-tech tire: hydrogen fuel inside the doughnut-shaped chamber is heated and ionized into a plasma, which powerful magnetic coils then confine and whip around the ring at high speed.

While multiple facilities around the world can produce nuclear fusion reactions, doing so remains extremely cost prohibitive. JET’s December run, for example, hit its record energy level in only five seconds—but that 69 megajoules was still only enough to warm a few bathtubs’ worth of water.
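
The record’s scale is easy to put in perspective with a little arithmetic. A minimal sketch, using the energy and duration quoted above (the bathtub volume and temperature rise are assumed values):

```python
# Scale check on JET's record run: 69 MJ released over 5 seconds.
ENERGY_J = 69e6    # total energy from the record shot, per the article
DURATION_S = 5     # length of the run

# Average power output while the plasma burned.
avg_power_mw = ENERGY_J / DURATION_S / 1e6
print(f"Average power: {avg_power_mw:.1f} MW")

# How much bathwater would that warm? Assume a 150-liter tub heated
# by 25 degrees C; water's heat capacity is ~4,186 J/(kg*K).
C_WATER, TUB_KG, DELTA_T = 4186, 150, 25
tubs = ENERGY_J / (C_WATER * TUB_KG * DELTA_T)
print(f"Bathtubs warmed: about {tubs:.1f}")
```

At roughly four tubs of hot bathwater per record-setting shot, the gap between today’s experiments and grid-scale power is easy to appreciate.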

Even the most optimistic realists estimate it could take another 20 years (at the very least) before affordable fusion energy is a viable option. Others, meanwhile, argue useful fusion reactors will never be financially feasible. It currently costs hundreds of thousands of dollars simply to fire up a fusion reactor, much less sustain its reactions indefinitely—something no facility can yet do. On top of that, today’s climate emergency can’t wait for a solution two or more decades down the line. But if society ever does make fusion reactors a real and sustainable alternative, it will be largely owed to everything JET accomplished over its four decades of service.

Speaking with the BBC on Thursday, UK Minister for Nuclear and Networks Andrew Bowie called JET’s final experiment a “fitting swan song” for the reactor, which pushed the world “closer to fusion energy than ever before.”

With JET powered down for good, the world’s largest fusion reactor is now Japan’s six-story-tall JT-60SA tokamak, located north of Tokyo. Although only inaugurated in December 2023, the JT-60SA won’t hold the title for long if all goes as planned. Its European sibling, the International Thermonuclear Experimental Reactor (ITER), is scheduled to go online sometime in 2025—although that project has not been without its difficulties and delays.

2,000 new characters from burnt-up ancient Greek scroll deciphered with AI https://www.popsci.com/technology/vesuvius-scrolls-ai-deciphered/ Fri, 09 Feb 2024 17:00:00 +0000 https://www.popsci.com/?p=602097
Left: Restored images of papyrus scrolls from Mount Vesuvius. Over 2,000 characters composing 15 columns of an ancient Greek scroll are now legible thanks to machine learning. Right: The scroll read by the winners. Vesuvius Challenge

The Vesuvius Challenge winners were able to digitally reconstruct a philosopher's rant previously lost to volcanic damage.

The post 2,000 new characters from burnt-up ancient Greek scroll deciphered with AI appeared first on Popular Science.

Damaged ancient papyrus scrolls dating back to the 1st century CE are finally being deciphered by the Vesuvius Challenge contest winners using computer vision and machine learning programs. The scrolls were carbonized during the eruption of Italy’s Mount Vesuvius in 79 CE and have been all but inaccessible using normal restoration methods, having been reduced to fragile, charred logs. Three winners—Luke Farritor (US), Youssef Nader (Egypt), and Julian Schilliger (Switzerland)—will split the $700,000 grand prize after deciphering roughly 2,000 characters making up 15 columns of never-before-seen Greek text.

[Related: AI revealed the colorful first word of an ancient scroll torched by Mount Vesuvius.]

In October 2023, Farritor, a 21-year-old Nebraska native and former SpaceX intern, won the challenge’s “First Word” contest after developing a machine learning model to parse out the first few characters and form the word Πορφύραc—or porphyras, ancient Greek for “purple.” He then teamed up with Nader and Schilliger to tackle the remaining fragments using their own innovative AI programs. The newly revealed text is an ancient philosopher’s meditation on life’s pleasures—and a dig at people who don’t appreciate them.

A 1,700-year journey

The scrolls once resided within a villa library believed to have belonged to Julius Caesar’s father-in-law in the town of Herculaneum, near Pompeii. Mount Vesuvius’ historic eruption near-instantly torched the library before burying it in ash and pumice. The carbonized scrolls remained lost for centuries until rediscovered by a farmer in 1752. Over the next few decades, a Vatican scholar used an ingenious weighted-string method of his own invention to carefully “unroll” much of the collection. Even then, the monk’s process produced thousands of small, crumbled fragments, which he then needed to laboriously piece back together.

Fast-forward to 2019, and around 270 “Villa of the Papyri” scrolls still remained inaccessible—a lingering mystery that prompted a team at the University of Kentucky to 3D-scan the archive and launch the Vesuvius Challenge in 2023. After releasing open-source software alongside thousands of 3D X-ray scans made from three papyrus fragments and two scrolls, challenge sponsors offered over $1 million in various prizes to help develop new, high-tech methods for accessing the scrolls’ unknown contents.

What do the scrolls say?

According to a February 5 post on X from competition sponsor Nat Friedman, the first scroll’s final 15 columns were likely penned by Epicurean philosopher Philodemus, and discuss “music, food, and how to enjoy life’s pleasures.”

According to the Vesuvius Challenge announcement, two columns of the scroll, for example, center on whether the amount of available food influences how much pleasure diners take from their meals. In this case, the scroll’s author argues it doesn’t: “[A]s too in the case of food, we do not right away believe things that are scarce to be absolutely more pleasant than those which are abundant.”

“In the closing section, he throws shade at unnamed ideological adversaries—perhaps the stoics?—who ‘have nothing to say about pleasure, either in general or in particular,'” Friedman also said on X.

Although much more remains to be uncovered, challenge organizers have previously hypothesized the scrolls could contain long-lost works, including the poems of Sappho.

But despite the grand prize announcement, the Vesuvius Challenge is far from finished—the newly translated text makes up just 5 percent of a single scroll, after all. In the same X announcement, Friedman revealed the competition’s next phase: a new, $100,000 prize to the first team to retrieve at least 90 percent of the four currently scanned scrolls.

At this point, learning the ancient scrolls’ contents is more a “when” than an “if” for researchers. Once that’s done, well, huge sections of the Villa of the Papyri remain unexcavated. And within those ruins? According to experts, potentially thousands more scrolls await eager eyes.

FCC bans AI-generated robocalls https://www.popsci.com/technology/fcc-ai-robocall-ban/ Thu, 08 Feb 2024 22:00:00 +0000 https://www.popsci.com/?p=602015
Hand reaching to press 'accept' on unknown smartphone call
The FCC wants to deter bad actors ahead of the 2024 election season. Deposit Photos

Thanks to a 1991 telecom law, scammers could face over $25,000 in fines per call.

The post FCC bans AI-generated robocalls appeared first on Popular Science.

The Federal Communications Commission unanimously ruled on Thursday that robocalls containing AI-generated vocal clones are illegal under the Telephone Consumer Protection Act of 1991. The telecommunications law, passed over 30 years ago, now encompasses some of today’s most advanced artificial intelligence programs. The February 8 decision, effective immediately, marks the FCC’s strongest escalation yet in its ongoing efforts to curtail AI-aided scam and misinformation campaigns ahead of the 2024 election season.

“It seems like something from the far-off future, but it is already here,” FCC Chairwoman Jessica Rosenworcel said in a statement accompanying the declaratory ruling. “This technology can confuse us when we listen, view, and click, because it can trick us into thinking all kinds of fake stuff is legitimate.”

[Related: A deepfake ‘Joe Biden’ robocall told voters to stay home for primary election.]

The FCC’s sweeping ban arrives barely two weeks after authorities reported a voter suppression campaign targeting thousands of New Hampshire residents ahead of the state’s presidential primary. The robocalls—later confirmed to originate from a Texas-based group—featured a vocal clone of President Joe Biden telling residents not to vote in the January 23 primary.

Scammers have already employed AI software for everything from creating deepfake celebrity videos to hawk fake medical benefit cards, to imitating an intended victim’s loved ones for fictitious kidnappings. In November, the FCC launched a public Notice of Inquiry regarding AI usage in scams, as well as how to potentially leverage the same technology in combating bad actors.

According to Rosenworcel, Thursday’s announcement is meant “to go a step further.” When it passed in 1991, the Telephone Consumer Protection Act covered unwanted and “junk” calls containing artificial or prerecorded voice messages. Upon reviewing the law, the FCC (unsurprisingly) determined AI vocal clones are essentially just much more advanced iterations of the same spam tactics, and are thereby subject to the same prohibitions.

“We all know unwanted robocalls are a scourge on our society. But I am particularly troubled by recent harmful and deceptive uses of voice cloning in robocalls,” FCC Commissioner Geoffrey Starks said in an accompanying statement. Starks went on to call generative AI “a fresh threat” amid voter suppression efforts ahead of the US campaign season, one warranting immediate action.

In addition to potentially receiving regulatory fines of more than $23,000 per call, vocal cloners are now also open to legal action from victims. The Telephone Consumer Protection Act states individuals can recover as much as $1,500 in damages per unwanted call.
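
Those per-call figures compound quickly. A minimal sketch of the potential exposure for a robocall campaign (the fine and damages amounts come from the figures quoted above; the campaign size is purely hypothetical):

```python
# Rough liability estimate for an illegal AI robocall campaign,
# using the per-call figures quoted in the article.
FCC_FINE_PER_CALL = 23_000   # regulatory fine: "more than $23,000 per call"
DAMAGES_PER_CALL = 1_500     # private damages under the TCPA, up to $1,500/call

calls = 10_000               # hypothetical campaign size
exposure = calls * (FCC_FINE_PER_CALL + DAMAGES_PER_CALL)
print(f"Potential exposure for {calls:,} calls: ${exposure:,}")
```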

Cyborg locusts may one day help search-and-rescue missions https://www.popsci.com/technology/cyborg-locust-nanoparticles-smell/ Thu, 08 Feb 2024 17:00:00 +0000 https://www.popsci.com/?p=601989
Desert locust (Schistocerca gregaria)
Locusts have strong olfactory senses, but it can be difficult to track their brain activity with just electrodes. Deposit Photos

Researchers think that injecting nanoparticles into the bugs’ brains will harness their strong olfactory senses.

The post Cyborg locusts may one day help search-and-rescue missions appeared first on Popular Science.

It’s tough to top locusts’ destructive capabilities—there’s a reason they’re one of the biblical plagues, after all. The insect’s notorious ability to home in on food sources like agricultural fields is largely owed to impressive olfactory senses powered by its antennae. Although researchers previously integrated this biological tool into robotics to potentially develop a new generation of bomb-sniffing and search-and-rescue aids, a team at Washington University in St. Louis, MO, is experimenting with harnessing the bugs themselves… after augmenting them into cyborgs.

Engineers can already utilize locusts’ sense of smell by recording signals from electrodes attached to their brains, but the results are often inaccurate and unreliable. To solve this issue, scientists led by mechanical engineering and materials science professor Srikanth Singamaneni instead injected infrared-sensitive nanoparticles into the brains of locusts.

[Related: This robot gets its super smelling power from locust antennae.]

“[A]pproaches to read-out information from biological systems, especially neural signals, tend to be suboptimal due to the number of electrodes that can be used and where these can be placed,” Singamaneni and colleagues wrote in their new paper published in the journal Nature Nanotechnology. “By harnessing the photothermal properties of nanostructures… we show that the odor-evoked response from the interrogated regions of the insect olfactory system can not only be enhanced but can also improve odor identification.”

These tiny additives, made of a silicon shell-encased protein core, were first imbued with octopamine—a neurotransmitter associated with an insect’s “fight or flight” instinct. When exposed to infrared laser light, the nanoparticles heated up and released the neurotransmitter, boosting brain activity tied to the locust’s olfactory senses. This made it easier for scientists to locate that specific neural activity and use the (previously unreliable) electrodes to identify chemicals in a common lab test set.

[Related: This surgical smart knife can detect endometrial cancer cells in seconds.]

At the moment, these early demonstrations are more proof-of-concept than anything else. Speaking with New Scientist on Thursday, Singamaneni explained that the system currently only works in closed laboratory settings, not in real-time situations. Still, Singamaneni’s team hopes further research and experimentation may one day yield small swarms of cyborg-enhanced locusts capable of detecting medical issues in humans, locating explosives, or homing in on environmental contaminants.
