AI | Popular Science https://www.popsci.com/category/ai/ Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That's Popular Science, 145 years strong.

Watch a tech billionaire talk to his AI-generated clone https://www.popsci.com/technology/ai-clone-interview/ Wed, 01 May 2024 19:12:52 +0000 https://www.popsci.com/?p=613256

The deepfake double picks its nose in a very weird interview.

Side by side of Reid AI deepfake and Reid Hoffman
Both Hoffmans appear to miss the larger point during their lengthy interview. YouTube

Billionaire LinkedIn co-founder Reid Hoffman recently released a video ‘interview’ with his new digital avatar, Reid AI. Built on a custom GPT trained on two decades’ worth of Hoffman’s books, articles, speeches, interviews, and podcasts, Reid AI uses speech and video deepfake technology to create a digital clone that approximates its source subject’s mannerisms and conversational tone. For over 14 minutes, you can watch the two Hoffmans gaze lovingly and dead-eyed, respectively, into the tech industry’s uncanny navel. In doing so, viewers aren’t offered a “way to be better, to be more human,” as the real Hoffman argues—but a way toward a misguided, dangerous, unethical, and hollow future.


Many people might shudder at the idea of unleashing a talking, animated AI avatar of themselves into the world, but the tech utopian “city of yesterday” investor sounds absolutely jazzed about it. According to an April 24 blog post, he finds the whole prospect so “interesting and thought-provoking,” in fact, that he recently partnered with generative AI video company Hour One and the AI audio startup ElevenLabs to make it happen. (If that latter name sounds familiar, it’s because ElevenLabs’ product is what scammers misused to create those audio deepfake Biden robocalls earlier this year.)

After teasing a showcase of his digital clone for months, Hoffman finally revealed a (heavily edited) video conversation between himself and “Reid AI” last week. And what does the cutting-edge, deepfake-animated culmination of a custom-built GPT-4 chatbot reportedly trained on all things Hoffman have to say? A solid question—and one that isn’t easy to answer after watching the surreal, awkward, and occasionally unhygienic simulated interaction.

“Why would I want to be interviewed by a digital version of myself?” Hoffman posits at the video’s outset. First and foremost, it’s apparently to summarize one of his books for an array of potential audience demographics: the smartest person in the world, 5-year-old children, Seinfeld fans, and Klingons. While Hoffman seems to love each subsequent Blitzscaling encapsulation (particularly the “smartest person” one), they all sound like they came from a ChatGPT prompt—which, technically, they did. The difference here is that, instead of only a text answer, the words get a Hoffman vocal approximation layered atop a (still clearly artificial) video rendering of the man.

Amidst all his excitement, Hoffman—like so many influential tech industry figures—yet again betrays a fundamental misunderstanding of how generative AI works. Technology like OpenAI’s GPT, no matter how gussied up with visual and audio additions, is not capable of comprehension. When an AI responds, “Thank you” or “I think that’s a great point,” it doesn’t actually experience gratitude or think anything. Generative AI sees sentences as strings of symbols, each letter or space followed by the next, most probable letter or space. This can be adapted into conversational audio and dubbed onto video personas, but that doesn’t change the underlying functionality. The system simply receives new symbolic input that influences what basically amounts to a superpowered autocorrect system. Even if its language is set to Klingon, as Reid AI offers at one point.
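
The “superpowered autocorrect” framing can be made concrete with a minimal next-token sampling loop. The probability table below is a toy stand-in invented for this sketch; a real model like GPT scores every token in a vocabulary of tens of thousands using a neural network, but the generation loop has the same shape:

```python
import random

# Toy next-token probabilities standing in for a trained language model.
NEXT_TOKEN_PROBS = {
    "thank": {"you": 0.9, "goodness": 0.1},
    "you": {"for": 0.6, "are": 0.4},
    "for": {"the": 0.7, "your": 0.3},
}

def generate(prompt: str, steps: int = 3) -> str:
    """Repeatedly sample a likely next token and append it."""
    tokens = prompt.lower().split()
    for _ in range(steps):
        options = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not options:  # no continuation known in this toy table
            break
        words, weights = zip(*options.items())
        tokens.append(random.choices(words, weights=weights, k=1)[0])
    return " ".join(tokens)

print(generate("Thank"))  # e.g. "thank you for the"
```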

So when Reid AI warns Hoffman a wrong answer may result “because I misinterpreted the information you gave, or I don’t have the full context of your question,” Hoffman doesn’t pause to explain any of the above facts for viewers. He instead moves along to his next conversation point, which usually involves a plug for his books or LinkedIn.

[Related: A deepfake ‘Joe Biden’ robocall told voters to stay home for primary election.]

Meanwhile, Reid AI’s visual component is supposedly meant to simulate many of Hoffman’s conversational mannerisms and cues. Judging from Reid AI’s performance, these largely boil down to stilted attempts at “nodding vigorously,” “emphatically tapping to illustrate a point,” and “picking his nose.” As New Atlas points out, the moment at 10:44 is an odd quirk to include in such a clearly condensed and edited video—perhaps meant to illustrate some of humanity’s more awkward, relatable traits. If so, it does little to distract from the far more absurd and troubling sentiments voiced by both Hoffmans.

Reid AI expounds on boilerplate techno-libertarian talking points for fostering a “framework that fuels innovation.” Hoffman repeatedly opines that any concerns about bias, privacy, labor, and digital ownership are just “start[ing] with the negative and [not realizing] all the things that are positive.” The digital clone regurgitates bland, uncreative ways to spruce up Hoffman’s LinkedIn page, like adding “personal flair” such as a fun and colorful header image.

Reid AI and Reid Hoffman side by side
Credit: YouTube

But the most worrisome moment arrives when Hoffman contends that everyone should be asking themselves, “What can I do to help?” to make AI tools like digital avatars more commonplace. He even goes so far as to equate the current technological era to Europe’s adoption of the steam engine, which made it “such a dominant force in the entire world.” (Neither he nor Reid AI concedes the other tools involved in the industrial revolution, of course—namely a colonialist system built on the labor of millions of exploited and enslaved people.)

Hoffman says future iterations of Reid AI will add “to the range of capabilities, of things that I could do.” It’s an extremely telling sentiment—one implying people like Hoffman have no qualms with erasing any demarcation between their cloned and authentic selves. If nothing else, Hoffman has already found at least one task Reid AI can handle for him.

“I am curious to know what others’ thoughts are on how to mitigate impersonation and all other types of risks stemming from such a use-case and achieve ‘responsible AI,’” one LinkedIn user asked him in his announcement post’s comments.

“Great question… Here is Reid AI’s answer,” Hoffman responded alongside a link to a new avatar clip.

Boston Dynamics gives Spot bot a furry makeover https://www.popsci.com/technology/furry-boston-dynamics-spot/ Tue, 30 Apr 2024 19:04:16 +0000 https://www.popsci.com/?p=613083

'Sparkles' shows off the latest in robo-dog choreography.

Boston Dynamics Spot robot in puppet dog costume sitting next to regular Spot robot.
That's certainly one way to honor 'International Dance Day.' Boston Dynamics/YouTube

Boston Dynamics may have relocated the bipedal Atlas to a nice farm upstate, but the company continues to let everyone know its four-legged Spot robots have a lot of life left in them. And after years of obvious dog-bot comparisons, Spot’s makers finally went ahead and commissioned a full cartoon canine getup for their latest video showcase. Sparkles is here, and like the rest of its Boston Dynamics family, it’s perfectly capable of cutting a rug.


Unlike, say, a mini Spot programmed to aid disaster zone search-and-rescue efforts or explore difficult-to-reach areas in nuclear reactors, Sparkles appears designed purely to offer viewers some levity. According to Boston Dynamics, the shimmering, blue, Muppet-like covering is a “custom costume designed just for Spot to explore the intersections of robotics, art, and entertainment” in honor of International Dance Day. In the brief clip, Sparkles can be seen performing a routine alongside a more standardized mini Spot, sans any extra attire.

But Spot bots such as this duo aren’t always programmed to dance for humanity’s applause—their intricate movements highlight the complex software built to take advantage of the machine’s overall maneuverability, balance, and precision. In this case, Sparkles and its partner were trained using Choreographer, a dance-dedicated system made available by Boston Dynamics with entertainment and media industry customers in mind.

[Related: RIP Atlas, the world’s beefiest humanoid robot.]

With Choreographer, Spot owners don’t need a degree in robotics or engineering to get their machines to move in rhythm. They can select from “high-level instruction” options instead of keying in specific joint angle and torque parameters. And even if a Boston Dynamics robot running Choreographer can’t quite pull off a user’s routine, it is coded to approximate the request as closely as possible.

“If asked to do something physically impossible, or if faced with an environmental challenge like a slippery floor, Spot will find the possible motion most similar to what was requested and do that instead—analogously to what a human dancer would do,” the company explains.
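
The article doesn’t describe Choreographer’s internals, but the behavior the quote describes (substituting the nearest achievable motion for an infeasible request) can be illustrated with a simple clamp-to-limits projection. The joint names and limits below are invented for the example and are not Spot’s real parameters:

```python
# Hypothetical joint limits in radians; real Spot limits differ.
JOINT_LIMITS = {
    "hip": (-0.8, 0.8),
    "knee": (-1.2, 1.2),
    "ankle": (-0.6, 0.6),
}

def closest_feasible(requested: dict) -> dict:
    """Project a requested pose onto the box defined by the joint limits."""
    feasible = {}
    for joint, angle in requested.items():
        low, high = JOINT_LIMITS[joint]
        feasible[joint] = max(low, min(high, angle))
    return feasible

# A choreographer asks for an exaggerated kick the hardware can't reach;
# the controller quietly substitutes the nearest pose it can hit.
print(closest_feasible({"hip": 1.5, "knee": -0.4, "ankle": 0.9}))
# {'hip': 0.8, 'knee': -0.4, 'ankle': 0.6}
```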

Choreographer is behind some of Boston Dynamics’ most popular demo showcases, including the BTS dance-off and “Uptown Funk” videos. It’s nice to see the robots’ moves are consistently improving—but maybe nicer still is that it’s at least one more moment people don’t need to think about a gun-toting dog bot. Or even what’s in store for humanity after that two-legged successor to Atlas finally hits the market.

Can AI help tell the difference between a good and bad sweet potato? https://www.popsci.com/technology/sweet-potato-ai/ Thu, 25 Apr 2024 18:13:48 +0000 https://www.popsci.com/?p=612561

Scientists used hyperspectral imaging to sort produce.

Researchers used a hyperspectral camera to create images of 141 potatoes and inspect their firmness and dry matter content.
Researchers used a hyperspectral camera to create images of 141 potatoes and inspect their firmness and dry matter content. Llez/Wikimedia

Most grocery store patrons take for granted just what it takes to transport a humble sweet potato out of the ground and into a shopping basket. The slightly sweet red root vegetable can come in various sizes and flavor profiles, but consumers have come to expect a level of consistency. To meet that market demand, sweet potatoes are subjected to rounds of laborious and time-consuming quality assessments to root out undesirable batches that are either too firm, not sweet enough, or otherwise deemed unlikely to sell. This process is currently performed methodically by humans in a lab, but a new study suggests hyperspectral cameras and AI could help speed it up.

In a study published this week in Computers and Electronics in Agriculture, researchers from the University of Illinois set out to see if data collected by a hyperspectral imaging camera could help narrow down certain potato attributes typically determined by manual inspectors and tests. Hyperspectral cameras collect vast amounts of data across the electromagnetic spectrum and are often used to help determine the chemical makeup of materials. In this case, the researchers wanted to see if they could analyze data from the potato images to accurately determine a spud’s firmness, soluble solid content, and dry matter content—three key attributes that contribute to the vegetable’s overall taste and market appeal. Ordinarily, this process requires tedious, sometimes wasteful testing that can include baking test potatoes in a 103-degree-Celsius oven for 24 hours.

“Traditionally, quality assessment is done using laboratory analytical methods,” University of Illinois College of Agricultural, Consumer and Environmental Sciences assistant professor Mohammed Kamruzzaman said in a statement. “You need different instruments to measure different attributes in the lab and you need to wait for the results.”

The researchers gathered 141 defect-free sweet potatoes and took photos from multiple angles. Hyperspectral imaging produces torrents of data, which can be both a blessing and a curse for researchers looking for specific variables. To solve that problem, the researchers used an AI model to filter the noisy data down to a handful of key wavelengths. They were then able to connect those wavelengths to the specific desirable sweet potato attributes they were looking for.
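
The paper’s exact pipeline isn’t spelled out here, but the general pattern it describes (reducing hundreds of spectral bands to a few informative wavelengths, then regressing a quality attribute on them) might look roughly like the sketch below on synthetic data. The use of scikit-learn, the band counts, and the selection method are illustrative assumptions, not the researchers’ actual choices:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in: 141 sweet potatoes x 200 spectral bands,
# with dry matter content (%) as the target attribute.
n_samples, n_bands = 141, 200
spectra = rng.normal(size=(n_samples, n_bands))
dry_matter = 25 + 3 * spectra[:, 50] - 2 * spectra[:, 120] + rng.normal(scale=0.5, size=n_samples)

# Step 1: keep only the most informative wavelengths.
selector = SelectKBest(f_regression, k=8).fit(spectra, dry_matter)
key_bands = selector.get_support(indices=True)

# Step 2: regress the quality attribute on those bands alone.
model = LinearRegression().fit(spectra[:, key_bands], dry_matter)
print("selected band indices:", key_bands)
print("R^2 on training data:", round(model.score(spectra[:, key_bands], dry_matter), 3))
```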

“With hyperspectral imaging, you can measure several parameters simultaneously. You can assess every potato in a batch, not just a few samples,” Kamruzzaman added.

AI and hyperspectral cameras could speed up vegetable inspection

The researchers argue farmers and food inspectors could use their combination of hyperspectral imaging and AI to accurately and cost-effectively scan sweet potatoes for key attributes while simultaneously cutting down on the food waste created as a byproduct of traditional testing. And while this particular study focused on sweet potatoes, it’s possible similar tactics could be used to find desired features in a host of other vegetables and fruits as well. Kamruzzaman says he and his colleagues eventually want to create a tool that can quickly and easily scan whole sweet potato batches. On the consumer side, the researchers envision one day building an app grocery store patrons could use to scan a potato and look up its particular attributes. Such an app, in theory, could cut down on patrons awkwardly fondling their produce.

“We believe this is a novel application of this method for sweet potato assessment,” doctoral student and study lead author Toukir Ahmed wrote. “This pioneering work has the potential to pave the way for usage in a wide range of other agricultural and biological research fields as well.”

The agriculture industry is increasingly turning to AI solutions to try to ramp up efficiency and head off growing farm labor shortages. From autonomous tulip-inspecting machines in Holland to self-driving John Deere tractors, farmers across the world are hoping these new innovations can eventually drive down food prices and increase their own profitability at the same time. How exactly that will all play out, however, remains to be seen. Agricultural gains derived from AI may also take longer to benefit economically developing countries, where some farming is still done by hand.

The algorithmic ocean: How AI is revolutionizing marine conservation https://www.popsci.com/technology/ai-marine-conservation/ Sat, 20 Apr 2024 16:00:00 +0000 https://www.popsci.com/?p=611727

Driven by a childhood marked by war and environmental devastation, Dyhia Belhabib developed an innovative technology to combat illegal fishing.

The Cutter Douglas Munro and crew searching for illegal, unreported, and unregulated fishing activity including high seas driftnet fishing.
The Cutter Douglas Munro and crew searching for illegal, unreported, and unregulated fishing activity including high seas driftnet fishing. U.S. Coast Guard

This article was originally featured on MIT Press Reader.

Dyhia Belhabib’s journey to becoming a marine scientist began with war funerals on TV. Her hometown, on the pine-forested slopes of the Atlas Mountains in northern Algeria, lies only 60 miles from the Mediterranean Sea. But a trip to the beach was dangerous. A bitter civil war raged across the mountains as she was growing up in the 1990s; the conflict was particularly brutal for Belhabib’s people, the Berbers, one of the Indigenous peoples of North Africa. As she puts it: “We didn’t go to the ocean much, because you could get killed on the way there.”

The ocean surfaced in her life in another way, on state-run television. When an important person was assassinated or a massacre occurred, broadcasters would interrupt regular programming to show a sober documentary. They frequently chose a Jacques Cousteau film, judged sufficiently dignified and neutral to commemorate the deaths. Whenever she saw the ocean on television, Belhabib would wonder who had died. “My generation thinks of tragedies when we see the ocean,” she says. “I didn’t grow to love it in my youth.”

By the time she was ready for university, the civil war had ended. The Islamists had lost the war, but their cultural influence had grown. Engaged at 13 to a fiancé who wanted her to become a banker, Belhabib chafed at the restrictions. Her given name, Dyhia, refers to a Berber warrior queen who successfully fought off invading Arab armies over a thousand years ago; Queen Kahina, as she is also known, remains a symbol of female empowerment, an inspiration for Berbers and for the thousands of Algerian women who took up arms in the war of independence. In a society where one in four women cannot read, Belhabib realized she didn’t want to go to university only to spend her life “counting other people’s money.”

One day, her brother’s friend visited their house. He was a student in marine sciences in the capital city, Algiers. When he described traveling out to sea, Belhabib felt a calling for an entirely unexpected path. “It was,” she recalls, “a career I had never heard of, and one that challenged every stereotype of women in Algerian society.” Soon after the visit, she moved to Algiers to study at the National Institute of Marine Sciences and Coastal Management, where she was one of the only women in her program. She also broke off the engagement with her fiancé, so that she could focus full-time on studies. She still vividly remembers her feelings of freedom, fear, and unreality on her first trip out to sea. While other students dove for samples, she floated on top of the water, trying to survive. “I never learned how to swim, and I still don’t know how,” she admits.

Belhabib graduated at the top of her class, but was repeatedly rejected when she applied to universities overseas. Her luck turned when she met Daniel Pauly, one of the world’s most famous fish scientists, at a conference. Unintimidated by the fact that Pauly had just won the Volvo Prize—the environmental equivalent of a Nobel—she introduced herself and told him she wanted to study with his team. Although she did not yet speak fluent English, Pauly accepted her as a student. When she began her doctoral research, over 90 percent of the world’s wild fisheries had been eradicated, and Pauly was sounding the alarm about a new, global surge in illegal fishing that was decimating marine food webs and depriving coastal communities of livelihoods. He wanted her to work on Africa, where illegal fishing had reached epidemic proportions.

Belhabib spent the next few years in West Africa. When her research uncovered the extent of illegal fishing to feed Chinese and European markets, she made the front page of the New York Times. “Being African myself, I was able to bring people together to openly share data in a way they never had before,” she explains. It’s not hard to imagine her corralling government officials: Disarmingly frank and engagingly energetic, the whip-smart, hijab-wearing Belhabib stands a little over five feet tall and talks a mile a minute, with a self-deprecating laugh and a talent for gently posed, bitingly direct questions.

Her startling findings touched a nerve. Tens of thousands of boats commit fishing crimes every year, but no global repository of fishing crimes exists. A fishing vessel will often commit a crime in one jurisdiction, pay a meager fine, and sail off to another jurisdiction, thus operating with impunity. If a global database of fishing vessel criminal records could be created, Belhabib realized, there would be nowhere left to hide. She suggested the idea to a variety of international organizations, but the issue was a political hot potato; national sovereignty, they argued, prevented them from tracking international criminals. Undeterred, Belhabib decided to build the database herself. Late at night, while her infant son was sleeping, she began combing through government reports and news articles in dozens of languages (she speaks several fluently). Her database grew, word spread, and her network of informants—often government officials frustrated with international inaction on illegal fishing—began expanding. She moved to a small nonprofit and began advising Interpol and national governments. The database, christened Spyglass, grew into the world’s largest registry of the criminal history of industrial fishing vessels and their corporate backers. But the registry, Belhabib knew, was useful only if the information made its way into the right hands. So in 2021 she cofounded Nautical Crime Investigation Services, a startup that uses AI and customized monitoring technology to enable more effective policing of marine crimes and criminal vessels at sea. Together with her cofounder Sogol Ghattan, who has a background in ethical AI, she named their core algorithm ADA, in homage to Ada Lovelace—the woman who wrote the world’s first computer program.

Belhabib is attempting to tackle one of the most intractable problems in contemporary environmental conservation: illegal fishing. Across the oceans, the difficulty of tracking ships creates ideal cover for some of the world’s largest environmental crimes. After the end of World War II, the world’s fishing fleets rapidly industrialized. Wartime technologies that had been developed for detecting underwater submarines were repurposed for spotting fish. The size of nets grew exponentially, and offshore factory ships were outfitted so they could spend months at sea, extending the reach of industrial fishing into the furthest reaches of the ocean. As the world’s population grew, fish protein became an increasingly important source of food. But warning signs soon appeared: crashes in key fish populations, an alarming trend of “fishing down marine food webs,” and a series of cascading impacts that rapidly depleted marine ecosystems.

In the wake of depleting stocks, fishers should have responded by reducing their take. Instead, they redoubled their efforts. After the world’s leading fishing nations—China and Europe are the largest markets—overfished their own waters, they began exporting industrial overfishing to the global oceans. China’s offshore fishing fleet of several hundred thousand vessels, which received nearly $8 billion in government subsidies in 2018, is now the largest in the world.

Governments of wealthier nations subsidized massive fleets of corporate-backed vessels to fish the high seas, using bottom trawling and drift nets stretching for dozens of miles, killing everything in their path. Artisanal fishers were squeezed out, and as fish stocks collapsed, rising food insecurity generated protests and political unrest. In West Africa, for example, fishing boats from the world’s wealthiest nations have depleted local fisheries to such an extent that waves of migrants—faced with food insecurity and uncertain futures—have begun fleeing their homes in a desperate, risky attempt to reach European outposts such as the Spanish Canary Islands; thousands of migrants have died at sea. The smaller fishing fleet, meanwhile, has struggled to remain solvent; impoverished fishers are increasingly vulnerable targets for criminal organizations seeking mules for hire to transport drugs, or boats to serve as cover operations for human trafficking.

Over 90 percent of the world’s fish stocks are now fished to capacity or overfished. Despite this, scientists’ calls for reduced fishing have largely fallen on deaf ears. Conventional attempts to manage fisheries are stymied by the limits of logbooks, onboard human observers, and local electronic monitoring systems. Fishing boats that exceed quotas or fish in off-limits areas are rarely caught, operating with impunity in front of local fishermen’s eyes; and even when caught, they are even more rarely punished.

Marine panopticon

The world’s oceans are experiencing an onslaught: As fish have become scarcer, illegal fishing has surged. Rather than merely document the decline of fish stock, Belhabib decided to do something about it. Her solution: to combine ADA, her AI-powered database of marine crimes, with data that tracks vessel movements in real time. She began by tracking signals from the marine traffic transponders carried by oceangoing ships—also known as automatic information systems (AIS). AIS signals are detected by land transceivers or satellites and used to track and monitor individual vessel movements around the world. AIS signals are also detected by other ships in the vicinity, reducing the potential for ship collisions. Belhabib and her team then built an AI-powered risk assessment tool called GRACE (in honor of the pioneering coder Grace Hopper), which predicts risks of environmental crimes at sea. When combined with vessel detection devices such as AIS, GRACE provides real-time information on the likelihood of a particular ship committing environmental crimes, which can be used by enforcement agencies to catch the criminals in the act. Belhabib’s database means that criminal vessels—which often engage in multiple forms of crime, including human trafficking and drug smuggling, as well as illegal fishing—now find it much harder to hide.
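
The article doesn’t detail GRACE’s scoring logic, so the sketch below only illustrates the general idea of fusing a vessel’s live behavior with its documented history into a single risk number. Every field name, weight, and threshold here is an invented assumption, not the real system:

```python
from dataclasses import dataclass

@dataclass
class VesselSnapshot:
    prior_offenses: int          # count from a Spyglass-style registry (illustrative)
    hours_ais_dark: float        # hours since the transponder last reported
    inside_protected_area: bool
    flag_changes_last_year: int

def risk_score(v: VesselSnapshot) -> float:
    """Toy weighted score in [0, 1]; a real system would be trained, not hand-tuned."""
    score = 0.0
    score += min(v.prior_offenses, 5) * 0.1            # criminal history
    score += min(v.hours_ais_dark / 24.0, 1.0) * 0.2   # going "dark"
    score += 0.2 if v.inside_protected_area else 0.0   # location risk
    score += min(v.flag_changes_last_year, 2) * 0.05   # flag hopping
    return min(score, 1.0)

suspect = VesselSnapshot(prior_offenses=3, hours_ais_dark=36,
                         inside_protected_area=True, flag_changes_last_year=1)
print(risk_score(suspect))  # 0.75
```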

The high seas are one of the world’s last global commons, largely unregulated. The UN Convention on the Law of the Sea provides little protection for the high seas, two-thirds of the ocean’s surface. The adoption of a new United Nations treaty on the high seas in 2023 will create more protection, but this will require years to be implemented. Even within 200 nautical miles of the coast, where national authorities have legal jurisdiction, most struggle to monitor the oceans beyond the areas a few miles from the coast. And beyond the 200-mile limit, no one effectively governs the open ocean.

So Belhabib hands her data on human rights and labor abuses over to Global Fishing Watch, a not-for-profit organization that collaborates with the national Coast Guards and Interpol to target vessels suspected of illegal fishing for boarding, apprehend rogue fishing vessels, and police the boundaries of marine parks. The observatory visualizes, tracks, and shares data about global fishing activity in near real time and for free; launched at the 2016 U.S. State Department’s “Our Ocean” conference in Washington, it is backed by some of the world’s largest foundations. Its partners include Google (which provides tools for processing big data), the marine conservation organization Oceana, and SkyTruth—a not-for-profit that uses satellite imagery to advance environmental protection.

Global Fishing Watch uses satellite data on boat location, combined with Belhabib’s data on criminal activity, to train artificial intelligence algorithms to identify vessel types, fishing activity patterns, and even specific gear types (tasks that would require human fisheries experts hundreds of years to complete). The tracking system pinpoints each individual fishing vessel with laser-like accuracy, predicts whether it is actually fishing, and even identifies what type of fishing is underway. Their reports have revealed that half of the global ocean is actively fished, much of it covertly.
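
Global Fishing Watch’s production models are trained on satellite vessel-tracking data; the underlying intuition, that actively fishing vessels tend to move slowly and change heading often while transiting vessels hold speed and course, can be shown with a toy rule. The thresholds and track points below are hypothetical:

```python
from statistics import mean, pstdev

def looks_like_fishing(speeds_knots: list, headings_deg: list) -> bool:
    """Crude heuristic: slow, erratic movement suggests active fishing.
    Real classifiers learn far richer patterns from labeled AIS tracks."""
    slow = mean(speeds_knots) < 5.0
    erratic = pstdev(headings_deg) > 40.0
    return slow and erratic

print(looks_like_fishing([2.8, 3.1, 2.5, 3.4], [10, 95, 170, 260]))   # True (trawl-like track)
print(looks_like_fishing([12.1, 12.3, 11.9, 12.0], [88, 90, 89, 91])) # False (steady transit)
```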

Fred Abrahams, a researcher with Human Rights Watch, explains that this approach is just one example of a new generation of conservation technology that could act as a check on anyone engaged in resource exploitation. His team at Human Rights Watch uses satellite imagery to track everything from illegal mining to undercover logging operations. As Abrahams says: “This is why we are so committed to these technologies . . . they make it much harder to hide large-scale abuses.” Abrahams, like other advocates, is confident that the glitches—for example, AIS tags are not yet carried by all fishing vessels globally, poor reception makes coverage in some regions challenging, and some boats turn off the AIS when they want to go into stealth mode—will eventually be solved. Researchers have recently figured out, for example, how to use satellites to triangulate the position of fishing boats in stealth mode—enabling tracking of so-called dark fleets. These results can inform a new era of independent oversight of illegal fishing and transboundary fisheries. Meanwhile, researchers are developing other applications for AIS data, including assessments of the contribution of ship exhaust emissions to global air pollution, the exposure of marine species to shipping noise, and the extent of forced labor—often hidden, and linked to human trafficking—on the world’s fishing fleets.

It’s a herculean task for one organization to police the world’s oceans. And Global Fishing Watch’s data is mostly retroactive; by the time the data is analyzed and the authorities have arrived, fishing vessels have often left the scene. What is still lacking is a method for marine criminals to be more effectively tracked in real time, and apprehended locally. This is where Belhabib’s next venture comes in. She is now working with local governments in Africa—where much illegal fishing is concentrated—to provide them with trackers and AI-powered technologies to catch illegal fishing and other maritime crimes in the act. As she notes: “When you ask the Guinean Navy how much of their territorial waters they can actually monitor, it’s only a fraction of a vast area. They simply don’t have the resources.” Belhabib’s system pinpoints vessels that may be committing infractions, and assesses the risk live on screen. This allows the Coast Guard and other agencies such as Interpol to more easily find illegal fishers, while reducing the costs of deployment, monitoring, and interdiction.

She cautions, however, about the use of similar digital technologies to track illegal migrants. The European Union, for example, has strengthened its “digital frontier” through satellite monitoring, unmanned drones, and remotely piloted aircraft, in some cases relying on private security and defense companies to undertake data analytics and tracking. But these technologies are often focused on surveillance rather than search and rescue of migrants stranded at sea. As Belhabib relates: “Recently I spoke with the Spanish Navy and they told me they watched over 100 people die when a boat full of migrants capsized and they could only save a few people. They told me, ‘We take their fish away, they risk their lives to have a better and decent life.’ It’s heartbreaking and avoidable.” In Belhabib’s view, Digital Earth technologies should prioritize ecological and humanitarian goals, rather than surveillance and profit.

Digital Earth technologies enable more rapid detection and, in some cases, prediction of marine crimes. Digital monitoring, combined with artificial intelligence, allows precise analysis of fishing vessel locations and movements at a global scale. Although this does not guarantee enforcement, it could enable more efficient policing of the world’s oceans. The use of digital technologies enables conservationists to tackle two common flaws that lead to failures in environmental enforcement. First: data is scarce; if available, there is often a time lag, geographical gaps, or data biases. This makes evidence-gathering difficult or impossible. Second, enforcement often comes too late. Environmental criminals can be prosecuted, but legal victories are uncertain, and happen after the damage has been done. These shortcomings of contemporary environmental governance—sparse data, unenforceable regulations, and patchy, sporadic enforcement that punishes but fails to prevent environmental harm—can be overcome by digital monitoring, which mobilizes abundant data in real time to gather systematic evidence and enable timely enforcement.

These techniques appear to be achieving some success. In Ghana, for example, there has been a long-standing conflict between industrial fishing boats and small-scale, artisanal fishers using canoes and small boats to fish near the shore. Satellite data has helped the government’s Fisheries Enforcement Unit track and reduce the incursions of larger fishing boats into near-shore waters. In Indonesia, the world’s largest archipelago country with the second-longest coastline in the world, the government has entered into an agreement with Global Fishing Watch to monitor fisheries and share data about vessels’ movements publicly online, a major step forward in transparency in fisheries enforcement. The Indonesian partnership is an example of the longer-term aim of Global Fishing Watch: to share its geospatial datasets and online mapping platform with governments around the world.

Despite these recent gains to combat illegal fishing, digital tech is also exacerbating the underlying problem, as fishers themselves have started taking advantage of digital strategies. One example is the growing use of fish aggregating devices, which use acoustic technology, combined with satellite-linked global positioning systems, to better spot schools of fish. Fishers can effectively assess location, biomass, and even species, allowing them to aggregate and fish more efficiently. Digitization is ratcheting up the already intensely competitive fishing industry and accelerating the overfishing of endangered species.

Even if conservationists can win this digital arms race, there is a more fundamental problem: The underlying structural drivers of overfishing—consumer demand, particularly in Asia and Europe, and a lack of adequate governance for the high seas—are not solvable by digital technologies alone. Governance reform and digital innovation must work in tandem. For example, in the absence of government regulation, digital monitoring of fishing on the open ocean would be unlikely to scale up. But the adoption of the new UN treaty on the high seas in 2023 included a significant commitment to creating new Marine Protected Areas, aligned with Global Biodiversity Convention’s commitment to protect 30 percent of the Earth’s land and oceans by 2030.

These new developments create an impetus for digital monitoring; and, in turn, digital monitoring will increase the likelihood that Marine Protected Areas will be effective at protecting fish populations. This illustrates two key points about environmental governance in the 21st century: the interplay between digital and governance innovation, and the fact that planetary governance of the environment is possible only with planetary-scale computation.


Karen Bakker was a Guggenheim Fellow, a Professor at the University of British Columbia, and the Matina S. Horner Distinguished Visiting Professor at the Radcliffe Institute for Advanced Study at Harvard University. She was the author of “The Sounds of Life” (Princeton University Press) and “Gaia’s Web,” from which this article is excerpted. Karen Bakker died on August 14, 2023.

Startup pitches a paintball-armed, AI-powered home security camera https://www.popsci.com/technology/paintball-armed-ai-home-security-camera/ Mon, 15 Apr 2024 14:51:01 +0000 https://www.popsci.com/?p=610934

PaintCam Eve also offers a teargas pellet upgrade.

PaintCam Eve shooting paintballs at home
PaintCam Eve supposedly will guard your home using the threat of volatile ammunition. Credit: PaintCam

It’s a bold pitch for homeowners: What if you let a small tech startup’s crowdfunded AI surveillance system dispense vigilante justice for you?

A Slovenia-based company called OZ-IT recently announced PaintCam Eve, a line of autonomous property monitoring devices that will utilize motion detection and facial recognition to guard against supposed intruders. In the company’s zany promo video, a voiceover promises Eve will protect owners from burglars, unwanted animal guests, and any hapless passersby who fail to heed its “zero compliance, zero tolerance” warning.

The consequences for shrugging off Eve’s threats: Getting blasted with paintballs, or perhaps even teargas pellets.

“Experience ultimate peace of mind,” PaintCam’s website declares, as Eve will offer owners a “perfect fusion of video security and physical presence” thanks to its “unintrusive [sic] design that stands as a beacon of safety.”


And to the naysayers worried Eve could indiscriminately bombard a neighbor’s child with a bruising paintball volley, or accidentally hock riot control chemicals at an unsuspecting Amazon Prime delivery driver? Have no fear—the robot’s “EVA” AI system will leverage live video streaming to a user’s app, as well as employ a facial recognition system that would allow designated people to pass by unscathed.

In the company’s promotional video, there appears to be a combination of automatic and manual screening capabilities. At one point, Eve is shown issuing a verbal warning to an intruder, offering them a five-second countdown to leave its designated perimeter. When the stranger fails to comply, Eve automatically fires a paintball at his chest. Later, a man watches from his PaintCam app’s livestream as his frantic daughter waves at Eve’s camera to spare her boyfriend, which her father allows.

“If an unknown face appears next to someone known—perhaps your daughter’s new boyfriend—PaintCam defers to your instructions,” reads a portion of product’s website.

Presumably, determining pre-authorized visitors would involve letting Eve store 3D facial scans in its system for future reference. (Because facial recognition AI has such an accurate track record devoid of racial bias.) At the very least, the system would require owners to clear each unknown newcomer. Either way, the details are sparse on PaintCam’s website.
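
PaintCam hasn’t published how Eve actually decides when to shoot. The flow implied by the promo video (check the face against an allowlist, give the owner a chance to intervene, issue a countdown, then fire) might look something like the sketch below; every name and threshold in it is an assumption, not the product’s real logic:

```python
KNOWN_FACES = {"owner", "daughter"}  # hypothetical allowlist of enrolled faces

def handle_visitor(face_id, owner_approves) -> str:
    """Toy decision flow for a deterrent camera; not PaintCam's actual logic."""
    if face_id in KNOWN_FACES:
        return "allow"
    if owner_approves():               # e.g. a prompt pushed to the phone app
        return "allow"
    for remaining in range(5, 0, -1):  # five-second verbal warning
        print(f"Leave the property: {remaining}")
    return "fire paintball"

print(handle_visitor("daughter", owner_approves=lambda: False))  # allow
print(handle_visitor(None, owner_approves=lambda: False))        # fires after countdown
```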

Gif of PaintCam scanning boyfriend
What true peace of mind looks like. Credit: PaintCam

But as New Atlas points out, there aren’t exactly a bunch of detailed specs or price ranges available just yet, beyond the allure of suburban crowd control gadgetry. OZ-IT vows Eve will include all the smart home security basics like live monitoring, night vision, object tracking, and movement detection, as well as video storage and playback capabilities.

There are apparently “Standard,” “Advanced,” and “Elite” versions of PaintCam Eve in the works. The basic tier only gets owners “smart security” and “app on/off” capabilities, while Eve+ also offers animal detection. Eve Pro apparently is the only one to include facial recognition, which implies the other two models could be a tad more… indiscriminate in their surveillance methodologies. It’s unclear how much extra you’ll need to shell out for the teargas tier, too.

PaintCam’s Kickstarter is set to go live on April 23. No word on release date for now, but whenever it arrives, Eve’s makers promise a “safer, more colorful future” for everyone. That’s certainly one way of describing it.

Do ‘griefbots’ help mourners deal with loss? https://www.popsci.com/technology/ai-dead-loved-ones/ Sun, 14 Apr 2024 16:00:00 +0000 https://www.popsci.com/?p=610631

Bereaved people should temper their expectations when chatting with AI-driven simulations of their lost loved ones.

grave with flowers on it
An approach to grief that focuses on continuing bonds with the deceased loved one suggests that finding closure is about more than letting the person go. DepositPhotos

This article was originally featured on Undark.

Various commercial products known as “griefbots” create a simulation of a lost loved one. Built on artificial intelligence that makes use of large language models, or LLMs, the bots imitate the particular way the deceased person talked by using their emails, text messages, voice recordings, and more. The technology is supposed to help the bereaved deal with grief by letting them chat with the bot as if they were talking to the person. But we’re missing evidence that this technology actually helps the bereaved cope with loss.
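
The products’ internals aren’t described here, but the basic technique the passage outlines, conditioning a large language model on samples of a person’s own writing so that replies mimic their voice, can be sketched as assembling a persona prompt. The function, corpus, and wording below are placeholders, not any company’s actual approach:

```python
def build_persona_prompt(name: str, sample_messages: list) -> str:
    """Assemble a system prompt asking a chat model to imitate someone's
    style from excerpts of their writing. Placeholder approach only."""
    excerpts = "\n".join(f"- {m}" for m in sample_messages[:20])
    return (
        f"You are simulating how {name} wrote and spoke. "
        "Match their tone, vocabulary, and typical phrasing.\n"
        f"Examples of {name}'s messages:\n{excerpts}\n"
        "Always acknowledge, when asked, that you are a simulation."
    )

prompt = build_persona_prompt("Alex", ["see you at dinner, kiddo", "proud of you, always"])
print(prompt)
```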

Humans have used technology to deal with feelings of loss for more than a century. Post-mortem photographs, for example, gave 19th century Victorians a likeness of their dead to remember them by, when they couldn’t afford a painted portrait. Recent studies have provided evidence that having a drawing or picture as a keepsake helps some survivors to grieve. Yet researchers are still learning how people grieve and what kinds of things help the bereaved to deal with loss.

An approach to grief that focuses on continuing bonds with the deceased loved one suggests that finding closure is about more than letting the person go. Research and clinical practice show that renewing the bond with someone they’ve lost can help mourners deal with their passing. That means griefbots might help the bereaved by letting them transform their relationship to their deceased loved one. But a strong continuing bond only helps the bereaved when they can make sense of their loss. And the imitation loved ones could make it harder for people to do that and accept that their loved one is gone.

Carla Sofka, a professor of social work at Siena College in New York state, is an expert on technology and grief. As the internet grew in the mid-1990s, she coined the term “thanatechnology” to describe any technology—including digital or social media—that helps someone deal with death, grief, and loss, such as families and friends posting together on the social media profile of a deceased loved one or creating a website in their memory. Other survivors like rereading emails from the deceased or listening to their recorded voice messages. Some people may do this for years as they come to terms with the intense emotions of loss.

If companies are going to build AI simulations of the deceased, then “they have to talk to the people who think they want this technology” to better create something that meets their needs, Sofka said. Current commercial griefbots target different groups. Seance AI’s griefbot, for example, is intended for short-term use to provide a sense of closure, while the company You, Only Virtual—or YOV—promises to keep someone’s loved one with them forever, so they “never have to say goodbye.”

But if companies can create convincing simulations of people who died, Sofka said it’s possible that could change the whole reality of the person being gone. Though we can only speculate, it might affect the way people who knew them grieve. As Sofka wrote in an email, “everyone is different in how they process grief.” Griefbots could give the bereaved a new tool to cope with grief, or they could create the illusion that the loved one isn’t gone and force mourners to confront a second death if they want to stop using the bot.

Public health and technology experts, such as Linnea Laestadius of the University of Wisconsin-Milwaukee, are concerned griefbots could trap mourners in secluded online conversations, unable to move on with their lives. Her work on chatbots suggests people can form strong emotional ties to virtual personas that make them dependent on the program for emotional support. Given how hard it is to predict how such chatbots will affect the way people grieve, Sofka wrote in an email, “it’s challenging for social scientists to develop research questions that capture all possible reactions to this new technology.”  

That hasn’t stopped companies from releasing their products. But to develop griefbots responsibly, it’s not just about knowing how to make an authentic bot and then doing it, said Wan-Jou She, an assistant professor at the Kyoto Institute of Technology.

She collaborated with Anna Xygkou, a doctoral student at the University of Kent, and other coauthors on a research project to see how chatbot technologies can be used to support grief. They interviewed 10 people who were using virtual characters created by various apps to cope with the loss of a loved one. Five of their participants chatted with a simulation of the person they lost, while the others used chatbots that took on different roles, such as a friend. Xygkou said that the majority of them talked to the characters for less than a year. “Most of them used it as a transitional stage to overcome grief, in the first stage,” she said, “when grief is so intense you cannot cope with the loss.” Left to themselves, these mourners chose a short-term tool to help them deal with loss. They did not want to recreate a loved one to keep them at their side for life. While this study suggests that griefbots can be helpful to some bereaved people, more studies will be needed to show that the technology doesn’t harm them—and that it helps beyond this small group.

What’s more, the griefbots didn’t need to convince anyone they were human. The users interviewed knew they were talking to a chatbot, and they did not mind. They suspended their disbelief, Xygkou said, to chat with the bot as though they were talking to their loved ones. As anyone who has used LLM-driven chatbots knows, it’s easy to feel like there’s a real person on the other side of the screen. During the emotional upheaval of losing a loved one, indulging this fantasy could be especially problematic. That’s why simulations must make clear that they’re not a person, Xygkou said.

Critically, according to She, chatbots are currently not under any regulation, and without that, it’s hard to get companies to prove their products help users to deal with loss. Lax lawmaking has encouraged other chatbot apps to claim they can help improve mental health without providing any evidence. As long as these apps categorize themselves as wellness rather than therapy, the U.S. Food and Drug Administration will not enforce its requirements, including that apps prove they do more good than harm. Though it’s unclear which regulatory body will be ultimately responsible, it is possible that the Federal Trade Commission could handle false or unqualified claims made by such products.

Without much evidence, it’s uncertain how griefbots will affect the way we deal with loss. Usage data doesn’t appear to be public, but She and Xygkou had so much trouble finding participants for their study that Xygkou thinks not many mourners currently use the technology. But that could change as AI continues to proliferate through our lives. Maybe more people will use griefbots as the shortage of qualified mental health professionals worsens. People may become more comfortable talking to computers, or poor oversight might mean that many people won’t know they are talking to a computer in the first place. So far, neither questionable ethics nor tremendous cost has prevented companies from trying to use AI any chance they get.

But no matter what comfort a bereaved person finds in a bot, by no means should they trust it, She said. When an LLM is talking to someone, “it’s just predicting: what is the next word.”

Watch a tripod robot test its asteroid leaping skills https://www.popsci.com/technology/spacehopper-zero-gravity/ Fri, 12 Apr 2024 13:35:48 +0000 https://www.popsci.com/?p=610621

SpaceHopper maneuvered in zero gravity aboard a parabolic flight.

SpaceHopper robot in midair during parabolic flight test
SpaceHopper is designed to harness an asteroid's microgravity to leap across its surface. Credit: ETH Zurich / Nicolas Courtioux

Before astronauts leave Earth’s gravity for days, weeks, or even months at a time, they practice aboard NASA’s famous parabolic flights. During these intense rides in modified passenger jets, trainees experience a series of stomach-churning climbs and dives that briefly create zero-g conditions. Recently, however, a robot received a similar education to its human counterparts—potentially ahead of its own journeys to space.

A couple of years back, eight students at ETH Zürich in Switzerland helped design the SpaceHopper. Engineered specifically to handle low-gravity environments like asteroids, the small, three-legged bot is meant to (you guessed it) hop across its surroundings. Using a neural network trained in simulations with deep reinforcement learning, SpaceHopper is built to jump, coast along by leveraging an asteroid’s low gravity, then orient and stabilize itself midair before safely landing on the ground. From there, it repeats this process to efficiently span large distances.
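
The ETH Zürich team’s training setup isn’t detailed in the article, but the reinforcement-learning objective it alludes to (reward the robot for reaching a commanded midair orientation while discouraging wasted effort) can be sketched as a simple reward function. The weights and inputs are illustrative assumptions, not the team’s actual values:

```python
import numpy as np

def orientation_reward(current_quat: np.ndarray,
                       target_quat: np.ndarray,
                       joint_torques: np.ndarray) -> float:
    """Toy RL reward: small orientation error is good, large torques are penalized."""
    # Angle between the two attitudes, via the quaternion dot product.
    dot = abs(float(np.dot(current_quat, target_quat)))
    angle_error = 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))
    effort = float(np.sum(joint_torques ** 2))
    return -1.0 * angle_error - 0.001 * effort

r = orientation_reward(
    np.array([1.0, 0.0, 0.0, 0.0]),      # current attitude (identity quaternion)
    np.array([0.924, 0.0, 0.383, 0.0]),  # target: roughly a 45-degree rotation
    np.array([0.2, -0.1, 0.15]),         # torques applied by the three legs
)
print(round(r, 2))  # roughly -0.78
```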

But it’s one thing to design a machine that theoretically works in computer simulations—it’s another thing to build and test it in the real-world.


Sending SpaceHopper to the nearest asteroid isn’t exactly a cost-effective or simple way to conduct a trial run. But thanks to the European Space Agency and Novespace, a company specializing in zero-g plane rides, the robot could test out its moves in the next best thing.

Over the course of a recent 30-minute parabolic flight, researchers let SpaceHopper perform in a small enclosure aboard Novespace’s Airbus A310 for upwards of 30 zero-g simulations, each lasting 20 to 25 seconds. In one experiment, handlers released the robot in midair once the plane hit zero gravity, then observed it resituate itself to specific orientations using only its leg movements. In a second test, the team programmed SpaceHopper to leap off the ground and reorient itself before gently colliding with a nearby safety net.

Because a parabolic flight creates a completely zero-g environment, SpaceHopper actually made its debut in less gravity than it would experience on a hypothetical asteroid. As a result, the robot couldn’t “land” as it would in a microgravity situation, but demonstrating its ability to orient and adjust in real time was still a major step forward for the researchers.

[Related: NASA’s OSIRIS mission delivered asteroid samples to Earth.]

“Until that moment, we had no idea how well this would work, and what the robot would actually do,” SpaceHopper team member Fabio Bühler said in ETH Zürich’s recent highlight video. “That’s why we were so excited when we saw it worked. It was a massive weight off of our shoulders.”

SpaceHopper’s creators believe deploying their jumpy bot to an asteroid one day could help astronomers gain new insights into the universe’s history, as well as provide information about our solar system’s earliest eras. Additionally, many asteroids are filled with valuable rare earth metals—resources that could provide a huge benefit across numerous industries back home.

Ready or not, AI is in our schools https://www.popsci.com/technology/ai-in-schools/ Thu, 11 Apr 2024 18:24:32 +0000 https://www.popsci.com/?p=610551

(We’re not ready.)

students using AI in class
Around one in five high school-aged teens who’ve heard about ChatGPT say they have already used the tools on classwork, according to a recent Pew Research survey. Philipp von Ditfurth/picture alliance via Getty Images

Students worldwide are using generative AI tools to write papers and complete assignments. Teachers are using similar tools to grade tests. What exactly is going on here? Where is all of this heading? Can education return to a world before artificial intelligence? 

How many students are using generative AI in school?  

Many high school and college-age students embraced popular generative AI writing tools like OpenAI’s ChatGPT almost as soon as they started gaining international attention in 2022. The incentive was pretty clear. With just a few simple prompts, large language models (LLMs) at the time could draw on the vast corpus of articles, books, and archives they were trained on and spit out relatively coherent short-form essay or question responses in seconds. The language wasn’t perfect and the models were prone to fabricating facts, but they were good enough to skirt past some educators, who, at the time, weren’t primed to spot telltale signs of AI manipulation.

The trend caught on like wildfire. Around one in five high school-aged teens who’ve heard about ChatGPT say they have already used the tools on classwork, according to a recent Pew Research survey. A separate report from ACT, which creates one of the two most popular standardized exams for college admission, claims nearly half (46%) of high school students have used AI to complete assignments. Similar trends are playing out in higher education. More than a third of US college students (37%) surveyed by the online education magazine Intelligent.com say they’ve used ChatGPT either to generate ideas, write papers, or both.

Those AI tools are finding their way onto graded papers. Turnitin, a prominent plagiarism detection company used by educators, recently told Wired it found evidence of AI manipulation in 22 million college and high school papers submitted through its service last year. Out of 200 million papers submitted in 2023, Turnitin claims 11% had more than 20% of their content allegedly composed of AI-generated material. And even though generative AI usage has generally cooled off among the general public, students aren’t showing signs of letting up.

Educators turn to imperfect AI detection tools 

Almost immediately after students started using AI writing tools, teachers turned to other AI models to try to stop them. As of this writing, dozens of tech firms and startups claim to have developed software capable of detecting signs of AI-generated text. Teachers and professors around the country are already relying on these to varying degrees. But critics say AI detection tools, even years after ChatGPT became popular, remain far from perfect.

A recent analysis of 18 different AI detection tools in the International Journal for Educational Integrity highlights a lack of comprehensive accuracy. None of the models studied reliably differentiated AI-generated material from human writing, and only five of them achieved an accuracy above 70%. Detection could get even more difficult as AI writing models improve over time.
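
For a sense of why this is such a hard problem, here is a minimal, purely illustrative sketch of the kind of statistical heuristic a naive detector might lean on: scoring text by how repetitive its vocabulary is and how uniform its sentence lengths are. The thresholds and weights are invented for the example, and no commercial detector is this simple.

```python
import re
import statistics

def naive_ai_likeness(text: str) -> float:
    """Toy heuristic: score text 0-1 using two weak signals some detectors
    lean on, low vocabulary diversity and low 'burstiness' (unusually
    uniform sentence lengths). Purely illustrative, not a real product."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(words) < 20 or len(sentences) < 2:
        return 0.0  # Too little text to say anything meaningful.

    diversity = len(set(words)) / len(words)          # Type-token ratio.
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pstdev(lengths) / (statistics.mean(lengths) or 1)

    # Reference values below are arbitrary, chosen only for the example.
    low_diversity = max(0.0, 0.55 - diversity) / 0.55
    low_burstiness = max(0.0, 0.45 - burstiness) / 0.45
    return round(min(1.0, 0.5 * low_diversity + 0.5 * low_burstiness), 2)

essay = ("The experiment was conducted carefully. The results were recorded "
         "carefully. The conclusions were stated carefully. The findings were "
         "reported carefully and the outcomes were summarized carefully.")
print(naive_ai_likeness(essay))
```

Signals like these are easy to fool in both directions, which is part of why real detectors, built on far richer models, still produce the kinds of false positives described below.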

Accuracy issues aren’t the only problem limiting AI detection tools’ effectiveness. An overreliance on these still-developing detection systems risks punishing students who might use otherwise helpful AI software that, in other contexts, would be permitted. That exact scenario played out recently with a University of North Georgia student named Marley Stevens, who claims an AI detection tool interpreted her use of the popular spelling and writing aid Grammarly as cheating. Stevens claims she received a zero on that essay, making her ineligible for a scholarship she was pursuing.

“I talked to the teacher, the department head, and the dean, and [they said] I was ‘unintentionally cheating,’” Stevens alleged in a TikTok post. The University of North Georgia did not immediately respond to PopSci’s request for comment. 

There’s evidence current AI detection tools also mistake genuine human writing for AI content. In addition to general false positives, Stanford researchers warn detection tools may disproportionately penalize writing from non-native English speakers. More than half (61.2%) of essays written by non-native English speakers included in the research were classified as AI-generated, and 97% of the essays from non-native speakers were flagged as AI-generated by at least one of the seven different AI detection tools tested. Widely rolled-out detection tools could put more pressure on non-native speakers who are already tasked with overcoming language barriers.

How are schools responding to the rise in AI?

Educators are scrambling to find a solution to the influx of AI writing. Some major school districts in New York and Los Angeles have opted to ban the use of ChatGPT and related tools entirely. Professors at universities around the country have begun begrudgingly using AI detection software despite recognizing its known accuracy shortcomings. One of those educators, a Michigan Technological University professor of composition, described these detectors as a “tool that could be beneficial while recognizing it’s flawed and may penalize some students” during an interview with Inside Higher Ed.

Others, meanwhile, are taking the opposite approach and leaning into AI education tools with more open arms. In Texas, according to The Texas Tribune, the state’s Education Agency just this week moved to replace several thousand human standardized test graders with an “automated scoring system.” The agency claims its new system, which will score open-ended written responses included in the state’s public exam, could save the state $15-20 million per year. It will also leave an estimated 2,000 temporary graders out of a job. Elsewhere in the state, an elementary school is reportedly experimenting with using AI learning modules to teach children basic core curricula, supplemented by human teachers.

AI in education: A new normal 

While it’s possible AI writing detection tools could evolve to increase accuracy and reduce false positives, it’s unlikely they alone will transport education back to a time prior to ChatGPT. Rather than fight the new normal, some scholars argue educators should instead embrace AI tools in classrooms and lecture halls and instruct students on how to use them effectively. In a blog post, researchers at MIT Sloan argue professors and teachers can still limit the use of certain tools, but note they should do so through clearly written rules explaining their reasoning. Students, they write, should feel comfortable approaching teachers to ask when AI tools are and aren’t appropriate.

Others, like former Elon University professor C.W. Howell, argue that explicitly and intentionally exposing students to AI-generated writing in a classroom setting may actually make them less likely to use it. Asking students to grade an AI-generated essay, Howell writes in Wired, can give them firsthand experience noticing the way AI often fabricates sources or hallucinates quotes from an imaginary ether. AI-generated essays, when looked at through this lens, can actually improve education.

“Showing my students just how flawed ChatGPT is helped restore confidence in their own minds and abilities,” Howell writes. 

Then again, if AI does fundamentally alter the economic landscape as some doomsday enthusiasts believe, students could always spend their days learning how to engineer prompts to train AI and contribute to the architecture of their new AI-dominated future.

The post Ready or not, AI is in our schools appeared first on Popular Science.

Watch two tiny, AI-powered robots play soccer https://www.popsci.com/technology/deepmind-robot-soccer/ Wed, 10 Apr 2024 18:00:00 +0000 https://www.popsci.com/?p=610317
Two robots playing soccer
Deep reinforcement learning allowed a pair of robots to play against one another. Credit: Google DeepMind / Tuomas Haarnoja

Google DeepMind's bipedal bots go head-to-head after years of prep.

The post Watch two tiny, AI-powered robots play soccer appeared first on Popular Science.

Google DeepMind is now able to train tiny, off-the-shelf robots to square off on the soccer field. In a new paper published today in Science Robotics, researchers detail their recent efforts to adapt a machine learning subset known as deep reinforcement learning (deep RL) to teach bipedal bots a simplified version of the sport. The team notes that while similar experiments created extremely agile quadrupedal robots (see: Boston Dynamics Spot) in the past, much less work has been conducted for two-legged, humanoid machines. But new footage of the bots dribbling, defending, and shooting goals shows off just how good a coach deep reinforcement learning could be for humanoid machines.

While its AI systems are ultimately meant for massive tasks like climate forecasting and materials engineering, Google DeepMind can also absolutely obliterate human competitors in games like chess, Go, and even StarCraft II. But all those strategic maneuvers don’t require complex physical movement and coordination. So while DeepMind could study simulated soccer movements, it hadn’t been able to translate them to a physical playing field—but that’s quickly changing.

AI photo

To make the miniature Messis, engineers first developed and trained two deep RL skill sets in computer simulations—the ability to get up from the ground and the ability to score goals against an untrained opponent. From there, they virtually trained their system to play a full one-on-one soccer matchup by combining these skill sets, then randomly pairing the agents against partially trained copies of themselves.

[Related: Google DeepMind’s AI forecasting is outperforming the ‘gold standard’ model.]

“Thus, in the second stage, the agent learned to combine previously learned skills, refine them to the full soccer task, and predict and anticipate the opponent’s behavior,” researchers wrote in their paper introduction, later noting that, “During play, the agents transitioned between all of these behaviors fluidly.”
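
DeepMind’s actual system pairs deep neural network policies with a physics simulator, but the self-play scaffolding described here, training against frozen, partially trained copies of the agent, can be illustrated with a much smaller toy. The sketch below uses an invented penalty-kick matrix game and a simple value-learning agent in place of deep RL; the snapshot pool is the part that mirrors the paper’s second stage.

```python
import copy
import random

ACTIONS = ["left", "right"]

class Agent:
    """Tiny value-learning agent standing in for a deep RL policy."""
    def __init__(self):
        self.values = {a: 0.0 for a in ACTIONS}

    def act(self, epsilon=0.2):
        if random.random() < epsilon:
            return random.choice(ACTIONS)                  # Explore.
        return max(self.values, key=self.values.get)       # Exploit.

    def update(self, action, reward, lr=0.1):
        self.values[action] += lr * (reward - self.values[action])

def penalty_kick(striker, keeper):
    """Invented toy game: the striker scores when the keeper guesses wrong."""
    return 1.0 if striker != keeper else -1.0

learner = Agent()
opponent_pool = [copy.deepcopy(learner)]  # Frozen, partially trained copies.

for episode in range(5_000):
    keeper = random.choice(opponent_pool)           # Self-play vs. old selves.
    kick, dive = learner.act(), keeper.act(epsilon=0.0)
    learner.update(kick, penalty_kick(kick, dive))
    if episode % 500 == 499:
        opponent_pool.append(copy.deepcopy(learner))  # Add a new snapshot.

print(learner.values)
```

Swapping the toy game for a physics simulation and the value table for a neural network policy gets closer to the real setup, at vastly greater computational cost.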

AI photo

Thanks to the deep RL framework, DeepMind-powered agents soon learned to improve on existing abilities, including how to kick and shoot the soccer ball, block shots, and even defend their own goal against an attacking opponent by using their bodies as shields.

During a series of one-on-one matches between robots trained with deep RL, the two mechanical athletes walked, turned, kicked, and righted themselves faster than if engineers had simply supplied them with a scripted baseline of skills. These weren’t minuscule improvements, either—compared to a non-adaptable scripted baseline, the robots walked 181 percent faster, turned 302 percent faster, kicked 34 percent faster, and took 63 percent less time to get up after falling. What’s more, the deep RL-trained robots also showed new, emergent behaviors like pivoting on their feet and spinning. Such actions would be extremely challenging to pre-script otherwise.

Screenshots of robots playing soccer
Credit: Google DeepMind

There’s still some work to do before DeepMind-powered robots make it to the RoboCup. For these initial tests, researchers completely relied on simulation-based deep RL training before transferring that information to physical robots. In the future, engineers want to combine both virtual and real-time reinforcement training for their bots. They also hope to scale up their robots, but that will require much more experimentation and fine-tuning.

The team believes that utilizing similar deep RL approaches for soccer, as well as many other tasks, could further improve bipedal robots’ movements and real-time adaptation capabilities. Still, it’s unlikely you’ll need to worry about DeepMind humanoid robots on full-sized soccer fields—or in the labor market—just yet. At the same time, given their continuous improvements, it’s probably not a bad idea to get ready to blow the whistle on them.

The post Watch two tiny, AI-powered robots play soccer appeared first on Popular Science.

Watch this robotic slide whistle quartet belt out Smash Mouth’s ‘All Star’ https://www.popsci.com/technology/slide-whistle-quartet/ Wed, 03 Apr 2024 21:00:00 +0000 https://www.popsci.com/?p=609382
Slide Whistle robot quartet
Somehow, it only took Tim Alex Jacobs two weeks to build. YouTube

Well, the notes start coming and they don't stop coming.

The post Watch this robotic slide whistle quartet belt out Smash Mouth’s ‘All Star’ appeared first on Popular Science.

The slide whistle isn’t known as a particularly difficult instrument to play—there’s a reason they’re usually marketed to children. But designing, programming, and building a robotic slide whistle quartet? That takes a solid background in computer science, a maddening amount of trial-and-error, logistical adjustments to account for “shrinkflation,” and at least two weeks to make it all happen.

That said, if you’re confident in your technical abilities, you too can construct a portable slide-whistle symphony-in-a-box capable of belting out Smash Mouth’s seminal, Billboard-topping masterpiece “All Star.” Fast forward to the 4:47 mark to listen to the tune. 

AI photo


Despite his initial apology for “crimes against all things musical,” it seems as though Tim Alex Jacobs isn’t feeling too guilty about his ongoing robot slide whistle hobby. Also known online as “mitxela,” Jacobs has documented his DIY musical endeavors on his YouTube channel for years. Plans to create MIDI-controlled, automated slide whistle systems appear to have been in the works since at least 2018, but it’s difficult to envision anything much more absurd than Jacobs’ latest iteration, which manages to link four separate instruments alongside motorized fans and mechanical controls, all within a latchable carrying case.
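
The core of any MIDI-to-slide-whistle controller is turning a note number into a slide position. A minimal sketch of that math, assuming an idealized quarter-wave (stopped pipe) model and ignoring end corrections, bore shape, and blowing pressure, looks something like this; it is not Jacobs’ actual firmware, and as the video makes clear, a real build still needs empirical calibration on top.

```python
def midi_to_frequency(note: int) -> float:
    """Standard MIDI tuning: note 69 = A4 = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def slide_position_mm(note: int, speed_of_sound_m_s: float = 343.0) -> float:
    """Approximate stopped-pipe length for a target pitch, treating the
    whistle as a quarter-wave resonator. End corrections, bore shape, and
    blowing pressure all matter on a real whistle and are ignored here."""
    wavelength = speed_of_sound_m_s / midi_to_frequency(note)
    return 1000 * wavelength / 4  # Quarter of a wavelength, in millimeters.

for note in (60, 64, 67, 72):  # C4, E4, G4, C5
    print(note, round(slide_position_mm(note), 1), "mm")
```

The friction, leaky pumps, and uneven fan motors discussed next are exactly the real-world effects this idealized model leaves out.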

Aside from the overall wonky tones that come from slide whistles in general, Jacobs notes just how difficult it would be to calibrate four of them. What’s more, each whistle’s dedicated fan motor differs slightly from one another, making the resultant pressures unpredictable. To compensate for this, Jacobs drilled holes in the pumps to create intentional air leaks, allowing him to run the motors closer to full power than before without overheating.

[Related: Check out some of the past year’s most innovative musical inventions.]

“If we can run them at a higher power level, then the effects of friction will be less significant,” Jacobs explains. But although this reportedly helped a bit, he admits the results were “far from adequate.” Attaching contact microphones to each slide whistle was also a possibility, but the work involved in calibrating them to properly isolate the whistle tones simply wasn’t worth it.

So what was worth the effort? Well, programming the whistles to play “All Star” in its entirety, of course. The four instruments are in no way tuned to one another, but honestly, it probably wouldn’t be as entertaining if they somehow possessed perfect pitch.

Jacobs appears to have plans for further fine-tuning (so to speak) down the line, but it’s unclear if he’ll stick with Smash Mouth or move on to another ’90s pop-rock band.

The post Watch this robotic slide whistle quartet belt out Smash Mouth’s ‘All Star’ appeared first on Popular Science.

Spider conversations decoded with the help of machine learning and contact microphones https://www.popsci.com/technology/wolf-spider-vibration-research/ Tue, 02 Apr 2024 14:51:17 +0000 https://www.popsci.com/?p=609092
Close up of wolf spider resting on web
Spiders communicate using complex movement and vibration patterns. Deposit Photos

A new approach to monitoring arachnid behavior could help understand their social dynamics, as well as their habitat’s health.

The post Spider conversations decoded with the help of machine learning and contact microphones appeared first on Popular Science.

Arachnids are born dancers. After millions of years of evolution, many species rely on fancy footwork to communicate everything from courtship rituals, to territorial disputes, to hunting strategies. Researchers usually observe these movements in lab settings using what are known as laser vibrometers. After aiming the tool’s light beam at a target, the vibrometer measures minuscule vibration frequencies and amplitudes using the Doppler shift effect. Unfortunately, such systems’ cost and sensitivity often limit their field deployment.

To find a solution to this long-standing problem, a University of Nebraska-Lincoln PhD student recently combined an array of tiny, cheap contact microphones with a sound-processing machine learning program. Then, once packed up, he headed into the forests of north Mississippi to test out his new system.

Noori Choi’s results, recently published in Communications Biology, highlight a never-before-seen approach to collecting spiders’ extremely hard-to-detect movements across woodland substrates. Choi spent two sweltering summer months placing 25 microphones and pitfall traps across 1,000-square-foot sections of forest floor, then waited for the local wildlife to make its vibratory moves. In the end, Choi left the Magnolia State with 39,000 hours of data including over 17,000 series of vibrations.

[Related: Meet the first electric blue tarantula known to science.]

Not all those murmurings came from the wolf spiders Choi wanted, of course. Forests are loud places filled with active insects, chatty birds, and rustling tree branches, as well as the invasive sounds of human life like overhead plane engines. These sound waves are also absorbed into the ground as vibrations, and needed to be sifted out from those of scientists’ arachnid targets.

“The vibroscape is a busier signaling space than we expected, because it includes both airborne and substrate-borne vibrations,” Choi said in a recent university profile.

In the past, this analysis process was a frustratingly tedious, manual endeavor that could severely limit research and dataset scopes. But instead of poring over roughly 1,625 days’ worth of recordings, Choi designed a machine learning program capable of filtering out unwanted sounds while isolating the vibrations from three separate wolf spider species: Schizocosa stridulans, S. uetzi, and S. duplex.
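
Choi’s pipeline isn’t reproduced here in code form, but the basic idea, summarizing each short clip as energy in a handful of frequency bands and training a classifier to separate species-like signals from background noise, can be sketched in a few lines. Everything below (the sample rate, band count, and synthetic “spider” tones) is invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

RATE = 4000  # Contact-mic sample rate in Hz, made up for the example.

def band_energies(clip, n_bands=16):
    """Summarize a 1-second clip as energy in evenly spaced frequency bands."""
    spectrum = np.abs(np.fft.rfft(clip)) ** 2
    return [band.sum() for band in np.array_split(spectrum, n_bands)]

def fake_clip(peak_hz, noise=0.5):
    """Synthesize a stand-in 'vibration' clip: a tone buried in noise."""
    t = np.arange(RATE) / RATE
    return np.sin(2 * np.pi * peak_hz * t) + noise * np.random.randn(RATE)

# Invented classes: background noise plus two spider-like signal types.
classes = {"background": 0, "stridulans_like": 300, "uetzi_like": 700}
X, y = [], []
for name, peak in classes.items():
    for _ in range(60):
        clip = np.random.randn(RATE) if peak == 0 else fake_clip(peak)
        X.append(band_energies(clip))
        y.append(name)

model = RandomForestClassifier(n_estimators=50).fit(X, y)
print(model.predict([band_energies(fake_clip(300))]))  # Expected: ['stridulans_like']
```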

Further analysis yielded fascinating new insights into arachnid behaviors, particularly an overlap of acoustic frequency, time, and signaling space between the sibling species S. stridulans and S. uetzi. Choi determined that both wolf spider species usually restricted their signaling to when they were atop leaf litter, not pine debris. According to Choi, this implies that real estate is at a premium for the spiders.

“[They] may have limited options to choose from, because if they choose to signal in different places, on different substrates, they may just disrupt the whole communication and not achieve their goal, like attracting mates,” Choi, now a postdoctoral researcher at Germany’s Max Planck Institute of Animal Behavior, said on Monday.

What’s more, S. stridulans and S. uetzi appear to adapt their communication methods depending on how crowded they are at any given time, and who is crowding them. S. stridulans, for example, tended to lengthen their vibration-intense courtship dances when they detected nearby, same-species males. When they sensed nearby S. uetzi, however, they often varied their movements slightly to differentiate themselves from the other species, thus reducing potential courtship confusion.

In addition to opening up entirely new methods of observing arachnid behavior, Choi’s combination of contact microphones and machine learning analysis could also help others one day monitor an ecosystem’s overall health by keeping an ear on spider populations.

“Even though everyone agrees that arthropods are very important for ecosystem functioning… if they collapse, the whole community can collapse,” Choi said. “Nobody knows how to monitor changes in arthropods.”

Now, however, Choi’s new methodology could provide a non-invasive, accurate, and highly effective way of staying atop spiders’ daily movements.

The post Spider conversations decoded with the help of machine learning and contact microphones appeared first on Popular Science.

This cap is a big step towards universal, noninvasive brain-computer interfaces https://www.popsci.com/technology/bci-wearable-cap/ Mon, 01 Apr 2024 18:48:27 +0000 https://www.popsci.com/?p=608932
Users wearing BCI cap to play video game
Machine learning programming enables a much more universal training process for wearers. University of Texas at Austin

Users controlled a car racing video game with the device, no surgery needed.

The post This cap is a big step towards universal, noninvasive brain-computer interfaces appeared first on Popular Science.

Multiple brain-computer interface (BCI) devices now allow users to do everything from controlling computer cursors, to translating neural activity into words, to converting handwriting into text. While one of the latest BCI examples appears to accomplish very similar tasks, it does so without the need for time-consuming, personalized calibration or high-stakes neurosurgery.

AI photo

As recently detailed in a study published in PNAS Nexus, University of Texas at Austin researchers have developed a wearable cap that allows a user to accomplish complex computer tasks by translating brain activity into actionable commands. But instead of needing to tailor each device to a specific user’s neural activity, an accompanying machine learning program offers a new, “one-size-fits-all” approach that dramatically reduces training time.

“Training a BCI subject customarily starts with an offline calibration session to collect data to build an individual decoder,” the team explains in their paper’s abstract. “Apart from being time-consuming, this initial decoder might be inefficient as subjects do not receive feedback that helps them to elicit proper [sensorimotor rhythms] during calibration.”

To solve for this, researchers developed a new machine learning program that identifies an individual’s specific needs and adjusts its repetition-based training as needed. Because of this interoperable self-calibration, trainees don’t need the research team’s guidance or complex medical procedures to install an implant.

[Related: Neuralink shows first human patient using brain implant to play online chess.]

“When we think about this in a clinical setting, this technology will make it so we won’t need a specialized team to do this calibration process, which is long and tedious,” Satyam Kumar, a graduate student involved in the project, said in a recent statement. “It will be much faster to move from patient to patient.”

To prepare, all a user needs to do is don one of the extremely red, electrode-dotted devices resembling a swimmer’s cap. From there, the electrodes gather and transmit neural activity to the research team’s newly created decoding software during training. Thanks to the program’s machine learning capabilities, developers avoided the time-intensive, personalized training usually required for other BCI tech to calibrate for each individual user.
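
The study describes the approach in terms of sensorimotor rhythms and online recalibration. A heavily simplified sketch of that general idea, band-power features from each electrode fed to a classifier that keeps updating as feedback arrives, might look like the following; the channel count, sampling rate, and synthetic signals are all invented for illustration, and the UT Austin decoder is considerably more sophisticated.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

RATE, CHANNELS = 250, 8  # Invented EEG sample rate and electrode count.

def mu_band_power(window):
    """Per-channel power in the 8-12 Hz sensorimotor ('mu') band."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1 / RATE)
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[:, band].mean(axis=1)

def fake_window(intent):
    """Synthetic stand-in: imagining 'left' vs 'right' boosts mu power
    on opposite halves of the electrode array."""
    window = np.random.randn(CHANNELS, RATE)
    boosted = slice(0, 4) if intent == "left" else slice(4, 8)
    t = np.arange(RATE) / RATE
    window[boosted] += 3 * np.sin(2 * np.pi * 10 * t)
    return window

decoder = SGDClassifier()
labels = ["left", "right"]

# Online (re)calibration: keep updating the decoder as feedback arrives,
# rather than running one long offline calibration session per user.
for step in range(400):
    intent = labels[step % 2]
    features = mu_band_power(fake_window(intent)).reshape(1, -1)
    decoder.partial_fit(features, [intent], classes=labels)

test = mu_band_power(fake_window("left")).reshape(1, -1)
print(decoder.predict(test))  # Expected: ['left'] most of the time.
```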

Over a five-day period, 18 test subjects effectively learned to mentally envision playing both a car racing game and a simpler bar-balancing program using the new training method. The decoder was so effective, in fact, that wearers could train on both the bar and racing games simultaneously, instead of one at a time. At the annual South by Southwest Conference last month, the UT Austin team took things a step further. During a demonstration, volunteers put on the wearable BCI, then learned to control a pair of hand and arm rehabilitation robots within just a few minutes.

So far, the team has only tested their BCI cap on subjects without motor impairments, but they plan to expand their decoder’s abilities to encompass users with disabilities.

“On the one hand, we want to translate the BCI to the clinical realm to help people with disabilities,” said José del R. Millán, study co-author and UT professor of electrical and computer engineering. “On the other, we need to improve our technology to make it easier to use so that the impact for these people with disabilities is stronger.” Millán’s team is also working to incorporate similar BCI technology into a wheelchair.

The post This cap is a big step towards universal, noninvasive brain-computer interfaces appeared first on Popular Science.

A robot named ‘Emo’ can out-smile you by 840 milliseconds https://www.popsci.com/technology/emo-smile-robot-head/ Fri, 29 Mar 2024 14:00:00 +0000 https://www.popsci.com/?p=608662
Yuhang Hu working on Emo robot head
Emo contains 26 actuators to help mimic human smiles. John Abbott/Columbia Engineering

The bot's head and face are designed to simulate facial interactions in conversation with humans.

The post A robot named ‘Emo’ can out-smile you by 840 milliseconds appeared first on Popular Science.

If you want your humanoid robot to realistically simulate facial expressions, it’s all about timing. And for the past five years, engineers at Columbia University’s Creative Machines Lab have been honing their robot’s reflexes down to the millisecond. Their results, detailed in a new study published in Science Robotics, are now available to see for yourself.

Meet Emo, the robot head capable of anticipating and mirroring human facial expressions, including smiles, within 840 milliseconds. But whether or not you’ll be left smiling at the end of the demonstration video remains to be seen.

AI photo

AI is getting pretty good at mimicking human conversations—heavy emphasis on “mimicking.” But when it comes to visibly approximating emotions, AI’s physical robot counterparts still have a lot of catching up to do. A machine misjudging when to smile isn’t just awkward; it draws attention to its artificiality.

Human brains, in comparison, are incredibly adept at interpreting huge amounts of visual cues in real time, and then responding accordingly with various facial movements. That makes it extremely difficult to teach AI-powered robots the nuances of expression, and it’s also hard to build a mechanical face capable of realistic muscle movements that doesn’t veer into the uncanny.

[Related: Please think twice before letting AI scan your penis for STIs.]

Emo’s creators attempt to solve some of these issues, or at the very least, help narrow the gap between human and robot expressivity. To construct their new bot, a team led by AI and robotics expert Hod Lipson first designed a realistic robotic human head that includes 26 separate actuators to enable tiny facial expression features. Each of Emo’s pupils also contains a high-resolution camera to follow the eyes of its human conversation partner—another important, nonverbal visual cue for people. Finally, Lipson’s team layered a silicone “skin” over Emo’s mechanical parts to make it all a little less... you know, creepy.

From there, researchers built two separate AI models to work in tandem—one to predict human expressions through a target face’s minuscule expressions, and another to quickly issue motor responses for a robot face. Using sample videos of human facial expressions, Emo’s AI then learned emotional intricacies frame-by-frame. Within just a few hours, Emo was capable of observing, interpreting, and responding to the little facial shifts people tend to make as they begin to smile. What’s more, it can now do so within about 840 milliseconds.
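
The paper describes the system as two models working in tandem; the toy sketch below mirrors only that structure, using a linear extrapolation as a stand-in for the expression-prediction model and a fixed weight vector as a stand-in for the learned facial actuation map. The frame rate, weights, and numbers are invented for illustration and bear no relation to Columbia’s actual models.

```python
import numpy as np

NUM_ACTUATORS = 26  # Matches the actuator count described above.
rng = np.random.default_rng(0)

# Stand-in "inverse model": how strongly each actuator participates in a
# smile. In the real robot this mapping is itself learned.
SMILE_WEIGHTS = rng.uniform(0.0, 1.0, NUM_ACTUATORS)

def predict_expression(recent_intensities, lead_frames=25):
    """Toy stand-in for the prediction model: linearly extrapolate smile
    intensity roughly 840 ms ahead (about 25 frames at ~30 fps)."""
    frames = np.arange(len(recent_intensities))
    slope, intercept = np.polyfit(frames, recent_intensities, deg=1)
    return float(np.clip(slope * (frames[-1] + lead_frames) + intercept, 0, 1))

def actuator_commands(target_intensity):
    """Map a predicted expression intensity onto per-actuator positions."""
    return np.clip(target_intensity * SMILE_WEIGHTS, 0, 1)

# A human partner's mouth corners start to rise over the last few frames.
observed = [0.02, 0.05, 0.09, 0.14, 0.20]
anticipated = predict_expression(observed)
print(f"anticipated smile intensity: {anticipated:.2f}")
print("first five actuator targets:", actuator_commands(anticipated)[:5].round(2))
```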

“I think predicting human facial expressions accurately is a revolution in [human-robot interactions],” Yuhang Hu, Columbia Engineering PhD student and study lead author, said earlier this week. “Traditionally, robots have not been designed to consider humans’ expressions during interactions. Now, the robot can integrate human facial expressions as feedback.”

Right now, Emo lacks any verbal interpretation skills, so it can only interact by analyzing human facial expressions. Lipson, Hu, and the rest of their collaborators hope to soon combine these physical abilities with a large language model system such as ChatGPT. If they can accomplish this, Emo will be even closer to natural(ish) human interactions. Of course, there’s a lot more to relatability than the smiles, smirks, and grins the scientists appear to be focusing on. (“The mimicking of expressions such as pouting or frowning should be approached with caution because these could potentially be misconstrued as mockery or convey unintended sentiments.”) However, at some point, the future robot overlords may need to know what to do with our grimaces and scowls.

The post A robot named ‘Emo’ can out-smile you by 840 milliseconds appeared first on Popular Science.

Please think twice before letting AI scan your penis for STIs https://www.popsci.com/health/calmara-ai-sti/ Thu, 28 Mar 2024 18:45:00 +0000 https://www.popsci.com/?p=608402
person taking photos of themselves in the dark
Calmara offers a QR code taking you to its AI photo scanner. DepositPhotos

Awkward Gen Z buzzwords, troubling tech, and outdated sex ed: Calmara is not your 'intimacy bestie.'

The post Please think twice before letting AI scan your penis for STIs appeared first on Popular Science.

A website promising its AI service can accurately scan pictures of penises for signs of sexually transmitted infections is earning the ire of healthcare advocates and digital privacy experts, among many other critics. But while the internet (and Jimmy Fallon) have taken the makers of Calmara to task over the past week, it actually took two years to get here.

Where did the AI ‘intimacy bestie’ come from?

Back in 2022, the company HeHealth debuted itself as an online way to “get answers about your penis health in minutes.” To receive this information, the website uses a combination of questionnaires and what the company claims is a “65-96 percent accurate” AI screening tool allegedly trained on proprietary datasets to flag photographic evidence of various STIs, including genital warts, herpes eruptions, and syphilis. “Cancer” is also included in the list of scannable signs. If the results come back “positive”, HeHealth can then refer users to healthcare professionals for actual physical screenings, diagnoses, and treatment options. It’s largely flown under the radar since then, with only around 31,000 people reportedly using its allegedly anonymized, encrypted services over the last two years. And then came Calmara.

Calmara website screenshot
Credit: Calmara

With a website overloaded with Gen Z-centric buzzwords, Calmara sells itself as women’s new “intimacy bestie,” offering to scan pictures of their potential sexual partners’ penises for indications of STIs. According to the HeHealth CEO’s latest LinkedIn post, HeHealth and Calmara “are totally different products.” However, according to Calmara’s website, HeHealth’s owners are running Calmara, and it utilizes the same AI. Calmara also markets itself as (currently) free and “really in its element when focused on the D.”

In a March 19 reveal announcement, one “anonymous user” claimed Calmara is already “changing the conversation around sexual health.” Calmara certainly sparked a conversation over the last week—just not the one its makers likely intended.

A novelty app 

Both Calmara’s and HeHealth’s fine print concede their STI judgments “should not be used as substitutes for professional medical advice, diagnosis, treatment, or management of any disease or condition.” There’s an obvious reason why this is not actually a real medical diagnosis tool, despite its advertising. 

It doesn’t take an AI “so sharp you’d swear it aced its SATs” to remember that the majority of STIs are asymptomatic. In those cases, they definitely wouldn’t be visible in a photograph. What’s more, a preprint, typo-laden paper explaining Calmara’s AI indicates it was trained on an extremely limited image database that included “synthetic” photos of penises, i.e. computer-generated images. Meanwhile, pinning down its actual accuracy is difficult—Calmara’s preprint paper says its AI is around 94.4-percent accurate, while the homepage says 95 percent. Scroll down a little further, and the FAQ section offers 65-to-90 percent reliability. Not a very encouraging approach to helping foster safe sex practices that would, presumably, require mutual, trustworthy statements about sexual health.

Calmara website screenshot
Credit: Calmara

“On its face, the service is so misguided that it’s easy to dismiss it as satire,” sex and culture critic Ella Dawson wrote in a viral blog post last week. Calmara’s central conceit—that new intimate partners would be comfortable enough to snap genital photos for an AI service to “scan”—is hard to imagine actually playing out in real life. “… This is not how human beings interact with each other. This is not how to normalize conversations about sexual health. And this is not how to promote safer sex practices.”

No age verification

Given its specific targeting of younger demographics, Dawson told PopSci she believes “it’s easy to see how a minor could find Calmara in a moment of panic and use it to self-diagnose,” which would raise obvious legal issues, as well as ethical ones. For one, explicit images of minors could constitute child sexual abuse material, or CSAM. While Calmara expressly states its program shouldn’t be used by minors, it still lacks even the most basic of age verification protocols at the time of writing.

“Calmara’s lack of any age verification, or even a checkbox asking users to confirm that they are eighteen years of age or older, is not just lazy, it’s irresponsible,” Dawson concludes.

Side by side of age verification and consent pages for Calmara
Credit: Calmara / PopSci

Dubious privacy practices 

More to the point, simply slapping caveats across your “wellness” websites could amount to the “legal equivalent of magic pixie dust,” according to digital privacy expert Carey Lening’s rundown. While Calmara’s FAQ section is much vaguer on technical details, HeHealth’s FAQ page does state their services are HIPAA compliant because they utilize Amazon Web Services (AWS) “to collect, process, maintain, and store” data—which is technically true.

On its page dedicated to HIPAA regulations, AWS makes clear that there is no such thing as “HIPAA certification” for cloud service providers. Instead, AWS “aligns our HIPAA risk management program” to meet requirements “applicable to our operating model.” According to AWS, it utilizes “higher security standards that map to the HIPAA Security Rule,” which enables “covered entities and their business associates” subject to HIPAA to use AWS for processing, maintaining, and storing protected health information. Basically, if you consent to use Calmara or HeHealth, you are consenting to AWS handling penis pictures—be they yours or someone else’s.

[Related: A once-forgotten antibiotic could be a new weapon against drug-resistant infections.]

That said, Lening says Calmara’s makers may have failed to consider newer state laws, such as Washington’s My Health My Data Act, with its “extremely broad and expansive view of consumer health data” set to go into effect in late June. The first of its kind in the US, the My Health My Data Act is designed specifically to protect personal health data that may fall outside HIPAA qualifications. 

“In short, they didn’t do their legal due diligence,” Lening contends.

“What’s frustrating from the perspective of privacy advocates and practitioners is not that they were ‘embracing health innovation‘ and ‘making a difference‘, but rather that they took a characteristic ‘Move Fast, Break Things’ kind of approach to the problem,” she continues. “The simple fact is, the [online] outrage is entirely predictable, because the Calmara folks did not, in my opinion, adequately assess the risk of harm their app can cause.”

Keep Calmara and carry on

When asked about these issues directly, Calmara and HeHealth’s founders appeared unfazed.

“Most of the criticism is based on wrong information and misinformation,” HeHealth CEO and Calmara co-founder Yudara Kularathne wrote to PopSci last Friday, pointing to an earlier LinkedIn statement about its privacy policies. Kularathne added that “concerns about potential for anonymized data to be re-identified” are being considered.

On Monday, Kularathne published another public LinkedIn post, claiming to be at work addressing, “Health data and Personally Identifiable Information (PHI) related issues,” “CSAM related issues,” “communication related issues,” and “synthetic data related issues.”

“We are addressing most of the concerns raised, and many changes have been implemented immediately,” Kularathne wrote.

Calmara QR code page screenshot
Credit: Calmara

When reached for additional details, Calmara CEO Mei-Ling Lu avoided addressing criticisms in email, and instead offered PopSci an audio file from “one of our female users” recounting how the nameless user and her partner employed HeHealth’s (and now Calmara’s) AI to help determine they had herpes.

“[W]hile they were about to start, she realized something ‘not right’ on her partner’s penis, but he said: ‘you know how much I sweat, this is heat bubbles,’” writes Lu. After noticing similar “heat bubbles… a few days later,” she and her partner consulted HeHealth’s AI scanner, which flagged the uploaded photos and directed them to healthcare professionals who confirmed they both had herpes.

To be clear, medical organizations such as the Mayo Clinic freely offer concise, accurate information on herpes symptoms, which can include pain or itching alongside bumps or blisters around the genitals, anus or mouth, painful urination, and discharge from the urethra or vagina. Symptoms generally occur 2-12 days after infection, and although many people infected with the virus display either mild or no symptoms, they can still spread the disease to others. 

Meanwhile, Calmara’s glossy (NSFW) promotional, double entendre-laden video promises that it is “The PERFECT WEBSITE for HOOKING UP,” but no matter how many bananas are depicted, using AI to give penises a once-over doesn’t seem particularly reliable, enjoyable, or even natural.

The post Please think twice before letting AI scan your penis for STIs appeared first on Popular Science.

Autonomous robots help farmers prepare for world’s largest tulip bloom https://www.popsci.com/technology/robots-tulips/ Tue, 26 Mar 2024 16:30:00 +0000 https://www.popsci.com/?p=607975
H2L’s Selector robot looks for signs of virus in the world’s leading exporter of tulips.
H2L’s Selector robot looks for signs of virus in the world’s leading exporter of tulips. DepositPhotos

The farming machines use a combination of cameras and AI models to find and remove diseased bulbs in an effort to ensure a healthy tulip season.

The post Autonomous robots help farmers prepare for world’s largest tulip bloom appeared first on Popular Science.

Starting in early March, dozens of large, futuristic-looking white machines began rolling slowly through farmland in the Netherlands. At first glance, the machines look like a cross between a tractor and a World War I-era tracked tank, albeit with a distinctly shiny sci-fi shimmer. The machines are actually fully autonomous, AI-enabled agriculture robots tasked with spotting and eliminating diseased tulip bulbs ahead of the country’s iconic and financially significant spring tulip bloom. The Dutch-made robot is just one of many new autonomous tools quickly making their way onto farms and ranches around the world. 

How does the robot spot infected tulips?

The tulip-spotting robot, designed by the Netherlands-based company H2L Robotics, is officially called “Selector180.” Weighing in at roughly 2,600 pounds, the Selector uses GPS coordinates to autonomously drive through tulip fields and onboard cameras to take thousands of photos. An AI model then combs through those images looking for signs of potentially diseased bulbs, which are often identifiable by distinctive red stripes on the bulb’s leaves. The Selector machine then picks out the diseased bulbs and separates them from the others to prevent the disease from spreading. H2L describes the machine as the “world’s first autonomous tulip selection robot.” A video below shows the Selector in action sorting through a row of bulbs.
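
H2L’s detector is a trained vision model, not a hand-written rule, but the “red stripes on the leaves” cue it looks for can be illustrated with a toy color heuristic. The thresholds below are invented for the example and would never survive contact with a real tulip field.

```python
import numpy as np

def red_stripe_fraction(rgb_image: np.ndarray) -> float:
    """Fraction of pixels where red clearly dominates green and blue,
    a crude stand-in for the learned detector on the real machine."""
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    reddish = (r > 120) & (r > g * 1.4) & (r > b * 1.4)
    return float(reddish.mean())

def flag_bulb(rgb_image: np.ndarray, threshold: float = 0.02) -> bool:
    """Flag a plant for removal if enough of the frame looks red-striped.
    The threshold is invented for illustration, not a field-tested value."""
    return red_stripe_fraction(rgb_image) > threshold

# Synthetic 100x100 'leaf' image: mostly green, with a thin red streak.
leaf = np.zeros((100, 100, 3), dtype=np.uint8)
leaf[..., 1] = 140                                # Green background.
leaf[45:50, :, 0], leaf[45:50, :, 1] = 200, 60    # Red stripe rows.
print(flag_bulb(leaf))  # Expected: True
```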

AI photo

Speaking with PopSci, H2L Robotics Managing Director Erik de Jong said the Selector’s AI models were trained using the wisdom of specialized tulip farmers, referred to in the industry as “sickness spotters,” who previously performed the laborious inspections by hand. H2L would show these spotters images from farms, and they would point out bulbs with signs of the virus. Those observations in turn helped train the model powering the Selector. As more farmers participated, the Selector’s ability to accurately spot the virus improved. The machine, de Jong said, benefited from a “wisdom of crowds.”

Machines like H2L’s will become increasingly important in the coming years, de Jong added, because the now-aging human spotters are “basically becoming extinct.”

“Typically these are old guys that have been doing it [spotting sick tulips] for decades,” de Jong said. “There just are not that many of them any more so it is becoming a real problem.” 

Around a million winter-weary tourists flock to the Netherlands every year to catch a glimpse of the colorful blooming tulips. The season begins in March and reaches peak bloom around the middle of April. If left unchecked, diseased tulip buds can lead to smaller and weaker bulbs. Eventually, it can even result in bulbs that are unable to flower at all.  

[Related: How John Deere’s tech evolved from 19th-century plows to AI and autonomy]

For Dutch farmers, tulips aren’t just pretty to look at, either. They are big business. The Netherlands is consistently the world’s leading exporter of tulips and reportedly exported €81.9 million (or $88.78 million USD) worth of flowers to countries outside of the European Union in 2022, according to The Brussels Times. Around 800 different varieties of tulips are planted and can bloom in vibrant rows of reds, oranges, and yellows. The colorful fields are massive and can even be observed from NASA satellites in space.

H2L robotics was founded in 2019 and shipped its first robot to farmers in February 2021. Prior to the Selector’s introduction, virus identification was reportedly carried out by human “sickness spotters.” The robots, which reportedly cost around $200,000 each, can work long hours without rest and potentially cover more area faster than a human counterpart. As of writing, H2L has sold 62 Selector machines, 55 of which are currently operational. 

“We’ve always sold these machines with the promise that it will be approximately  the performance of a human,” de Jong said. “We’ve never tried to oversell it.”

AI tools are helping farmers increase harvest yields and explore sustainability

Farmers and agricultural startups worldwide were exploring computer vision and machine learning algorithms to improve harvests and lower costs long before generative AI tools like ChatGPT and DALL-E became household names. In addition to autonomous robots, large-scale farmers are increasingly leaning on a combination of drones, satellite imagery, and remote sensors to aid in detecting diseases or potentially dangerous chemicals. Elsewhere, farmers are using AI to comb through weather and other environmental data in an effort to promote more sustainable farming methods and optimize planting schedules. de Jong, from H2L Robotics, says systems similar to the Selector robot could one day be used to detect sickness or anomalies in other crops like potatoes or onions.

But robots like the kind deployed in Dutch tulip fields aren’t a silver bullet for all farmers, at least not yet. Autonomous technology and AI solutions require strong wireless internet connectivity and large databases of reliable training data, both of which may be in short supply in developing countries. Even where wireless capabilities are available, some of the most appealing autonomous solutions, like self-driving tractors, require building up new charging infrastructure that isn’t easily retrofitted onto existing farmland. Certain fruits and vegetables are also too delicate to be harvested by machines and still require labor-intensive hand-picking. And even when most of those barriers are overcome, it may take time and more real-world data to truly understand whether or not the upfront cost of automation actually ends up being net profitable for farmers.

The post Autonomous robots help farmers prepare for world’s largest tulip bloom appeared first on Popular Science.

AI companies eye fossil fuels to meet booming energy demand https://www.popsci.com/technology/ai-power/ Mon, 25 Mar 2024 18:00:00 +0000 https://www.popsci.com/?p=607864
data center dark hallway green shade fluorescent light
Energy-intensive data centers were responsible for an estimated 4% of the US’ overall energy use in 2022, according to the International Energy Agency. DepositPhotos

Recent reports suggest renewable energy sources alone won’t be enough to meet data centers' increasingly intensive power needs.

The post AI companies eye fossil fuels to meet booming energy demand appeared first on Popular Science.

It takes massive amounts of energy to power the data center brains of popular artificial intelligence models. That demand is only growing. In 2024, many of Silicon Valley’s largest tech giants and hordes of budding, well-funded startups have (very publicly) aligned themselves with climate action—awash with PR about their sustainability goals, their carbon neutral pledges, and their promises to prioritize recycled materials. But as AI’s intensive energy demands become more apparent, it seems like many of those supposed green priorities could be jeopardized.

A March International Energy Agency forecast estimates input-hungry AI models and cryptocurrency mining combined could cause data centers worldwide to double their energy use in just two years. Recent reports suggest tech leaders interested in staying relevant in the booming AI race may consider turning to old-fashioned, carbon-emitting energy sources to help meet that demand. 

AI models need more energy to power data centers 

Though precise figures measuring AI’s energy consumption remain a matter of debate, it’s increasingly clear the complex data centers required to train and power those systems are energy-intensive. According to a recently released peer-reviewed data analysis, energy demands from AI servers in 2027 could be on par with those of a country like Argentina, the Netherlands, or Sweden. Production of new data centers isn’t slowing down either. Just last week, The Wall Street Journal reports, Amazon Web Services Vice President of Engineering Bill Vass told an audience at an energy industry event in Texas he believes a new data center is being built every three days. Other energy industry leaders speaking at the event, like former US Energy Secretary Ernest Moniz, argued renewable energy production may fall short of what is needed to power this projected data center growth.

“We’re not going to build 100 gigawatts of new renewables in a few years,” Moniz said. The Obama-era energy secretary went on to say unmet energy demands brought on by AI, primarily via electricity, would require tapping into more natural gas and coal power plants. When it comes to meeting energy demands with renewables, he said, “you’re kind of stuck.” 

Others, like Dominion Energy CEO Robert Blue, say the increased energy demand has led them to build out a new gas power plant while also trying to meet a 2050 net-zero goal. Natural gas company executives speaking with the Journal, meanwhile, claim tech firms building out data centers have expressed interest in using natural gas as an energy source.

Tech companies already have a checkered record on sustainability promises

A sudden renewed interest in non-renewable energy sources to fuel an AI boom could contradict the net-zero carbon timelines and sustainability pledges made by major tech companies in recent years. Microsoft and Google, which are locked in a battle over quickly evolving generative AI tools like ChatGPT and Gemini, have both outlined plans to reach net-negative emissions in the coming years. Apple, which reportedly shuttered its long-running car unit in order to devote resources towards AI, aims to become carbon neutral across its global supply chains by 2030. The Biden administration, meanwhile, has ambitiously pledged to make the US electricity sector carbon pollution-free by 2035.

[ Related: Dozens of companies with ‘net-zero’ goals just got called out for greenwashing ]

Critics argue some of these climate pledges, particularly those heralded by large tech firms, may seem impressive on paper but have already fallen short in key areas. Multiple independent monitors in recent years have criticized large tech companies for allegedly failing to properly disclose their greenhouse gas emissions. Others have dinged tech firms for heavily basing their sustainability strategies around carbon offsets as opposed to potentially more effective solutions like reducing energy consumption. The alluring race for AI dominance risks stretching those already strained goals even further. 

AI boom has led to new data centers popping up around the US

Appetites for electricity are rising around the country. In Georgia, according to a recent Washington Post report, projected new energy demand within the state over the next ten years is now 17 times larger than what it was only recently. Northern Virginia, according to the same report, could require the energy equivalent of several nuclear power plants to meet the increased demand from planned data centers currently under construction. New data centers have popped up in both of those states in recent years. Lobbyists representing traditional coal and gas energy providers, the Post claims, are simultaneously urging government offices to delay retiring some fossil fuel plants in order to meet increasing energy demands. Data centers in the US alone were responsible for 4% of the country’s overall energy use in 2022, according to the IEA. That figure will only grow as more and more AI-focused facilities come online.

At the same time, some of the AI industry’s staunchest proponents have argued these very same energy-intensive models may prove instrumental in helping scale up renewable energy sources and develop technologies to counteract the most destructive aspects of climate change. Previous reports argue powerful AI models could improve the efficiency of oil and gas facilities by improving underground mapping. AI simulation models, similarly, could help engineers develop optimal designs for wind or solar plants that could bring down their cost and increase their desirability as an energy source. Microsoft, which partners with OpenAI, is reportedly already using generative AI tools to try to streamline the regulatory approval process for nuclear reactors. Those future reactors, in theory, would then be used to generate the electricity needed to quench its AI models’ energy thirst.

Fossil-fuel powered AI prioritizes long-term optimism over current day climate realities 

The problem with those more optimistic outlooks is that they remain, for the time being at least, mostly hypothetical and severely lacking in real-world data. AI models may increase the efficiency and affordability of renewable resources long term, but they risk doing so by pushing down on the accelerator of non-renewable resources right now. And with energy demands surging in other industries outside of tech at the same time, these optimistic longer-term outlooks could serve to justify splurging on natural gas and coal in the short term. Underpinning all of this is a worsening climate outlook that the overwhelming majority of climate scientists and international organizations agree demands radical action to reduce emissions as soon as possible. Renewable energy sources are on the rise in the US, but tech firms looking for more readily available sources of electricity to power their next AI projects risk setting back that progress.

The post AI companies eye fossil fuels to meet booming energy demand appeared first on Popular Science.

Vernor Vinge, influential sci-fi author who warned of AI ‘Singularity,’ has died https://www.popsci.com/science/vernor-vinge-obit/ Thu, 21 Mar 2024 18:09:37 +0000 https://www.popsci.com/?p=607369
Vernor Vinge
Vernor Vinge was one of the first thinkers to popularize the idea of a technological Singularity. Lisa Brewster / Wikipedia Commons

Vinge’s visions of the future enthralled and influenced generations of writers and tech industry leaders. He was 79.

The post Vernor Vinge, influential sci-fi author who warned of AI ‘Singularity,’ has died appeared first on Popular Science.

Vernor Vinge, the prolific science-fiction writer, professor, and one of the first prominent thinkers to conceptualize a “Technological Singularity” and cyberspace, has died at the age of 79. News of his passing on March 20 was confirmed through a Facebook post from author and friend David Brin, citing complications from Parkinson’s disease.

“Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters, and the implications of science,” Brin writes.

The Hugo Award-winning author of sci-fi classics like A Fire Upon the Deep and Rainbows End, Vinge also taught mathematics and computer science at San Diego State University before retiring in 2000 to focus on his writing. In his famous 1983 op-ed, Vinge adapted the physics concept of a “singularity” to describe the moment in humanity’s technological progress marking “an intellectual transition as impenetrable as the knotted space-time at the center of a black hole,” when “the world will pass far beyond our understanding.” The Singularity, Vinge hypothesized, would likely stem from the creation of artificial intelligence systems that surpassed humanity’s evolutionary capabilities. How life on Earth progressed from there was anyone’s guess—something plenty of Vinge-inspired writers have since attempted to imagine.

[Related: What happens if AI grows smarter than humans? The answer worries scientists.]

John Scalzi, bestselling sci-fi author of the Old Man’s War series, wrote in a blog post on Thursday that Vinge’s singularity theory is now so ubiquitous within science fiction and the tech industry that “it doesn’t feel like it has a progenitor, and that it just existed ambiently.”

“That’s a hell of a thing to have contributed to the world,” he continued.

In many ways, Vinge’s visions have arguably been borne out almost to the exact year, as evidenced by the recent, rapid advances within an AI industry whose leaders are openly indebted to his work. In a 1993 essay further expounding on the Singularity concept, Vinge predicted that, “Within thirty years, we will have the technological means to create superhuman intelligence,” likening the moment to the “rise of human life on Earth.”

“Shortly after, the human era will be ended,” Vinge dramatically hypothesized at the time.

Many critics have since (often convincingly) argued that creating a true artificial general intelligence still remains out-of-reach, if not completely impossible. Even then, however, Vinge appeared perfectly capable of envisioning a dizzying, non-Singularity future—humanity may never square off against sentient AI, but it’s certainly already contending with “a glut of technical riches never properly absorbed.”

The post Vernor Vinge, influential sci-fi author who warned of AI ‘Singularity,’ has died appeared first on Popular Science.

AI-generated nonsense is leaking into scientific journals https://www.popsci.com/technology/ai-generated-text-scientific-journals/ Tue, 19 Mar 2024 20:00:00 +0000 https://www.popsci.com/?p=607165
"As of my last knowledge update..."
"As of my last knowledge update...". DepositPhotos

Text outputs from large language models are littering paper mills—and even some peer-reviewed publications.

In February, an absurd, AI-generated rat penis somehow snuck its way into a since-retracted Frontiers in Cell and Developmental Biology article. Now that odd travesty seems like it may just be a particularly loud example of a more persistent problem brewing in scientific literature. Journals are currently at a crossroads over how best to respond to researchers using popular but factually questionable generative AI tools to help draft manuscripts or produce images. Detecting evidence of AI use isn’t always easy, but a new report from 404 Media this week shows what appear to be dozens of partially AI-generated published articles hiding in plain sight. The dead giveaway? Commonly uttered, computer-generated jargon.

404 Media searched for the AI-generated phrase “As of my last knowledge update” in Google Scholar’s public database and reportedly found 115 different articles that appeared to have relied on copy-and-pasted AI model outputs. That string of words is one of many turns of phrase often churned out by large language models like OpenAI’s ChatGPT. In this case, the “knowledge update” refers to the point at which a model’s reference data was last refreshed. Other common generative-AI phrases include “As an AI language model” and “regenerate response.” Outside of academic literature, these AI artifacts have appeared scattered in Amazon product reviews and across social media platforms.
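
To make the pattern concrete, here is a minimal sketch in Python of the kind of string-matching check described above. The phrase list and sample abstracts are illustrative assumptions, not 404 Media’s actual workflow, which simply queried Google Scholar.

    # Flag the telltale LLM phrases described above in a batch of paper abstracts.
    # The phrase list and sample texts are illustrative stand-ins.
    LLM_TELLS = [
        "as of my last knowledge update",
        "as an ai language model",
        "i don't have access to real-time data",
        "regenerate response",
    ]

    def flag_llm_phrases(text):
        """Return any telltale phrases found in the text, ignoring case."""
        lowered = text.lower()
        return [phrase for phrase in LLM_TELLS if phrase in lowered]

    abstracts = {
        "paper_001": "As of my last knowledge update, lithium metal batteries...",
        "paper_002": "We measured entanglement entropy across three lattice sizes...",
    }

    for paper_id, abstract in abstracts.items():
        hits = flag_llm_phrases(abstract)
        if hits:
            print(f"{paper_id}: possible unedited AI output -> {hits}")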

Several of the papers cited by 404 Media appeared to paste the AI text directly into peer-reviewed articles purporting to explain complex research topics like quantum entanglement and the performance of lithium metal batteries. Other examples of journal articles appearing to include the common generative AI phrase “I don’t have access to real-time data” were also shared on X, formerly Twitter, over the weekend. At least some of the examples reviewed by PopSci did appear to relate to research into AI models; in those instances, the AI utterances were part of the subject material itself.

Though several of these phrases appeared in reputable, well-known journals, 404 Media claims the majority of the examples it found stemmed from small, so-called “paper mills” that specialize in rapidly publishing papers, often for a fee and without scientific scrutiny or scrupulous peer review. Researchers have claimed the proliferation of these paper mills has contributed to an increase in bogus or plagiarized academic findings in recent years.

Unreliable AI-generated claims could lead to more retractions  

The recent examples of apparent AI-generated text appearing in published journal articles come amid a general uptick in retractions. A recent Nature analysis found more than 10,000 research papers were retracted last year, more than in any year previously measured. Though the bulk of those cases weren’t tied to AI-generated content, researchers have feared for years that increased use of these tools could let more false or misleading content slip past the peer review process. In the embarrassing rat penis case, the bizarre images and nonsensical AI-produced labels like “dissiliced” and “testtomcels” managed to slip by multiple reviewers either unnoticed or unreported.

There’s good reason to believe articles submitted with AI-generated text will become more commonplace. Back in 2014, the publishers IEEE and Springer together removed more than 120 articles found to have included nonsensical, computer-generated language. The prevalence of AI-generated text in journals has almost surely increased in the decade since, as more sophisticated and easier-to-use tools like OpenAI’s ChatGPT have gained wider adoption.

A 2023 Nature survey of more than 1,600 scientists found that around 30% of respondents admitted to using AI tools to help them write manuscripts. And while phrases like “As an AI algorithm” are dead giveaways exposing a sentence’s large language model (LLM) origin, many other, more subtle uses of the technology are harder to root out. Detection models used to identify AI-generated text have proven frustratingly inadequate.

Those who support permitting AI-generated text in some instances say it can help non-native speakers express themselves more clearly and potentially lower language barriers. Others argue the tools, if used responsibly, could speed up publication times and increase overall efficiency. But publishing inaccurate data or fabricated findings generated by these models risks damaging a journal’s reputation in the long term. A recent paper published in Current Osteoporosis Reports comparing review articles written by humans with ones generated by ChatGPT found the AI-generated examples were often easier to read. At the same time, the AI-generated reports were also filled with inaccurate references.

“ChatGPT was pretty convincing with some of the phony statements it made, to be honest,” Indiana University School of Medicine professor and paper author Melissa Kacena said in a recent interview with Time. “It used the proper syntax and integrated them with proper statements in a paragraph, so sometimes there were no warning bells.”

Journals should agree on common standards around generative AI

Major publishers still aren’t aligned on whether or not to allow AI-generated text in the first place. Since 2022, the Science family of journals has strictly prohibited AI-generated text or images that have not first been approved by an editor. Nature, on the other hand, released a statement last year saying it wouldn’t allow AI-generated images or videos in its journals but would permit AI-generated text in certain scenarios. JAMA currently allows AI-generated text but requires researchers to disclose when it appears and which specific models were used.

These policy divergences create unnecessary confusion both for researchers submitting work and for reviewers tasked with vetting it. Researchers already have an incentive to use whatever tools are at their disposal to publish articles quickly and boost their overall number of published works. An agreed-upon standard for AI-generated content among large journals would set clear boundaries for researchers to follow. The larger, established journals could also further separate themselves from less scrupulous paper mills by drawing firm lines around certain uses of the technology, or by prohibiting it entirely in cases where it’s used to make factual claims.

The post AI-generated nonsense is leaking into scientific journals appeared first on Popular Science.

Silicon Valley wants to deploy AI nursebots to handle your care https://www.popsci.com/technology/ai-nurse-chatbots-nvidia/ Tue, 19 Mar 2024 18:30:00 +0000 https://www.popsci.com/?p=607152
Woman talking with nurse chatbot on iPad
Hippocratic AI is using Nvidia GPUs to power its nurse chatbot avatars. Nvidia / Hippocratic AI / YouTube

Medical startup Hippocratic AI and Nvidia say it's all about the chatbots' 'empathy inference.'

The medical startup Hippocratic AI and Nvidia have announced plans to deploy voice-based “AI healthcare agents.” In demonstration videos provided Monday, at-home patients are depicted conversing with animated human avatar chatbots on tablet and smartphone screens. Examples include a post-op appendectomy screening, as well as a chatbot instructing someone on how to inject penicillin. Hippocratic’s web page suggests providers could soon simply hire its nursebots for less than $9 an hour to handle such tasks, instead of paying what the company claims is an actual registered nurse’s $90-an-hour rate. (The average pay for a registered nurse in the US is $38.74 an hour, according to the U.S. Bureau of Labor Statistics’ 2022 occupational employment statistics survey.)

A patient’s trust in AI apparently is all about a program’s “seamless, personalized, and conversational” tone, said Munjal Shah, Hippocratic AI co-founder and CEO, in the company’s March 18 statement. Based on internal research, people’s ability to “emotionally connect” with an AI healthcare agent reportedly increases “by 5-10% or more” for every half-second of improvement in conversational speed, a dynamic the company has dubbed its “empathy inference” engine. But quickly simulating all that worthwhile humanity requires a lot of computing power—hence Hippocratic’s investment in countless Nvidia H100 Tensor Core GPUs.


“Voice-based digital agents powered by generative AI can usher in an age of abundance in healthcare, but only if the technology responds to patients as a human would,” Kimberly Powell, Nvidia’s VP of healthcare, said on Monday.

[Related: Will we ever be able to trust health advice from an AI?]

But an H100 GPU-fueled nurse-droid’s capacity to spew medical advice nearly as fast as an overworked healthcare worker is only as good as its accuracy and bedside manner. Hippocratic says it has that covered too, of course, and cites as proof internal surveys and beta testing in which over 5,500 nurses and doctors voiced overwhelming satisfaction with the AI. When it comes to its ability to avoid AI’s (well-documented) racial, gender, and age-based biases, however, testing is apparently still underway. And as for where Hippocratic’s LLM derived its diagnostic and conversational information—well, the company is even vaguer about that than it is about its mostly anonymous polled humans.

In the company’s white paper detailing Polaris, its “Safety-focused LLM Constellation Architecture for Healthcare,” Hippocratic AI researchers say their model is trained “on a massive collection of proprietary data including clinical care plans, healthcare regulatory documents, medical manuals, drug databases, and other high-quality medical reasoning documents.” And that’s about it for any info on that front. PopSci has reached out to Hippocratic for more specifics, as well as whether or not patient medical info will be used in future training.

In the meantime, it’s currently unclear when healthcare companies (or, say, Amazon, for that matter) can “augment their human staff” with “empathy inference” AI nurses, as Hippocratic advertises. The company did note it’s already working with over 40 “beta partners” to test AI healthcare agents on a wide gamut of responsibilities, including chronic care management, wellness coaching, health risk assessments, pre-op outreach, and post-discharge follow-ups.

It’s hard to envision a majority of people ever preferring to talk with uncanny chat avatars instead of trained, emotionally invested, properly compensated healthcare workers. But that’s not necessarily the point here. The global nursing shortage remains dire, with recent estimates pointing to a shortage of 15 million health workers by 2030. Instead of addressing the working conditions and wage concerns that led unions representing roughly 32,000 nurses to strike in 2023, Hippocratic claims its supposed cost-effective AI solution is the “only scalable way” to close the shortfall gap—a scalability reliant on Nvidia’s H100 GPU.

The H100 is what helped make Nvidia one of the world’s most lucrative, multitrillion-dollar companies, and the chips still support many large language model (LLM) supercomputer systems. That said, it’s now technically Nvidia’s third most-powerful offering, following last year’s GH200 Grace Hopper Superchip, as well as yesterday’s reveal of the forthcoming Blackwell B200 GPU. Still, at roughly $30,000 to $40,000 per chip, the H100’s price tag is reserved for the sorts of projects valued at half a billion dollars–projects like Hippocratic AI.

But before jumping at the potential savings that an AI labor workaround could provide the healthcare industry, it’s worth considering these bots’ energy costs. For reference, a single H100 GPU requires as much power per day as the average American household.

The post Silicon Valley wants to deploy AI nursebots to handle your care appeared first on Popular Science.

Crypto scammers flooded YouTube with sham SpaceX Starship livestreams https://www.popsci.com/technology/crypto-scam-starship-launch-livestream/ Thu, 14 Mar 2024 15:26:22 +0000 https://www.popsci.com/?p=606533
Starship rocket launching during third test
The SpaceX Starship Flight 3 Rocket launches at the Starbase facility on March 14, 2024 in Brownsville, Texas. The operation is SpaceX's third attempt at launching this rocket into space. The Starship Flight 3 rocket becomes the world's largest rocket launched into space and is vital to NASA's plans for landing astronauts on the Moon and Elon Musk's hopes of eventually colonizing Mars. Photo by Brandon Bell/Getty Images

A fake Elon Musk hawked an ‘amazing opportunity’ during this morning’s big launch.

YouTube is flooded with fake livestream accounts airing looped videos of “Elon Musk” supposedly promoting crypto schemes. This isn’t the first time it has happened, but the website’s layout, verification qualifications, and search results page continue to make it difficult to separate legitimate sources from the con artists attempting to leverage today’s Starship test launch—its most successful to date, although ground control eventually lost contact with the rocket yet again.

Search queries such as “Starship Launch Livestream” surface at least one supposedly verified account within the top ten results that takes users to a video of Elon Musk standing in front of the launchpad for the more than 400-foot-tall rocket in Boca Chica, Texas. Multiple other accounts airing the same clip can be found further down the search results.


“Don’t miss your chance to change your financial life,” a voice similar to Musk’s tells attendees over footage of him attending a previous, actual Starship event. “This initiative symbolizes our commitment to making space exploration accessible to all, while also highlighting the potential of financial innovations represented by cryptocurrencies.”

“…to send either 0.1 Bitcoin or one Ethereum or Dogecoin to the specified address. After completing the transaction within a minute, twice as much Bitcoin or Ethereum will be returned to your address. …It is very important to use reliable and verified sources to scan the QR code and visit the promotion website. This will help avoid possible fraudulent schemes. Please remember administration is not responsible for loss due to not following the rules of our giveaway due to incorrect transactions or the use of unreliable sources. Don’t miss your chance to change your financial life. Connect Cryptocurrency wallet right now and become part of this amazing opportunity. You will receive double the amount reflected in your Bitcoin wallet. This initiative symbolizes our commitment to making space exploration accessible to all while also highlighting the potential of financial innovations are represented by cryptocurrencies. So let us embark on this remarkable journey to financial independence and cosmic discoveries…”

Video: the fake “Elon Musk” livestream clip

It’s unclear if the audio is an AI vocal clone or simply a human impersonation, but either way it is oddly stilted and filled with glitches. A QR code displayed at the bottom of the screen (which PopSci cropped out of the video above) takes viewers to a website falsely advertising an “Official event from SpaceX Company” offering an “opportunity to take a share of 2,000 BTC,” among other massive cryptocurrency hauls.

There are currently multiple accounts mirroring the official SpaceX YouTube page airing simultaneous livestreams of the same scam clip. One of those accounts has been active since May 16, 2022, and has over 2.3 million subscribers—roughly one-third that of SpaceX’s actual, verified profile. Unlike the real company’s locale, however, the fake profile is listed as residing in Venezuela.

[Related: Another SpaceX Starship blew up.]

Scammers have long leveraged Musk’s public image for similar con campaigns. The SpaceX, Tesla, and X CEO is a longtime pusher of various cryptocurrency ventures, and is one of the world’s wealthiest men. Likewise, YouTube is a particularly popular venue for crypto grifters. In June 2020, for example, bad actors made off with $150,000 through copycat SpaceX YouTube channels. Almost exactly two years later, the BBC noted dozens of fake Musk videos advertising crypto scams, earning a public rebuke from the actual Musk himself. The crypto enthusiast outlet BitOK revealed a nearly identical campaign around the time of the November 2023 Starship event.

Update 3/15/24 12:40pm: A YouTube spokesperson confirmed that the company has “terminated four channels in line with our policies which prohibit cryptocurrency phishing schemes.” According to YouTube, video uploads are monitored by a combination of machine learning and human reviewers.

The post Crypto scammers flooded YouTube with sham SpaceX Starship livestreams appeared first on Popular Science.

Researchers propose fourth traffic signal light for hypothetical self-driving car future https://www.popsci.com/technology/fourth-traffic-light-self-driving-cars/ Wed, 13 Mar 2024 16:00:00 +0000 https://www.popsci.com/?p=606404
Traffic light flashing yellow signal
The classic traffic signal design was internationally recognized in 1931. Deposit Photos

It's called 'white' for now, until a color that 'does not create confusion' is picked.

Fully self-driving cars, despite the claims of some companies, aren’t exactly ready to hit the roads anytime soon. There’s even a solid case to be made that completely autonomous vehicles (AVs) will never take over everyday travel. Regardless, some urban planners are already looking into how to make such a future as safe and efficient as possible. According to a team at North Carolina State University, one solution may be upending the more-than-century-old design of traffic signals.

The ubiquity of stoplights’ red-yellow-green phases isn’t a coincidence—the sequence is codified in an international accord dating back to 1931. It has served drivers pretty well since then, but the NC State team argues AVs could eventually create the opportunity for better road conditions, or at the very least could benefit from some infrastructure adjustments.

Last year, researchers led by civil, construction, and environmental engineering associate professor Ali Hajbabaie created a computer model of city commuting patterns indicating everyday driving could one day actually improve with a sizable influx of AVs. Because these vehicles could share copious amounts of real-time sensor information with one another, Hajbabaie and colleagues believe they could hypothetically coordinate far beyond simple intersection changes, adjusting variables like speed and braking times.

To further harness these benefits, they proposed adding a fourth, “white” light to traffic signals. In this scenario, the “white” phase activates whenever enough interconnected AVs approach an intersection. Once lit, the phase indicates nearby drivers should simply follow the car (AV or human-driven) in front of them, instead of trying to anticipate something like a yellow light’s transition time to red. Additionally, the interconnected vehicles could communicate with traffic signal systems to determine when it is best to display “Walk” and “Don’t Walk” pedestrian signals. Based on the team’s modeling, such a change could reduce intersection congestion by at least 40 percent compared to current traffic system optimization software, improving overall travel times, fuel efficiency, and safety.
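
As a rough illustration of that phase-switching logic, here is a toy Python sketch; the 60 percent threshold and the data format are assumptions for demonstration only, not values from the NC State model.

    # Toy version of the proposed fourth phase: if enough approaching vehicles
    # are connected AVs, the signal hands control to vehicle-to-vehicle
    # coordination; otherwise it falls back to the usual cycle.
    def choose_phase(approaching_vehicles, av_share_threshold=0.6):
        if not approaching_vehicles:
            return "standard red/yellow/green cycle"
        av_count = sum(1 for v in approaching_vehicles if v["is_connected_av"])
        if av_count / len(approaching_vehicles) >= av_share_threshold:
            # "White" phase: human drivers simply follow the car ahead while
            # the connected AVs negotiate timing among themselves.
            return "white phase: follow the vehicle in front"
        return "standard red/yellow/green cycle"

    demo = [{"is_connected_av": True}] * 7 + [{"is_connected_av": False}] * 3
    print(choose_phase(demo))  # -> white phase: follow the vehicle in front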

[Related: What can ‘smart intersections’ do for a city? Chattanooga aims to find out.]

But for those concerned about the stressful idea of confusing, colorless lights atop existing signals, don’t worry—the “white” is just a theoretical stand-in until regulators decide on something clearer.

“Research needs to be done to find the best color/indication,” Hajbabaie writes in an email to PopSci. “Any indication/color could be used as long as it does not associate with any existing message and does not create confusion.”

This initial model had a pretty glaring limitation, however—it did not really take pedestrians into much consideration. In the year since, Hajbabaie’s team has updated their four-phase traffic light computer model to account for this crucial factor in urban traffic. According to their new results published in Computer-Aided Civil and Infrastructure Engineering, the NC State researchers determined that even with humans commuting on foot, an additional fourth light could reduce delays at intersections by as much as 25 percent from current levels.

Granted, this massive reduction depends on an “almost universal adoption of AVs,” Hajbabaie said in a separate announcement this week. Given the current state of the industry, such a future seems much further down the road than many have hoped. Even so, the team believes that a modest increase in AVs on roads—coupled with something like this fourth “white” phase—could still improve conditions in a meaningful way. What’s more, Hajbabaie says that waiting for fully autonomous cars may not be necessary.

“We think that this concept would [also] work with vehicles that have adaptive cruise control and some sort of lateral movement controller such as lane keeping feature,” he tells PopSci. “Having said that, we think we would require more sensors in the intersection vicinity to be able to observe the location of vehicles if they are not equipped with all the sensors that smart cars will be equipped with.”

But regardless of whether cities ever reach a driverless car future, it’s probably best to just keep investing in green urban planning projects like cycling lanes, protected walkways, and even e-bikes. They’re simpler, and more eco-friendly. 

The post Researchers propose fourth traffic signal light for hypothetical self-driving car future appeared first on Popular Science.

TSA is testing a self-screening security checkpoint in Vegas https://www.popsci.com/technology/tsa-vegas-self-screening/ Thu, 07 Mar 2024 16:37:31 +0000 https://www.popsci.com/?p=605766
Passenger standing at self-scan TSA station
The prototype is meant to resemble a grocery store's self checkout kiosk. Credit: TSA at Harry Reid International Airport at Las Vegas

The new prototype station is largely automated, and transfers much of the work onto passengers.

The Transportation Security Administration is launching the pilot phase of an autonomous self-screening checkpoint system. Unveiled earlier this week and scheduled to officially open on March 11 at Harry Reid International Airport in Las Vegas, the station resembles grocery store self-checkout kiosks—but instead of scanning milk and eggs, you’re expected to…scan yourself to ensure you aren’t a threat. Or at least that’s how it looks.

“We are constantly looking at innovative ways to enhance the passenger experience, while also improving security,” TSA Administrator David Pekoske said on Wednesday, claiming “trusted travelers” will be able to complete screenings “at their own pace.”

For now, the prototype station is only available to TSA PreCheck travelers, although additional passengers could use similar self-scan options in the future, depending on the prototype’s success. Upon reaching the Las Vegas airport’s “TSA Innovation Checkpoint,” users will see something similar to a standard security check, with the addition of a camera-enabled video screen. TSA agents are still nearby, but they won’t directly interact with passengers unless asked for assistance, which may also take the form of a virtual agent popping up on the video screen.

Woman standing in TSA self scan booth at airport
A woman standing in the TSA’s self-screening security checkpoint in Las Vegas. Credit: TSA at Harry Reid International Airport at Las Vegas

The new self-guided station’s X-ray machines function similarly to standard checkpoints, while its automated conveyor belts feed all luggage into a more sensitive detection system. That latter tech, however, sounds a little overly cautious at the moment. In a recent CBS News video segment, items as small as a passenger’s hair clips triggered the alarm. That said, the station is designed to allow “self-resolution” in such situations to “reduce instances where a pat-down or secondary screening procedure would be necessary,” according to the TSA.

[Related: The post-9/11 flight security changes you don’t see.]

The TSA’s proposed solution to one of airports’ most notorious bottlenecks comes at a tricky moment for both the travel and automation industries. A string of recent, high-profile technological and manufacturing snafus have, at best, severely inconvenienced passengers and, at worst, absolutely terrified them. Meanwhile, businesses’ aggressive implementation of self-checkout systems has backfired in certain markets as consumers increasingly voice frustrations with the often finicky tech. Critics, for their part, contend that automation “solutions” like the TSA’s new security checkpoint project are simply ways to employ fewer human workers, who often ask for pesky things like living wages and health insurance.

Whether or not self-scanning checkpoints become an airport staple won’t be certain for a few years. The TSA cautioned as much in this week’s announcement, going so far as to say some of these technologies may simply find their way into existing security lines. Until then, the agency says its new prototype at least “gives us an opportunity to collect valuable user data and insights.”

And if there’s anything surveillance organizations love, it’s all that “valuable user data.”

The post TSA is testing a self-screening security checkpoint in Vegas appeared first on Popular Science.

AI promised humanlike machines–in 1958 https://www.popsci.com/technology/ai-humanoid-robots-history/ Sun, 03 Mar 2024 17:00:00 +0000 https://www.popsci.com/?p=605203
vintage photo of scientists with a robot prototype
Frank Rosenblatt with the Mark I Perceptron, the first artificial neural network computer, unveiled in 1958. National Museum of the U.S. Navy/Flickr

We’ve been here before.

This article was originally featured on The Conversation.

A roomsize computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.

The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.

Much as in modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction the next time around. Familiar modern AI systems work in much the same way: using a prediction-based format, large language models, or LLMs, produce impressive long-form text responses and associate images with text to generate new images from prompts. These systems get better and better as they interact more with users.
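
The learning rule at the heart of that description fits in a few lines of Python. This is a bare-bones sketch of a perceptron update, using a toy AND-gate dataset as a stand-in for the image categories the Mark I actually handled:

    # When the prediction is wrong, nudge each weight toward the correct answer.
    def train_perceptron(samples, epochs=10, lr=0.1):
        weights, bias = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                prediction = 1 if activation > 0 else 0
                error = target - prediction  # 0 when correct, +/-1 when wrong
                # Wrong answers adjust the "connections" for the next pass.
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train_perceptron(and_gate))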

AI boom and bust

In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.

It quickly became apparent that the AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language – a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.

However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new expert systems, AIs designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could make complex inferences from simple stories, the first driverless car was ready to hit the road, and robots that could read and play music were playing for live audiences.

But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because they couldn’t handle novel information.

The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers were tackling the problem of knowledge acquisition with data-driven approaches to machine learning that changed how AI acquired knowledge.

This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, made it easier to collect images, mine for data and distribute datasets for machine learning tasks.

Familiar refrains

Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the activities of LLMs like those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.

Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “GPT-4’s performance is strikingly close to human-level performance.”

But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.

For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm–unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.

Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.

Lessons to heed

In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.

The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.

The post AI promised humanlike machines–in 1958 appeared first on Popular Science.

Why scientists are tracking whale tails with AI https://www.popsci.com/technology/whale-ai-tails/ Fri, 01 Mar 2024 19:43:36 +0000 https://www.popsci.com/?p=605238
tail of a humpback whale in the ocean sticking out of the waves
“Having an algorithm like this dramatically speeds up the information-gathering process.". DepositPhotos

A model similar to facial recognition is being used to reveal urgent news about humpback whales.

Researchers using an AI photo-scanning tool similar to facial recognition have learned that there’s been a 20% decline in North Pacific Ocean humpback whale populations over the past decade. The researchers pointed to a climate change related heat wave as a possible culprit. The findings, published this week in Royal Society Open Science, used the artificial intelligence-powered image detection model to analyze more than 200,000 photographs of humpback whales taken between 2001 and 2022. 

Facial recognition models used to identify humans have faced sustained criticism from researchers and advocates who say the models struggle to accurately identify nonwhite people. In this case, the model scanning humpback whale photos was trained to spot and recognize unique identifiers on a whale’s tail fluke. These identifiers function like a one-of-a-kind whale fingerprint and can consist of marks, variations in pigmentation, scarring, and overall size. Researchers used successful photo matches to inform estimates of humpback whale populations over time.

[ Related: The government is going to use facial recognition more. That’s bad. ]

Images of the whale tails, captured by scientists and whale watchers alike, are stored by a nonprofit called HappyWhale, which describes itself as the “largest individual identification resource ever built for marine mammals.” HappyWhale encourages everyday “citizen scientists” to take photos of whales they see and upload them to its growing database. The photos include the date and location where each whale was spotted.

From there, users can track a whale they photographed and contribute to a growing corpus of data researchers can use to more accurately understand the species’ population and migration patterns. Prior to this AI-assisted method, experts had to comb through individual whale tail photographs looking for similarities with the naked eye, a process both painstaking and time-consuming. Image matching technology speeds up that process, giving researchers more time to investigate changes in population data.
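
In broad strokes, that matching step works by reducing each fluke photo to a numeric “fingerprint” and ranking known whales by similarity. The Python sketch below illustrates the idea only; the hard-coded embeddings stand in for the output of a trained vision model and are not HappyWhale’s actual code.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def best_match(new_embedding, catalog):
        """catalog maps known whale IDs to previously computed photo embeddings."""
        return max(catalog, key=lambda whale_id: cosine_similarity(new_embedding, catalog[whale_id]))

    catalog = {"whale_A": [0.9, 0.1, 0.3], "whale_B": [0.2, 0.8, 0.5]}
    new_sighting = [0.88, 0.15, 0.28]  # embedding of a freshly uploaded fluke photo
    print(best_match(new_sighting, catalog))  # -> whale_A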

“Having an algorithm like this dramatically speeds up the information-gathering process, which hopefully speeds up timely management actions,” Philip Patton, a University of Hawaii at Manoa PhD student who has worked with the tool, said in a previous interview with Spectrum News.

Humpback whales, once on the brink of extinction, have seen their population grow in the 40 years since commercial hunting of the species was made illegal, so much so that the giant mammals were removed from the endangered species list in the US in 2016. But that rebound is at risk of being short-lived. Researchers analyzing the whale data estimate their population peaked in 2012 at around 33,488. Then, the numbers started trickling downwards. From 2012 to 2021, the whale population dropped down to 26,662, a decline of around 20%. Researchers say that downward trend coincided with a record heat wave that raised ocean temperatures and may have “altered the course of species recovery.” 

That historic heat wave resulted in rising sea surface temperatures and decreases in nutrient-rich water, which in turn led to reductions in phytoplankton biomass. These changes caused greater disruptions in the food chain, which the researchers say limited the whales’ access to krill and other food sources. While they acknowledged ship collisions and entanglements could be responsible for some of the losses, the researchers said those factors couldn’t account for the entirety of the decline.

“These advances have shifted the abundance estimation paradigm from data scarcity and periodic study to continuous and accessible tracking of the ocean-basin- wide population through time,” the researchers wrote. 

Facial recognition can shed light on animals on a population level 

Whales aren’t the only animals having their photos run through image detection algorithms. Scientists use various forms of the technology to research populations of cows, chickens, salmon, and lemurs, among other species. Though the technology is primarily used as an aid for conservation and population estimation, some researchers have reportedly used it to analyze facial cues in domesticated sheep to determine whether or not they felt pain in certain scenarios. Others have used photo matching software to try to find missing pets.

[ Related: Do all geese look the same to you? Not to this facial recognition software. ]

These examples and others highlight the upside of image and pattern matching algorithms capable of sifting through vast image databases. In the case of conservation, accurate population estimates made possible by these technologies can help inform whether or not certain species require endangered classifications or other resources to help maintain their healthy population.

The post Why scientists are tracking whale tails with AI appeared first on Popular Science.

OpenAI wants to devour a huge chunk of the internet. Who’s going to stop them? https://www.popsci.com/technology/openai-wordpress-tumblr/ Thu, 29 Feb 2024 15:43:16 +0000 https://www.popsci.com/?p=604994
Vacuum moving towards two blocks with Wordpress and Tumblr logos
WordPress supports around 43 percent of the internet you're most likely to see. Deposit Photos

The AI giant plans to buy WordPress and Tumblr data to train ChatGPT. What could go wrong?

You probably don’t know about Automattic, but they know you.

As the parent company of WordPress, its content management systems host around 43 percent of the internet’s 10 million most popular websites. Meanwhile, it also owns a vast suite of mega-platforms including Tumblr, where a massive amount of embarrassing personal posts live. All this is to say that, through all those countless Terms & Conditions and third-party consent forms, Automattic potentially has access to a huge chunk of the internet’s content and data.

[Related: OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss.]

According to 404 Media earlier this week, Automattic is finalizing deals with OpenAI and Midjourney to provide a ton of that information for their ongoing artificial intelligence training pursuits. Most people see the results in chatbots, since tech companies need the text within millions of websites to train large language model conversational abilities. But this can also take the form of training facial recognition algorithms using your selfies, or improving image and video generation capabilities by analyzing original artwork you uploaded online. It’s hard to know exactly what and how much data is used, however, since companies like Midjourney and OpenAI maintain black box tech products—such is the case in this imminent business deal.

So, what if you wanna opt out of ChatGPT devouring your confessional microblog entries or daily workflows? Good luck with that.

When asked to comment, a spokesperson for Automattic directed PopSci to its “Protecting User Choice” page, published Tuesday afternoon after 404 Media’s report. The page attempts to offer you a number of assurances. There’s now a privacy setting to “discourage” search engine indexing sites on WordPress.com and Tumblr, and Automattic promises to “share only public content” hosted on those platforms. Additional opt-out settings will also “discourage” AI companies from trawling data, and Automattic plans to regularly update its partners on which users “newly opt out,” so that their content can be removed from future training and past source sets.

There is, however, one little caveat to all this:

“Currently, no law exists that requires crawlers to follow these preferences,” says Automattic.
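
In practice, those preferences work much like a robots.txt file: a well-behaved crawler checks them and stays away, but compliance is entirely voluntary. A short sketch using Python’s standard library shows the check; the blog URL is hypothetical, and “GPTBot” is the user agent OpenAI publishes for its web crawler.

    from urllib import robotparser

    parser = robotparser.RobotFileParser()
    parser.set_url("https://example-blog.wordpress.com/robots.txt")  # hypothetical site
    parser.read()

    # A compliant crawler asks before fetching; nothing forces it to honor the answer.
    allowed = parser.can_fetch("GPTBot", "https://example-blog.wordpress.com/private-post/")
    print("fetch allowed" if allowed else "crawler should skip this page")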

“From what I have seen, I’m not exactly sure what could be shared with AI,” says Erin Coyle, an associate professor of media and communication at Temple University. “We do have a confusing landscape right now, in terms of what data privacy rights people have.”

To Coyle, nebulous access to copious amounts of online user information “absolutely speaks” to an absence of cohesive privacy legislation in the US. One of the biggest challenges impeding progress is the fact that laws, by and large, are reactive rather than preventative.

“There is no data privacy in general.”

“It’s really hard for legislators to get ahead of the developments, especially in technology,” she adds. “While there are arguments to be made for them to be really careful and cautious… it’s also very challenging in times like this, when the technology is developing so rapidly.”

As companies like OpenAI, Google, and Meta continue their AI arms race, it’s the everyday people providing the bulk of the internet’s content—both public and private—who are caught in the middle. Clicking “Yes” to the manifesto-length terms and conditions prefacing almost every app, site, or social media platform is often the only way to access those services.

“Everything is about terms of service, no matter what website we’re talking about,” says Christopher Terry, a University of Minnesota journalism professor focused on regulatory and legal analysis of media ownership, internet policy, and political advertising.

Speaking to PopSci, Terry explains that basically every single terms of service agreement you have signed online is a legal contractual obligation with whoever is running a website. Delve deep enough into the legalese, and “you’re gonna see you agreed to give them, and allow them to use, the data that you generate… you allowed them to monetize that.”

Of course, when was the last time you actually read any of those annoying pop-ups?

“There is no data privacy in general,” Terry says. “With the digital lives that we have been living for decades, people have been sharing so much information… without really knowing what happens to that information,” Coyle continues. “A lot of us signed those agreements without any idea of where AI would be today.”

And all it takes to sign away your data for potential AI training is a simple Terms of Service update notification—another pop-up that, most likely, you didn’t read before clicking “Agree.”

You either opt out, or you’re in

Should Automattic complete its deal with OpenAI, Midjourney, or any other AI company, some of those very same update alerts will likely pop up across millions of email inboxes and websites—and most people will reflexively shoo them away. But according to some researchers, even offering voluntary opt-outs in such situations isn’t enough.

“It is highly probable that the majority of users will have no idea that this is an option and/or that the partnership with OpenAI/Midjourney is happening,” Alexis Shore, a Boston University researcher focused on technology policy and communication studies, writes to PopSci. “In that sense, giving users this opt-out option, when the default settings allow for AI crawling, is rather pointless.”

“They’re going all in on it right now while they still can.”

Experts like Shore and Coyle think one potential solution is a reversal in approach—changing voluntary opt-outs to opt-ins, as is increasingly the case for internet users in the EU thanks to its General Data Protection Regulation (GDPR). Unfortunately, US lawmakers have yet to make much progress on anything approaching that level of oversight.

The next option, should you have enough evidence to make your case, is legal action. And while copyright infringement lawsuits continue to mount against companies like OpenAI, it will be years before their legal precedents are established. By then, it’s anyone’s guess what the AI industry will have done to the digital landscape, and your privacy. Terry compares the moment to a 19th-century gold rush.

“They’re going all in on it right now while they still can,” he says. “You’re going out there to stake out your claim right now, and you’re pouring everything you can into that machine so that later, when that’s a [legal] problem, it’s already done.”

 Neither OpenAI nor Midjourney responded to multiple requests for comment at the time of writing.

The post OpenAI wants to devour a huge chunk of the internet. Who’s going to stop them? appeared first on Popular Science.

OpenAI wants to make a walking, talking humanoid robot smarter https://www.popsci.com/technology/openai-wants-to-make-a-walking-talking-humanoid-robot-smarter/ Thu, 29 Feb 2024 13:00:00 +0000 https://www.popsci.com/?p=604845
OpenAI is partnering with Figure to help it develop a general purpose humanoid robot capable of working alongside humans and holding conversations.
OpenAI is partnering with Figure to help it develop a general purpose humanoid robot capable of working alongside humans and holding conversations. Figure

Figure’s founder Brett Adcock says a new partnership with OpenAI could help its robots hold conversation and learn from its mistakes over time.

Just a few years ago, attempts at autonomous, human-shaped bipedal robots were laughable and far-fetched. Two-legged robots competing in high-profile Pentagon challenges famously stumbled and fell their way through obstacle courses like an inebriated pub-crawler while Tesla’s highly-hyped humanoid bot, years later, turned out to be nothing more than a man dancing in a skin-tight bodysuit.

But despite those gaffes, robotics firms pressed on, and several now believe their walking machines could work alongside human manufacturing workers in only a few short years. Figure, one of the more prominent companies in the humanoid robot space, this week told PopSci it raised $675 million in funding from some of the tech industry’s biggest players, including Microsoft, Nvidia, and Amazon founder Jeff Bezos. The company also announced it has struck a new agreement with generative AI giant OpenAI to “develop next generation AI models for humanoid robots.” The partnership marks one of the most significant examples yet of an AI software company working to integrate its tools into physical robots.

[ Related: BMW plans to put humanoid robots in a South Carolina factory to do… something ]

Figure Founder and CEO Brett Adcock described the partnership as a “huge milestone for robotics.” Eventually, Adcock hopes the partnership with OpenAI will lead to a robot that can work side-by-side with humans completing tasks and holding a conversation. By working with OpenAI, creators of the world’s most popular large language model, Adcock says Figure will be able to further improve the robot’s “semantic” understanding which should make it more useful in work scenarios. 

“I think it’s getting more clear that this [humanoid robotics] are becoming more and more an engineering problem than it is a research problem,” Adcock said. “Actually being able to build a humanoid [robot] and put it into the world of useful work is actually starting to be possible.” 

Why is OpenAI working with a humanoid robotics company? 

Founded in 2021, Figure is developing a 5-foot-6-inch, 130-pound bipedal “general purpose” robot it claims can lift objects of around 45 pounds and walk at 2.7 miles per hour. Figure believes its robots could one day help address possible labor shortages in manufacturing jobs and generally “enable the automation of difficult, unsafe, or tedious tasks.” Though it’s unclear just how reliably current humanoid robots can actually execute those types of tasks, Figure recently released a video showing its Figure 01 model slowly walking toward a stack of crates, grabbing one with its two hands, and loading it onto a conveyor belt. The company claims the robot performed the entire job autonomously.


Supporters of humanoid-style robots say their bi-pedal form-factor makes them more adept at climbing stairs and navigating uneven or unpredictable ground compared to the more typical wheeled or tracked alternatives. The technology underpinning these types of robots has notably come a long way from the embarrassing stumbles of previous years. Speaking with Wired last year, Figure Chief Technology Officer Jerry Pratt said Figure’s robots could complete the Pentagon’s test course in a quarter of the time it took machines to finish it back in 2015, thanks in part to advances in computer vision technology. Other bipedal robots, like Boston Dynamics’ Atlas, can already perform backflips and chuck large objects.  

Figure says its new “collaboration agreement” with OpenAI will combine OpenAI’s research with its own experience in robotics hardware and software. If successful, Figure believes the partnership will enhance its robot’s ability to “process and reason from language.” That ability to understand language and act on it could, in theory, allow the robots to work better alongside human warehouse workers or take verbal commands.

“We see a tremendous advantage of having a large language model or multi models model on the robot so that we can interact with it and give what we call ‘semantic understanding,’” Adcock said. 

Over the long-term, Adcock said people interacting with the Figure should be able to speak with the robot in plain language. The robot can then create a list of tasks and complete them autonomously. The partnership with OpenAI could also help the Figure robot self-correct and learn from its past mistakes, which should lead to quicker improvements in tasks. The Figure robot already possesses the ability to speak, Adcock said, and can use its cameras to describe what it “sees” in front of it. It can also describe what may have happened in a given area over a period of time. 
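
Conceptually, that “plain language in, task list out” loop splits the work between a language-model planner and the robot’s low-level controller. The outline below is purely illustrative; every function is a hypothetical placeholder, not Figure’s or OpenAI’s actual interface.

    def plan_tasks(instruction):
        # A multimodal model would generate this list from the spoken instruction
        # plus camera input; it is hard-coded here for illustration.
        return ["walk to the pallet", "grasp the top crate", "place the crate on the conveyor"]

    def execute(task):
        print(f"executing: {task}")
        return True  # a real controller would report success or failure

    def run(instruction):
        for task in plan_tasks(instruction):
            if not execute(task):
                # Failures would feed back to the planner so the robot can self-correct.
                print(f"replanning after failure on: {task}")
                break

    run("Move those crates onto the conveyor belt.")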

“We’ve always planned to come back to robotics and we see a path with Figure to explore what humanoid robots can achieve when powered by highly capable multimodal models,” OpenAI VP of product and partnerships Peter Welinder said in a statement sent to PopSci.

OpenAI and Figure aren’t the only ones trying to integrate language models into human-looking robots. Last year, Elon Musk biographer Walter Isaacson wrote an article for Time claiming the Tesla CEO was exploring ways to integrate his company’s improving Optimus humanoid robot with its “Dojo” supercomputer, with the goal of creating so-called artificial general intelligence, a term some researchers use to describe a machine capable of performing above human level at many tasks.

Tech giants are betting big on Figure to win out in a brewing humanoid robot race 

Figure hopes the support from OpenAI, in addition to its massive new wave of funding, could speed up its timeline for making the product commercially available. The $675 million in funding Figure revealed this week was reportedly over $150 million more than the amount it had initially sought, according to Bloomberg. The company says it’s planning to use that capital to scale up its AI training and robot manufacturing, and to hire new engineers. Figure currently has 80 employees.

But Figure isn’t the only company looking to commercialize humanoid robots. 1X Technologies AS, another humanoid robotics company with significant investment from OpenAI, recently raised $100 million. Oregon-based Agility Robotics, which demonstrated how its robots could perform a variety of simple warehouse tasks autonomously, is reportedly already testing machines in Amazon warehouses. Figure, for its part, recently announced a partnership with BMW to bring the humanoid robot to the carmaker’s Spartanburg, South Carolina manufacturing facility. 

All of these companies are racing to cement their place as an early dominant force in an industry some supporters believe could be a real money-maker in the near-future. In 2022, Goldman Sachs predicted the global humanoid robot market could reach $154 billion by 2035. If that sounds like a lot, it’s a fraction of the $3 trillion financial services company Macquarie estimates the industry could be worth by 2050. That’s roughly the value of Apple today. 

But much still has to happen before any of those lofty visions resemble reality. These still-developing technologies are just now being trialed and tested within major manufacturing facilities. The most impressive of these robots, like the dancing giants produced by Boston Dynamics, remain extremely expensive to manufacture. It’s also still unclear whether or not these robots can, or ever will, be able to respond to complex tasks with the same degree of flexibility as a human worker. 

Generally, it’s still unclear what exact problems these robots are best suited to solve. Both Elon Musk and Figure have said their machines could take on assignments too dangerous or unappealing for humans, though those exact use cases haven’t been clearly articulated. BMW, for example, previously told PopSci it was still “investigating concepts” when asked how it plans to deploy Figure’s robots. Adcock went a step further, suggesting the Figure robot could be used to move sheet metal or perform other body shop tasks. He said Figure has five primary use cases in mind for the robot at the facility, though the company has not yet announced them publicly.

The issue of what to do with these robots when they are made isn’t unique to Figure. In an interview with PopSci, Carnegie Mellon Department of Mechanical Engineering Associate Professor Ding Zhao called that issue of use-cases the “billion-dollar question.” 

“Generally speaking, we are still exploring the capabilities of humanoid robots, how effectively we can collect data and train them, and how to ensure their safety when they interact with the physical world.” 

Zhao went on to say that companies building robots intended to work alongside humans will also have to invest heavily in safety, a cost he argued could even match or exceed development costs.

The robots themselves need to improve as well, especially in real-world work environments that are less predictable and more “messy” than typical robot training facilities. Adcock says the robot’s speed at tasks and its ability to handle larger and more diverse payloads will also need to increase. But all of those challenges, he argued, can be addressed with powerful AI models like the ones OpenAI is building.

“We think we can solve a lot of this with AI systems,” Adcock said. “We really believe here that the future of general purpose robots is through learning, through AI learning.”

The post OpenAI wants to make a walking, talking humanoid robot smarter appeared first on Popular Science.

The Apple Car is dead https://www.popsci.com/technology/apple-car-dead/ Wed, 28 Feb 2024 17:00:00 +0000 https://www.popsci.com/?p=604807
Apple logo in store
Plans for an Apple car date as far back as 2014, but the project is no more. Deposit Photos

Apple has officially scrapped its multibillion dollar autonomous EV plans to focus on AI.

The post The Apple Car is dead appeared first on Popular Science.


It turns out that last month’s report on Apple kicking its tortured, multibillion dollar electric vehicle project down the road another few years was a bit conservative. During an internal meeting on Tuesday, company representatives informed employees that all EV plans are officially scrapped. After at least a decade of rumors, research, and arguably unrealistic goals, it would seem that CarPlay is about as much as you’re gonna get from Apple while on the roads. RIP, “iCar.”

The major strategic decision, first reported by Bloomberg, also appears to reaffirm Apple’s continuing shift towards artificial intelligence. Close to 2,000 Special Projects Group employees worked on car initiatives, many of whom will now be folded into various generative AI divisions. The hundreds of vehicle designers and hardware engineers formerly focused on the Apple car can apply to other positions, although yesterday’s report makes clear that layoffs are imminent.

[Related: Don’t worry, that Tesla driver only wore the Apple Vision Pro for ’30-40 seconds’]

Previously referred to as Project Titan or T172, Apple’s intentions to break into the automotive market date as far back as at least 2014. It was clear from the start that Apple executives such as CEO Tim Cook wanted an industry-changing product akin to the iPod or iPhone—an electric vehicle with fully autonomous driving capabilities, voice-guided navigation software, no steering wheel or even pedals, and a “limousine-like interior.”

As time progressed, however, it became clear—both internally and vicariously through competitors like Tesla—that such goals were lofty, to say the least. Throughout multiple leadership shakeups, reorganizations, and reality checks, an Apple car began to sound much more like existing EVs already on the road. Basic driver components returned to the design, and AI navigation plans were downgraded from full autonomy to currently available technology such as acceleration assist, brake controls, and adaptive steering. Even then, recent rumors pointed towards the finalized car still costing as much as $100,000, a hyper-luxury price point that reportedly concerned company leaders.

This isn’t the first time Apple has pulled the plug on a major project—2014, for example, saw the abandonment of a 4K Apple smart TV. But the company has rarely, if ever, spent as much time and money on a product that never even officially debuted, much less made it to market.

Fare thee well, Apple Car. You sounded pretty cool, but it’s clear Tim Cook believes the company’s future profits reside in $3,500 “spatial computing” headsets and attempting to integrate generative AI into everything. For now, the closest anyone will get to an iCar is wearing Apple Vision Pro while seated in a Tesla… something literally no one recommends.

The post The Apple Car is dead appeared first on Popular Science.

Google pauses Gemini image tools after AI generates ‘inaccuracies’ in race https://www.popsci.com/technology/google-gemini-inaccuracies-race/ Fri, 23 Feb 2024 15:04:22 +0000 https://www.popsci.com/?p=603897
Twitter user @__Link_in_Bio__ said it was important for AI companies to portray diversity but said Google's approach lacked nuance and felt “bolted on.”
Twitter user @__Link_in_Bio__ said it was important for AI companies to portray diversity but said Google's approach lacked nuance and felt “bolted on.” X.com

Gemini generated nonwhite World War II Nazis, Vikings, and other historically or predominantly white figures, sparking an angry backlash on X.

The post Google pauses Gemini image tools after AI generates ‘inaccuracies’ in race appeared first on Popular Science.


Facing bias accusations, Google this week was forced to pause the image generation portion of Gemini, its generative AI model. The temporary suspension follows backlash from users who criticized it for allegedly placing too much emphasis on ethnic diversity, sometimes at the expense of accuracy. Prior to Google pausing services, Gemini was found producing racially diverse depictions of World War II-era Nazis, Viking warriors, the US Founding Fathers, and other historically white figures.

In a statement released Wednesday, Google said Gemini’s image generation capabilities were “missing the mark” and said it was “working to improve these kinds of depictions immediately.” Google then suspended access to the image generation tools altogether on Thursday morning and said it would release a new version of the model soon. Gemini refused to generate any images when PopSci tested the service Thursday morning, instead stating: “We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.” As of this writing, Gemini is still down. Google directed PopSci to its latest statement when reached for comment.

Gemini’s image generations draw complaints  

Google officially began rolling out its image generation tools in Gemini earlier this month but controversy over its non-white depictions heated up this week. Users on X, formerly Twitter, began sharing screenshots of examples where Gemini reportedly generated images of nonwhite people when specifically prompted to depict a white person. In other cases, Gemini reportedly appeared to over-represent non-white people when prompted to generate images of historical groups that were predominantly white, critics claim.

The posts quickly attracted the attention of right-wing social media circles which have taken issue with what they perceive as heavy-handed diversity and equity initiatives in American politics and business. In more extreme circles, some accounts used the AI-generated images to stir up an unfounded conspiracy theory accusing Google of purposely trying to eliminate white people from Gemini image results.

How has Google responded to the Gemini controversy? 

Though the controversy surrounding Gemini seems to stem from critics arguing Google doesn’t place enough emphasis on white individuals, experts studying AI have long said AI models do just the opposite and regularly underrepresent nonwhite groups. In a relatively short period of time, AI systems trained on culturally biased datasets have amassed a history of repeating and reinforcing stereotypes about racial minorities. Safety researchers say this is why tech companies building AI models need to responsibly filter and tune their products. Image generators, and AI models more broadly, often repeat or reinforce culturally biased patterns absorbed from their training data, a dynamic researchers sometimes refer to as “garbage in, garbage out.”

A version of this played out following the rollout of OpenAI’s DALL-E image generator in 2022, when AI researchers criticized the company for allegedly reinforcing age-old gender and racial stereotypes. At the time, for example, users asking DALL-E to produce images of a “builder” or a “flight attendant” would receive results exclusively depicting men and women, respectively. OpenAI has since made tweaks to its models to try to address these issues.
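The “garbage in, garbage out” dynamic is easy to demonstrate with a toy model that does nothing but memorize skewed caption statistics. The dataset below is invented for illustration and has nothing to do with how DALL-E or Gemini are actually trained, but it shows how a skew in the inputs reappears, almost unchanged, in the outputs.

```python
# A toy illustration of "garbage in, garbage out": sample from a "model" that
# has only memorized the made-up caption statistics below, and the skew in the
# data reappears in the output. Purely illustrative; no real model is used.
import random
from collections import Counter

# Hypothetical, deliberately skewed "training data": (occupation, observed gender)
training_captions = (
    [("builder", "man")] * 95 + [("builder", "woman")] * 5 +
    [("flight attendant", "woman")] * 90 + [("flight attendant", "man")] * 10
)

def sample_depiction(occupation: str, n: int = 1000) -> Counter:
    pool = [gender for occ, gender in training_captions if occ == occupation]
    return Counter(random.choice(pool) for _ in range(n))

print(sample_depiction("builder"))           # roughly 95% "man"
print(sample_depiction("flight attendant"))  # roughly 90% "woman"
```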

[ Related: How this programmer and poet thinks we should tackle racially biased AI ]

It’s possible Google was attempting to counterbalance some biases with Gemini but made an overcorrection during that process. In a statement released Wednesday, Google said Gemini does generate a wide range of people, which it said is “generally a good thing because people around the world use it.” Google did not respond to PopSci’s requests for comment asking for information on why Gemini may have produced the image results in question.
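One mechanism floated by outside commentators, and not confirmed by Google, is blanket prompt rewriting: appending diversity instructions to every image request without checking context. The toy sketch below is purely speculative and illustrative of that idea, not a description of Gemini’s actual pipeline.

```python
# Hypothetical rule: always broaden the depiction, regardless of context.
def naive_rewrite(prompt: str) -> str:
    return prompt + ", depicting a diverse range of ethnicities and genders"

for prompt in ["a doctor talking to a patient",
               "the US Founding Fathers signing the Declaration of Independence"]:
    print(naive_rewrite(prompt))
# The first rewrite is unobjectionable; the second is where an unconditional
# rule collides with historical accuracy. Checking for historical context
# before rewriting would be one obvious fix.
```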

It’s not immediately clear what caused Gemini to produce the images it did, but some commentators have theories. In an interview with Platformer’s Casey Newton Thursday, former OpenAI head of trust and safety Dave Willner said balancing how AI models responsibly generate content is complex and Google’s approach “wasn’t exactly elegant.” Willner suspected these missteps could be attributed, at least in part, to a lack of resources provided to Google engineers to approach the nuanced area properly.

[ Related: OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss ]

Gemini Senior Director of Product Jack Krawczyk elaborated on that further in a post on X, where he said the model’s non-white depictions of people reflected the company’s “global user base.” Krawczyk defended Google’s approach towards representation and bias, which he said aligned with the company’s core AI principles, but said some “inaccuracies” may be occurring in regard to historical prompts.

“We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Krawczyk said on a now restricted account. “Historical contexts have more nuance to them and we will further tune to accommodate that,” he added. 

Neither Krawczyk nor Google immediately responded to criticisms from users who said the racial representation choices extended beyond strictly historical figures. These claims, it’s worth stating, should be taken with a degree of skepticism. Some users expressed different experiences and PopSci was unable to replicate the findings. In some cases, other journalists claimed Gemini had refused to generate images for prompts asking the AI to create images of either Black or white people. In other words, user experiences with Gemini in recent days appear to have varied widely. 

Google released a separate statement Thursday saying it was pausing Gemini’s image generation capabilities while it worked to address “inaccuracies” and apparent disproportionate representations. The company said it would re-release a new version of the model “soon” but didn’t provide any specific date. 

Though it’s still unclear what caused Gemini to generate the content that resulted in its temporary pause, the flavor of online blowback Google received likely won’t end for AI-makers anytime soon.

The post Google pauses Gemini image tools after AI generates ‘inaccuracies’ in race appeared first on Popular Science.

Washington puzzles over AI in health care https://www.popsci.com/technology/ai-health-care-regulation-questions/ Thu, 22 Feb 2024 13:00:00 +0000 https://www.popsci.com/?p=603325
Already, AI’s impact on health care is widespread.
Already, AI’s impact on health care is widespread. DepositPhotos

The hype, the risks, and the 'daunting problem' of regulating AI's impact on your medical treatments.

The post Washington puzzles over AI in health care appeared first on Popular Science.


This article was originally published on KFF Health News.

Lawmakers and regulators in Washington are starting to puzzle over how to regulate artificial intelligence in health care—and the AI industry thinks there’s a good chance they’ll mess it up.

“It’s an incredibly daunting problem,” said Bob Wachter, the chair of the Department of Medicine at the University of California-San Francisco. “There’s a risk we come in with guns blazing and overregulate.”

Already, AI’s impact on health care is widespread. The Food and Drug Administration has approved some 692 AI products. Algorithms are helping to schedule patients, determine staffing levels in emergency rooms, and even transcribe and summarize clinical visits to save physicians’ time. They’re starting to help radiologists read MRIs and X-rays. Wachter said he sometimes informally consults a version of GPT-4, a large language model from the company OpenAI, for complex cases.

The scope of AI’s impact—and the potential for future changes—means government is already playing catch-up.

“Policymakers are terribly behind the times,” Michael Yang, senior managing partner at OMERS Ventures, a venture capital firm, said in an email. Yang’s peers have made vast investments in the sector. Rock Health, a venture capital firm, says financiers have put nearly $28 billion into digital health firms specializing in artificial intelligence.

One issue regulators are grappling with, Wachter said, is that, unlike drugs, which will have the same chemistry five years from now as they do today, AI changes over time. But governance is forming, with the White House and multiple health-focused agencies developing rules to ensure transparency and privacy. Congress is also flashing interest. The Senate Finance Committee held a hearing Feb. 8 on AI in health care.

Along with regulation and legislation comes increased lobbying. CNBC counted a 185% surge in the number of organizations disclosing AI lobbying activities in 2023. The trade group TechNet has launched a $25 million initiative, including TV ad buys, to educate viewers on the benefits of artificial intelligence.

“It is very hard to know how to smartly regulate AI since we are so early in the invention phase of the technology,” Bob Kocher, a partner with venture capital firm Venrock who previously served in the Obama administration, said in an email.

Kocher has spoken to senators about AI regulation. He emphasizes some of the difficulties the health care system will face in adopting the products. Doctors—facing malpractice risks—might be leery of using technology they don’t understand to make clinical decisions.

An analysis of Census Bureau data from January by the consultancy Capital Economics found 6.1% of health care businesses were planning to use AI in the next six months, roughly in the middle of the 14 sectors surveyed.

Like any medical product, AI systems can pose risks to patients, sometimes in a novel way. One example: They may make things up.

Wachter recalled a colleague, as a test, assigning OpenAI’s GPT-3 to write a prior authorization letter to an insurer for a purposefully “wacky” prescription: a blood thinner to treat a patient’s insomnia.

But the AI “wrote a beautiful note,” he said. The system so convincingly cited “recent literature” that Wachter’s colleague briefly wondered whether she’d missed a new line of research. It turned out the chatbot had made it up.

There’s a risk of AI magnifying bias already present in the health care system. Historically, people of color have received less care than white patients. Studies show, for example, that Black patients with fractures are less likely to get pain medication than white ones. This bias might get set in stone when artificial intelligence is trained on that data and subsequently acts.

Research into AI deployed by large insurers has confirmed that has happened. But the problem is more widespread. Wachter said UCSF tested a product to predict no-shows for clinical appointments. Patients who are deemed unlikely to show up for a visit are more likely to be double-booked.

The test showed that people of color were more likely not to show. Whether or not the finding was accurate, “the ethical response is to ask, why is that, and is there something you can do,” Wachter said.

Hype aside, those risks will likely continue to grab attention over time. AI experts and FDA officials have emphasized the need for transparent algorithms, monitored over the long term by human beings—regulators and outside researchers. AI products adapt and change as new data is incorporated. And scientists will develop new products.

Policymakers will need to invest in new systems to track AI over time, said University of Chicago Provost Katherine Baicker, who testified at the Finance Committee hearing. “The biggest advance is something we haven’t thought of yet,” she said in an interview.

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.

Subscribe to KFF Health News’ free Morning Briefing.

The post Washington puzzles over AI in health care appeared first on Popular Science.

ChatGPT has been generating bizarre nonsense (more than usual) https://www.popsci.com/technology/chatgpt-bizarre-nonsense/ Wed, 21 Feb 2024 20:00:00 +0000 https://www.popsci.com/?p=603626
ChatGPT users posted screenshots of odd interactions in which the model provided lengthy, incoherent responses and unexpectedly weaved between Spanish and Latin. OpenAI is “investigating” the issue.
ChatGPT users posted screenshots of odd interactions in which the model provided lengthy, incoherent responses and unexpectedly weaved between Spanish and Latin. OpenAI is “investigating” the issue. DepositPhotos

'Would it glad your clickies to grasp-turn-tooth over a mind-ocean jello type? … 🌊 💼 🐠'

The post ChatGPT has been generating bizarre nonsense (more than usual) appeared first on Popular Science.


It’s no secret at this point that commonly used large language models can struggle to accurately represent facts and sometimes provide misleading answers. OpenAI’s ChatGPT briefly took that reality to its extreme this week by responding to user prompts with long strings of comically odd nonsensical gibberish devoid of any comprehensible meaning. 

Users shared ChatGPT’s strange, and at times esoteric-sounding, responses through screenshots showing the model unexpectedly weaving between multiple languages, generating random words, and repeating phrases over and over again. Emojis, sometimes with no clear relation to users’ prompts, also frequently appeared.

One user explaining his experience succinctly summed up the issue on Reddit, writing, “clearly, something is very wrong with ChatGPT right now.” One of the odder responses included below shows the model incorporating a variety of these oddities when apologizing to a user for its repeated mistakes. 

“Would it glad your clickies to grasp-turn-tooth over a mind-ocean jello type? Or submarine-else que quesieras que dove in-toe? Please, share with there-forth combo desire! 🌊 💼 🐠”

On Tuesday, OpenAI released a status report saying it was “investigating reports of unexpected responses from ChatGPT.”  As of late Wednesday morning, the OpenAI status page read “All systems operational.” The company pointed PopSci to its status page when asked for comment and did not answer questions asking what may have caused the sudden strange outputs. 

What is going on with ChatGPT? 

ChatGPT users began posting screenshots of their odd interactions with the model on social media and in online forums this week, with many of the oddest responses occurring on Tuesday. In one example, ChatGPT responded to a query by providing a jazz album recommendation and then suddenly repeating the phrase “Happy listening 🎶” more than a dozen times. 

Other users posted screenshots of the model providing paragraphs’ worth of odd, nonsensical phrases in response to seemingly simple questions like “what is a computer” or how to make sun-dried tomatoes. One user asking ChatGPT to provide a fun fact about the Golden State Warriors basketball team received an odd, unintelligible response describing the team’s players as “heroes with laugh lines that seep those dashing medleys into something that talks about every enthusiast’s mood board.”

Elsewhere, the model would answer prompts by unexpectedly weaving between multiple languages like Spanish and Latin and, in some cases, simply appearing to make up words that don’t seem to exist. 

OpenAI says it’s investigating the strange mistakes 

It’s still unclear exactly what may have caused ChatGPT’s sudden influx of nonsensical responses or what steps OpenAI has taken to address the issue. Some have speculated the odd, sometimes verbose responses could be the result of tweaks made to the model’s “temperature,” which determines the creativity level of its responses. PopSci could not verify this theory.
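Temperature is one of the few model knobs that is easy to show directly. The snippet below uses made-up token scores, not anything from OpenAI’s systems, to show how raising the temperature flattens the probability distribution the model samples from.

```python
# "Temperature" in plain terms: a model scores every candidate next token, and
# temperature rescales those scores before sampling. Low values concentrate
# probability on the top choice; high values flatten the distribution so
# unlikely tokens get picked far more often. Toy numbers, standard math.
import math

def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [4.0, 2.0, 1.0, 0.5]                    # hypothetical logits for 4 tokens
for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(scores, t)
    print(t, [round(p, 3) for p in probs])
# t=0.2 puts almost all probability on the first token; t=2.0 is much flatter,
# which is why a too-high temperature yields rambling, low-probability text.
```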

The strange responses come around three months after some ChatGPT users complained about the model seemingly getting “lazier” with some of its responses. Multiple users complained on social media that the model was apparently refusing to analyze large files or respond to more complicated prompts it seemed to dutifully complete just months prior, which in turn fueled some oddball theories. OpenAI publicly acknowledged the issue and vaguely said it may have been related to a November update.

“We’ve heard all your feedback about GPT4 getting lazier!” OpenAI said at the time. “We haven’t updated the model since Nov 11th, and this certainly isn’t intentional. Model behavior can be unpredictable, and we’re looking into fixing it.”

ChatGPT has generated odd outputs before 

Since its official launch in 2022, ChatGPT, like other large language models, has struggled to consistently present facts accurately, a phenomenon AI researchers refer to as “hallucinations.” OpenAI’s leadership has acknowledged these issues in the past and said it expected the hallucination problem to ease over time as its results receive continued feedback from human evaluators.

But it’s not entirely clear if that improvement is going completely according to plan. Researchers last year from Stanford University and UC Berkeley determined GPT-4 was answering complicated math questions with less accuracy and providing less thorough explanations for its answers than it did just a few months prior. Those findings seemed to add more credence to complaints from ChatGPT users who speculate some elements of the model may actually be getting worse over time.

While we can’t say exactly what caused ChatGPT’s most recent hiccups, we can say with confidence what it almost certainly wasn’t: AI suddenly exhibiting human-like tendencies. That might seem like an obvious statement but new reports show a growing number of academics are increasingly using anthropomorphic language to refer to AI models like ChatGPT. 

Researchers from Stanford recently analyzed more than 650,000 academic articles published between 2007 and 2023 and found a 50% increase in instances where other researchers used human pronouns to refer to technology. Researchers writing in papers discussing LLMs were reportedly more likely to anthropomorphize than those writing about other forms of technology. 

“Anthropomorphism is baked into the way that we are building and using language models,” Myra Cheng, one of the paper’s authors said in a recent interview with New Scientist. “It’s a double-bind that the field is caught in, where the users and creators of language models have to use anthropomorphism, but at the same time, using anthropomorphism leads to more and more misleading ideas about what these models can do.”

In other words, using familiar human experiences to explain errors and glitches that stem from a model’s statistical churn through billions of parameters and data points could do more harm than good. Many AI safety researchers and public policy experts agree AI hallucinations pose a pressing threat to the information ecosystem, but it would be a step too far to describe ChatGPT as “freaking out.” The real answers often lie in the model’s training data and underlying architecture, which remain difficult for independent researchers to parse.

The post ChatGPT has been generating bizarre nonsense (more than usual) appeared first on Popular Science.

A ridiculous AI-generated rat penis made it into a peer-reviewed journal https://www.popsci.com/technology/ai-rat-journal/ Fri, 16 Feb 2024 20:00:00 +0000 https://www.popsci.com/?p=603215
The researchers openly acknowledged they used Midjourney’s AI image generator to produce the image in text accompanying the figure. The original caption reads: "Sermatogonial stem cells, isolated, purified and cultured from rat testes."
The researchers openly acknowledged they used Midjourney’s AI image generator to produce the image in text accompanying the figure. The original caption reads: "Sermatogonial stem cells, isolated, purified and cultured from rat testes." Xinyu Guo, Liang Dong and Dingjun Hao

Researchers used Midjourney’s AI image generators to illustrate the fantastical rodent beside incoherent strings of text.

The post A ridiculous AI-generated rat penis made it into a peer-reviewed journal appeared first on Popular Science.


A prominent scientific journal has officially retracted an article featuring an AI-generated image of a rat with large genitals alongside strings of nonsensical gibberish words. The embarrassing reversal highlights the risk of using the increasingly popular AI tools in scientific literature. Though some major journals are already reining in the practice, researchers worry unchecked use of the AI tools could promote inaccurate findings and potentially deal reputational damage to institutions and researchers.

How did an AI-generated rat pass peer review? 

The AI-generated images appeared in a paper published earlier this week in the journal Frontiers in Cell and Developmental Biology. The three researchers, who are from Xi’an Honghui Hospital and Xi’an Jiaotong University, were investigating current research related to sperm stem cells of small mammals. As part of the paper, the researchers included an illustration of a cartoon rat with a phallus towering over its own body. Labels appeared beside the rat with incoherent words like “testtomcels,” “Dissisilcied” and “dck.” The researchers openly acknowledged, in text accompanying the figure, that they used Midjourney’s AI image generator to produce the image.

The AI-generated rat was followed up by three more figures purportedly depicting complex signaling pathways. Though these initially appeared less visually jarring than the animated rat, they were similarly surrounded by nonsensical AI-generated gibberish. Combined, the odd figures quickly garnered attention amongst academics on social media, with some questioning how the clearly inaccurate figures managed to slip through Frontiers’ review process.

Though many researchers have cautioned against using AI-generated material in academic literature, Frontiers policies don’t prohibit authors from using AI tools, so long as they include a proper disclosure. In this case, the authors clearly stated they used Midjourney’s AI image generator to produce the diagrams. Still, the journal’s author guidelines note figures produced using these tools must be checked for accuracy, which clearly doesn’t seem to have happened in this case.

The journal has since issued a full retraction, saying the article “does not meet [Frontiers’] standards of editorial and scientific rigor.” In an accompanying blog post, Frontiers said it had removed the article, and the AI-generated figures within it from its databases “to protect the integrity of the scientific record.” 

The journal claims the paper’s authors failed to respond to a reviewer’s requests calling on them to revise the figures. Now, Frontiers says it is investigating how all of this was able to happen in the first place. One of the US reviewers reportedly told Motherboard they reviewed the article only on its scientific merit. The decision to include the AI-generated figure, they claimed, was ultimately left up to Frontiers.

“We sincerely apologize to the scientific community for this mistake and thank our readers who quickly brought this to our attention,” Frontiers wrote. 

Frontiers did not immediately respond to PopSci’s request for comment. 

Science integrity expert Elisabeth Bik, who spends much of her time spotting manipulated images in academic journals, described the figure as a “sad example” of how generative AI images can slip through the cracks. Even though the phallic figure in particular was easy to spot, Bik warned its publication could foreshadow more harmful entries in the future.

“These figures are clearly not scientifically correct, but if such botched illustrations can pass peer review so easily, more realistic-looking AI-generated figures have likely already infiltrated the scientific literature,” Bik wrote on her blog Science Integrity Digest. “Generative AI will do serious harm to the quality, trustworthiness, and value of scientific papers.”

Should academic journals allow AI-generated material?

With over five million academic articles published online every year, it’s almost impossible to gauge how frequently researchers are turning to AI-generated images. PopSci was able to spot at least one other obvious example: an apparently AI-generated image depicting two African elephants fighting, included in a press kit for a recently published Nature Metabolism paper. The AI-generated image was not included in the published paper itself.

But even if AI-generated images aren’t flooding articles at this moment, there’s still an incentive for time-strapped researchers to turn to the increasingly convincing tools to produce published works at a faster clip. Some prominent journals are already taking precautions to prevent the content from being published.

Last year, Nature released a strongly worded statement saying it would not allow any AI-generated images or videos to appear in its journals. The family of journals published by Science prohibits the use of AI-generated text, images, or figures without first getting an editor’s permission. Nature’s editorial board said its decision, made after months of deliberation, stemmed primarily from difficulties related to verifying the data used to generate AI content. The board also expressed concern over image generators’ inability to properly credit artists for their work.

AI firms responsible for these tools are currently fighting off a spate of lawsuits from artists and authors who say the tools are illegally spitting out images trained on copyright-protected materials. In addition to questions of attribution, generative AI tools have a tendency to spew out text and produce images that aren’t factually or conceptually accurate, a phenomenon researchers refer to as “hallucinations.” Elsewhere, researchers warn AI-generated images could be used to create realistic fake images, also known as deepfakes, which they say could be used to lend credence to faulty data or inaccurate conclusions. All of these factors present challenges to researchers or journals looking to publish AI-generated material.

“The world is on the brink of an AI revolution,” Nature’s editorial board wrote last year. “This revolution holds great promise, but AI—and particularly generative AI—is also rapidly upending long-established conventions in science, art, publishing and more. These conventions have, in some cases, taken centuries to develop, but the result is a system that protects integrity in science and protects content creators from exploitation. If we’re not careful in our handling of AI, all of these gains are at risk of unraveling.”

The post A ridiculous AI-generated rat penis made it into a peer-reviewed journal appeared first on Popular Science.

OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss https://www.popsci.com/technology/openai-sora-generative-video/ Fri, 16 Feb 2024 18:15:23 +0000 https://www.popsci.com/?p=603154
Sora AI generated video still of woolly mammoth herd in tundra
A screenshot from one of the many hyperrealistic videos generated by OpenAI's Sora program. OpenAI

Generative AI videos advanced from comical to photorealistic within a single year. This is uncharted, dangerous territory.

The post OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss appeared first on Popular Science.


It’s hard to write about Sora without feeling like your mind is melting. But after OpenAI’s surprise artificial intelligence announcement yesterday afternoon, we have our best evidence yet of what a yet unregulated, consequence-free tech industry wants to sell you: a suite of energy-hungry black box AI products capable of producing photorealistic media that pushes the boundaries of legality, privacy, and objective reality.

Barring decisive, thoughtful, and comprehensive regulation, the online landscape could very well become virtually unrecognizable, and somehow even more untrustworthy, than ever before. Once the understandable “wow” factor of hyperreal woolly mammoths and paper art ocean scapes wears off, CEO Sam Altman’s newest distortion project remains concerning.

The concept behind Sora (Japanese for “sky”) is nothing particularly new: It apparently is an AI program capable of generating high-definition video based solely on a user’s descriptive text inputs. To put it simply: Sora reportedly combines the text-to-image diffusion model powering DALL-E with a neural network system known as a transformer. While generally used to parse massive data sequences such as text, OpenAI allegedly adapted the transformer tech to handle video frames in a similar fashion.
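OpenAI has released no code, so any description of Sora’s internals stays at the level of its announcement. The snippet below only illustrates the reported “spacetime patch” idea, chopping a random stand-in video into the token-like pieces a transformer would operate on; the patch sizes are arbitrary, and the diffusion and transformer components are omitted entirely.

```python
# Rough illustration of turning a video into "spacetime patches" -- the
# token-like units OpenAI says Sora's transformer operates on. Random data,
# arbitrary sizes; the actual model details are not public.
import numpy as np

frames, height, width, channels = 16, 64, 64, 3
video = np.random.rand(frames, height, width, channels)   # stand-in for real footage

pt, ph, pw = 4, 16, 16   # patch size in (time, height, width), purely illustrative
patches = (video
           .reshape(frames // pt, pt, height // ph, ph, width // pw, pw, channels)
           .transpose(0, 2, 4, 1, 3, 5, 6)
           .reshape(-1, pt * ph * pw * channels))

print(patches.shape)   # (64, 3072): 64 "tokens", each a flattened spacetime patch
```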

“Apparently,” “reportedly,” “allegedly.” All these caveats are required when describing Sora, because as MIT Technology Review explains, OpenAI only granted access to yesterday’s example clips after media outlets agreed to wait until after the company’s official announcement to “seek the opinion of outside experts.” And even when OpenAI did preview their newest experiment, they did so without releasing a technical report or a backend demonstration of the model “actually working.”

This means that, for the foreseeable future, not a single outside regulatory body, elected official, industry watchdog, or lowly tech reporter will know how Sora is rendering the most uncanny media ever produced by AI, what data Altman’s company scraped to train its new program, and how much energy is required to fuel these one-minute video renderings. You are at the mercy of what OpenAI chooses to share with the public—a company whose CEO has repeatedly warned that the extinction risk from AI is on par with nuclear war, while insisting that only men like him can be trusted with the funds and resources to prevent that from happening.

The speed at which we got here is as dizzying as the videos themselves. New Atlas offered a solid encapsulation of the situation yesterday—OpenAI’s sample clips are by no means perfect, but in just nine months, we’ve gone from the “comedic horror” of AI Will Smith eating spaghetti, to near-photorealistic, high-definition videos depicting crowded city streets, extinct animals, and imaginary children’s fantasy characters. What will similar technology look like nine months from now—on the eve of potentially one of the most consequential US presidential elections in modern history?

Once you get over Sora’s parlor trick impressions, it’s hard to ignore the troubling implications. Sure, the videos are technological marvels. Sure, Sora could yield innovative, fun, even useful results. But what if someone used it to yield, well, anything other than “innovative,” “fun,” or “useful?” Humans are far more ingenious than any generative AI programs. So far, jailbreaking these things has only required some dedication, patience, and a desire to bend the technology for bad faith gains.

Companies like OpenAI promise they are currently developing security protocols and industry standards to prevent bad actors from exploiting our new technological world—an uncharted territory they continue to blaze recklessly into with projects like Sora. And yet they have failed miserably in implementing even the most basic safeguards: Deepfakes abuse human bodies, school districts harness ChatGPT to acquiesce to fascist book bans, and the lines between fact and fiction continue to smear.

[Related: Generative AI could face its biggest legal tests in 2024.]

OpenAI says there are no immediate plans for Sora’s public release, and that it is conducting red team tests to “assess critical areas for harms or risks.” But barring any kind of regulatory pushback, it’s possible OpenAI will unleash Sora sooner rather than later.

“Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving [Artificial General Intelligence],” OpenAI said in yesterday’s announcement, once again explicitly referring to the company’s goal to create AI that is all-but-indistinguishable from humans.

Sora, a model to understand and simulate the real world—what’s left of it, at least.

The post OpenAI’s Sora pushes us one mammoth step closer towards the AI abyss appeared first on Popular Science.

A new AI-powered satellite will create Google Maps for methane pollution https://www.popsci.com/technology/methanesat-edf-google-satellite/ Wed, 14 Feb 2024 16:00:00 +0000 https://www.popsci.com/?p=602657
MethaneSAT concept art above Earth
Methane is very hard to track around the world, but a new satellite project could help address the issue. MethaneSAT LLC

Google and the Environmental Defense Fund have teamed up to track the elusive emissions—from space.

The post A new AI-powered satellite will create Google Maps for methane pollution appeared first on Popular Science.


Methane emissions, whether from industrial cattle farming or fossil fuel extraction, are responsible for roughly 30 percent of the rise in global temperatures since the Industrial Revolution. But despite the massive amounts of methane released into the atmosphere every year, it’s often difficult to track the pollutant—apart from being invisible to the human eye and to satellites’ multispectral near-infrared wavelength sensors, methane is also hard to assess due to spectral noise in the atmosphere.

To help tackle this immediate crisis, Google and the Environmental Defense Fund are teaming up for a new project with lofty goals. Announced in a blog post earlier today, MethaneSAT is a new, AI-enhanced satellite project to better track and quantify the dangerous emissions, with an aim to offer the info to researchers around the world.

Google Earth Image screenshot displaying methane geodata map
EDF’s aerial data, available in Earth Engine, shows both high-emitting point sources as yellow dots, and diffuse area sources as a purple and yellow heat map. MethaneSAT will collect this data with the same technology, at a global scale and with more frequency. Credit: Google

“MethaneSAT is highly sophisticated; it has a unique ability to monitor both high-emitting methane sources and small sources spread over a wide area,” Yael Maguire, Google’s VP and General Manager of Geo Developer & Sustainability, said in a February 14 statement.

[Related: How AI could help scientists spot ‘ultra-emission’ methane plumes faster—from space.]

To handle such a massive endeavor, the EDF developed new algorithmic software with researchers at the Smithsonian Astrophysical Observatory and Harvard University’s School of Engineering and Applied Science and its Center for Astrophysics. Their new supercomputer-powered AI system can calculate methane emissions in specific locations, and subsequently track those pollutants as they spread in the atmosphere.

MethaneSAT is scheduled to launch aboard a SpaceX Falcon 9 rocket in early March. Once deployed at an altitude of over 350 miles, the satellite will circle the Earth 15 times per day at roughly 16,600 mph. Aside from emission detection duties, Google and EDF intend to harness their AI programs to compile a worldwide map of oil and gas infrastructure systems to home in on which facilities rank as the worst offenders. According to Google, this will function much like how its AI programs interpret satellite imagery for Google Maps. Instead of road names, street signs, and sidewalk markers, however, MethaneSAT will help tag points like oil storage containers.
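Those orbital figures are easy to sanity-check with back-of-the-envelope math: 15 laps around the planet per day from roughly 350 miles up works out to a speed on the order of 17,000 mph.

```python
# Back-of-the-envelope check on the orbit quoted above.
import math

earth_radius_mi = 3959
altitude_mi = 350
orbits_per_day = 15

circumference = 2 * math.pi * (earth_radius_mi + altitude_mi)   # miles per orbit
speed_mph = circumference * orbits_per_day / 24                 # miles per hour
print(round(speed_mph))   # roughly 16,900 mph, the same order as the quoted figure
```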

Google satellite imagery displaying oil wells
The top satellite image shows a map of dots, which are correctly identified as oil well pads. Using our satellite and aerial imagery, we applied AI to detect infrastructure components. Well pads are shown in yellow, oil pump jacks are shown in red, and storage tanks are shown in blue. Credit: Google

“Once we have this complete infrastructure map, we can overlay the MethaneSAT data that shows where methane is coming from,” Maguire said on Wednesday. “When the two maps are lined up, we can see how emissions correspond to specific infrastructure and obtain a far better understanding of the types of sources that generally contribute most to methane leaks.” Datasets like these could prove valuable to watchdogs and experts working to rein in emissions at the oil and gas sites most prone to leaks.
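Conceptually, “lining up” the two maps amounts to attributing each detected plume to the nearest mapped facility. The coordinates and facility names below are invented, and a real pipeline would use proper geospatial tooling, but the toy nearest-neighbor match shows the idea.

```python
# Minimal sketch of the "line up the two maps" step: attribute each detected
# methane plume to the nearest piece of mapped infrastructure. All data here
# is made up for illustration.
import math

infrastructure = {
    "well pad A": (31.90, -102.10),
    "storage tank B": (31.95, -102.30),
    "compressor C": (32.10, -102.05),
}
plume_detections = [(31.91, -102.12), (32.09, -102.04)]   # (lat, lon) from satellite

def distance(p, q):
    # Rough planar distance in degrees -- fine for a toy example at this scale.
    return math.hypot(p[0] - q[0], p[1] - q[1])

for plume in plume_detections:
    source = min(infrastructure, key=lambda name: distance(plume, infrastructure[name]))
    print(f"plume at {plume} attributed to {source}")
```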

All this much-needed information is intended to become available later this year through the official MethaneSAT website, as well as Google Earth Engine, the company’s open-source global environmental monitoring platform. In the very near future, the new emissions data will be able to combine alongside datasets concerning factors like waterways, land cover, and regional borders to better assess where we are as a global community, and what needs to be done in order to stave off climate change’s worst outcomes.

The post A new AI-powered satellite will create Google Maps for methane pollution appeared first on Popular Science.

A crowd torched a Waymo robotaxi in San Francisco https://www.popsci.com/technology/waymo-torched-vandals/ Mon, 12 Feb 2024 17:00:00 +0000 https://www.popsci.com/?p=602323
Destroyed Waymo on after attacked by vandals in San Francisco
The vehicle appeared 'decapitated' by the time first responders arrived, but no one was injured. Credit: San Francisco Fire Dept. Media / Séraphine Hossenlopp

No injuries were reported after the fire department extinguished Saturday evening's blaze.

The post A crowd torched a Waymo robotaxi in San Francisco appeared first on Popular Science.


Vandals thoroughly obliterated a Waymo autonomous taxi in San Francisco’s Chinatown on Saturday evening to the cheers of onlookers. In an emailed statement provided to PopSci, a Waymo spokesperson confirmed the vehicle was empty when the February 10 incident began just before 9PM, and no injuries were reported. Waymo says they are also “working closely with local safety officials to respond to the situation.”

A San Francisco Fire Department (SFFD) representative also told PopSci responders arrived on the scene at 9:03PM to a “reported electric autonomous vehicle on fire” in the 700 block of Jackson St., which includes a family owned musical instrument store and a pastry shop.

“SFFD responded to this like any other vehicle fire with 1 engine, 1 truck, and for this particular incident the battalion chief was on scene as well,” the representative added in their email.

Multiple social media posts over the weekend depict roughly a dozen people smashing the Waymo Jaguar I-Pace’s windows, covering it in spray paint, and eventually tossing a firework inside that set it ablaze—all to the enthusiastic encouragement of bystanders. After posting their own video recordings to X, one onlooker told Reuters that someone wearing a white hoodie “jumped on the hood of the car and literally WWE style K/O’ed the windshield & broke it.” Additional footage uploaded by street reporter “Franky Frisco” to their YouTube channel also shows emergency responders dousing the flaming EV. Chinatown’s streets were already crowded with visitors attending Lunar New Year celebrations.

Speaking to The Autopian, Frisco says that they have covered similar autonomous vehicle incidents in the past, but this weekend’s drama left the Waymo vehicle looking “completely ‘decapitated.’” Upon arrival, emergency responders reportedly even had difficulty discerning whether it was a Waymo or Zoox car. Although both companies (owned by Google and Amazon, respectively) offer driverless taxi services, the two fleets don’t resemble one another—at least when the vehicles are in better condition.

[Related: Self-driving taxis blocked an ambulance and the patient died, says SFFD.]

The motive for Saturday night’s incident remains unclear. The event took place as locals continue to push back against autonomous taxi operations in the area. Since the companies received a regulatory greenlight for 24/7 services in August 2023, numerous reports have detailed cars from Waymo, Zoox, and Cruise creating traffic jams, running stop signs, and blocking emergency responders. In October 2023, a Cruise driverless taxi allegedly hit a pedestrian and dragged her 20 feet down the road. Cruise’s CEO stepped down the following month, and the General Motors-owned company subsequently halted operations, first in San Francisco and then nationwide.

Not only is this weekend’s autonomous taxi butchering aggressive, dangerous, and illegal—it’s also apparently a bit of overkill. According to previous reports, driverless car protestors around San Francisco have found that simply stacking orange traffic cones atop a taxi’s hood renders its camera navigation system useless until the obstruction is removed.

The post A crowd torched a Waymo robotaxi in San Francisco appeared first on Popular Science.

2,000 new characters from burnt-up ancient Greek scroll deciphered with AI https://www.popsci.com/technology/vesuvius-scrolls-ai-deciphered/ Fri, 09 Feb 2024 17:00:00 +0000 https://www.popsci.com/?p=602097
Left: Restored images of papyrus scrolls from Mount Vesuvius. Over 2,000 characters composing 15 lines of an ancient Greek scroll is now legible thanks to machine learning. Right: The scroll read by the winners.
Left: Restored images of papyrus scrolls from Mount Vesuvius. Over 2,000 characters composing 15 lines of an ancient Greek scroll is now legible thanks to machine learning. Right: The scroll read by the winners. Vesuvius Challenge

The Vesuvius Challenge winners were able to digitally reconstruct a philosopher's rant previously lost to volcanic damage.

The post 2,000 new characters from burnt-up ancient Greek scroll deciphered with AI appeared first on Popular Science.


Damaged ancient papyrus scrolls dating back to the 1st century CE are finally being deciphered by the Vesuvius Challenge contest winners using computer vision and AI machine learning programs. The scrolls were carbonized during the eruption of Italy’s Mount Vesuvius in 79 CE and have been all-but-inaccessible using normal restoration methods, as they have been reduced to a fragile, charred log. Three winners–Luke Farritor (US), Youssef Nader (Egypt), and Julian Schilliger (Switzerland)–will split the $700,000 grand prize after deciphering roughly 2,000 characters making up 15 columns of never-before-seen Greek texts.

[Related: AI revealed the colorful first word of an ancient scroll torched by Mount Vesuvius.]

In October 2023, Farritor, a 21-year-old Nebraska native and former SpaceX intern, won the challenge’s “First Word” contest after developing a machine learning model to parse out the first few characters and form the word Πορφύραc—or porphyras, ancient Greek for “purple.” He then teamed up with Nader and Schilliger to tackle the remaining fragments using their own innovative AI programs. The newly revealed text is an ancient philosopher’s meditation on life’s pleasures—and a dig at people who don’t appreciate them.

A 1,700 year journey

The scrolls once resided within a villa library believed to belong to Julius Caesar’s father-in-law, south of Pompeii in the town of Herculaneum. Upon its eruption, Mount Vesuvius’ historic volcanic blast near-instantly torched the library before subsequently burying it in ash and pumice. The carbonized scrolls remained lost for centuries until rediscovered by a farmer in 1752. Over the next few decades, a Vatican scholar utilized an original, ingenious weighted string method to carefully “unroll” much of the collection. Even then, the monk’s process produced thousands of small, crumbled fragments which he then needed to laboriously piece back together.

Fast forward to 2019, and around 270 “Villa of the Papyri” scrolls still remained inaccessible—a lingering mystery prompting a team at the University of Kentucky to 3D scan the archive and launch the Vesuvius Challenge in 2023. After releasing open-source software alongside thousands of 3D X-ray scans made from three papyrus fragments and two scrolls, challenge sponsors offered over $1 million in various prizes to help develop new, high-tech methods for accessing the unknown contents.
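The winning entries trained neural networks to recognize the faint “crackle” texture that carbon ink leaves in the X-ray volume. The sketch below keeps only the outline of that idea on synthetic data: slide over a 3D volume, score each small patch, and paint the scores into a 2D ink map. The trivial brightness score here stands in for the contestants’ actual deep-learning models.

```python
# Heavily simplified sketch of patch-based ink detection on a fake CT volume.
# A real system would use a trained neural network, not a mean-brightness score.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(size=(64, 64, 16))          # fake scan: (x, y, depth)
volume[20:40, 10:50, :] += 0.5                  # pretend an inked stroke sits here

patch = 8
ink_map = np.zeros((64 // patch, 64 // patch))
for i in range(ink_map.shape[0]):
    for j in range(ink_map.shape[1]):
        block = volume[i*patch:(i+1)*patch, j*patch:(j+1)*patch, :]
        ink_map[i, j] = block.mean()            # stand-in for a trained classifier

print((ink_map > 0.25).astype(int))             # crude "ink / no ink" picture
```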

What do the scrolls say?

According to a February 5 post on X from competition sponsor Nat Friedman, the first scroll’s final 15 columns were likely penned by Epicurean philosopher Philodemus, and discuss “music, food, and how to enjoy life’s pleasures.”

According to the Vesuvius Challenge announcement, two columns of the scroll, for example, center on whether or not the amount of available food influences the level of pleasure diners will feel from their meals. In this case, the scroll’s author argues it doesn’t: “[A]s too in the case of food, we do not right away believe things that are scarce to be absolutely more pleasant than those which are abundant.”

“In the closing section, he throws shade at unnamed ideological adversaries—perhaps the stoics?—who ‘have nothing to say about pleasure, either in general or in particular,'” Friedman also said on X.

Although much more remains to be uncovered, challenge organizers have previously hypothesized the scrolls could include long-lost works including the poems of Sappho.

But despite the grand prize announcement, the Vesuvius Challenge is far from finished—the newly translated text makes up just 5 percent of a single scroll, after all. In the same X announcement, Friedman revealed the competition’s next phase: a new, $100,000 prize to the first team to retrieve at least 90 percent of the four currently scanned scrolls.

At this point, learning the ancient scrolls’ contents is more a “when” than an “if” for researchers. Once that’s done, well, huge sections of the Villa of the Papyri remain unexcavated. And within those ruins? According to experts, potentially thousands more scrolls await eager eyes.

The post 2,000 new characters from burnt-up ancient Greek scroll deciphered with AI appeared first on Popular Science.

FCC bans AI-generated robocalls https://www.popsci.com/technology/fcc-ai-robocall-ban/ Thu, 08 Feb 2024 22:00:00 +0000 https://www.popsci.com/?p=602015
Hand reaching to press 'accept' on unknown smartphone call
The FCC wants to deter bad actors ahead of the 2024 election season. Deposit Photos

Thanks to a 1991 telecom law, scammers could face over $25,000 in fines per call.

The post FCC bans AI-generated robocalls appeared first on Popular Science.


The Federal Communications Commission unanimously ruled on Thursday that robocalls containing AI-generated vocal clones are illegal under the Telephone Consumer Protection Act of 1991. The telecommunications law passed over 30 years ago now encompasses some of today’s most advanced artificial intelligence programs. The February 8 decision, effective immediately, marks the FCC’s strongest escalation yet in its ongoing efforts to curtail AI-aided scam and misinformation campaigns ahead of the 2024 election season.

“It seems like something from the far-off future, but it is already here,” FCC Chairwoman Jessica Rosenworcel said in a statement accompanying the declaratory ruling. “This technology can confuse us when we listen, view, and click, because it can trick us into thinking all kinds of fake stuff is legitimate.”

[Related: A deepfake ‘Joe Biden’ robocall told voters to stay home for primary election.]

The FCC’s sweeping ban arrives barely two weeks after authorities reported a voter suppression campaign targeting thousands of New Hampshire residents ahead of the state’s presidential primary. The robocalls—later confirmed to originate from a Texas-based group—featured a vocal clone of President Joe Biden telling residents not to vote in the January 23 primary.

Scammers have already employed AI software for everything from creating deepfake celebrity videos to hawk fake medical benefit cards, to imitating an intended victim’s loved ones for fictitious kidnappings. In November, the FCC launched a public Notice of Inquiry regarding AI usage in scams, as well as how to potentially leverage the same technology in combating bad actors.

According to Rosenworcel, Thursday’s announcement is meant “to go a step further.” When it passed in 1991, the Telephone Consumer Protection Act covered unwanted “junk” calls containing artificial or prerecorded voice messages. Upon reviewing the law, the FCC (unsurprisingly) determined that AI vocal clones are essentially just far more advanced iterations of the same spam tactics, and are thereby subject to the same prohibitions.

“We all know unwanted robocalls are a scourge on our society. But I am particularly troubled by recent harmful and deceptive uses of voice cloning in robocalls,” FCC Commissioner Geoffrey Starks said in an accompanying statement. Starks went on to call generative AI “a fresh threat” in voter suppression efforts ahead of the US campaign season, one that he said warranted immediate action.

In addition to potentially receiving regulatory fines of more than $23,000 per call, vocal cloners are now also open to legal action from victims. The Telephone Consumer Protection Act states individuals can recover as much as $1,500 in damages per unwanted call.

The post FCC bans AI-generated robocalls appeared first on Popular Science.

Google wants to fight deepfakes with a special badge https://www.popsci.com/technology/google-deepfake-ai-badge/ Thu, 08 Feb 2024 14:00:00 +0000 https://www.popsci.com/?p=601893
“In a world where all digital content could be fake, we need a way to prove what’s true.” DepositPhotos

Content Credentials are attached to image metadata and show if it was AI generated or edited.

In just a few short years, AI-generated deepfakes of celebrities and politicians have graduated from the confines of academic journals to trending pages on major social media sites. Misinformation experts warn these tools, when combined with strained moderation teams at social media platforms, could add a layer of chaos and confusion to an already contentious 2024 election season. 

Now, Google is officially adding itself to a rapidly growing coalition of tech and media companies working to standardize a digital badge that reveals whether or not images were created using generative AI tools. If rolled out widely, the “Content Credential” spearheaded by The Coalition for Content Provenance and Authenticity (C2PA) could help bolster consumer trust in the provenance of photos and video amid a rise in deceptive AI-generated political deepfakes spreading on the internet. Google will join the C2PA as a steering member this month, which puts it in the same company as Adobe, Microsoft, Intel, and the BBC. 

In an email, a Google spokesperson told PopSci that the company is currently exploring ways to use the standard in its suite of products and will have more to share “in the coming months.” The spokesperson said Google is already exploring incorporating Content Credentials into the “About this image” feature in Google Image search. Google’s support could drive up the credentials’ popularity, but their use remains voluntary in the absence of any binding federal deepfake legislation. That lack of consistency gives deepfake creators an advantage. 

What are Content Credentials?

The C2PA is a global standards body created in 2019 with the main goal of developing technical standards that certify who created a piece of digital content, as well as where and how it was made. Adobe, which led the Content Authenticity Initiative (CAI), and its partners were already concerned about the ways AI-generated media could erode public trust and amplify misinformation online years before massively popular consumer generative AI tools like OpenAI’s DALL-E gained momentum.

That concern catalyzed the creation of Content Credentials, a small badge that companies and creators can choose to attach to an image’s metadata to disclose who created it and when it was made. The badge also tells viewers whether or not the content was created using a generative AI model, even naming the particular model used, as well as whether it was digitally edited or modified later. 

Content Credential supporters argue the tool creates a “tamper-resistant metadata” record that travels with digital content and can be verified at any point along its life cycle. In practice, most users will see this “icon of transparency” pop up as a small badge with the letters “CR” appearing in the corner of the image. Microsoft, Intel, ARM, and the BBC are also all members of the C2PA steering committee.

“With digital content becoming the de facto means of communication, coupled with the rise of AI-enabled creation and editing tools, the public urgently needs transparency behind the content they encounter at home, in schools, in the workplace, wherever they are,” Adobe General Counsel and Chief Trust Officer Dana Rao said in a statement sent to PopSci. “In a world where all digital content could be fake, we need a way to prove what’s true.” 

Users who come across an image pinned with the Content Credential can click on the badge to inspect when it was created and any edits that may have occurred since then. Each new edit is then bound to the photo or video’s original manifest which travels with it across the web. 

If a reporter were to crop a photo that had previously been edited in Photoshop, for example, both of those changes would be noted in the final manifest. CAI says the tool won’t prevent anyone from taking a screenshot of an image; however, that screenshot would not include the CAI metadata from the original file, which could be a hint to viewers that it is not the original. The symbol is visible on the image but is also included in its metadata, which, in theory, should prevent a trouble-maker from using Photoshop or another editing tool to remove the badge. 
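
To make the idea of a tamper-evident edit history more concrete, here is a minimal, purely illustrative Python sketch of how a provenance manifest might chain each edit to a hash of the file and its prior history. It is not the actual C2PA format or the Content Credentials implementation; the field names and the hashing scheme are assumptions for demonstration only.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Hash raw bytes; stands in for hashing the image file itself."""
    return hashlib.sha256(data).hexdigest()

def new_manifest(image_bytes: bytes, creator: str, tool: str) -> dict:
    """Create a toy provenance record for an original asset (hypothetical schema)."""
    return {
        "creator": creator,
        "generator": tool,          # e.g. the name of an AI model, if one was used
        "asset_hash": sha256_hex(image_bytes),
        "edits": [],                # each edit entry chains off the previous state
    }

def record_edit(manifest: dict, edited_bytes: bytes, action: str) -> dict:
    """Append an edit whose hash covers the new bytes plus all prior history."""
    prior = json.dumps(manifest, sort_keys=True).encode()
    manifest["edits"].append({
        "action": action,
        "result_hash": sha256_hex(edited_bytes + prior),
    })
    return manifest

def verify_chain(manifest: dict, history: list) -> bool:
    """Replay the claimed edit history; any tampering breaks the hash chain.

    history is a list of (edited_bytes, action) tuples, in order.
    """
    check = {key: value for key, value in manifest.items() if key != "edits"}
    check["edits"] = []
    for edited_bytes, action in history:
        record_edit(check, edited_bytes, action)
    return check["edits"] == manifest["edits"]

# Example: create a record, log two edits, then verify the chain.
manifest = new_manifest(b"raw image bytes", creator="Jane Photographer", tool="camera")
manifest = record_edit(manifest, b"cropped bytes", "crop")
manifest = record_edit(manifest, b"recolored bytes", "color adjustment")
assert verify_chain(manifest, [(b"cropped bytes", "crop"),
                               (b"recolored bytes", "color adjustment")])
```

Altering an earlier edit entry, or the underlying file, changes the recomputed hashes and makes verify_chain return False. The real specification aims for a similar tamper-evident property, using cryptographic signatures rather than this toy scheme.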

If an image does not have a visible badge on it, users can copy it and upload it to the Content Credentials Verify website to inspect its credentials and see if it has been altered over time. If the media was edited in a way that didn’t meet the C2PA’s specification during some part of its life cycle, users will see a “missing” or “incomplete” marker. The Content Credential feature dates back to 2021. Adobe has since made it available to Photoshop users and to creators producing images with Adobe’s Firefly AI image generator. Microsoft plans to use the badge with images created by its Bing AI image generators. Meta, which owns Facebook and Instagram, similarly announced it would add a new feature to let users disclose when they share AI-generated video or audio on its platforms. Meta said it would begin applying these labels “in the coming months.” 

Why Google joining C2PA matters

Google’s involvement in the C2PA is important, first and foremost, because of the search giant’s massive digital footprint online. The company is already exploring ways of using these badges across its wide range of online products and services, which notably includes YouTube. The C2PA believes Google’s participation could put the credentials in front of more eyeballs, which could drive broader awareness of the tool as an actionable way to verify digital content, especially as political deepfakes and manipulated media gain traction online. Rao described Google’s partnership as a “watershed moment” for driving awareness to Content Credentials. 

“Google’s industry expertise, deep research investments, and global reach will help us strengthen our standard to address the most pressing issues around the use of content provenance and reach even more consumers and creators everywhere,” Rao said. “With support and adoption from companies like Google, we believe Content Credentials can become what we need: a simple, harmonized, universal way to understand content.” 

The partnership comes three months after Google announced it would use SynthID to attach a digital watermark to audio created with DeepMind’s Lyria AI model. In that case, DeepMind says the watermark shouldn’t be audible to the human ear and shouldn’t disrupt a user’s listening experience. Instead, it should serve as a more transparent safeguard to protect musicians from AI-generated replicas of themselves, or to prove whether a questionable clip is genuine or AI-generated. 

Deepfake-caused confusion could make already contentious 2024 elections worse 

Tech and media companies are rushing to establish trusted ways to verify the provenance of digital media online ahead of what misinformation experts warn could be a mind-bending 2024 election cycle. Major political figures, like Republican presidential candidate Donald Trump and Florida Governor Ron DeSantis, have already used generative AI tools to attack each other. More recently, in New Hampshire, AI vocal cloning technology was used to make it appear as if President Joe Biden was calling residents and urging them not to vote in the January primary election. The state’s attorney general’s office has since linked the robocalls to two companies based in Texas. 

But the threats extend beyond elections, too. For years, researchers have warned that the rampant spread of increasingly convincing AI-generated deepfake images and videos online could lead to a phenomenon called the “Liar’s Dividend,” in which consumers doubt whether anything they see online is actually as it seems. Lawyers, politicians, and police officers have already falsely claimed that legitimate images and videos were AI-generated to try to win a case or seal a conviction. 

Content Credentials could help, but they lack teeth 

Even with Google’s support, Content Credentials remain entirely voluntary. Neither Adobe nor any regulatory body is forcing tech companies or their users to dutifully add provenance credentials to their content. And even if Google and Microsoft do use these markers to disclose content made with their own AI generators, nothing currently stops political bad actors from cobbling together a deepfake using other open-source AI tools and then trying to spread it via social media.

In the US, the Biden Administration has instructed the Commerce Department to create new guidelines for AI watermarking and safety standards that tech firms building generative AI models would have to adhere to. Lawmakers in Congress have also proposed federal legislation requiring AI companies to include identifiable watermarks on all AI-generated content, though it’s unclear whether that would work in practice. 

Tech companies are working quickly to put safeguards against deepfakes in place, but with a major presidential election less than seven months away, experts agree that confusing or misleading AI material will likely play some role.

The post Google wants to fight deepfakes with a special badge appeared first on Popular Science.

Sharing AI-generated images on Facebook might get harder… eventually https://www.popsci.com/technology/meta-ai-image-detection-plans/ Wed, 07 Feb 2024 16:03:17 +0000 https://www.popsci.com/?p=601822
Upset senior woman looks at the laptop screen
Meta hopes to address AI images with a bunch of help from other companies, and you. Deposit Photos

And you'll soon have to fess up to posting 'synthetic' images on Meta's platforms.

That one aunt of yours (you know the one) may finally think twice before forwarding Facebook posts of “lost” photos of hipster Einstein and a fashion-forward Pope Francis. On Tuesday, Meta announced that “in the coming months,” it will attempt to begin flagging all AI-generated images made using programs from major companies like Microsoft, OpenAI, Midjourney, and Google that are flooding Facebook, Instagram, and Threads. 

But to tackle rampant generative AI abuse that experts are calling “the world’s biggest short-term threat,” Meta will require cooperation from every major AI company, self-reporting from its billions of users, as well as currently unreleased technologies.

Nick Clegg, Meta’s President of Global Affairs, explained in his February 6 post that the policy and tech rollouts are expected to debut ahead of pivotal election seasons around the world.

“During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve,” Clegg says.

[Related: Why an AI image of Pope Francis in a fly jacket stirred up the internet.]

Meta’s nebulous roadmap centers on working with “other companies in [its] industry” to develop and implement common technical standards for identifying AI imagery. Examples might include digital signature algorithms and cryptographic information “manifests,” as suggested by the Coalition for Content Provenance and Authenticity (C2PA) and the International Press Telecommunications Council (IPTC). Once AI companies begin using these watermarks, Meta will begin labeling content accordingly, using “classifiers” to help automatically detect AI-generated content.
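
For a rough sense of what the simplest form of such labeling checks can look like, here is a short, illustrative Python sketch that scans a file’s raw bytes for the IPTC digital-source-type term commonly associated with AI-generated media. The term and the approach are assumptions for demonstration; production systems parse signed manifests and structured metadata rather than searching raw bytes, and, as Meta concedes, any such marker can be stripped.

```python
def looks_ai_labeled(path: str) -> bool:
    """Crude check: scan a file's raw bytes for an embedded provenance marker.

    Assumes the IPTC digital-source-type term "trainedAlgorithmicMedia" was
    written into the file's metadata; a proper implementation would parse the
    XMP/IPTC fields (or a signed C2PA manifest) rather than grepping bytes.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"trainedAlgorithmicMedia" in data

# Hypothetical usage:
# print(looks_ai_labeled("downloaded_image.jpg"))
```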

“If AI companies begin using watermarks” might be more accurate. While the company’s own Meta AI feature already labels its content with an “Imagined with AI” watermark, such easy identifiers aren’t currently uniform across AI programs from Google, OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and others.

This, of course, will do little to deter bad actors’ use of third-party programs, often to extremely distasteful effects. Last month, for example, AI-generated pornographic images involving Taylor Swift were shared tens of millions of times across social media.

Meta made clear in Tuesday’s post these safeguards will be limited to static images. But according to Clegg, anyone concerned by this ahead of a high-stakes US presidential election should take it up with other AI companies, not Meta. Although some companies are beginning to include identifiers in their image generators, “they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies,” he writes.

While “the industry works towards this capability,” Meta appears to be shifting the onus onto its users. Another forthcoming feature will allow people to disclose their AI-generated video and audio uploads—something Clegg says may eventually become a requirement, punishable with “penalties.”

For what it’s worth, Meta also at least admitted it’s currently impossible to flag all AI-generated content, and there remain “ways that people can strip out invisible markers.” To potentially address these issues, however, Meta hopes to fight AI with AI. Although AI technology has long aided Meta’s policy enforcement, its use of generative AI for this “has been limited,” says Clegg, “But we’re optimistic that generative AI could help us take down harmful content faster and more accurately.”

“While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Clegg continued.

The post Sharing AI-generated images on Facebook might get harder… eventually appeared first on Popular Science.

Don’t worry, that Tesla driver only wore the Apple Vision Pro for ’30-40 seconds’ https://www.popsci.com/technology/apple-vision-pro-tesla-video/ Mon, 05 Feb 2024 18:45:00 +0000 https://www.popsci.com/?p=601455
Three screenshots of Tesla driver wearing Apple Vision Pro
PSA: Don't. X

In a viral video meant to be a 'skit,' an influencer drove in Autopilot while wearing the $3,499 spatial computing headset.

Videos of what look like Tesla drivers using the new Apple Vision Pro “spatial computing” headset while in Autopilot mode are going viral, but at least one is staged. After racking up more than 24 million views on X, 21-year-old Dante Lentini may still face legal repercussions for his stunt.

In an email to PopSci on Monday, Lentini confirmed a video appearing to show him being stopped by police for using Apple’s $3,499 headset behind the wheel of his Tesla was filmed in a “skit-style fashion.” The 25-second clip shows Lentini sitting in the Tesla driver’s seat while traveling on a highway using Autopilot. Instead of keeping his hands on the steering wheel, as Tesla directs all users to do while in Autopilot, Lentini gestures to imply he is using Vision Pro’s interface. (The Apple headset relies on interpreting specific hand movements to navigate and utilize its apps.) The video then cuts to Lentini in a parking lot as a police vehicle flashes its lights behind him.

“So the police were not even in the parking lot for me to begin with,” Lentini alleges in the email. “I wasn’t pulled over never mind [sic] not being arrested nor ticketed.”

Lentini uploaded his clip to X on February 2, the same day Apple’s Vision Pro headset hit stores, but it wasn’t until this weekend that the post began gaining momentum. Numerous outlets have since covered Lentini’s video, as well as similar content. A different video posted to X on February 3 appears to show another Apple Vision Pro user in the driver’s seat of a Tesla Cybertruck. Like Lentini, the driver makes gestures known to control the headset, implying the $60,990-base-price EV is engaged in Autopilot or Full Self-Driving Beta mode. The Cybertruck video had racked up over 17 million views by Monday morning.

In a follow-up email to PopSci, Lentini confirmed he used Tesla’s Autopilot program during his video after he “got over to the right most lane [sic].” He also claimed he only wore Apple’s headset for “10-15 second increments” totaling “less than 30-40 seconds combined.” 

“I believe the Vision Pro doesn’t even work while traveling since the technology fails to be able to track your reference surroundings and place the graphics accordingly,” he continued. “So all it showed was a pass through video feed,” referring to the headset’s ability to visualize external surroundings with a reported 12-millisecond latency, “as if I was just wearing sunglasses.”

[Related: Here’s a look at Apple’s first augmented reality headset.]

Most US state traffic laws prohibit wearing anything that could potentially obscure a driver’s ability to see their surroundings. In Palo Alto, where Lentini claims to reside, “it is unlawful for a person to drive a vehicle if a television receiver, a video monitor, or a television or video screen, is operating and is visible to the driver.” Violations could include a fine of $238, as well as a point added to the driver’s DMV record.

A previous review of the parameters within Vision Pro’s visionOS coding indicates it disables certain features if it detects users traveling over a “safe speed,” although it’s unclear if this applies to driving. A separate “Travel Mode” can reportedly be enabled while “stationary” in an airplane, but Apple does not offer an explanation of how Vision Pro assesses the speed, travel, and passenger status. 

According to Apple’s official product page, the Vision Pro includes built-in safety features meant to help prevent collisions and falls. “[I]t’s also important to use the device in a safe manner. For example, don’t run while wearing Apple Vision Pro, use it while operating a moving vehicle, or use it while intoxicated or otherwise impaired,” the company states.

Lentini suspects similar viral content videos are also “skits.” Although he understands “some people’s initial frustration” after seeing his clip, “there’s nothing obstructing my vision. I personally feel like it’s more dangerous to text and drive or even eat and drive, even though I still recommend not wearing these while driving.” Illegal “distracted driving” is defined on a state-by-state basis, but usually includes texting. In some places, eating can also fall within the bounds of distracted driving. 

Whether or not flashy, bank-draining luxury items like Apple Vision Pro and Tesla Cybertruck will prove successful remains to be seen. For now, at least, the combination is leaving bystanders dizzied by the whirlwind mix of legality, wealth, virality, and veracity—all exacerbated by such posts’ ability to spread across platforms like X.

The post Don’t worry, that Tesla driver only wore the Apple Vision Pro for ’30-40 seconds’ appeared first on Popular Science.

How a baby with a headcam taught AI to learn words https://www.popsci.com/technology/baby-headcam-ai-learn/ Fri, 02 Feb 2024 18:09:48 +0000 https://www.popsci.com/?p=601333
Photo of an 18-month-old baby wearing a head-mounted camera. Wai Keen Vong

An AI model identified objects 62% of the time after being trained on video and audio captured by a camera strapped to a toddler’s head.

Artificial intelligence researchers were able to successfully create a machine learning model capable of learning words using footage captured by a toddler wearing a headcam. The findings, published this week in Science, could shed new light on the ways children learn language and potentially inform researchers’ efforts to build future machine learning models that learn more like humans. 

Previous research estimates children tend to begin acquiring their first words around 6 to 9 months of age. By their second birthday, the average kid possesses around 300 words in their vocabulary toolkit. But the actual mechanics underpinning exactly how children come to associate meaning with words remain unclear and are a point of scientific debate. Researchers from New York University’s Center for Data Science tried to explore this gray area further by creating an AI model that attempts to learn the same way a child does.

To train the model, the researchers relied on over 60 hours of video and audio recordings pulled from a lightweight head camera strapped to a child named Sam. The toddler wore the camera on and off starting when he was six months old and ending after his second birthday. Over those 19 months, the camera collected over 600,000 video frames connected to more than 37,500 transcribed utterances from nearby people. The background chatter and video frames pulled from the headcam provide a glimpse into the experience of a developing child as he eats, plays, and generally experiences the world around him. 

Armed with Sam’s eyes and ears, the researchers then created a neural network model to try to make sense of what Sam was seeing and hearing. The model, which had one module analyzing single frames taken from the camera and another focused on transcribed speech directed toward Sam, was self-supervised, meaning it didn’t use external data labeling to identify objects. Like a child, the model learned by associating words with particular objects and visuals when they happened to co-occur.
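
As a rough illustration of that co-occurrence idea, here is a minimal PyTorch sketch of a contrastive image-and-utterance setup: frames and the speech heard alongside them are pulled together in a shared embedding space, while mismatched pairs are pushed apart. The architectures, dimensions, and loss details below are assumptions for demonstration, not the NYU team’s actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Tiny stand-in for a vision backbone that maps video frames to embeddings."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, frames):
        return F.normalize(self.net(frames), dim=-1)

class UtteranceEncoder(nn.Module):
    """Averages word embeddings of a transcribed utterance (toy language module)."""
    def __init__(self, vocab_size: int = 5000, dim: int = 128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim, mode="mean")

    def forward(self, token_ids, offsets):
        return F.normalize(self.embed(token_ids, offsets), dim=-1)

def contrastive_loss(img_emb, txt_emb, temperature: float = 0.07):
    """Pull together frame/utterance pairs that co-occurred; push apart the rest."""
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# One illustrative training step on a random batch of 8 co-occurring pairs.
frames = torch.randn(8, 3, 64, 64)          # 8 video frames
tokens = torch.randint(0, 5000, (24,))      # words from 8 utterances, flattened
offsets = torch.arange(0, 24, 3)            # 3 tokens per utterance
image_encoder, utterance_encoder = ImageEncoder(), UtteranceEncoder()
loss = contrastive_loss(image_encoder(frames), utterance_encoder(tokens, offsets))
loss.backward()
```

Chance pairings within a batch supply the negative examples, so no human labeling is needed; the real study trained on Sam’s actual frames and transcripts rather than the random tensors used here.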

Testing procedure in models and children. Credit: Wai Keen Vong

“By using AI models to study the real language-learning problem faced by children, we can address classic debates about what ingredients children need to learn words—whether they need language-specific biases, innate knowledge, or just associative learning to get going,” paper co-author and NYU Center for Data Science Professor Brenden Lake said in a statement. “It seems we can get more with just learning than commonly thought.”

Researchers tested the model the same way scientists evaluate children: they presented it with four images pulled from the training set and asked it to pick the one that matched a given word like “ball,” “crib,” or “tree.” The model was successful 61.6 percent of the time. The baby cam-trained model even approached levels of accuracy similar to a pair of separate AI models that were trained on far more language input. More impressive still, the model was able to correctly identify some images that weren’t included in Sam’s headcam dataset, which suggests it was able to generalize beyond the specific data it was trained on.
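
That evaluation is a four-alternative forced choice, so chance performance is 25 percent, the baseline against which the reported 61.6 percent should be read. Below is a short sketch of how such trials could be scored, assuming encoders like those in the sketch above; the trial format and names here are hypothetical, not the study’s exact protocol.

```python
import torch

def four_way_trial(image_encoder, utterance_encoder, word_tokens, candidate_frames, target_idx):
    """Score one trial: does the word embedding sit closest to the target image?"""
    with torch.no_grad():
        word_emb = utterance_encoder(word_tokens, torch.tensor([0]))  # a one-word "utterance"
        frame_embs = image_encoder(candidate_frames)                  # 4 candidate frames
        picked = (frame_embs @ word_emb.t()).squeeze(-1).argmax().item()
    return picked == target_idx

def forced_choice_accuracy(image_encoder, utterance_encoder, trials):
    """trials: list of (word_tokens, candidate_frames, target_idx) tuples."""
    results = [four_way_trial(image_encoder, utterance_encoder, *trial) for trial in trials]
    return sum(results) / len(results)

# Example with random stand-in data: 10 trials, 4 candidate frames each.
trials = [(torch.randint(0, 5000, (1,)), torch.randn(4, 3, 64, 64), 0) for _ in range(10)]
# print(forced_choice_accuracy(ImageEncoder(), UtteranceEncoder(), trials))
```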

“These findings suggest that this aspect of word learning is feasible from the kind of naturalistic data that children receive while using relatively generic learning mechanisms such as those found in neural networks,” Lake said. 

In other words, the AI model’s ability to consistently identify objects using only data from the head camera suggests that associative learning, or simply linking visuals with the words that occur alongside them, does seem to be enough for a child to begin acquiring a vocabulary. 

Findings hint at an alternative method to train AI 

Looking ahead, the NYU researchers’ findings could prove valuable for AI developers interested in creating models that learn in ways more similar to humans. The AI industry and computer scientists have long used human thinking and neural pathways as inspiration for building AI systems. 

Recently, large language models like OpenAI’s GPT models or Google’s Bard have proven capable of writing serviceable essays, generating code, and periodically botching facts, thanks to an intensive training period in which the models ingest trillions of words’ worth of data pulled from mammoth datasets. The NYU findings, however, suggest an alternative method of word acquisition may be possible. Rather than rely on mounds of potentially copyright-protected or biased inputs, an AI model mimicking the way humans learn as we crawl and stumble our way around the world could offer an alternative path toward recognizing language. 

“I was surprised how much today’s AI systems are able to learn when exposed to quite a minimal amount of data of the sort a child actually receives when they are learning a language,” Lake said.

The post How a baby with a headcam taught AI to learn words appeared first on Popular Science.

FCC wants to make AI-generated robocalls illegal https://www.popsci.com/technology/fcc-wants-to-make-ai-generated-robocalls-illegal/ Thu, 01 Feb 2024 17:50:31 +0000 https://www.popsci.com/?p=601218
The new policy proposal, if accepted, would make AI-generated robocalls easier to investigate and prosecute. DepositPhotos

New AI voice-cloning tools are making already frustrating robocalls more dangerous.

The US’ top communications regulator believes AI-generated robocalls like the one recently impersonating President Joe Biden in New Hampshire should be considered illegal under existing law. That legal designation would make it easier to charge voice cloning scammers with fraud and could act as a deterrent to push back against a rising tide of scams carried out using generative AI tools.

In a proposal released this week, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel said the FCC should recognize AI-generated voice calls as falling under the Telephone Consumer Protection Act (TCPA). The TCPA already places restrictions on automated marketing calls, also known as robocalls, though it’s still unclear whether or not AI-generated content neatly falls under that category. An FCC vote in favor of Rosenworcel’s proposal would clear up that ambiguity and make AI-generated robocalls illegal without the need for any new legislation. That vote, according to an FCC spokesperson speaking with TechCrunch, will occur at the Commissioner’s discretion. 

“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” Rosenworcel said in a statement. “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”

An FCC spokesperson told PopSci that clarifying that AI-generated calls are robocalls under existing laws would make it easier for state and federal investigators to take enforcement actions.

Why are AI-generated robocalls an issue? 

Increasingly convincing and easy to use AI voice cloning tools are making already frustrating robocalls more dangerous. Scammers can now use these tools to make it seem as if the person on the other end of the line is a famous celebrity, politician, or even a direct relative. That added layer of familiarity can make callers on the other end of the line more comfortable and more susceptible to handing over sensitive information. Scams like these are becoming more common. One out of every 10 respondents surveyed by security software firm McAfee last year said they were personally targeted by a voice scam. 77% of the targeted victims reported losing money. 

Rosenworcel isn’t the only one who wants to outlaw the practice, either. Earlier this month, attorneys general from 26 states formed a coalition and sent a letter to the FCC urging the agency to restrict generative AI’s use by telemarketers. The AG letter says telemarketers looking to impersonate humans should fall under the TCPA’s “artificial” designation, which would require them to obtain written consent from consumers before targeting them with calls. 

“Technology is advancing and expanding, seemingly, by the minute, and we must ensure these new developments are not used to prey upon, deceive, or manipulate consumers,” Pennsylvania Attorney General Michelle Henry said in a statement.  

FCC’s long battle against robocallers 

The FCC has spent years pushing back against more traditional, non-AI-generated robocalls with varying degrees of success. Last year, the agency issued a record-breaking $300 million fine against a large robocalling operation that was reportedly responsible for billions of dollars’ worth of automobile warranty scams. Prior to that, the agency levied a $5 million fine against a pair of operatives who carried out over 1,100 unlawful robocalls as part of an effort to suppress Black voter turnout in the 2020 presidential elections. 

[ Related: FCC slaps voter suppression robocall scammers with a record-breaking fine. ] 

Still, rooting out all robocalls remains an exceedingly difficult challenge. Many robocall operations originate from outside of the US, which makes them difficult to prosecute. US carriers, meanwhile, are limited in what cell numbers they can reasonably block. Evolving robocalling techniques, like “spoofing” phone numbers to make them seem as if they are in your area code, also make enforcement more difficult. 

Rising anxieties around potential election interference and sophisticated scams exacerbated by voice clones could motivate the FCC to act quickly this time. And unlike other proposals attempting to penalize AI deepfakes on the web, this policy change could occur without corralling divided members of Congress into agreeing on a new bill.

The post FCC wants to make AI-generated robocalls illegal appeared first on Popular Science.

13 percent of AI chat bot users in the US just want to talk https://www.popsci.com/technology/ai-chatbot-chatgpt-survey-talk/ Wed, 31 Jan 2024 21:30:00 +0000 https://www.popsci.com/?p=601017
As AI becomes more ubiquitous and naturalistic, many industry critics have voiced concerns about a potentially increasing number of people turning to technology instead of human relationships. Deposit Photos

A Consumer Reports survey says many adults who used programs like ChatGPT in the summer of 2023 simply wanted to 'have a conversation with someone.'

Most people continue to use AI programs such as ChatGPT, Bing, and Google Bard for mundane tasks like internet searches and text editing. But of the roughly 103 million US adults turning to generative chatbots in recent months, an estimated 13 percent occasionally did it to simply “have a conversation with someone.” 

New national surveys from Consumer Reports explore how and why people are interacting with the increasingly influential technology.

[Related: Humans actually wrote that fake George Carlin ‘AI’ routine.]

According to the August 2023 survey results released on January 30, a vast majority of Americans (69 percent) either did not regularly utilize AI chat programs in any memorable way, or did not use them at all within the previous three months. Those that did, however, overwhelmingly opted to explore OpenAI’s ChatGPT—somewhat unsurprising, given the company’s continued industry dominance. With 19 percent of respondents, ChatGPT usage was more than triple that of Bing AI, as well as nearly five times more popular than Google Bard.

Most AI users asked their programs to conduct commonplace tasks, such as answering questions in lieu of a traditional search engine, writing content, summarizing longer texts, and offering ideas for work or school assignments. Despite generative AI’s purported strength at creating and editing computer code, just 10 percent of those surveyed recounted using the technology to do so—three percentage points fewer than the number of participants who used it to strike up a conversation.

The desire for idle conversation with someone else is an extremely human, natural feeling. But while chatbots likely present a quick fix for some of those surveyed by Consumer Reports, there are already signs that it’s not necessarily the healthiest of habits.

As AI becomes more ubiquitous and naturalistic, many industry critics have voiced concerns about a potentially increasing number of people turning to technology instead of human relationships. Numerous reports in recent months highlight a growing market of AI bots explicitly marketed to an almost exclusively male audience as “virtual girlfriends.” Meanwhile, countless examples showcase men repeatedly engaging in behavior with their digital partners that would be considered abusive in the real world.

Of course, it’s important to note that simply putting the “chat” in “chatbot” to the test isn’t in any way concerning on its own. This is a shiny, new technology, after all—one that is being aggressively pushed within a largely unregulated industry. Extrapolating from Consumer Reports’ survey results, it’s unlikely that a large portion of the estimated 10.2 million Americans who recently had a “conversation” with a chatbot are planning on putting a (digital) ring on it. Still, that’s quite a lot of people looking to gab—roughly as many as those who visited an AI chatbot for “no particular task, I just wanted to see what it was like.”

The post 13 percent of AI chat bot users in the US just want to talk appeared first on Popular Science.

AI-generated Taylor Swift porn deepfakes ran rampant on X. Will laws catch up? https://www.popsci.com/technology/ai-taylor-swift-deepfake-x/ Wed, 31 Jan 2024 19:10:00 +0000 https://www.popsci.com/?p=600911
X briefly banned search results for the singer’s name on the platform. DepositPhotos

Gutted trust and safety teams and loose content moderation at X may have been to blame.

Nonconsensual, AI-generated images and video appearing to show singer Taylor Swift engaged in sex acts flooded X, the site formerly known as Twitter, last week, with one post reportedly viewed 45 million times before it was taken down. The deluge of AI-generated “deepfake” porn persisted for days, and only slowed after X briefly banned search results for the singer’s name on the platform entirely. Now, lawmakers, advocates, and Swift fans are using the content moderation failure to fuel calls for new laws that clearly criminalize the online spread of sexually explicit, AI-generated deepfakes. 

How did the Taylor Swift deepfakes spread? 

Many of the AI-generated Swift deepfakes reportedly originated on the notoriously misogynistic message board 4chan and in a handful of relatively obscure private Telegram channels. Last week, some of them made the jump to X, where they quickly started spreading like wildfire. Numerous accounts flooded X with the deepfake material, so much so that searching for the term “Taylor Swift AI” would serve up the images and videos. In some regions, The Verge notes, that same hashtag was featured as a trending topic, which ultimately amplified the deepfakes further. One post in particular reportedly received 45 million views and 24,000 reposts before it was eventually removed. It took X 17 hours to remove the post despite it violating the company’s terms of service. 

X did not immediately respond to PopSci’s request for comment. 

With new iterations of the deepfakes proliferating, X moderators stepped in on Sunday and blocked search results for “Taylor Swift” and “Taylor Swift AI” on the platform. For several days, users who searched for the pop star’s name reportedly saw an error message reading “something went wrong.” X officially addressed the issue in a tweet last week, saying it was actively monitoring the situation and taking “appropriate action” against accounts spreading the material. 

Swift’s legion of fans took matters into their own hands last week by posting non-sexualized images of the pop star with the hashtag #ProtectTaylorSwift in an effort to drown out the deepfakes. Others banded together to report accounts that uploaded the pornographic material. The platform officially lifted the two-day ban on Swift’s name Monday. 

“Search has been re-enabled and we will continue to be vigilant for any attempt to spread this content and will remove it if we find it,” X Head of Business Joe Benarroch said in a statement sent to the Wall Street Journal. 

Why did this happen? 

Sexualized deepfakes of Swift and other celebrities do make appearances on other platforms, but privacy and policy experts said X’s uniquely hands-off approach to content moderation in the wake of its acquisition by billionaire Elon Musk was at least partly to blame for the event’s unique virality. As of January, X had reportedly laid off around 80% of the engineers working on trust and safety teams since Musk took the helm. 

That gutting of the platform’s main line of defense against violating content makes an already difficult content moderation challenge even harder, especially during viral moments when users flood the platform with more potentially violating posts. Other major tech platforms run by Meta, Google, and Amazon have similarly downsized their own trust and safety teams in recent years, which some fear could lead to an uptick in misinformation and deepfakes in the coming months.  

Trust and safety workers still review and remove some violating content at X, but the company has openly relied more heavily on automated moderation tools to detect those posts since Musk took over. X is reportedly planning on hiring 100 additional employees to work in a new “Trust and Safety center of excellence” in Austin, Texas later this year. Even with those additional hires, the total number of trust and safety staff will still be a fraction of what it was prior to layoffs.

AI deepfake clones of prominent politicians and celebrities have heightened anxieties around how the tech could be used to spread misinformation or influence elections, but nonconsensual pornography remains the dominant use case. These images and videos are often created using lesser-known, open-source generative AI tools, since popular models like OpenAI’s DALL-E explicitly prohibit sexually explicit content. Technological advancements in AI and wider access to the tools have, in turn, contributed to an increased amount of sexual deepfakes on the web. 

Researchers in 2021 estimated that somewhere between 90 and 95% of deepfakes living on the internet were nonconsensual porn, the overwhelming majority of which targeted women. That trend is showing no signs of slowing down. An independent researcher speaking with Wired recently estimated that more deepfake porn was uploaded in 2023 than in all other years combined. AI-generated child sexual abuse material, some of which is created without images of real humans, is also reportedly on the rise. 

How Swift’s following could influence tech legislation 

Swift’s tectonic cultural influence and particularly vocal fan base are helping reinvigorate years-long efforts to introduce and pass legislation explicitly targeting nonconsensual deepfakes. In the days since the deepfake material began spreading, major figures like Microsoft CEO Satya Nadella and even President Joe Biden’s White House have weighed in, calling for action. Multiple members of Congress, including Democratic New York Representative Yvette Clarke and Republican New Jersey Representative Tom Kean Jr., released statements promoting legislation that would attempt to criminalize sharing nonconsensual deepfake porn. One of those bills, called the Preventing Deepfakes of Intimate Images Act, could come up for a vote this year. 

Deepfake porn and legislative efforts to combat it aren’t new, but Swift’s sudden association with the issue could serve as a social accelerant. An echo of this phenomenon occurred in 2022, when the Department of Justice announced it would launch an antitrust investigation into Live Nation after Ticketmaster’s site crumbled under the demand for presale tickets to Swift’s “The Eras” tour. The incident resparked some music fans’ long-held grievances toward Live Nation and its supposed monopolistic practices, so much so that executives from the company were forced to attend a Senate Judiciary Committee hearing grilling them on their business practices. Multiple lawmakers made public statements supporting “breaking up” Live Nation-Ticketmaster.

Whether or not that same level of political mobilization happens this time around with deepfakes remains to be seen. Still, the boost in interest in laws reining in AI’s darkest use cases following the Swift deepfake debacle points to the power of having culturally relevant figureheads attach their names to otherwise lesser-known policy pursuits. That relevance can help jump-start bills to the top of agendas when they would otherwise have been destined for obscurity. 

The post AI-generated Taylor Swift porn deepfakes ran rampant on X. Will laws catch up? appeared first on Popular Science.

Humans actually wrote that fake George Carlin ‘AI’ standup routine https://www.popsci.com/technology/george-carlin-ai-lawsuit/ Mon, 29 Jan 2024 18:00:00 +0000 https://www.popsci.com/?p=600591
Black and white portrait of George Carlin
Pictured: The real George Carlin. Mark Junge/Getty Images

The podcasters responsible still face a copyright infringement lawsuit from the late comedian's estate.

The podcasters behind “George Carlin: I’m Glad I’m Dead”—a controversial stand-up “special” originally advertised as AI-generated—confirm their stunt routine was “completely written” by a human. Although an unsurprising turn of events, it still may not shield them from legal fury.

A brief catchup on the Carlin controversy

To bring anyone blessedly unaware of recent events up to speed: Earlier this month, content creators Will Sasso and Chad Kultgen hyped a forthcoming, Carlin-centric episode of Dudesy, a podcast series they claim is written by a “state of the art entertainment AI” of the same name trained on data including the duo’s own social media posts, text messages, and emails. Then, on January 9, Sasso and Kultgen released the episode (currently private on YouTube) after claiming to have “trained” the “AI” on text and audio from the entirety of Carlin’s more than 50-year career.

“George Carlin died… before 2010, I think—and now he’s been resurrected by an AI to create more material,” Kultgen said in a preview YouTube video. Carlin died in 2008.

At the episode’s outset, the Dudesy “AI” claimed: “I listened to all of George Carlin’s material and did my best to imitate his voice, cadence and attitude, as well as the subject matter I think would have interested him today,” before launching into “George Carlin: I’m Glad I’m Dead.” Over the course of the segment, a vocal clone of the late comedian covered a range of Carlinesque topics, including gun violence, politics, free speech, and class.

“If you’re in America, you’re special. God made something just for you, something no other country on the planet gets,” the fake Carlin states early in the episode, as reported over the weekend by The Washington Post. “Of course, I’m talking about mass shootings!” Listeners were not amused.

A tough crowd

Virtually the only positive response to Dudesy’s fake Carlin set came from a self-provided audience laugh track. The internet quickly panned the episode as a clickbait cash-in meant to leverage a simultaneously hyped and maligned AI industry.

“ChatGPT and other LLMs rely on vast swaths of copyrighted material created by human hands. Dudesy can’t fart out a crass imitation of George Carlin without viewing 14 standup specials that are the sum of a human’s life, dreams, and labor,” Matthew Gault wrote for Vice.

Others doubted how much AI technology was actually used to make “I’m Glad I’m Dead.” Images in the YouTube video resembled generative AI artwork, and vocal cloning can already produce near-indistinguishable imitations of real human voices. However, critics were skeptical that any generative AI is currently capable of creating an hour’s worth of coherent material.

“Despite the claims that Dudesy has somehow ingested Sasso and Kultgen’s work, or that it somehow ‘learns’ and ‘generates data that will be used to make the next episode better,’ it appears to be more likely that it uses a combination of readily-available tools patched together to ‘surprise’ two comedians clearly in on the act,” commentator Ed Zitron wrote in post for his internet culture newsletter.

“It’s also worth remembering the context around AI at the time Dudesy premiered in March 2022. The ‘state of the art’ public AI at the time was the text-davinci-002 version of GPT-3, an impressive-for-its-day model that nonetheless still utterly failed at many simple tasks,” Kyle Orland explained for Ars Technica. “It wouldn’t be until months later that a model update gave GPT-3 now-basic capabilities like generating rhyming poetry.”

Meanwhile, the comedy legend’s daughter also made her own thoughts on the matter clear.

“I understand and share the desire for more George Carlin. I, too, want more time with my father,” Kelly Carlin wrote in a statement posted to X a day after the video’s release. “But… the ‘George Carlin’ in that video is not the beautiful human who defined his generation and raised me with love. It is a poorly-executed facsimile cobbled together by unscrupulous individuals to capitalize on the extraordinary goodwill my father established with his adoring fan base.”

The Carlin estate’s legal team filed a lawsuit against Dudesy’s creators on January 25, claiming copyright infringement, deprivation of rights of publicity, and violation of rights of publicity. According to US law, plaintiffs could be entitled to as much as $150,000 per charge. Soon afterwards, the podcasters finally confirmed many critics’ suspicions.

In a statement first provided to The New York Times on Friday morning last week, a spokesperson for Sasso and Kultgen said Dudesy is a “fictional podcast character created by two human beings.” As for “I’m Glad I’m Dead,” the material itself was “completely written” by Kultgen, although the lawsuit’s defendants have yet to confirm whether they employed AI for the Carlin vocal clone or the accompanying artwork. 

Joshua Schiller, a partner at Boies Schiller Flexner, LLP, and an attorney for the Carlin estate, believes Sasso and Kultgen admitting to the stunt won’t absolve the duo of legal responsibility.

“Who knows what to believe from these defendants? All we know is that they are craven opportunists who have fabricated a piece of content that violates multiple of my clients’ rights,” Schiller said in a statement provided to PopSci on Monday. “We look forward to getting the truth about how this shameful spectacle was created and holding defendants accountable for their blatant disregard for the law and basic decency.”

According to the lawsuit filing previously obtained by Ars Technica, plaintiff attorneys argue Carlin’s reputation and legacy is now potentially damaged by association with the Dudesey special, and are continuing to seek legal and financial compensation.

The Carlin “stand-up,” although largely debunked, draws attention once again to the mounting copyright-related lawsuits against a still largely unregulated AI industry. Makers of programs such as ChatGPT maintain that access to copyrighted material is key to training trustworthy, safe AI. Compensation for such access, however, is currently far from uniform, reliable, or even legally sound. Meanwhile, there remains the possibility that “I’m Glad I’m Dead” employed both an AI vocal clone and generative art programs—both of which are often trained on massive, copyrighted datasets.

The real George Carlin saw it coming

As the controversies continue to play out, it certainly feels like Carlin himself was onto something almost exactly 20 years ago.

“I’ve been uplinked and downloaded. I’ve been inputted and outsourced. I know the upside of downsizing; I know the downside of upgrading,” he wrote in his 2004 essay, “Ode to the Modern Man.” “I’m a high-tech lowlife. A cutting-edge, state-of-the-art, bicoastal multitasker, and I can give you a gigabyte in a nanosecond.”

UPDATE 04/03/2024 10:31AM: The legal team for Carlin’s estate announced an out-of-court settlement with the creators of George Carlin: I’m Glad I’m Dead. Sasso and Kultgen agreed to permanently remove the special from all platforms, and to never again use Carlin’s image, voice, or likeness without the estate’s approval. Additional settlement details, including monetary compensation, were not disclosed.

The post Humans actually wrote that fake George Carlin ‘AI’ standup routine appeared first on Popular Science.

A deepfake ‘Joe Biden’ robocall told voters to stay home for primary election https://www.popsci.com/technology/biden-robocall-ai-clone-deepfake/ Mon, 22 Jan 2024 20:56:15 +0000 https://www.popsci.com/?p=599725
Joe Biden speaking in front of American flag
A robocall scam told New Hampshire residents not to write in Biden's name during Tuesday's primary. Deposit Photos

An AI vocal clone confused New Hampshire residents ahead of first-in-the-nation primary.

AI vocal cloning technology is reportedly already muddying the waters ahead of the 2024 election. According to a statement issued by the New Hampshire attorney general’s office on Monday, a robocall campaign deployed over the weekend used an imitation of President Joe Biden’s voice to urge recipients not to vote in the state’s January 23 presidential primary.

“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” an AI-generated Biden told residents over the phone. “Your vote makes a difference in November, not this Tuesday.”

The disinformation campaign’s orchestrators are currently unknown, but the call comes from “obviously somebody who wants to hurt Joe Biden,” former New Hampshire Democratic Party chair Kathy Sullivan said in NBC News’ initial exclusive report.

[Related: Deepfake audio already fools people nearly 25 percent of the time.]

A growing problem

AI-generated content, including deepfaked audio, video, and imagery, is a growing concern among misinformation experts. Multiple reports have warned that today’s media, internet, and social landscapes are unprepared for a likely imminent deluge of falsified “fake news” content as the 2024 presidential election intensifies. A recent study conducted by researchers at the UK’s University College London indicates AI-generated audio can fool as many as 1 in 4 listeners. Over 1,600 videos uploaded to YouTube have featured deepfaked celebrities like Taylor Swift and Steve Harvey hawking “medical card” schemes and other scams, collectively amassing over 195 million views in the process. But unlike “free money” ploys from an AI Oprah, the latest vocal cloning example is explicitly meant to influence the US political landscape.

“Disgraceful and an unacceptable affront to democracy”

As for this weekend’s misinfo campaign, the fake Biden falsely claimed an ongoing statewide campaign to write in his name during New Hampshire’s primary would hurt the president’s reelection prospects. The robocall message then concluded with a phone number linked to Kathy Sullivan, resulting in a flurry of calls on Sunday evening that prompted the former New Hampshire Democratic Party chair to report the situation to the state attorney general’s office.

“These messages appear to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters,” the state attorney general’s office cautioned in Monday’s statement. Voters were urged to disregard the message, with the office explicitly making clear, “Voting in the New Hampshire Presidential Primary Election does not preclude a voter from additionally voting in the November General Election.”

[Related: Beware the AI celebrity clones peddling bogus ‘free money’ on YouTube.]

Representatives of former President Trump’s campaign have denied any connection to the robocall scheme. Meanwhile a spokesperson for Dean Phillips, the congressman from Minnesota challenging Biden for the Democratic Party nomination, described the scam as “wildly concerning.”

“Any effort to discourage voters is disgraceful and an unacceptable affront to democracy,” Phillips campaign representative Katie Dolan told NBC News. “The potential use of AI to manipulate voters is deeply disturbing.”

Chatbot politician stand-ins

At least some Phillips supporters, however, are embracing other AI tool tactics, despite warnings to the contrary. Late last week, The Washington Post noted that Dean.Bot—a chatbot created by a pro-Phillips SuperPAC—had been removed from OpenAI’s recently launched online store for “knowingly violating our API usage policies which disallow political campaigning.”

Despite the slap on the wrist, Phillips remains a favorite of some of Silicon Valley’s top players. The SuperPAC behind Dean.Bot, We Deserve Better, was co-founded by a former chief of staff to Sam Altman, OpenAI co-founder and recently fired-and-rehired CEO. Altman himself has met with Phillips, although he has yet to endorse or formally donate to the congressman’s campaign.

An ongoing investigation

The New Hampshire AG’s statement notes a Department of Justice investigation into the AI presidential robocall is ongoing, and encourages residents to contact the Election Law Unit if they received a message beginning with AI Biden opining, “What a bunch of malarkey.”

The post A deepfake ‘Joe Biden’ robocall told voters to stay home for primary election appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Check out some of the past year’s most innovative musical inventions https://www.popsci.com/technology/guthman-musical-instrument-finalists/ Thu, 18 Jan 2024 20:28:51 +0000 https://www.popsci.com/?p=599336
Orpheas Kofinakos, Herui Chen, Peter Zhang – The eXpressive Electronic Keyboard Instrument (XEKI)
Orpheas Kofinakos, Herui Chen, Peter Zhang – The eXpressive Electronic Keyboard Instrument (XEKI). Guthman Musical Instrument Competition / Georgia Tech

The finalists of this year’s Guthman Musical Instrument Competition include spinning guitars and handheld electromagnets.

The post Check out some of the past year’s most innovative musical inventions appeared first on Popular Science.

]]>

Every year since 2009, a handful of artists, engineers, musicians, and hobbyists from around the world arrive in Atlanta, Georgia, with one-of-a-kind instruments in tow. Sitars made from golf clubs, pianos generating otherworldly tones from electromagnets, and infinitely customizable miniature synthesizers—all have taken home prizes at Georgia Tech’s Guthman Musical Instrument Competition. As the university gears up to showcase 2024’s ten finalists, Jason Freeman is excited, to say the least. “It’s one of the favorite parts of my job,” he tells PopSci.

Although the School of Music has been a part of the university since its marching band formed in 1908, the world of instrumentation has changed dramatically over the ensuing century. Freeman, professor and chair of Georgia Tech’s School of Music and the competition’s director, and his fellow organizers saw an opportunity to draw attention to the ever-evolving world of music technology, as well as to human beings’ immense creative capacities using increasingly accessible tools.


Freeman says they receive between 50 and 100 open call submissions each year from creators residing everywhere from Turkey, to Germany, to Spain, to California. Ten finalists converge on campus in the spring to demonstrate their inventions in front of a panel of judges, as well as a packed house. One of the school’s most public activities, the Guthman Competition hosts as many as 1,500 visitors during the two-day event, including K-12 students and industry professionals alike.

In early March, attendees will be able to see (and hear) finalist entries such as Jean-François Laporte’s Babel Table, an instrument created for a children’s project utilizing multiple arrays of latex membranes and compressed air flows to produce everything from percussive tones to electronic-esque chirping notes. Pippa Kelmenson’s Bone Conductive Instrument (BCI) emits sound signals that vibrate the body’s resonant frequencies to aid hard-of-hearing users. Playmodes’ Sonògraf, meanwhile, uses camera-enabled machine learning to transform a user’s handwritten drawings and collages into audible melodies. Freeman admits it can be difficult to pick a winner with such a wide variety of finalists, and likens the process to comparing apples and oranges.

Pippa Kelmenson's Bone Conductive Instrument (BCI) uses vibrating sound signals through the body to help users with hearing spectrum issues.
Pippa Kelmenson’s Bone Conductive Instrument (BCI) uses vibrating sound signals through the body to help users with hearing spectrum issues. Credit: Georgia Tech

“One may be a mobile app, another may be a reimagination of a traditional instrument, while another may be a device designed for deaf or hard-of-hearing musicians,” he says. “We celebrate this whole spectrum of practices with the competition.”

Like the instruments on display, Freeman says the competition continues to evolve, as well. Initially, organizers focused on much narrower criteria, namely the potential for an instrument to go on to achieve widespread commercial success.

Lorentz Violin musical instrument invention
Thomas Coor’s Lorentz Violin is a portable electromechanical instrument employing a guitar pickup and variable-speed magnetic wheel to make its tones. Credit: Georgia Tech

“While some winners have certainly achieved this goal, many never intended to mass produce their creations,” he explains. “They were creating something that has a very specific and unique need. Sometimes for a musician of one, and sometimes for a very specific field of practice.”

Anthony Dickens' Circle Guitar uses a rotating wheel to strike the strings, creating rhythms otherwise impossible to perform by hand
Anthony Dickens’ Circle Guitar uses a rotating wheel to strike the strings, creating rhythms otherwise impossible to perform by hand. Credit: Georgia Tech

Generally speaking, judges now evaluate finalists on three central criteria: design, innovation, and musicality, i.e., an instrument’s potential for creative and melodic expression. Most submissions involve some degree of electronics (see: Teenage Engineering’s OP-1 mini synthesizer) although many remain wholly acoustic creations. Reimaginings of traditional instruments are also common, such as 2014’s first-place winner—the Adjustable Microtonal Guitar from Turkish musician Tolgahan Çoğulu, which allows players to customize the fretboard for non-western melodies. Another common theme is using electromagnets to generate sound, as seen in 2024 finalist Nicola Privato’s Thales sensors.

Another major theme in recent years is an increasing emphasis on using instruments in educational contexts, giving students ways to learn to play music while avoiding some of the common early-phase pain points and frustrations. Another is increasing accessibility for individuals with different needs, such as a physical disability that prevents them from playing a particular instrument. These approaches are some of Freeman’s personal favorites.

“There’s still tremendous potential for technology that helps us become better at traditional instruments, but also retrofits or expands or adapts practices from those instruments to help people be creative and make music,” he says.


The tools used to create instruments, meanwhile, are becoming more accessible and affordable, as well as easier to use. When asked about the most recent technological buzzwords and their (potentially problematic) implications, however, Freeman seems unfazed. If anything, it’s old news.

“Machine learning and AI have really been long important in this space in a variety of ways,” he says.

But as innovative as they can be, such technologies are not an instant recipe for success. Freeman cautions that designers often mistakenly believe an instrument’s qualities can be mapped one-to-one: tone to one sensor, physical intensity to another, volume to yet another.

“That’s not how real musical instruments work. Real musical instruments are nonlinear systems that have a huge amount of unpredictability built into them.”
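
Freeman’s point is easier to see in code. The sketch below is purely hypothetical (it is not drawn from any Guthman entry or from Freeman’s own examples); it contrasts a naive one-to-one mapping of sensor readings to sound parameters with a coupled, nonlinear mapping in which how a player presses shapes pitch, volume, and timbre together:

```python
def linear_mapping(tilt, pressure):
    """Naive one-to-one mapping: each sensor drives exactly one sound parameter."""
    pitch_hz = 220 + 660 * tilt        # tilt (0-1) controls pitch, and nothing else
    volume = pressure                  # pressure (0-1) controls volume, and nothing else
    brightness = 0.5                   # timbre never responds to the player at all
    return pitch_hz, volume, brightness


def coupled_mapping(tilt, pressure):
    """Closer to a real instrument: parameters interact and respond nonlinearly."""
    pitch_hz = 220 * 2 ** (2 * tilt)                       # exponential pitch, like a string
    volume = pressure ** 1.5                               # soft touches stay soft
    brightness = min(1.0, 0.2 + pressure * (0.5 + tilt))   # playing harder also adds overtones
    return pitch_hz, volume, brightness


for tilt, pressure in [(0.1, 0.2), (0.1, 0.9), (0.8, 0.9)]:
    print(linear_mapping(tilt, pressure), coupled_mapping(tilt, pressure))
```

Even in this toy version, the second mapping behaves more like an instrument: no single knob owns a single quality of the sound.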

The post Check out some of the past year’s most innovative musical inventions appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Sam Altman: Age of AI will require an ‘energy breakthrough’ https://www.popsci.com/technology/sam-altman-age-of-ai-will-require-an-energy-breakthrough/ Thu, 18 Jan 2024 19:09:02 +0000 https://www.popsci.com/?p=599322
 Sam Altman, chief executive officer of OpenAI, attends the World Economic Forum (WEF) in Davos, Switzerland.
Sam Altman, chief executive officer of OpenAI, attends the World Economic Forum (WEF) in Davos, Switzerland. Halil Sagirkaya/Anadolu via Getty Images

Speaking at Davos, OpenAI's CEO spoke of a vague AI future made possible only by currently unavailable resources.

The post Sam Altman: Age of AI will require an ‘energy breakthrough’ appeared first on Popular Science.

]]>

OpenAI CEO Sam Altman believes long-awaited nuclear fusion may be the silver bullet needed to solve artificial intelligence’s gluttonous energy appetite and pave the way for an AI revolution. When that revolution does arrive, however, it might not seem quite as shocking as he once claimed.

Altman touched on AI’s growing demands earlier this week while speaking at a Bloomberg event outside of the annual World Economic Forum meeting in Davos, Switzerland. The CEO said powerful new AI models would likely require even more energy consumption than previously imagined. Solving that energy deficit, he suggested, will require a “breakthrough” in nuclear fusion.

“There’s no way to get there without a breakthrough,” Altman said at the event according to Reuters. “It motivates us to go invest more in [nuclear] fusion.”

AI’s energy problem 

Though some AI proponents believe insights gleaned from advanced models could help fight climate change in novel ways, a growing body of research suggests the up-front energy required to train these complex models is taking a toll of its own. Experts expect the vast amounts of data needed to train models like OpenAI’s GPT and Google’s Bard could further expand the global data server industry, which the International Energy Agency (IEA) estimates already accounts for around 2-3% of global greenhouse gas emissions.

Researchers estimate training a single large language model like GPT-4 could emit around 300 tons of CO2. Others estimate a single image spit out by AI image generator tools like Dall-E or Stable Diffusion requires the same amount of energy as charging a smartphone. The massive server farms needed to facilitate AI training also require vast amounts of water to stay cool. GPT-3 alone, recent research suggests, may have consumed 185,000 gallons of water during its training period.
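
For a rough sense of scale, a back-of-envelope comparison is possible (all figures here are approximations: the 300-ton estimate above, plus the EPA’s commonly cited figure of roughly 4.6 metric tons of CO2 per typical passenger car per year):

```python
# Back-of-envelope scale check only; both inputs are rough public estimates, not measurements.
training_emissions_tons = 300    # estimated CO2 from training one large model (cited above)
car_tons_per_year = 4.6          # EPA's approximate figure for one typical passenger car per year

print(f"Roughly {training_emissions_tons / car_tons_per_year:.0f} car-years of driving")
```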

[ Related: A simple guide to the expansive world of artificial intelligence ]

Altman hopes climate-friendly energy solutions like more affordable solar energy and nuclear fusion can help AI companies meet this growing demand without worsening an already bleak climate outlook. Fusion, which mimics the power generated by stars, has long attracted scientists and entrepreneurs as a source of nearly limitless, clean energy when produced on an industrial scale.

Scientists have already hit several important milestones along the journey towards fusion, but it’s unlikely we will see fully functioning fusion reactors capable of powering AI training models anytime soon. The IEA expects a prototype fusion reactor could come online by 2024. Altman is getting in on the action in the meantime. In 2021, the OpenAI CEO and former Y Combinator President personally invested $375 million in Helion Energy, a US-based company developing a fusion power plant.

AI will ‘change the world much less than we all think’

When he wasn’t pondering a fusion-fueled future, Altman was busy backpedaling away from some of his more cataclysmic claims related to AI. Less than one year ago, Altman signed onto a letter warning of runaway AI possibly ending all human life and wrote a blog post preparing for a world beyond superintelligent AI. Now, speaking to the crowd outside the World Economic Forum event, the CEO says the technology will “change the world much less than we all think.” 

Altman still believes artificial general intelligence, a vague and evolving industry term for a model capable of outperforming humans and exhibiting human-like cognitive abilities, is around the corner, but he seems less concerned about its disruptive impact than he did just months earlier.

“It [AGI] will change the world much less than we all think and it will change jobs much less than we all think,” Altman said during a conversation at the World Economic Forum, according to CNBC. He went on to loosely predict AGI would be developed in the “reasonably close-ish future.” 

[ Related: What happens if AI grows smarter than humans? The answer worries scientists. ]

Altman continued his relatively reserved tenor during a Tuesday conversation with Microsoft CEO Satya Nadella and The Economist editor-in-chief Zanny Minton Beddoes.

“When we reach AGI,” Altman said according to VentureBeat, “the world will freak out for two weeks and then humans will go back to do human things.”

Speaking on Thursday at the World Economic Forum, Altman continued pouring cold water on his company’s own technology, describing the tool as a “system that is sometimes right, sometimes creative, [and] often totally wrong.” Specifically, Altman said AI shouldn’t be trusted to make life-or-death decisions.

“You actually don’t want that [AI] to drive your car,” Altman said according to CNN. ”But you’re happy for it to help you brainstorm what to write about or help you with code that you get to check.”

It’s not entirely clear what caused AI’s loudest evangelist to muffle his tune on the technology’s impacts in such a short period of time. The change in tone notably comes just two months after Altman survived an attempt by OpenAI’s then board of directors to oust him from his role at the company.

At the time, the board members said they sought to remove Altman because he had not been “consistently candid in his communications.” Some observers interpreted that vague explanation as code for Altman allegedly prioritizing AI product launch speed over safety. Altman eventually returned as CEO following a week of late-night corporate jockeying fit for prime-time television.

Altman’s about-face on AI’s impact and his previous doomsday scenarios may sound diametrically opposed, but they share one key attribute: neither is based on open data verifiable by researchers or the greater public. OpenAI’s training methodology remains closed off, leaving predictions about its coming computational power mere speculation.

The post Sam Altman: Age of AI will require an ‘energy breakthrough’ appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Beware the AI celebrity clones peddling bogus ‘free money’ on YouTube https://www.popsci.com/technology/youtube-free-money-deepfakes/ Wed, 10 Jan 2024 20:00:00 +0000 https://www.popsci.com/?p=598195
AI photo
YouTube

Steve Harvey, Taylor Swift, and other famous people's sloppy deepfakes are being used in sketchy 'medical card' YouTube videos.

The post Beware the AI celebrity clones peddling bogus ‘free money’ on YouTube appeared first on Popular Science.

]]>

Online scammers are using AI voice cloning technology to make it appear as if celebrities like Steve Harvey and Taylor Swift are encouraging fans to fall for medical benefits-related scams on YouTube. 404 Media first reported on the trend this week. These are just some of the latest examples of scammers harnessing increasingly accessible generative AI tools to target often economically impoverished communities and impersonate famous people for quick financial gain.

404 Media was contacted by a tipster who pointed the publication towards more than 1,600 videos on YouTube in which deepfaked celebrity voices, as well as non-celebrities, push the scams. Those videos, many of which remain active at the time of writing, reportedly amassed 195 million views. The videos appear to violate several of YouTube’s policies, particularly those around misrepresentation and spam and deceptive practices. YouTube did not immediately respond to PopSci’s request for comment.

How does the scam work?

The scammers try to trick viewers by using chopped-up clips of celebrities paired with voiceovers created by AI tools mimicking the celebrities’ own voices. Steve Harvey, Oprah, Taylor Swift, podcaster Joe Rogan, and comedian Kevin Hart all have deepfake versions of their voices appearing to promote the scam. Some of the videos don’t use celebrity deepfakes at all but instead appear to use a recurring cast of real humans pitching different variations of a similar story. The videos are often posted by YouTube accounts with misleading names like “USReliefGuide,” “ReliefConnection” and “Health Market Navigators.”

“I’ve been telling you guys for months to claim this $6,400,” a deepfake clone attempting to impersonate Family Feud host Steve Harvey says. “Anyone can get this even if you don’t have a job!” That video alone, which was still on YouTube at the time of writing, had racked up over 18 million views.

Though the exact wording of the scams vary by video, they generally follow a basic template. First, the deepfaked celebrity or actor addresses the audience alerting them to a $6,400 end-of-the-year holiday stimulus check provided by the US government delivered via a “health spending card.” The celebrity voice then says anyone can apply for the stimulus so long as they are not already enrolled in Medicare or Medicaid. Viewers are then usually instructed to click a link to apply for the benefits. Like many effective scams, the video also introduces a sense of urgency by trying to convince viewers the bogus deal won’t last long. 

In reality, victims who click through to those links are often redirected to URLs with names like “secretsavingsusa.com” which are not actually affiliated with the US government. Reporters at PolitiFact called a signup number listed on one of those sites and spoke with an “unidentified agent” who asked them for their income, tax filing status, and birth date; all sensitive personal data that could potentially be used to engage in identity fraud. In some cases, the scammers reportedly ask for credit card numbers as well. The scam appears to use confusion over real government health tax credits as a hook to reel in victims. 

Numerous government programs and subsidies do exist to assist people in need, but generic claims offering “free money” from the US government are generally a red flag. The lowering costs of generative AI technology capable of creating somewhat convincing mimics of celebrities’ voices can make these scams even more persuasive. The Federal Trade Commission (FTC) warned of this possibility in a blog post last year, where it cited examples of fraudsters using deepfakes and voice clones to engage in extortion and financial fraud, among other illegal activities. A recent study published in PLOS One last year found deepfake audio can already fool human listeners nearly 25% of the time.

The FTC declined to comment on this recent string of celebrity deepfake scams. 

Affordable, easy-to-use AI tech has sparked a rise in celebrity deepfake scams

This isn’t the first case of deepfake celebrity scams, and it almost certainly won’t be the last. Hollywood legend Tom Hanks recently apologized to his fans on Instagram after a deepfake clone of himself was spotted promoting a dental plan scam. Not long after that, CBS anchor Gayle King said scammers were using similar deepfake methods to make it seem like she was endorsing a weight-loss product. More recently, scammers reportedly combined an AI clone of pop star Taylor Swift’s voice with real images of her using Le Creuset cookware to try to convince viewers to sign up for a kitchenware giveaway. Fans never received the shiny pots and pans.

Lawmakers are scrambling to draft new laws or clarify existing legislation to try to address the growing issues. Several proposed bills, like the Deepfakes Accountability Act and the No Fakes Act, would give individuals more power to control digital representations of their likeness. Just this week, a bipartisan group of five House lawmakers introduced the No AI FRAUD Act, which attempts to lay out a federal framework to protect individuals’ rights to their digital likeness, with an emphasis on artists and performers. Still, it’s unclear how likely those are to pass amid a flurry of new, quickly devised AI legislation entering Congress.

Update 01/11/24 8:49am: A YouTube spokesperson got back to PopSci with this statement: “We are constantly working to enhance our enforcement systems in order to stay ahead of the latest trends and scam tactics, and ensure that we can respond to emerging threats quickly. We are reviewing the videos and ads shared with us and have already removed several for violating our policies and taken appropriate action against the associated accounts.”

The post Beware the AI celebrity clones peddling bogus ‘free money’ on YouTube appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How video game tech, AI, and computer vision help decode animal pain and behavior https://www.popsci.com/science/computer-vision-mice-pain-behavior/ Wed, 10 Jan 2024 15:00:00 +0000 https://www.popsci.com/?p=598046
AI photo
The Jackson Laboratory / Popular Science

Top neuroscience labs are adapting new and unexpected tools to gain a deeper understanding of how mice, and ultimately humans, react to different drug treatments.

The post How video game tech, AI, and computer vision help decode animal pain and behavior appeared first on Popular Science.

]]>

Back in 2013, Sandeep Robert “Bob” Datta was working in his neurobiology lab at Harvard Medical School in Boston when he made the fateful decision to send his student Alex Wiltschko to the Best Buy up the street. Wiltschko was on a mission to purchase an Xbox Kinect camera, designed to pick up players’ body movements for video games like Just Dance and FIFA. He plunked down about $150 and walked out with it. The unassuming piece of consumer electronics would determine the lab’s direction in the coming decade and beyond. 

It also placed the team within a growing scientific movement at the intersection of artificial intelligence, neuroscience, and animal behavior—a field poised to change the way researchers use other creatures to study human health conditions. The Datta Lab is learning to track the intricate nuances of mouse movement and understand the basics of how the mammal brain creates behavior, untangling the neuroscience of different health conditions and ultimately developing new treatments for people. This area of research relies on so-called “computer vision” to analyze video footage of animals and detect behavior patterns imperceptible to the unaided eye. Computer vision can also be used to auto-detect cell types, addressing a persistent problem for researchers who study complex tissues in, for example, cancers and gut microbiomes.

In the early 2010s, Datta’s lab was interrogating how smell, “the sense that is most important to most animals” and the one that mice can’t survive without, drives the rodents’ responses to manipulations in their environment. Human observers traditionally track mouse behavior and record their observations—how many times a mouse freezes in fear, how often it rears up to explore its enclosure, how long it spends grooming, how many marbles it buries. Datta wanted to move beyond the movements visible to the unaided eye and use video cameras to track and compute whether a rodent avoids an odor (that of predator urine, for instance) or is attracted to it (like the smell of roses). The tools available at the time—overhead 2D cameras that tracked each animal as a single point—didn’t yield sufficiently detailed data.

“Even in an arena in the dark, where there’s no stimuli at all, [mice] just generate these incredible behavioral dynamics—none of which are being captured by, like, a dot bouncing around on the screen,” says Datta. So Wiltschko identified the Xbox Kinect camera as a potential solution. Soon after its introduction in 2010, people began hacking the hardware for science and entertainment purposes. It was fitting for Datta’s lab to use it to track mice: It can record in the dark using infrared light (mice move around much more when it’s darker) and can see in 3D when mounted overhead by measuring how far an object is from the sensor. This enabled Datta’s team to follow the subjects when they ran around, reared up, or hunkered down. As it analyzed its initial results, it realized that the Kinect camera recorded the animals’ movements with a richness that 2D cameras couldn’t capture.

“That got us thinking that if we could just somehow identify regularities in the data, we might be able to identify motifs or modules of action,” Datta says. Looking at the raw pixel counts from the Kinect sensor, even as compressed image files and without any sophisticated analysis, they began seeing these regularities. With or without an odor being introduced, every few hundred milliseconds, mice would switch between different types of movement—rearing, bobbing their heads, turning. For several years after the first Kinect tests, Datta and his team tried to develop software to identify and record the underlying elements of the basic components of movement the animals string together to create behavior.

But they kept hitting dead ends.

“There are many, many ways you can take data and divide it up into piles. And we tried many of those ways, many for years,” Datta recalls. “And we had many, many false starts.”

They tried categorizing results based on the animals’ poses from single frames of video, but that approach ignored movement—“the thing that makes behavior magic,” according to Datta. So they abandoned that strategy and started thinking about the smaller motions that last fractions of a second and constitute behavior, analyzing them in sequence. This was the key: the recognition that movement is both discrete and continuous, made up of units but also fluid. 

So they started working with machine learning tools that would respect this dual identity. In 2020, seven years after that fateful trip to Best Buy, Datta’s lab published a scientific paper describing the resulting program, called MoSeq (short for “motion sequencing,” evoking the precision of genetic sequencing). In this paper, they demonstrated their technique could identify the subsecond movements, or “syllables,” as they call them, that make up mouse behavior when they’re strung together into sequences. By detecting when a mouse reared, paused, or darted away, the Kinect opened up new possibilities for decoding the “grammar” of animal behavior.
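
To get a feel for the “syllable” idea, here is a deliberately toy sketch. It is not MoSeq, which fits a far more sophisticated probabilistic model to depth video; it only illustrates the general move of slicing a movement trace into sub-second windows and letting an unsupervised algorithm group recurring windows into repeated motifs. Every variable here is made up for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in "movement" trace: 60 seconds of pose-like features sampled at 30 frames per second.
frames = 60 * 30
features = rng.normal(size=(frames, 3)).cumsum(axis=0)   # a random walk standing in for real data

# Slice the trace into ~330-millisecond windows (10 frames) and flatten each into one vector.
window = 10
n_windows = frames // window
segments = features[: n_windows * window].reshape(n_windows, window * 3)

# Unsupervised grouping: each cluster label is a candidate "syllable" of movement.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(segments)
print(labels[:20])   # the sequence of motifs the animal strings together over time
```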


Computer visionaries

In the far corner of the Datta Lab, which still resides at Harvard Medical School, Ph.D. student Maya Jay pulls back a black curtain, revealing a small room bathed in soft reddish-orange light. To the right sit three identical assemblies made of black buckets nestled inside metal frames. Over each bucket hangs a Microsoft Xbox Kinect camera, as well as a fiber-optic cable connected to a laser light source used to manipulate brain activity. The depth-sensing function of the cameras is the crucial element at play. Whereas a typical digital video captures things like color, the images produced by the Kinect camera actually show the height of the animal off the floor, Jay says—for instance, when it bobs its head or rears up on its hind legs. 
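
The depth data is what makes that possible. As a rough, hypothetical illustration (assuming a fixed overhead depth camera and a reference frame of the empty arena; this is not the lab’s actual code), height off the floor falls out of simple subtraction:

```python
import numpy as np

# Hypothetical depth frames, in millimeters, from a fixed overhead depth camera.
floor_depth = np.full((424, 512), 900.0)       # reference frame: the empty arena floor
frame = floor_depth.copy()
frame[200:240, 250:300] = 855.0                # a mouse-sized blob sitting closer to the sensor

# Height above the floor is simply how much closer each pixel is than the empty arena.
height_mm = np.clip(floor_depth - frame, 0, None)
print(height_mm.max())          # ~45 mm here; a rearing mouse would push this number up
print((height_mm > 10).sum())   # a rough count of body pixels, useful for tracking
```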

Microsoft discontinued the Xbox Kinect cameras in 2017 and has stopped supporting the gadget with software updates. But Datta’s lab developed its own software packages, so it doesn’t rely on Microsoft to keep the cameras running, Jay says. The lab also runs its own software for the Azure Kinect, a successor to the original Kinect that the team also employs—though it was also discontinued, in 2023. Across the lab from the Xbox Kinect rigs sits a six-camera Azure setup that records mice from all angles, including from below, to generate either highly precise 2D images incorporating data from various angles or 3D images.

In the case of MoSeq and other computer vision tools, motion recordings are often analyzed in conjunction with manipulations to the brain, where sensory and motor functions are rooted in distinct modules, and neural-activity readings. When disruptions in brain circuits, either from drugs administered in the lab or edits to genes that mice share with humans, lead to changes in behaviors, it suggests a connection between the two. This makes it possible for researchers to determine which circuits in the brain are associated with certain types of behavior, as well as how medications are working on these circuits.

In 2023, Datta’s lab published two papers detailing how MoSeq can contribute to new insights into an organism’s internal wiring. In one, the team found that, for at least some mice in some situations, differences in mouse behavior are influenced way more by individual variation in the brain circuits involved with exploration than by sex or reproductive cycles. In another, manipulating the neurotransmitter dopamine suggested that this chemical messenger associated with the brain’s reward system supports spontaneous behavior in much the same way it influences goal-directed behaviors. The idea is that little bits of dopamine are constantly being secreted to structure behavior, contrary to the popular perception of dopamine as a momentous reward. The researchers did not compare MoSeq to human observations, but it performed comparably in another set of experiments in a paper that has yet to be published.

These studies probed some basic principles of mouse neurobiology, but many experts in this field say MoSeq and similar tools could broadly revolutionize animal and human health research in the near future. 

With computer vision tools, mouse behavioral tests can run in a fraction of the time that would be required with human observers. This tech comes at a time when multiple forces are calling animal testing into question. The United States Food and Drug Administration (FDA) recently changed its rules on drug testing to consider alternatives to animal testing as prerequisites for human clinical trials. Some experts, however, doubt that stand-ins such as organs on chips are advanced enough to replace model organisms yet. But the need exists. Beyond welfare and ethical concerns, the vast majority of clinical trials fail to show benefits in humans and sometimes produce dangerous and unforeseen side effects, even after promising tests on mice or other models. Proponents say computer vision tools could improve the quality of medical research and reduce the suffering of lab animals by detecting their discomfort in experimental conditions and clocking the effects of treatments with greater sensitivity than conventional observations.

Further fueling scientists’ excitement, some see computer vision tools as a means of measuring the effects of optogenetics and chemogenetics, techniques that use engineered molecules to make select brain cells turn on in response to light and chemicals, respectively. These biomedical approaches have revolutionized neuroscience in the past decade by enabling scientists to precisely manipulate brain circuits, in turn helping them investigate the specific networks and neurons involved in behavioral and cognitive processes. “This second wave of behavior quantification is the other half of the coin that everyone was missing,” says Greg Corder, assistant professor of psychiatry at the University of Pennsylvania. Others agree that these computer vision tools are the missing piece to track the effects of gene editing in the lab.

“[These technologies] truly are integrated and converge,” agrees Clifford Woolf, a neurobiologist at Harvard Medical School who works with his own supervised computer vision tools in his pain research.

But is artificial intelligence ready to take over the task of tracking animal behavior and interpreting its meaning? And is it identifying meaningful connections between behavior and neurological activity just yet?

These are the questions at the heart of a tension between supervised and unsupervised AI models. Machine learning algorithms find patterns in data at speeds and scales that would be difficult or impossible for humans. Unsupervised machine learning algorithms identify any and all motifs in datasets, whereas supervised ones are trained by humans to identify specific categories. In mouse terms, this means unsupervised AIs will flag every unique movement or behavior, but supervised ones will pinpoint only those that researchers are interested in.
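
In code, the difference comes down to whether the algorithm ever sees human labels. Here is a minimal, hypothetical sketch of the supervised side (the features, labels, and numbers are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Each row summarizes a short video window (e.g., speed, height, body length, turning).
X = rng.normal(size=(500, 4))
# A human expert has labeled every window: 0 = walking, 1 = rearing, 2 = grooming.
y = rng.integers(0, 3, size=500)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print(clf.predict(X[400:405]))   # the model can only ever report the labels it was taught

# An unsupervised tool (like the clustering sketch above) would instead invent its own
# groupings, including motifs no human thought to label, but a person still has to decide
# afterward what each cluster means.
```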

The major advantage of unsupervised approaches for mouse research is that people may not notice action that takes place on the subsecond scale. “When we analyze behavior types, we often actually are based on the experimenters’ judgment of the behavior type, rather than mathematical clustering,” says Bing Ye, a neuroscientist at the University of Michigan whose team developed LabGym, a supervised machine learning tool for mice and other animals, including rats and fruit fly larvae. The number of behavioral clusters that can be analyzed, too, is limited by human trainers. On the other hand, he says, live experts may be the most qualified to recognize behaviors of note. For this reason, he advocates transparency: publishing training datasets, the classification parameters that a supervised algorithm learns on, with any studies. That way, if experts disagree with how a tool identifies behaviors, the publicly available data provide a solid foundation for scientific debate.

Mu Yang, a neurobiologist at Columbia University and the director of the Mouse NeuroBehavior Core, a mouse behavior testing facility, is wary of trusting AI to do the work of humans until the machines have proved reliable. She is a traditional mouse behavior expert, trained to detect the animals’ subtleties with her own eyes. Yang knows that the way a rodent expresses an internal state, like fear, can change depending on its context. This is true for humans too. “Whether you’re in your house or…in a dark alley in a strange city, your fear behavior will look different,” Yang explains. In other words, a mouse may simply pause or it may freeze in fear, but an AI could be hard-pressed to tell the difference. One of the other challenges in tracking the animals’ behaviors, she says, is that testing different drugs on them may cause them to exhibit actions that are not seen in nature. Before AIs can be trusted to track these novel behaviors or movements, machine learning programs like MoSeq need to be vetted to ensure they can reliably track good old-fashioned mouse behaviors like grooming. 

Yang draws a comparison to a chef, saying that you can’t win a Michelin star if you haven’t proved yourself as a short-order diner cook. “If I haven’t seen you making eggs and pancakes, you can talk about caviar and Kobe beef all you want, I still don’t know if I trust you to do that.”

For now, as to whether MoSeq can make eggs and pancakes, “I don’t know how you’d know,” Datta says. “We’ve articulated some standards that we think are useful. MoSeq meets those benchmarks.”

Putting the tech to the test

There are a couple of ways, Datta says, to determine benchmarks—measures of whether an unsupervised AI is correctly or usefully describing animal behavior. “One is by asking whether or not the content of the behavioral description that you get [from AI] does better or worse at allowing you to discriminate among [different] patterns of behavior that you know should occur.” His team did this in the first big MoSeq study: It gave mice different medicines and used the drugs’ expected effects to determine whether MoSeq was capturing them. But that’s a pretty low bar, Datta admits—a starting point. “There are very few behavioral characterization methods that wouldn’t be able to tell a mouse on high-dose amphetamine from a control.” 

The real benchmark of these tools, he says, will be whether they can provide insight into how a mouse’s brain organizes behavior. To put it another way, the scientifically useful descriptions of behavior will predict something about what’s happening in the brain.

Explainability, the idea that machine learning will identify behaviors experts can link to expected behaviors, is a big advantage of supervised algorithms, says Vivek Kumar, associate professor at the biomedical research nonprofit Jackson Laboratory, one of the main suppliers of lab mice. His team used this approach, but he sees training supervised classifiers after unsupervised learning as a good compromise. The unsupervised learning can reveal elements that human observers may miss, and then supervised classifiers can take advantage of human judgment and knowledge to make sure that what an algorithm identifies is actually meaningful.

“It’s not magic”

MoSeq isn’t the first or only computer vision tool under development for quantifying animal behavior. In fact, the field is booming as AI tools become more powerful and easier to use. We already mentioned Bing Ye and LabGym; the lab of Eric Yttri at Carnegie Mellon University has developed B-SOiD; the lab of Mackenzie Mathis at École Polytechnique Fédérale de Lausanne has DeepLabCut; and the Jackson Laboratory is developing (and has patented) its own computer vision tools. Last year Kumar and his colleagues used machine vision to develop a frailty index for mice, an assessment that is notoriously sensitive to human error.

Each of these automated systems has proved powerful in its own way. For example, B-SOiD, which is unsupervised, identified the three main types of mouse grooming without being trained in these basic behaviors. 

“That’s probably a good benchmark,” Yang says. “I guess you can say, like the egg and pancake.”

Mathis, who developed DeepLabCut, emphasizes that carefully picking data sources is critical for making the most of these tools. “It’s not magic,” she says. “It can make mistakes, and your trained neural networks are only as good as the data you give [them].”

And while the toolmakers are still honing their technologies, even more labs are hard at work deploying them in mouse research with specific questions and targets in mind. Broadly, the long-term goal is to aid in the discovery of drugs that will treat psychiatric and neurological conditions. 

Some have already experienced vast improvements in running their experiments. One of the problems of traditional mouse research is that animals are put through unnatural tasks like running mazes and taking object recognition tests that “ignore the intrinsic richness” of behavior, says Cheng Li, professor of anesthesiology at Tongji University in Shanghai. His team found that feeding MoSeq videos of spontaneous rodent behavior along with more traditional task-oriented behaviors yielded a detailed description of the mouse version of postoperative delirium, the most common central nervous system surgical complication among elderly people. 

Meanwhile, LabGym is being used to study sudden unexpected death in epilepsy in the lab of Bill Nobis at Vanderbilt University Medical Center. After being trained on videos of mouse seizures, the program detects them “every time,” Nobis says.

Easing their pain

Computer vision has also become a major instrument for pain research, helping to untangle the brain’s pathways involved in different types of pain and treat human ailments with new or existing drugs. And despite the FDA rule change in early 2023, the total elimination of animal testing is unlikely, Woolf says, especially in developing novel medicines. By detecting subtle behavioral signs of pain, computer vision tools stand to reduce animal suffering. “We can monitor the changes in them and ensure that we’re not producing an overwhelming, painful situation—all we want is enough pain that we can measure it,” he explains. “We would not do anything to a mouse that we wouldn’t do to a human, in general.”

His team used supervised machine learning to track behavioral signatures of pain in mice and show when medications have alleviated their discomfort, according to a 2022 paper in the journal Pain. One of the problems with measuring pain in lab animals, rather than humans, is that the creatures can’t report their level of suffering, Woolf says. Scientists long believed that, proportional to body weight, the amount of medicine required to relieve pain is much higher in mice than in humans. But it turns out that if your computer vision algorithms can measure the sensation relatively accurately—and Woolf says his team’s can—then you actually detect signs of pain relief at much more comparable doses, potentially reducing the level of pain inflicted to conduct this research. Measuring pain and assessing pain medicine in lab animals is so challenging that most large pharmaceutical companies have abandoned the area as too risky and expensive, he adds. “We hope this new approach is going to bring them back in.”

Corder’s lab at the University of Pennsylvania is working on pain too, but using the unsupervised B-SOiD in conjunction with DeepLabCut. In unpublished work, the team had DeepLabCut visualize mice as skeletal stick figures, then had B-SOiD identify 13 different pain-related behaviors like licking or biting limbs. Supervised machine learning will help make his team’s work more reliable, Corder says, as B-SOiD needs instruction to differentiate these behaviors from, say, genital licking, a routine hygiene behavior. (Yttri, the co-creator of B-SOiD, says supervision will be part of the new version of his software.) 

As computer vision tools continue to evolve, they could even help reduce the number of animals required for research, says FDA spokesperson Lauren-Jei McCarthy. “The agency is very much aligned with efforts to replace, reduce, or refine animal studies through the use of appropriately validated technologies.”

If you build it, they will come

MoSeq’s next upgrade, which has been submitted to an academic journal and is under review, will try something similar to what Corder’s lab did: It will meld its unsupervised approach with keypoint detection, a computer vision method that highlights crucial points in an object like the body of a mouse. This particular approach employs the rig of six Azure Kinect cameras instead of the Datta Lab’s classic Xbox Kinect camera rigs.

An advantage of this approach, Datta says, is that it can be applied to existing 2D video, meaning that all the petabytes of archival mouse data from past experiments could be opened up to analysis without the cost of running new experiments on mice. “That would be huge,” Corder agrees.

Datta’s certainty increases as he rattles off some of his team’s accomplishments with AI and mouse behavior in the past few years. “Can we use MoSeq to identify genetic mutants and distinguish them from wild types?”—mice with genetics as they appear in nature. This was the subject of a 2020 paper in Nature Neuroscience, which showed that the algorithm can accurately discern mice with an autism-linked gene mutation from those with typical genetics. “Can we make predictions about neural activity?” The Datta Lab checked this off its bucket list just this year in its dopamine study. Abandoning the hedging so typical of scientists, he confidently declares, “All of that is true. I think in this sense, MoSeq can make eggs and pancakes.”

The post How video game tech, AI, and computer vision help decode animal pain and behavior appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
OpenAI argues it is ‘impossible’ to train ChatGPT without copyrighted work https://www.popsci.com/technology/openai-copyright-fair-use/ Mon, 08 Jan 2024 22:00:00 +0000 https://www.popsci.com/?p=597864
Silhouette of people using phones against OpenAI logo
OpenAI said The New York Times' recent lawsuit against the tech company is 'without merit.'. Deposit Photos

The tech company says it has 'a mission to ensure that artificial general intelligence benefits all of humanity.'

The post OpenAI argues it is ‘impossible’ to train ChatGPT without copyrighted work appeared first on Popular Science.

]]>

2023 marked the rise of generative AI, and 2024 could well be the year its makers reckon with the fallout of the technology’s industry-wide arms race. Currently, OpenAI is aggressively pushing back against recent lawsuits’ claims that its products, including ChatGPT, are illegally trained on copyrighted texts. What’s more, the company is making some bold legal claims as to why its programs should have access to other people’s work.

[Related: Generative AI could face its biggest legal tests in 2024.]

In a blog post published on January 8, OpenAI accused The New York Times of “not telling the full story” in the media company’s major copyright lawsuit filed late last month. Instead, OpenAI argues its scraping of online works falls within the purview of “fair use.” The company additionally claims that it currently collaborates with various news organizations (excluding, among others, The Times) on dataset partnerships, and dismisses any “regurgitation” of outside copyrighted material as a “rare bug” they are working to eliminate. This is attributed to “memorization” issues that can be more common when content appears multiple times within training data, such as if it can be found on “lots of different public websites.”

“The principle that training AI models is permitted as a fair use is supported by a wide range of [people and organizations],” OpenAI representatives wrote in Monday’s post, linking out to recently submitted comments from several academics, startups, and content creators to the US Copyright Office.

In a letter of support filed by Duolingo, for example, the language learning software company wrote that it believes that “Output generated by an AI trained on copyrighted materials should not automatically be considered infringing—just as a work by a human author would not be considered infringing merely because the human author had learned how to write through reading copyrighted works.” (On Monday, Duolingo confirmed to Bloomberg it has laid off approximately 10 percent of its contractors, citing its increased reliance on AI.)

On December 27, The New York Times sued both OpenAI and Microsoft—which currently utilizes the former’s GPT in products like Bing—for copyright infringement. Court documents filed by The Times claim OpenAI trained its generative technology on millions of the publication’s articles without permission or compensation. Products like ChatGPT are now allegedly used in lieu of their source material, to the detriment of the media company. More readers opting for AI news summaries presumably means fewer readers subscribing to source outlets, The Times argues.

The New York Times lawsuit is only the latest in a string of similar filings claiming copyright infringement, including one on behalf of notable writers, as well as another for visual artists.

Meanwhile, OpenAI is lobbying government regulators over its access to copyrighted material. According to The Telegraph on January 7, a recent letter submitted by OpenAI to the UK House of Lords Communications and Digital Committee argues that access to copyrighted materials is vital to the company’s success and product relevancy.

“Because copyright today covers virtually every sort of human expression—including blog posts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” OpenAI wrote in the letter, while also contending that limiting training data to public domain work, “might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.” The letter states that it is part of OpenAI’s “mission to ensure that artificial general intelligence benefits all of humanity.”

Meanwhile, some critics have swiftly mocked OpenAI’s claim that its program’s existence requires the use of others’ copyrighted work. On the social media platform Bluesky, historian and author Kevin M. Kruse likened OpenAI’s strategy to selling illegally obtained items in a pawn shop.

“Rough Translation: We won’t get fabulously rich if you don’t let us steal, so please don’t make stealing a crime!” AI expert Gary Marcus also posted to X on Monday.

The post OpenAI argues it is ‘impossible’ to train ChatGPT without copyrighted work appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The FTC wants your help fighting AI vocal cloning scams https://www.popsci.com/technology/ftc-ai-vocal-clone-contest/ Mon, 08 Jan 2024 17:21:51 +0000 https://www.popsci.com/?p=597756
Sound level visualization of audio clip
The FTC is soliciting for the best ideas on keeping up with tech savvy con artists. Deposit Photos

Judges will award $25,000 to the best idea on how to combat malicious audio deepfakes.

The post The FTC wants your help fighting AI vocal cloning scams appeared first on Popular Science.

]]>

The Federal Trade Commission is on the hunt for creative ideas tackling one of scam artists’ most cutting-edge tools, and will dole out as much as $25,000 for the most promising pitch. Submissions are now officially open for the FTC’s Voice Cloning Challenge, which was first announced last fall. The contest is looking for ideas for “preventing, monitoring, and evaluating malicious” AI vocal cloning abuses.

Artificial intelligence’s ability to analyze and imitate human voices is advancing at a breakneck pace—deepfaked audio already appears capable of fooling as many as 1-in-4 unsuspecting listeners into thinking a voice is human-generated. And while the technology shows immense promise in scenarios such as providing natural-sounding communication for patients suffering from various vocal impairments, scammers can use the very same programs for selfish gains. In April 2023, for example, con artists attempted to target a mother in Arizona for ransom by using AI audio deepfakes to fabricate her daughter’s kidnapping. Meanwhile, AI imitations present a host of potential issues for creative professionals like musicians and actors, whose livelihoods could be threatened by comparatively cheap imitations.

[Related: Deepfake audio already fools people nearly 25 percent of the time.]

Remaining educated about the latest in AI vocal cloning capabilities is helpful, but that can only do so much as a reactive protection measure. To keep up with the industry, the FTC initially announced its Voice Cloning Challenge in November 2023, which sought to “foster breakthrough ideas on preventing, monitoring, and evaluating malicious voice cloning.” The contest’s submission portal launched on January 2, and will remain open until 8pm ET on January 12.

According to the FTC, judges will evaluate each submission based on its feasibility, its focus on reducing consumer burden and liability, and its resilience in the face of a quickly changing technological landscape. Written proposals must include an abstract of less than one page alongside a more detailed description, under 10 pages in length, explaining the potential product, policy, or procedure. Contestants are also allowed to include a video clip describing or demonstrating how their idea would work.

In order to be considered for the $25,000 grand prize—alongside a $4,000 runner-up award and up to three $2,000 honorable mentions—submitted projects must address at least one of the following three areas of vocal cloning concern, according to the official guidelines (see the sketch after this list):

  • Prevention or authentication methods that would limit unauthorized vocal cloning users
  • Real-time detection or monitoring capabilities
  • Post-use evaluation options to assess if audio clips contain cloned voices
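
As a concrete, heavily simplified picture of what that third category could look like, the hypothetical sketch below trains a classifier on feature vectors from clips already labeled real or cloned, then scores a new clip. It illustrates the general approach rather than a submission-ready detector; in practice the features would come from the audio itself (for example, spectrogram statistics) rather than random numbers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Stand-in feature vectors (imagine averaged spectrogram bands) for clips with known labels.
real_clips = rng.normal(loc=0.0, size=(200, 40))
cloned_clips = rng.normal(loc=0.3, size=(200, 40))    # pretend cloned audio differs subtly

X = np.vstack([real_clips, cloned_clips])
y = np.array([0] * 200 + [1] * 200)                   # 0 = real voice, 1 = cloned voice

detector = LogisticRegression(max_iter=1000).fit(X, y)

new_clip = rng.normal(loc=0.3, size=(1, 40))          # features from an unknown clip
print(detector.predict_proba(new_clip)[0, 1])         # estimated probability it is cloned
```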

The Voice Cloning Challenge is the fifth such contest overseen by the FTC thanks to funding through the America Competes Act, which allocated money for various government agencies to sponsor competitions focused on technological innovation. Previous, similar solicitations focused on reducing illegal robocalls and bolstering security for users of Internet of Things devices.

[Related: AI voice filters can make you sound like anyone—and anyone sound like you.]

Winners are expected to be announced within 90 days after the contest’s deadline. A word of caution to any aspiring visionaries, however: if your submission includes actual examples of AI vocal cloning… please make sure its source human consented to the use. Unauthorized voice cloning sort of defeats the purpose of the FTC challenge, after all, and is grounds for immediate disqualification.

The post The FTC wants your help fighting AI vocal cloning scams appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
AI and satellite data helped uncover the ocean’s ‘dark vessels’ https://www.popsci.com/technology/ai-dark-vessels/ Wed, 03 Jan 2024 22:00:00 +0000 https://www.popsci.com/?p=597308
Data visualization of all maritime activity in the North Sea
The study used machine learning and satellite imagery to create the first global map of vessel traffic and offshore infrastructure, offering an unprecedented view of previously unmapped industrial use of the ocean. Global Fishing Watch

An unprecedented study details that over 75 percent of all industrial fishing ships don’t publicly report their whereabouts.

Researchers can now access artificial intelligence analysis of global satellite imagery archives for an unprecedented look at humanity’s impact and relationship to our oceans. Led by Global Fishing Watch, a Google-backed nonprofit focused on monitoring maritime industries, the open source project is detailed in a study published January 3 in Nature. It showcases never-before-mapped industrial effects on aquatic ecosystems thanks to recent advancements in machine learning technology.

The new research shines a light on “dark fleets,” a term often referring to the large segment of maritime vessels that do not broadcast their locations. According to Global Fishing Watch’s Wednesday announcement, as much as 75 percent of all industrial fishing vessels “are hidden from public view.”

As The Verge explains, maritime watchdogs have long relied on the Automatic Identification System (AIS) to track vessels’ radio activity across the globe—all the while knowing the tool was far from perfect. AIS requirements differ between countries and vessels, and it’s easy to simply turn off a ship’s transponder when a crew wants to stay off the grid. Hence the (previously murky) realm of dark fleets.

Data visualization of untracked fishing vessels around the world
Data analysis reveals that about 75 percent of the world’s industrial fishing vessels are not publicly tracked, with much of that fishing taking place around Africa and south Asia. Credit: Global Fishing Watch

“On land, we have detailed maps of almost every road and building on the planet. In contrast, growth in our ocean has been largely hidden from public view,” David Kroodsma, the nonprofit’s director of research and innovation, said in an official statement on January 3. “This study helps eliminate the blindspots and shed light on the breadth and intensity of human activity at sea.” 

[Related: How to build offshore wind farms in harmony with nature.]

To fill this data void, researchers first collected 2 million gigabytes of global imaging data taken by the European Space Agency’s Sentinel-1 satellite constellation between 2017 and 2021. Unlike AIS, the ESA satellite array’s sensitive radar technology can detect surface activity or movement regardless of cloud coverage or time of day.

From there, the team combined this information with GPS data to highlight otherwise undetected or overlooked ships. A machine learning program then analyzed the massive information sets to pinpoint previously undocumented fishing vessels.
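
The team’s actual pipeline pairs machine-learning detections from radar imagery with AIS records at a much larger scale; the snippet below is only a hedged sketch of the core cross-referencing idea, flagging radar detections with no AIS broadcast nearby in space and time. The Ping structure, 5 km radius, and one-hour window are illustrative assumptions, not values from the study.

```python
# Illustrative "dark vessel" flagging: a radar detection with no AIS broadcast
# within a distance/time window is treated as untracked. Thresholds are made up.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    lat: float
    lon: float
    timestamp: float  # seconds since epoch

def km_between(a: Ping, b: Ping) -> float:
    """Great-circle distance via the haversine formula."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def dark_detections(radar: list[Ping], ais: list[Ping],
                    max_km: float = 5.0, max_sec: float = 3600.0) -> list[Ping]:
    """Return radar detections with no AIS broadcast close enough in space and time."""
    return [r for r in radar
            if not any(km_between(r, a) <= max_km and abs(r.timestamp - a.timestamp) <= max_sec
                       for a in ais)]
```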

The newest findings upend previous industry assumptions and showcase the troublingly large footprint of dark fleets around the world.

“Publicly available data wrongly suggests that Asia and Europe have similar amounts of fishing within their borders, but our mapping reveals that Asia dominates—for every 10 fishing vessels we found on the water, seven were in Asia while only one was in Europe,” Jennifer Raynor, a study co-author and University of Wisconsin-Madison assistant professor of natural resource economics, said in the announcement. “By revealing dark vessels, we have created the most comprehensive public picture of global industrial fishing available.”

It’s not all troubling revisions, however. According to the team’s findings, the number of green offshore energy projects more than doubled over the five-year timespan analyzed. As of 2021, wind turbines officially outnumbered the world’s oil platforms, with China taking the lead by increasing its number of wind farms by 900 percent.

“Previously, this type of satellite monitoring was only available to those who could pay for it. Now it is freely available to all nations,” Kroodsma said in Wednesday’s announcement, declaring the study as marking “the beginning of a new era in ocean management and transparency.”

The post AI and satellite data helped uncover the ocean’s ‘dark vessels’ appeared first on Popular Science.

Watch an AI-leveraging robot beat humans in this classic maze puzzle game https://www.popsci.com/technology/cyberrunner-maze-game-robot/ Thu, 21 Dec 2023 15:30:00 +0000 https://www.popsci.com/?p=596498
CyberRunner robot capable of playing Labyrinth maze game
CyberRunner learned to successfully play Labyrinth after barely 5 hours of training. ETH Zurich

After hours of learning, CyberRunner can guide a marble through Labyrinth in just 14.5 seconds.

Artificial intelligence programs easily and consistently outplay human competitors in cognitively intensive games like chess, poker, and Go—but it’s much harder for robots to beat their biological rivals in games requiring physical dexterity. That performance gap appears to be narrowing, however, starting with a classic children’s puzzle game.

Researchers at Switzerland’s ETH Zurich recently unveiled CyberRunner, their new robotic system that leveraged precise physical controls, visual learning, and reinforcement learning to master Labyrinth faster than a human.

Labyrinth and its many variants generally consist of a box topped with a flat wooden plane that tilts across an x and y axis using external control knobs. Atop the board is a maze featuring numerous gaps. The goal is to move a marble or a metal ball from start to finish without it falling into one of those holes. It can be a… frustrating game, to say the least. But with ample practice and patience, players can generally learn to steady their controls enough to steer their marble through to safety in a relatively short timespan.

CyberRunner, in contrast, reportedly mastered the dexterity required to complete the game in barely 5 hours. Not only that, but researchers claim it can now complete the maze in just under 14.5 seconds—over 6 percent faster than the existing human record.

The key to CyberRunner’s newfound maze expertise is a combination of real-time reinforcement learning and visual input from overhead cameras. Hours’ worth of trial-and-error Labyrinth runs are stored in CyberRunner’s memory, allowing it to learn, step by step, how best to navigate the marble along its route.
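
CyberRunner’s open-source stack uses its own model-based learning pipeline on camera images; the toy below is not that code. It is a self-contained sketch of the general idea described above, storing trial-and-error transitions in a replay buffer and updating value estimates while play continues, with the maze reduced to positions on a line.

```python
# Self-contained toy: Q-learning with a replay buffer on a 1-D stand-in for the maze.
# The real robot learns from camera images and two tilt motors; here the "maze" is just
# positions 0..20 and the "tilt" moves the marble left or right. Purely illustrative.
import random
from collections import deque

N_POSITIONS, ACTIONS = 21, (-1, +1)          # toy state space and tilt directions
q = {(s, a): 0.0 for s in range(N_POSITIONS) for a in ACTIONS}
replay = deque(maxlen=5000)

def step(state: int, action: int) -> tuple[int, float]:
    """Move the marble; reward is progress toward the goal at position 20."""
    nxt = max(0, min(N_POSITIONS - 1, state + action))
    return nxt, float(nxt - state)

for episode in range(200):
    state = 0
    while state < N_POSITIONS - 1:
        # mostly greedy, with occasional random exploration
        action = random.choice(ACTIONS) if random.random() < 0.1 else max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        replay.append((state, action, reward, nxt))
        # learning happens alongside play: sample stored transitions and update estimates
        for s, a, r, s2 in random.sample(replay, min(32, len(replay))):
            target = r + 0.95 * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += 0.1 * (target - q[(s, a)])
        state = nxt

print("Learned action at the start:", max(ACTIONS, key=lambda a: q[(0, a)]))  # expect +1
```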

[Related: This AI program could teach you to be better at chess.]

“Importantly, the robot does not stop playing to learn; the algorithm runs concurrently with the robot playing the game,” reads the project’s description. “As a result, the robot keeps getting better, run after run.”

CyberRunner not only learned the fastest way to beat the game—it did so partly by exploiting faults in the maze design itself. Over the course of testing possible pathways, the program uncovered shortcuts that let it shave time off its runs, essentially writing its own Labyrinth cheat codes by sidestepping the maze’s marked pathways.

CyberRunner’s designers have made the project completely open-source, with an aim for other researchers around the world to utilize and improve upon the program’s capabilities.

“Prior to CyberRunner, only organizations with large budgets and custom-made experimental infrastructure could perform research in this area,” project collaborator and ETH Zurich professor Raffaello D’Andrea said in a statement this week. “Now, for less than 200 dollars, anyone can engage in cutting-edge AI research. Furthermore, once thousands of CyberRunners are out in the real-world, it will be possible to engage in large-scale experiments, where learning happens in parallel, on a global scale.”

The post Watch an AI-leveraging robot beat humans in this classic maze puzzle game appeared first on Popular Science.

Rite Aid can’t use facial recognition technology for the next five years https://www.popsci.com/technology/rite-aid-facial-recognition-ban/ Wed, 20 Dec 2023 21:00:00 +0000 https://www.popsci.com/?p=596336
Rotating black surveillance control camera indoors
Rite Aid conducted a facial recognition tech pilot program across around 200 stores between 2012 and 2020. Deposit Photos

FTC called the use of the surveillance technology 'reckless.'

Rite Aid is banned from utilizing facial recognition programs within any of its stores for the next five years. The pharmacy retail chain agreed to the ban as part of a Federal Trade Commission settlement regarding “reckless use” of the surveillance technology which “left its customers facing humiliation and other harms,” according to Samuel Levine, Director of the FTC’s Bureau of Consumer Protection.

“Today’s groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices,” Levine continued in the FTC’s December 19 announcement.

[Related: Startup claims biometric scanning can make a ‘secure’ gun.]

According to regulators, the pharmacy chain tested a pilot program of facial identification camera systems within an estimated 200 stores between 2012 and 2020. The FTC states that Rite Aid “falsely flagged the consumers as matching someone who had previously been identified as a shoplifter or other troublemaker.” While the technology was meant to deter and help prosecute retail theft, the FTC documents numerous incidents in which it mistakenly identified customers as suspected shoplifters, resulting in unwarranted searches and even police dispatches.

In one instance, Rite Aid employees called the police on a Black customer after the system flagged their face—despite the image on file depicting a “white lady with blonde hair,” FTC commissioner Alvaro Bedoya noted in an accompanying statement. Another account involved the unwarranted search of an 11-year-old girl, leaving her “distraught.”

“Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in plurality-Black and Asian communities than in plurality-White communities,” the FTC added.

“We are pleased to reach an agreement with the FTC and put this matter behind us,” Rite Aid representatives wrote in an official statement on Tuesday. Although the company stated it respects the FTC’s inquiry and reiterated its support for protecting consumer privacy, its representatives “fundamentally disagree with the facial recognition allegations in the agency’s complaint.”

Rite Aid also contends “only a limited number of stores” deployed the technology, and says its support for the facial recognition program ended in 2020.

“It’s really good that the FTC is recognizing the dangers of facial recognition… [as well as] the problematic ways that these technologies are deployed,” says Hayley Tsukayama, Associate Director of Legislative Activism at the digital privacy advocacy group, Electronic Frontier Foundation.

Tsukayama also believes the FTC highlighting Rite Aid’s disproportionate facial scanning in nonwhite, historically over-surveilled communities underscores the need for more comprehensive data privacy regulations.

“Rite Aid was deploying this technology in… a lot of communities that are over-surveilled, historically. With all the false positives, that means that it has a really disturbing, different impact on people of color,” she says.

In addition to the five-year prohibition on employing facial identification, Rite Aid must delete any collected images and photos of consumers, and direct any third parties to do the same. The company is also directed to investigate and respond to all consumer complaints stemming from previous false identifications, and to implement a data security program to safeguard any remaining consumer information it stores and potentially shares with third-party vendors.

The post Rite Aid can’t use facial recognition technology for the next five years appeared first on Popular Science.

New UK guidelines for judges using AI chatbots are a mess https://www.popsci.com/technology/ai-judges/ Wed, 13 Dec 2023 20:00:00 +0000 https://www.popsci.com/?p=595407
Gavel on top of a computer for a judge
“They [AI tools] may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts.” DepositPhotos

The suggestions attempt to parse appropriate vs. inappropriate uses of LLMs like ChatGPT.

Slowly but surely, text generated by AI large language models (LLMs) is weaving its way into our everyday lives, now including legal rulings. New guidance released this week by the UK’s Judicial Office provides judges with some additional clarity on when exactly it is acceptable to rely on these tools. The guidance advises judges against using the tools to generate new legal analyses, though it does allow using them to summarize texts. Meanwhile, an increasing number of lawyers and defendants in the US find themselves fined and sanctioned for sloppily introducing AI into their legal practices.

[ Related: “Radio host sues ChatGPT developer over allegedly libelous claims” ]

The Judicial Office’s AI guidance is a set of suggestions and recommendations intended to help judges and their clerks understand AI and its limits as the tech becomes more commonplace. These guidelines aren’t punishable rules of law but rather a “first step” in a series of efforts from the Judicial Office to clarify how judges can interact with the technology. 

In general, the new guidance says judges may find AI tools like OpenAI’s ChatGPT useful for summarizing large bodies of text or for administrative tasks like helping draft emails or memoranda. At the same time, it warns judges against using the tools to conduct legal research that relies on new information that can’t be independently verified. As for forming legal arguments, the guidance warns that public AI chatbots simply “do not produce convincing analyses or reasoning.” Judges may find some benefit in using an AI chatbot to dig up material they already know to be accurate, the guidance notes, but they should refrain from using the tools to conduct new research into topics they can’t verify themselves. In effect, the guidance puts the responsibility on the user to tell fact from fiction in the LLMs’ outputs.

“They [AI tools] may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts,” the guidance reads. 

The guidance goes on to warn judges that AI tools can spit out inaccurate, incomplete, or biased information–even if they are fed highly detailed or scrupulous prompts. These odd AI fabrications are generally referred to as “hallucinations.” Judges are similarly advised against entering any “private or confidential information” into these services because several of them are “open in nature.”

“Any information that you input into a public AI chatbot should be seen as being published to all the world,” the guidance reads. 

Since the information produced from a prompt is “non-definitive” and potentially inaccurate, and the information fed into the LLM must not include “private” material that may be key to a full review of, say, a lawsuit’s text, it is not quite clear what practical use the tools would serve in a legal context.

Context-dependent data is also an area of concern for the Judicial Office. The most popular AI chatbots on the market today, like OpenAI’s ChatGPT and Google’s Bard, were developed in the US with a large corpus of US-focused data. The guidance warns that this emphasis on US training data could give AI models a “view” of the law that’s skewed towards American legal contexts and theory. Still, at the end of the day, the guidance notes, judges are the ones held responsible for material produced in their name, even if it was produced with the assistance of an AI tool.

Geoffrey Vos, the Head of Civil Justice in England and Wales, reportedly told Reuters ahead of the guidance reveal that he believes AI “provides great opportunities for the justice system.” He went on to say he believed judges were capable of spotting legal arguments crafted using AI.

“Judges are trained to decide what is true and what is false and they are going to have to do that in the modern world of AI just as much as they had to do that before,” Vos said according to Reuters. 

Some judges already find AI ‘jolly useful’ despite accuracy concerns

The new guidance comes three months after UK court of appeal judge Lord Justice Birss used ChatGPT to provide a summary of an area of law and then used part of that summary to write a verdict. The judge reportedly hailed ChatGPT as “jolly useful” at the time, according to The Guardian. Speaking at a press conference earlier this year, Birss said he should still ultimately be held accountable for the judgment’s content even if it was created with the help of an AI tool.

“I’m taking full personal responsibility for what I put in my judgment, I am not trying to give the responsibility to somebody else,” Birss said according to The Law Gazette. “All it did was a task which I was about to do and which I knew the answer and could recognise as being acceptable.” 

A lack of clear rules clarifying when and how AI tools can be used in legal filings has already landed some lawyers and defendants in hot water. Earlier this year, a pair of US lawyers were fined $5,000 after they submitted a court filing that contained fake citations generated by ChatGPT. More recently, a UK woman was also reportedly caught using an AI chatbot to defend herself in a tax case. She ended up losing her case on appeal after it was discovered case law she had submitted included fabricated details hallucinated by the AI model. OpenAI was even the target of a libel suit earlier this year after ChatGPT allegedly authoritatively named a radio show host as the defendant in an embezzlement case that he had nothing to do with. 

[ Related: “EU’s powerful AI Act is here. But is it too late?” ] 

The murkiness of AI in legal proceedings might get worse before it gets better. Though the Biden Administration has offered proposals governing the deployment of AI in legal settings as part of its recent AI Executive Order, Congress still hasn’t managed to pass any comprehensive legislation setting clear rules. On the other side of the Atlantic, the European Union recently agreed on its own AI Act, which introduces stricter safety and transparency rules for a wide range of AI tools and applications deemed “high risk.” But the actual penalties for violating those rules likely won’t see the light of day until 2025 at the earliest. So, for now, judges and lawyers are largely flying by the seat of their pants when it comes to sussing out the ethical boundaries of AI use.

The post New UK guidelines for judges using AI chatbots are a mess appeared first on Popular Science.

Tesla’s Optimus robot can now squat and fondle eggs https://www.popsci.com/technology/tesla-optimus-robot-update/ Wed, 13 Dec 2023 19:30:00 +0000 https://www.popsci.com/?p=595389
Tesla Optimus robot handling an egg in demo video
Optimus' new hands include tactile sensing capabilities in all its fingers. X / Tesla

Elon Musk once said it will help create 'a future where there is no poverty.'

The last time Elon Musk publicly debuted a prototype of his humanoid robot, Optimus could “raise the roof” and wave at the politely enthused crowd attending Tesla’s October 2022 AI Day celebration. While not as advanced, agile, handy, or otherwise useful as existing bipedal robots, the “Bumblebee” proof-of-concept certainly improved upon the company’s first iteration—a person dressed as a robot.

On Wednesday night, Musk surprised everyone with a two-minute highlight reel posted to his social media platform, X, showcasing “Optimus Gen 2,” the latest iteration on display. In a major step forward, the now sleekly-encased robot can walk and handle an egg without breaking it. (Musk has previously stated he intends Optimus to be able to pick up and transport objects as heavy as 45 pounds.) 

Unlike last year’s Bumblebee demo, Tesla’s December 12 update only shows pre-taped, in-house footage of Gen 2 performing squats and stiffly striding across a Tesla showroom floor. That said, the new preview claims the third Optimus can accomplish such perambulations 30 percent quicker than before (an exact speed isn’t provided in the video) while weighing roughly 22 lbs less than Bumblebee. It also now includes “articulated foot sections” within its “human foot geometry.”

The main focus, however, appears to be the robot’s “faster… brand-new” five-fingered hands capable of registering and interpreting tactile sensations. To demonstrate, Optimus picks up an egg, transfers it between hands, and places it back down while a superimposed screen displays its finger pressure readings. 

[Related: Tesla’s Optimus humanoid robot can shuffle across stage, ‘raise the roof’]

The clip does not include an estimated release window or updated price point. In the past, Musk said production could begin as soon as this year, but revised that launch date in 2022 to somewhere 3-5 years down the line. If Optimus does make it off the factory line—and onto factory floors as a surrogate labor force—it will enter an industry rife with similar work robots.

During Tesla’s October 2022 AI Day event, Musk expressed his belief that Optimus will one day “help millions of people” through labor contributions that aid in creating “a future of abundance, a future where there is no poverty, where people can have whatever you want in terms of products and services.”

Musk previously offered a ballpark cost for Optimus at somewhere under $20,000—although his accuracy in such guesstimates isn’t great. The company’s much-delayed Cybertruck, for example, finally received its production launch event last month with a base price roughly one Optimus higher than originally stated.

The post Tesla’s Optimus robot can now squat and fondle eggs appeared first on Popular Science.

EU’s powerful AI Act is here. But is it too late? https://www.popsci.com/technology/ai-act-explained/ Tue, 12 Dec 2023 20:05:00 +0000 https://www.popsci.com/?p=595230
The framework prohibits mass, untargeted scraping of face images from the internet or CCTV footage to create a biometric database. DepositPhotos

Technology moves faster than ever. AI regulators are fighting to keep up.

European Union officials made tech policy history last week by enduring 36 hours of grueling debate in order to finally settle on a first of its kind, comprehensive AI safety and transparency framework called the AI Act. Supporters of the legislation and AI safety experts told PopSci they believe the new guidelines are the strongest of their kind worldwide and could set an example for other nations to follow.  

The legally binding framework sets crucial new transparency requirements for OpenAI and other generative AI developers. It also draws several red lines banning some of the most controversial uses of AI, from real-time facial recognition scanning and so-called emotion recognition to predictive policing techniques. But there could be a problem brewing under the surface. Even once the Act is voted on, Europe’s AI cops won’t actually be able to enforce any of those rules until 2025 at the earliest. By then, it’s anyone’s guess what the ever-evolving AI landscape will look like.

What is the EU AI Act? 

The EU’s AI Act breaks AI tools and applications into four distinct “risk categories,” with those placed on the highest end of the spectrum exposed to the most intense regulatory scrutiny. AI systems considered high risk, which would include self-driving vehicles, tools managing critical infrastructure, medical devices, and biometric identification systems, among others, would be required to undergo fundamental rights impact assessments, adhere to strict new transparency requirements, and be registered in a public EU database. The companies responsible for these systems will also be subject to monitoring and record-keeping practices to assure EU regulators that the tools in question don’t pose a threat to safety or fundamental human rights.

It’s important to note that the EU still needs to vote on the Act and that a final version of the text has not been made public. A final vote on the legislation is expected to occur in early 2024.

“A huge amount of whether this law has teeth and whether it can prevent harm is going to depend on those seemingly much more technical and less interesting parts.”

The AI Act goes a step further and bans other use cases outright. In particular, the framework prohibits mass, untargeted scraping of face images from the internet or CCTV footage to create a biometric database. This could potentially impact well known facial recognition startups like Clearview AI and PimEyes, which reportedly scrape the public internet for billions of face scans. Jack Mulcaire, Clearview AI’s General Counsel, told PopSci it does not operate in or offer its products in the EU. PimEyes did not immediately respond to our request for comment. 

Emotion recognition, which controversially attempts to use biometric scans to detect an individual’s feeling or state of mind, will be banned in the workplace and schools. Other AI systems that “manipulate human behavior to circumvent their free will” are similarly prohibited. AI-based “social scoring” systems, like those notoriously deployed in mainland China, also fall under the banned category.

Tech companies found sidestepping these rules or pressing on with banned applications could see fines ranging between 1.5% and 7% of their total revenue depending on the violation and the company’s size. This penalty system is what gives the EU AI Act teeth and what fundamentally separates it from other voluntary transparency and ethics commitments recently secured by the Biden Administration in the US. Biden’s White House also recently signed a first-of-its kind AI executive order laying out his vision for future US AI regulation

In the immediate future, large US tech firms like OpenAI and Google that operate “general purpose AI systems” will be required to keep EU officials up to date on how they train their models, report summaries of the types of data used to train those models, and create a policy acknowledging they will adhere to EU copyright laws. General models deemed to pose a “systemic risk,” a label Bloomberg estimates currently only includes OpenAI’s GPT, will be subject to a stricter set of rules. Those could include requirements forcing the model’s maker to report the tool’s energy use and cybersecurity compliance, as well as calls to perform red-teaming exercises to identify and potentially mitigate signs of systemic risk.

Generative AI models capable of creating potentially misleading “deepfake” media will be required to clearly label those creations as AI-generated. Other US AI companies that create tools falling under the AI Act’s “unacceptable” risk category would likely no longer be able to continue operating in the EU once the legislation officially takes effect.

[ Related: “The White House’s plan to deal with AI is as you’d expect” ]

AI Now Institute Executive Director Amba Kak spoke positively about the enforceable aspect of the AI Act, telling PopSci it was a “crucial counterpoint in a year that has otherwise largely been a deluge of weak voluntary proposals.” Kak said the red lines barring particularly threatening uses of AI and the new transparency and diligence requirements were a welcome “step in the right direction.”

Though supporters of the EU’s risk-based approach say it helps avoid subjecting more mundane AI use cases to overbearing regulation, some European privacy experts worry the structure places too little emphasis on fundamental human rights and detracts from the approach of past EU legislation like the 2018 General Data Protection Regulation (GDPR) and the Charter of Fundamental Rights of the European Union (CFREU).

“The risk based approach is in tension with the rest of the EU human rights frameworks,” European Digital Rights Senior Policy Advisor Ella Jakubowska told PopSci during a phone interview. “The entire framework that was on the table from the beginning was flawed.”

The AI Act’s risk-based approach, Jakubowska warned, may not always provide a full, clear picture of how certain seemingly low risk AI tools could be used in the future. Jakubowska said rights advocates like herself would prefer mandatory risk assessments for all developers of AI systems.

“Overall it’s very disappointing,” she added. 

Daniel Leufer, a Senior Policy Analyst for the digital rights organization AccessNow, echoed those concerns regarding the risk-based approach, which he argues was designed partly as a concession to tech industry groups and law enforcement. Leufer says AccessNow and other digital rights organizations had to push EU member states to agree to include “unacceptable” risk categories, which some initially refused to acknowledge. Kak, the AI Now Institute Executive Director, went on to say the AI Act could have done more to clarify regulations around AI applications in law enforcement and national security domains.

An uncertain road ahead 

The framework agreed upon last week was the culmination of years’ worth of back and forth debate between EU member states, tech firms, and civil society organizations. First drafts of the AI Act date back to 2021, months before OpenAI’s ChatGPT and DALL-E generative AI tools enraptured the minds of millions. The skeleton of the legislation reportedly dates back even further still to as early as 2018. 

Much has changed since then. Even the most prescient AI experts would have struggled to imagine witnessing hundreds of top technologists and business leaders frantically adding their names to impassioned letters urging a moratorium on AI tech to supposedly safeguard humanity. Few similarly could have predicted the current wave of copyright lawsuits lodged against generative AI makers questioning the legality of their massive data scraping techniques or the torrent of AI-generated clickbait filling the web. 

Similarly, it’s impossible to predict what the AI landscape will look like in 2025, which is the earliest the EU could actually enforce its hefty new regulations. Axios notes EU officials will urge companies to agree to the rules in the meantime, but only on a voluntary basis.

Update 1/4/24 2:13PM: An earlier version of this story said Amba Kak spoke positively about the EU AI Act. This has been edited to clarify that she specifically spoke favorably about the enforceable aspect of the Act.

The post EU’s powerful AI Act is here. But is it too late? appeared first on Popular Science.

A ‘brain organoid’ biochip displayed serious voice recognition and math skills https://www.popsci.com/technology/brainoware-brain-organoid-chip/ Tue, 12 Dec 2023 19:35:00 +0000 https://www.popsci.com/?p=595217
Brainoware biocomputing study illustration
The Brainoware chip can accurately differentiate between human speakers using just a single vowel sound 78 percent of the time. Indiana University

Researchers dubbed it Brainoware.

Your biological center for thought, comprehension, and learning bears some striking similarities to a data center housing rows upon rows of highly advanced processing units. But unlike those neural network data centers, the human brain runs on a remarkably small electrical energy budget. On average, the organ functions on roughly 12 watts of power, compared with a desktop computer’s 175 watts. For today’s advanced artificial intelligence systems, that wattage figure can easily increase into the millions.

[Related: Meet ‘anthrobots,’ tiny bio-machines built from human tracheal cells.]

Knowing this, researchers believe the development of cyborg “biocomputers” could eventually usher in a new era of high-powered intelligent systems for a comparative fraction of the energy costs. And they’re already making some huge strides towards engineering such a future.

As detailed in a new study published in Nature Electronics, a team at Indiana University has successfully grown its own nanoscale “brain organoid” in a Petri dish using human stem cells. After the team connected the organoid to a silicon chip, the new biocomputer (dubbed “Brainoware”) was quickly trained to accurately recognize speech patterns, as well as perform certain complex math predictions.

As New Atlas explains, researchers treated their Brainoware as what’s known as an “adaptive living reservoir” capable of responding to electrical inputs in a “nonlinear fashion,” while also ensuring it possessed at least some memory. Simply put, the lab-grown brain cells within the silicon-organic chip function as a living signal processor capable of both receiving and transmitting electrical signals. While these feats in no way imply any kind of awareness or consciousness on Brainoware’s part, they do provide enough computational power for some interesting results.
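
For intuition, that “reservoir” framing can be mimicked entirely in silicon: drive a fixed random recurrent network with an input signal and train only a simple linear readout on its responses. The sketch below is that analogy, with made-up sizes and a toy sine-wave task; it is not the study’s method, and the organoid obviously is not a tanh network.

```python
# Reservoir-computing analogy: drive a fixed random recurrent "reservoir" with an input
# signal and train only a linear readout on its states. The organoid plays the reservoir's
# role in the study; this silicon version is just for intuition.
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, steps = 200, 1000

w_in = rng.uniform(-0.5, 0.5, n_reservoir)                     # input weights (fixed)
W = rng.normal(size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))                # scale for stable dynamics

u = np.sin(0.1 * np.arange(steps))                             # toy input signal
target = np.roll(u, -1)                                        # task: predict the next value

states = np.zeros((steps, n_reservoir))
x = np.zeros(n_reservoir)
for t in range(steps):
    x = np.tanh(W @ x + w_in * u[t])                           # nonlinear response with memory
    states[t] = x

W_out, *_ = np.linalg.lstsq(states[:-1], target[:-1], rcond=None)   # train the readout only
print("readout error:", np.mean((states[:-1] @ W_out - target[:-1]) ** 2))
```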

To test out Brainoware’s capabilities, the team converted 240 audio clips of adult male Japanese speakers into electrical signals, and then sent them to the organoid chip. Within two days, the neural network system partially powered by Brainoware could accurately differentiate between the 8 speakers 78 percent of the time using just a single vowel sound.

[Related: What Pong-playing brain cells can teach us about better medicine and AI.]

Next, researchers experimented with their creation’s mathematical abilities. After a relatively short training time, Brainoware could predict a Hénon map. The Hénon map is one of the most studied examples of a dynamical system exhibiting chaotic behavior, and predicting it is a lot more complicated than simple arithmetic, to say the least.
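
For readers unfamiliar with it, the Hénon map is a two-variable recurrence whose trajectory looks erratic despite being fully deterministic, which is what makes one-step prediction a meaningful test. The few lines below generate it with the classic textbook parameters (a = 1.4, b = 0.3); the starting point and parameter values are standard choices for illustration, not details taken from the paper.

```python
# The classic Hénon map: x_{n+1} = 1 - a*x_n^2 + y_n,  y_{n+1} = b*x_n.
# Deterministic, but tiny differences in the starting point diverge quickly; that
# sensitivity is what makes one-step prediction a meaningful benchmark.
def henon(n_steps: int, a: float = 1.4, b: float = 0.3, x: float = 0.0, y: float = 0.0):
    points = []
    for _ in range(n_steps):
        x, y = 1 - a * x * x + y, b * x
        points.append((x, y))
    return points

trajectory = henon(10)
print(trajectory[:3])   # first three points: (1.0, 0.0), (-0.4, 0.3), (~1.076, -0.12)
```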

In the end, Brainoware’s designers believe such human brain organoid chips can underpin neural network technology, and possibly do so faster, more cheaply, and with less energy than existing options. There are still a number of hurdles—both logistical and ethical—to clear, and general biocomputing systems may be years down the line, but researchers think such advances are “likely to generate foundational insights into the mechanisms of learning, neural development and the cognitive implications of neurodegenerative diseases.”

But for now, let’s see how Brainoware can do in a game of Pong.

The post A ‘brain organoid’ biochip displayed serious voice recognition and math skills appeared first on Popular Science.

Generative AI could face its biggest legal tests in 2024 https://www.popsci.com/technology/generative-ai-lawsuits/ Thu, 07 Dec 2023 15:00:00 +0000 https://www.popsci.com/?p=594305
DALL E Generative AI text abstract photo
The legal battles are just beginning. Getty

Lawsuits arrived almost as soon as generative AI programs debuted. The consequences could catch up to them next year.

AI has been eating the world this year, with the launch of GPT-4, DALL·E 3, Bing Chat, Gemini, and dozens of other AI models and tools capable of generating text and images from a simple written prompt. To train these models, AI developers have relied on millions of texts and images created by real people—and some of them aren’t very happy that their work has been used without their permission. With the launches came the lawsuits. And next year, the first of them will likely go to trial. 

Almost all the pending lawsuits involve copyright to some degree or another, so the tech companies behind each AI model are relying on fair use arguments for their defense, among others. In most cases, they can’t really argue that their AIs weren’t trained on the copyrighted works. Instead, many argue that scraping content from the internet to create generative content is transformative because the outputs are “new” works. While text-based plagiarism may be easier to pin down than image generators mimicking visual styles of specific artists, the sheer scope of generative AI tools has created massive legal messes that will be playing out in 2024 and beyond.

In January, Getty Images filed a lawsuit against Stability AI (the makers of Stable Diffusion) seeking unspecified damages, alleging that the generative image model was unlawfully trained using millions of copyrighted images from the stock photo giant’s catalog. Although Getty has also filed a similar suit in Delaware, a judge ruled this week that the lawsuit can go to trial in the UK. A date has not been set. (For what it’s worth, the examples Getty uses showing Stable Diffusion adding a weird, blurry, Getty-like watermark to some of its outputs are hilariously damning.)

A group of visual artists is currently suing Stability AI, Midjourney, DeviantArt, and Runway AI for copyright infringement by using their works to train their AI models. According to the lawsuit filed in San Francisco, the models can create images that match their distinct styles when the artists’ names are entered as part of a prompt. A judge largely dismissed an earlier version of the suit as two of the artists involved had not registered their copyright with the US copyright office, but gave the plaintiffs permission to refile—which they did in November. We will likely see next year if the amended suit can continue.

Writers’ trade group the Authors Guild has sued OpenAI (the makers of ChatGPT, GPT-4, and DALL·E 3) on behalf of John Grisham, George R. R. Martin, George Saunders, and 14 other writers for unlawfully using their work to train its large language models (LLMs). The plaintiffs argue that because ChatGPT can accurately summarize their works, the copyrighted full texts must be somewhere in the training database. The proposed class-action lawsuit filed in New York in September also argues that some of the training data may have come from pirate websites—although a similar lawsuit brought by Sarah Silverman against Meta was largely dismissed in November. They are seeking damages and an injunction preventing their works from being used again without a license. As yet, no judge has ruled on the case, but we should know more in the coming months.

And it’s not just artists and authors. Three music publishers—Universal Music, Concord, and ABKCO—are suing Anthropic (makers of Claude) for illegally scraping their musicians’ song lyrics to train its models. According to the lawsuit filed in Tennessee, Claude can both quote the copyrighted lyrics when asked for them and incorporate them verbatim into compositions it claims to be its own. The suit was only filed in October, so don’t expect a court date before the end of the year—though Anthropic will likely try to get the case dismissed.

In perhaps the most eclectic case, eight anonymous plaintiffs, including two minors, have brought a proposed class-action lawsuit against Google for misuse of personal information and copyright infringement. According to the lawsuit filed in San Francisco in July, the content the plaintiffs allege Google misused includes books, photos from dating websites, Spotify playlists, and TikTok videos. Unsurprisingly, Google is fighting it hard and has moved to dismiss the case. As it filed that motion back in October, we may know before the end of the year if the case will continue.

[ Related: “Google stole data from millions of people to train AI, lawsuit says” ]

Next year, it looks like we could finally see some of these lawsuits go to trial and get some kind of ruling over the legality (or illegality) of using copyrighted materials scraped from the internet to train AI models. Most of the plaintiffs are seeking damages for their works being used without license, although some—like the Authors Guild—are also seeking an injunction that would prevent AI makers from continuing to use models trained on the copyrighted works. If that was upheld, any AI trained on the relevant data would have to cease operating and be trained on a new dataset without it. 

Of course, the lawsuits could all settle, they could run longer, and they could even be dismissed out of hand. And whatever any judge does rule, we can presumably expect to see various appeal attempts. While all these lawsuits are pending, generative AI models are being used by more and more people, and are continuing to be developed and released. Even if a judge declares generative AI makers’ behavior a gross breach of copyright law and fines them millions of dollars, given how hesitant US courts have been to ban tech products for copyright or patent infringement, it seems unlikely that they are going to cram this genie back in the bottle.

The post Generative AI could face its biggest legal tests in 2024 appeared first on Popular Science.

Google announces Gemini, its ‘multimodal’ answer to ChatGPT https://www.popsci.com/technology/google-gemini-ai-debut/ Wed, 06 Dec 2023 20:20:00 +0000 https://www.popsci.com/?p=594250
Screenshot from Gemini-powered Bard demonstration video
The drawing apparently looks close enough to a duck for Gemini. Google DeepMind / YouTube

In an edited demo video, Gemini appears able to describe sketches, identify movie homages, and crack jokes.

On Wednesday, Google announced the arrival of Gemini, its new multimodal large language model built from the ground up by the company’s AI division, DeepMind. Among its many functions, Gemini will underpin Google Bard, which has previously struggled to emerge from the shadow of its chatbot forerunner, OpenAI’s ChatGPT.

According to a December 6 blog post from Google CEO Sundar Pichai and DeepMind co-founder and CEO Demis Hassabis, there are technically three versions of the LLM—Gemini Ultra, Pro, and Nano—meant for various applications. A “fine tuned” Gemini Pro now underpins Bard, while the Nano variant will be seen in products such as Pixel Pro smartphones. The Gemini variants will also arrive for Google Search, Ads, and Chrome in the coming months, although public access to Ultra will not become available until 2024.

Unlike many of its AI competitors, Gemini was trained to be “multimodal” from launch, meaning it can already handle text, audio, and image-based prompts. In an accompanying video demonstration, Gemini is verbally tasked with identifying what is placed in front of it (a piece of paper) and then correctly identifies a user’s sketch of a duck in real time. Other abilities appear to include inferring what actions happen next in videos once they are paused, generating music based on visual prompts, and assessing children’s homework—often with a slightly cheeky, pun-prone personality. It’s worth noting, however, that the video description includes the disclaimer, “For the purposes of this demo, latency has been reduced and Gemini outputs have been shortened for brevity.”

In a follow-up blog post, Google confirmed Gemini only actually responded to a combination of still images and written user prompts, and that their demo video was edited to present a smoother interaction with audio capabilities.

Gemini’s accompanying technical report indicates the LLM’s most powerful iteration, Ultra, “exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in [LLM] research and development.” That said, the improvements appear somewhat modest—Gemini Ultra correctly answered multidisciplinary questions 90 percent of the time, versus ChatGPT’s 86.4 percent. Regardless of statistical hairsplitting, however, the results indicate ChatGPT may have some real competition in Gemini.

[Related: The logic behind AI chatbots like ChatGPT is surprisingly basic.]

Unsurprisingly, Google cautioned in Wednesday’s announcement that its new star AI is far from perfect, and is still prone to the industry-wide “hallucinations” which plague the emerging technology—i.e. the LLM will occasionally randomly make up incorrect or nonsensical answers. Google also subjected Gemini to “the most comprehensive safety evaluations of any Google AI model,” per Eli Collins, Google DeepMind VP of product, speaking at the December 6 launch event. This included tasking Gemini with “real toxicity prompts,” a test developed by the Allen Institute for AI involving over 100,000 problematic inputs meant to assess a large language model’s potential political and demographic biases.

Gemini will continue to integrate into Google’s suite of products in the coming months alongside a series of closed testing phases. If all goes as planned, a Gemini Ultra-powered Bard Advanced will become available to the public sometime next year—but, as has been well established by now, the ongoing AI arms race is often difficult to forecast.

When asked if it is powered by Gemini, Bard informed PopSci it “unfortunately” does not possess access to information “about internal Google projects.”

“If you’re interested in learning more about… ‘Gemini,’ I recommend searching for information through official Google channels or contacting someone within the company who has access to such information,” Bard wrote to PopSci. “I apologize for the inconvenience and hope this information is helpful.”

UPDATE 12/08/23 11:53AM: Google published a blog post on December 6 clarifying its Gemini hands-on video, as well as the program’s multimodal capabilities. Although the demonstration may make it look like Gemini responded to moving images and voice commands, it was offered a combination of stills and written prompts by Google. The footage was then edited for latency and streamlining purposes. The text of this post has since been edited to reflect this.

The post Google announces Gemini, its ‘multimodal’ answer to ChatGPT appeared first on Popular Science.

Swapping surgical bone saws for laser beams https://www.popsci.com/technology/bone-laser-surgery/ Wed, 06 Dec 2023 17:45:00 +0000 https://www.popsci.com/?p=594135
Researchers working with laser array in lab
The new device's collaborators working at the laser lab. Universität Basel, Reinhard Wendler

More lasers may allow for safer and more precise medical procedures.

When it comes to slicing into bone, three lasers are better than one. At least, that’s the thinking behind a new, partially self-guided surgical system designed by a team at Switzerland’s University of Basel.

Although medical fields like ophthalmology have employed laser tools for decades, the technology remains off the table for many surgical procedures. This is most frequently due to safety concerns, including the potential for lasers to injure surrounding tissues beyond the targeted area, as well as a surgeon’s lack of full control over incision depth. To potentially solve these issues, laser physicists and medical experts experimented with increasing the number of lasers used in a procedure, while also allowing the system to partly monitor itself. Their results are documented in a recent issue of Lasers in Surgery and Medicine.

[Related: AI brain implant surgery helped a man regain feeling in his hand.]

It’s all about collaboration. The first laser scans a surgical site while emitting a pulsed beam that cuts through tissue in minuscule increments. As the tissues vaporize, a spectrometer analyzes and classifies the results, using on-board memory to map the patient’s bone and soft tissue regions. From there, a second laser takes over to cut bone, but only where specifically mapped by its predecessor. Meanwhile, a third optical laser measures incisions in real time to ensure the exact depth of each cut.
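
As a purely illustrative way to picture that division of labor, the toy loop below gates a simulated cutting step on a stand-in tissue classification and a target depth. Every function, constant, and threshold here is hypothetical; it reflects the article’s description of the workflow, not the Basel team’s software.

```python
# Illustrative control loop for the three-laser division of labor described above.
# Tissue classification, cutting, and depth measurement are reduced to toy stand-ins.
TARGET_DEPTH_MM = 3.0
STEP_MM = 0.05                                   # depth removed per cutting pulse (made-up value)

def classify_tissue(x: float, y: float) -> str:
    """Stand-in for the spectrometer: call the center of the site 'bone'."""
    return "bone" if x**2 + y**2 < 1.0 else "soft"

def cut_point(x: float, y: float) -> None:
    depth = 0.0
    while depth < TARGET_DEPTH_MM:
        if classify_tissue(x, y) != "bone":      # mapping laser + spectrometer gate the cut
            print(f"skipping ({x}, {y}): not bone")
            return
        depth += STEP_MM                         # cutting laser ablates one increment
        # the third, measuring laser would verify `depth` here before the next pulse
    print(f"({x}, {y}) cut to {depth:.2f} mm")

for x, y in [(0.0, 0.0), (0.5, 0.5), (2.0, 0.0)]:
    cut_point(x, y)
```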

Using pig legs acquired from a nearby supplier, researchers determined their laser trifecta accurately performed the surgical assignments down to fractions of a millimeter, and nearly as fast as the standard methods in use today. What’s more, it did it all sans steady human hands.

“The special thing about our system is that it controls itself without human interference,” laser physicist Ferda Canbaz said in a University of Basel profile published December 5.

The system’s benefits extend further than simply getting the job done. The lasers’ smaller, extremely localized incisions could allow tissue to heal faster and reduce scarring in the long run. The precise cutting abilities also allow for shaping certain geometries that existing tools cannot accomplish. From a purely logistical standpoint, less physical interaction between surgeons and patients could also reduce risks of infections or similar postsurgical complications.

Researchers hope such intricate angling could one day enable bone implants to physically interlock with a patient’s existing bone, potentially even without needing bone cement. There might even come a time when similar laser arrays could not only identify tumors, but subsequently remove them with extremely minimal surrounding tissue injury.

The post Swapping surgical bone saws for laser beams appeared first on Popular Science.

Will AI render programming obsolete? https://www.popsci.com/technology/ai-v-programming/ Sat, 02 Dec 2023 17:00:00 +0000 https://www.popsci.com/?p=591658
coding on a laptop
Viewing programming broadly as the act of making a computer carry out the behaviors that you want it to carry out suggests that, at the end of the day, you can’t replace the individuals deciding what those behaviors ought to be. DepositPhotos

It's exhilarating to think that, with the help of generative AI, anyone who can write can also write programs. It’s not so simple.

This article was originally featured on MIT Press.

In 2017, Google researchers introduced a novel machine-learning program called a “transformer” for processing language. While they were mostly interested in improving machine translation—the name comes from the goal of transforming one language into another—it didn’t take long for the AI community to realize that the transformer had tremendous, far-reaching potential.

Trained on vast collections of documents to predict what comes next based on preceding context, it developed an uncanny knack for the rhythm of the written word. You could start a thought, and like a friend who knows you exceptionally well, the transformer could complete your sentences. If your sequence began with a question, then the transformer would spit out an answer. Even more surprisingly, if you began describing a program, it would pick up where you left off and output that program.
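
The architecture is far more sophisticated, but the training objective, predicting what comes next from what came before, can be felt even in a toy character-level model that just counts which character tends to follow which. The sketch below is that toy, with a made-up training sentence; it illustrates the objective, not the transformer itself.

```python
# Toy next-character predictor: count, for each character, which character most often
# follows it in the training text, then "complete" a prompt greedily. Real transformers
# condition on long contexts with learned attention; the objective is the same in spirit.
from collections import Counter, defaultdict

training_text = "you could start a thought and the model could complete your sentences"

follow_counts = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follow_counts[current][nxt] += 1

def complete(prompt: str, n_chars: int = 20) -> str:
    out = prompt
    for _ in range(n_chars):
        counts = follow_counts.get(out[-1])
        if not counts:               # unseen character: nothing to predict
            break
        out += counts.most_common(1)[0][0]
    return out

print(complete("you co"))
```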

It’s long been recognized that programming is difficult, however, with its arcane notation and unforgiving attitude toward mistakes. It’s well documented that novice programmers can struggle to correctly specify even a simple task like computing a numerical average, failing more than half the time. Even professional programmers have written buggy code that has resulted in crashing spacecraft, cars, and even the internet itself.

So when it was discovered that transformer-based systems like ChatGPT could turn casual human-readable descriptions into working code, there was much reason for excitement. It’s exhilarating to think that, with the help of generative AI, anyone who can write can also write programs. Andrej Karpathy, one of the architects of the current wave of AI, declared, “The hottest new programming language is English.” With amazing advances announced seemingly daily, you’d be forgiven for believing that the era of learning to program is behind us. But while recent developments have fundamentally changed how novices and experts might code, the democratization of programming has made learning to code more important than ever because it’s empowered a much broader set of people to harness its benefits. Generative AI makes things easier, but it doesn’t make it easy.

There are three main reasons I’m skeptical of the idea that people without coding experience could trivially use a transformer to code. First is the problem of hallucination. Transformers are notorious for spitting out reasonable-sounding gibberish, especially when they aren’t really sure what’s coming next. After all, they are trained to make educated guesses, not to admit when they are wrong. Think of what that means in the context of programming.

Say you want to produce a program that computes averages. You explain in words what you want and a transformer writes a program. Outstanding! But is the program correct? Or has the transformer hallucinated in a bug? The transformer can show you the program, but if you don’t already know how to program, that probably won’t help. I’ve run this experiment myself and I’ve seen GPT (OpenAI’s “generative pre-trained transformer”, an offshoot of the Google team’s idea) produce some surprising mistakes, like using the wrong formula for the average or rounding all the numbers to whole numbers before averaging them. These are small errors, and are easily fixed, but they require you to be able to read the program the transformer produces.
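
Those two failure modes are easy to reconstruct by hand. The snippet below is not captured GPT output, just a hedged illustration of what a "wrong formula" bug and a "round before averaging" bug typically look like next to a correct version.

```python
# Reconstructed examples of the two averaging bugs described above, next to a fix.
def average_wrong_formula(numbers):
    # Bug: divides by a hard-coded 2 instead of the number of values.
    return sum(numbers) / 2

def average_rounds_too_early(numbers):
    # Bug: rounds each value to a whole number before averaging.
    return sum(round(n) for n in numbers) / len(numbers)

def average_correct(numbers):
    return sum(numbers) / len(numbers)

data = [0.75, 1.5, 2.25, 3.5]
print(average_wrong_formula(data))     # 4.0  -- not an average at all
print(average_rounds_too_early(data))  # 2.25 -- close, but quietly wrong
print(average_correct(data))           # 2.0
```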

It might be possible to work around this challenge, partly by making transformers less prone to errors and partly by providing more testing and feedback so it’s clearer what the programs they output actually do. But there’s a deeper and more challenging second problem. It’s actually quite hard to write verbal descriptions of tasks, even for people to follow. This concept should be obvious to anyone who has tried to follow instructions for assembling a piece of furniture. People make fun of IKEA’s instructions, but they might not remember what the state of the art was before IKEA came on the scene. It was bad. I bought a lot of dinosaur model kits as a kid in the 70s and it was a coin flip as to whether I’d succeed in assembling any given Diplodocus.

Some collaborators and I are looking into this problem. In a pilot study, we recruited pairs of people off the internet and split them up into “senders” and “receivers.” We explained a version of the averaging problem to the senders. We tested them to confirm that they understood our description. They did. We then asked them to explain the task to the receivers in their own words. They did. We then tested the receivers to see if they understood. Once again, it was roughly a coin flip whether the receivers could do the task. English may be a hot programming language, but it’s almost as error-prone as the cold ones!

Finally, viewing programming broadly as the act of making a computer carry out the behaviors that you want it to carry out suggests that, at the end of the day, you can’t replace the individuals deciding what those behaviors ought to be. That is, generative AI could help express your desired behaviors more directly in a form that typical computers can carry out. But it can’t pick the goal for you. And the broader the array of people who can decide on goals, the better and more representative computing will become.

In the era of generative AI, everyone has the ability to engage in programming-like activities, telling computers what to do on their behalf. But conveying your desires accurately—to people, traditional programming languages, or even new-fangled transformers—requires training, effort, and practice. Generative AI is helping to meet people partway by greatly expanding the ability of computers to understand us. But it’s still on us to learn how to be understood.

Michael L. Littman is University Professor of Computer Science at Brown University and holds an adjunct position with the Georgia Institute of Technology College of Computing. He was selected by the American Association for the Advancement of Science as a Leadership Fellow for Public Engagement with Science in Artificial Intelligence. He is the author of “Code to Joy.”

The post Will AI render programming obsolete? appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Scientists are developing a handheld eye-scanner for detecting traumatic brain injury https://www.popsci.com/technology/eye-scan-brain-injury-device/ Thu, 30 Nov 2023 18:00:00 +0000 https://www.popsci.com/?p=593233
An ambulance speeding through traffic at nighttime
First responders could one day use a similar device. Deposit Photos

Assessing potential head trauma within the first 60 minutes can save lives. A new device could offer a quick way to act fast.

The post Scientists are developing a handheld eye-scanner for detecting traumatic brain injury appeared first on Popular Science.

]]>
An ambulance speeding through traffic at nighttime
First responders could one day use a similar device. Deposit Photos

The first 60 minutes following a traumatic brain injury such as a concussion are often referred to as a patient's “golden hour.” Identifying and diagnosing the head trauma's severity within this narrow time frame can be crucial to implementing treatment, preventing further harm, and even saving someone's life. Unfortunately, this can be more difficult than it may seem, since symptoms often only present themselves hours or days following an accident. Even when symptoms are quickly recognizable, first responders often need CT or MRI scans to confirm them, and those scanners are only available at hospitals that can be far from the scene of the injury.

[Related: When to worry about a concussion.]

To clear this immense hurdle, a team at the UK's University of Birmingham set out to design a tool capable of quickly and accurately assessing potential TBI incidents. Their resulting prototype, which fits in the palm of a hand, has detected signs of TBI in postmortem animal samples. As detailed in a new paper published in Science Advances, the lightweight tool, dubbed EyeD, combines a smartphone, a safe-to-use laser, and a Raman spectroscopy system to assess the structural and biochemical health of an eye—specifically the area housing the optic nerve and neuroretina. Both optic nerve and brain biomarkers function within an extremely intricate, precise balance, so even the subtlest changes within an eye's molecular makeup can indicate telltale signs of TBI.

Once the device is focused on the back of the eye, EyeD's smartphone camera issues an LED flash. The light passes through a beam splitter while boosted by an accompanying input laser, then travels via another mirror into the spectrometer. This offers a view of various lipid and protein biomarkers that carry the same biological information as their counterparts within the brain. The readings are then fed into a neural network program to help rapidly classify TBI and non-TBI examples.
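The study's own code isn't reproduced in this article, but the final classification step is conceptually simple. Below is a minimal, hypothetical sketch using scikit-learn, with random numbers standing in for real Raman spectra; the spectrum length, layer size, and labels are illustrative assumptions, not details from the paper.

```python
# Minimal, hypothetical sketch of the final classification step: a small
# neural network labels spectra as TBI or non-TBI. The spectra here are
# random stand-ins, not real Raman data from the study.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 500          # assumed dimensions
spectra = rng.normal(size=(n_samples, n_wavenumbers))
labels = rng.integers(0, 2, size=n_samples)  # 1 = TBI, 0 = non-TBI

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
model.fit(spectra, labels)

new_scan = rng.normal(size=(1, n_wavenumbers))
print("TBI" if model.predict(new_scan)[0] else "non-TBI")
```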

The team first tested EyeD on what's known as a “phantom eye,” an artificial approximation of the organ often used during the development and testing of retinal imaging technology. After confirming EyeD's ability to align and focus on the back of an eye, researchers moved on to clinical testing using postmortem pig eye tissue.

Although the tool currently exists only as a proof of concept, researchers are ready to begin clinical feasibility and efficacy studies, then move on to real-world human testing. If all goes as planned, EyeD devices could soon find their way into the hands of emergency responders, where they could dramatically shorten TBI diagnosis times.

The post Scientists are developing a handheld eye-scanner for detecting traumatic brain injury appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How AI could help scientists spot ‘ultra-emission’ methane plumes faster—from space https://www.popsci.com/environment/methane-plume-ai-detection/ Mon, 27 Nov 2023 20:00:00 +0000 https://www.popsci.com/?p=592571
Global Warming photo

Reducing leaks of the potent greenhouse gas could alleviate global warming by as much as 0.3 degrees Celsius over the next two decades.

The post How AI could help scientists spot ‘ultra-emission’ methane plumes faster—from space appeared first on Popular Science.

]]>
Global Warming photo

Reducing damaging “ultra-emission” methane leaks could soon become much easier, thanks to a new, open-source tool that combines machine learning and orbital data from multiple satellites, including a sensor attached to the International Space Station.

Methane emissions originate anywhere organic matter decomposes without oxygen, such as marshes, landfills, and yes, cow farms, as well as from leaks at fossil fuel plants. The gas is also infamous for its outsized climate impact: although it lingers in the atmosphere for just 7 to 12 years compared to CO2's centuries-long lifespan, methane is an estimated 80 times more effective at retaining heat. Immediately reducing its production is integral to staving off climate collapse's most dire short-term consequences—cutting emissions by 45 percent by 2030, for example, could shave off around 0.3 degrees Celsius from the planet's rising temperature average over the next twenty years.

[Related: Turkmenistan’s gas fields emit loads of methane.]

Unfortunately, it’s often difficult for aerial imaging to precisely map real time concentrations of methane emissions. For one thing, plumes from so-called “ultra-emission” events like oil rig and natural gas pipeline malfunctions (see: Turkmenistan) are invisible to human eyes, as well as most satellites’ multispectral near-infrared wavelength sensors. And what aerial data is collected is often thrown off by spectral noise, requiring manual parsing to accurately locate the methane leaks.

A University of Oxford team working alongside Trillium Technologies' NIO.space has developed a new, open-source tool powered by machine learning that can identify methane clouds using much narrower hyperspectral bands of satellite imaging data. These bands, while more specific, produce far larger quantities of data—which is where artificial intelligence training comes in handy.

The project is detailed in new research published in Nature's Scientific Reports. To train their model, engineers fed it a total of 167,825 hyperspectral image tiles—each roughly 0.66 square miles—captured over the Four Corners region of the US by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The model was subsequently trained using additional orbital monitors, including NASA's hyperspectral EMIT sensor currently aboard the International Space Station.
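The Oxford team's actual architecture isn't spelled out in this article, so the following is only a rough, hypothetical sketch of the task's shape in PyTorch: hyperspectral tiles go in, a plume-or-not score comes out. The band count, tile size, and layer choices are invented for illustration and are not details from the published model.

```python
# Hypothetical sketch of a tile classifier for methane plumes. The real
# model's architecture isn't reproduced here; this just shows the general
# shape of the task: hyperspectral tiles in, plume / no-plume out.
import torch
import torch.nn as nn

n_bands = 50          # assumed number of hyperspectral bands per tile
tile_size = 64        # assumed tile width/height in pixels

model = nn.Sequential(
    nn.Conv2d(n_bands, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),   # collapse spatial dimensions
    nn.Flatten(),
    nn.Linear(16, 1),          # single logit: likelihood of a plume
)

tiles = torch.randn(8, n_bands, tile_size, tile_size)  # a fake mini-batch
plume_probability = torch.sigmoid(model(tiles))
print(plume_probability.shape)  # torch.Size([8, 1])
```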

The team’s current model is roughly 21.5 percent more accurate at identifying methane plumes than the existing top tool, while simultaneously providing nearly 42 percent fewer false detection errors compared to the same industry standard. According to researchers, there’s no reason to believe those numbers won’t improve over time.

[Related: New satellites can pinpoint methane leaks to help us beat climate change.]

“What makes this research particularly exciting and relevant is the fact that many more hyperspectral satellites are due to be deployed in the coming years, including from ESA, NASA, and the private sector,” Vít Růžička, the study's lead researcher and a doctoral candidate in the University of Oxford's department of computer science, said in a recent university announcement. As this satellite network expands, Růžička believes researchers and environmental watchdogs will soon gain the ability to automatically and accurately detect methane plume events anywhere in the world.

These new techniques could soon enable independent, globally collaborative identification of greenhouse gas production and leakage issues—not just for methane, but for many other major pollutants. The tool currently relies on already-collected geospatial data, and cannot yet provide real-time analysis from orbital satellite sensors. In the University of Oxford's recent announcement, however, research project supervisor Andrew Markham adds that the team's long-term goal is to run their programs on satellites' onboard computers, thus “making instant detection a reality.”

The post How AI could help scientists spot ‘ultra-emission’ methane plumes faster—from space appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Actually, never mind, Sam Altman is back as OpenAI’s CEO https://www.popsci.com/technology/altman-openai-return-ceo/ Wed, 22 Nov 2023 15:00:00 +0000 https://www.popsci.com/?p=591183
On Friday, founder and OpenAI CEO Sam Altman was fired by the board of directors. Chaos ensued.
On Friday, founder and OpenAI CEO Sam Altman was fired by the board of directors. Chaos ensued. Getty Images

The shakeup at one of Silicon Valley's most important AI companies continues.

The post Actually, never mind, Sam Altman is back as OpenAI’s CEO appeared first on Popular Science.

]]>
On Friday, founder and OpenAI CEO Sam Altman was fired by the board of directors. Chaos ensued.
On Friday, founder and OpenAI CEO Sam Altman was fired by the board of directors. Chaos ensued. Getty Images

Sam Altman is CEO of OpenAI once again. The influential AI startup co-founder's return caps a chaotic four days that saw two replacement CEOs, Altman's potential move to Microsoft, and threats of mass resignation from nearly all of the company's employees. Altman's return to OpenAI will coincide with a shakeup of the board of directors of the company's nonprofit arm.

Silicon Valley’s pre-Thanksgiving saga started on November 17, when OpenAI’s board suddenly announced Altman’s departure after alleging the 38-year-old entrepreneur “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

The move shocked not only industry insiders and investors, but executive-level employees at the company as well. OpenAI's president Greg Brockman announced his resignation less than three hours after the news broke, while the startup's chief operating officer described his surprise in a November 18 internal memo.

“We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices,” he wrote at the time.

A flurry of breathless headlines ensued, naming first one, then another CEO replacement as rumors began circulating that Altman would join Microsoft as the CEO of its new AI development team. Microsoft has previously invested over $13 billion in OpenAI, and relies on the startup's tech to power its growing suite of AI-integrated products.

Just after midnight on November 22, however, Altman posted to X his intention to return to OpenAI alongside a reorganized board of directors that includes former White House adviser and Harvard University president Larry Summers, as well as returning member Adam D’Angelo, the Quora CEO and early Facebook employee. That is exactly what happened. Entrepreneur Tasha McCauley, OpenAI chief scientist Ilya Sutskever, and Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, are no longer board members.

[Related: Big Tech’s latest AI doomsday warning might be more of the same hype.]

“[E]verything i’ve [sic] done over the past few days has been in service of keep this team and its mission together,” Altman wrote on the social media platform owned by former OpenAI executive Elon Musk. Altman added he looks forward to returning and “building on our strong partnership” with Microsoft.

Although concrete explanations behind the attempted corporate coup remain unconfirmed, it appears members of the previous board believed Altman was “pushing too far, too fast” toward the company's overall goal of creating a safe artificial general intelligence (AGI), a term referring to AI that is comparable to, or exceeds, human capacities. Many of AI's biggest players believe it is their ethical duty to steer the technology towards a future that benefits humanity instead of ending it. Critics have voiced multiple, repeated concerns over Silicon Valley's approach, ethos, and rationality.

The post Actually, never mind, Sam Altman is back as OpenAI’s CEO appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Hyundai’s robot-heavy EV factory in Singapore is fully operational https://www.popsci.com/technology/hyundai-singapore-factory/ Tue, 21 Nov 2023 18:15:00 +0000 https://www.popsci.com/?p=590969
Robot dog at Hyundai factory working on car
Over 200 robots will work alongside human employees at the new facility. Hyundai

The seven-story facility includes a rooftop test track and ‘Smart Farm.’

The post Hyundai’s robot-heavy EV factory in Singapore is fully operational appeared first on Popular Science.

]]>
Robot dog at Hyundai factory working on car
Over 200 robots will work alongside human employees at the new facility. Hyundai

After three years of construction and limited operations, the next-generation Hyundai Motor Group Innovation Center Singapore (HMGICS) production facility is officially online and fully functioning. Announced on November 20, the 935,380-square-foot, seven-floor facility relies on 200 robots to handle over 60 percent of all “repetitive and laborious” responsibilities, allowing human employees to focus on “more creative and productive duties,” according to the company.

In a key departure from traditional conveyor-belt factories, HMGICS centers on what the South Korean vehicle manufacturer calls a “cell-based production system” alongside a “digital twin Meta-Factory.” Instead of siloed responsibilities for automated machinery and human workers, the two often cooperate using technology such as virtual and augmented reality. As Hyundai explains, while employees simulate production tasks in a digital space using VR/AR, for example, robots will physically move, inspect, and assemble various vehicle components.

[Related: Everything we love about Hyundai’s newest EV.]

By combining robotics, AI, and the Internet of Things, Hyundai believes the HMGICS can offer a “human-centric manufacturing innovation system,” Alpesh Patel, VP and Head of the factory's Technology Innovation Group, said in Monday's announcement.

Atop the HMGICS building is a vehicle test track over 2,000 feet long, as well as a robotically assisted “Smart Farm” capable of growing up to nine different crops. While a car factory vegetable garden may sound somewhat odd, it actually complements the Singapore government's ongoing “30 by 30” initiative.

Due to the region’s rocky geology, Singapore can only utilize about one percent of its land for agriculture—an estimated 90 percent of all food in the area must be imported. Announced in 2022, Singapore’s 30 by 30 program aims to boost local self-sufficiency by increasing domestic yields to 30 percent of all consumables by the decade’s end using a combination of sustainable urban growth methods. According to Hyundai’s announcement, the HMGICS Smart Farm is meant to showcase farm productivity within compact settings—while also offering visitors some of its harvested crops. The rest of the produce will be donated to local communities, as well as featured on the menu at a new Smart Farm-to-table restaurant scheduled to open at the HMGICS in spring 2024.

[Related: Controversial ‘robotaxi’ startup loses CEO.]

HMGICS is expected to produce up to 30,000 electric vehicles annually, and currently focuses on the IONIQ 5, as well as its autonomous robotaxi variant. Beginning in 2024, the facility will also produce Hyundai’s IONIQ 6. If all goes according to plan, the HMGICS will be just one of multiple cell-based production system centers.

The post Hyundai’s robot-heavy EV factory in Singapore is fully operational appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
An equation co-written with AI reveals monster rogue waves form ‘all the time’ https://www.popsci.com/technology/ai-model-rogue-wave/ Mon, 20 Nov 2023 22:00:00 +0000 https://www.popsci.com/?p=590809
Black and white photo of merchant ship encountering rogue wave
Photo of a merchant ship taken in the Bay of Biscay off France, circa 1940. Huge waves are common near the Bay of Biscay's 100-fathom line. Published in Fall 1993 issue of Mariner's Weather Log. Public Domain

'This is equivalent to around 1 monster wave occurring every day at any random location in the ocean.'

The post An equation co-written with AI reveals monster rogue waves form ‘all the time’ appeared first on Popular Science.

]]>
Black and white photo of merchant ship encountering rogue wave
Photo of a merchant ship taken in the Bay of Biscay off France, circa 1940. Huge waves are common near the Bay of Biscay's 100-fathom line. Published in Fall 1993 issue of Mariner's Weather Log. Public Domain

Rogue monster waves, once believed to be extremely rare, are now statistically confirmed to occur “all the time,” thanks to researchers' new, artificial intelligence-aided analysis. Using hundreds of years' worth of combined information gleaned from over 1 billion wave patterns, scientists collaborating between the University of Copenhagen and the University of Victoria have produced an algorithmic equation capable of predicting the “recipe” for extreme rogue waves. In doing so, the team also appears to upend beliefs about oceanic patterns dating back to the 1700s.

Despite centuries of terrifying, unconfirmed rumors alongside landlubber skepticism, monstrous rogue waves were only scientifically documented for the first time in 1995. But since laser measuring equipment aboard the Norwegian oil platform Draupner captured unimpeachable evidence of an encounter with an 85-foot-high wall of water, researchers have worked to study the oceanic phenomenon’s physics, characteristics, and influences. Over the following decade, oceanographers came to define a rogue wave as being at least twice the height of a formation’s “significant wave height,” or the mean of the largest one-third of a wave pattern. They also began confidently citing “some reasons” behind the phenomena, but knew there was much more to learn.
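That definition translates directly into a few lines of code. The following is a minimal sketch using invented wave heights rather than real buoy data, purely to show how the twice-the-significant-wave-height criterion plays out.

```python
# A minimal sketch of the rogue-wave criterion described above, using
# made-up wave heights rather than real buoy data.
import numpy as np

wave_heights = np.array([1.0] * 20 + [2.0] * 10 + [5.0])  # one suspicious outlier

n_largest = len(wave_heights) // 3                  # the largest one-third
largest_third = np.sort(wave_heights)[-n_largest:]
significant_wave_height = largest_third.mean()      # mean of the biggest third

rogue = wave_heights >= 2 * significant_wave_height  # at least twice as tall
print(f"Significant wave height: {significant_wave_height:.2f} m")
print("Rogue waves:", wave_heights[rogue])           # flags the 5-meter wave
```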

[Related: New AI-based tsunami warning software could help save lives.]

Nearly two decades after Draupner, however, researchers’ new, AI-assisted approach offers unprecedented analysis through a study published today in Proceedings of the National Academy of Sciences.

“Basically, it is just very bad luck when one of these giant waves hits,” Dion Häfner, a research engineer and the paper’s first author, said in a November 20 announcement. “They are caused by a combination of many factors that, until now, have not been combined into a single risk estimate.”

Using readings obtained from buoys at 158 locations near US coasts and overseas territories, the team first amassed the equivalent of 700 years' worth of sea state information: wave heights, water depths, and bathymetric data. After mapping all the causal variables that influence rogue waves, Häfner and colleagues used various AI methods to synthesize the data into a model capable of calculating rogue wave formation probabilities. (These included symbolic regression, which generates an equation as its output rather than a single prediction.) Unfortunately, the results are unlikely to ease the fears of anyone suffering from thalassophobia.

“Our analysis demonstrates that abnormal waves occur all the time,” Johannes Gemmrich, the study’s second author, said in this week’s announcement. According to Gemmrich, the team registered 100,000 dataset instances fitting the bill for rogue waves.

“This is equivalent to around 1 monster wave occurring every day at any random location in the ocean,” Gemmrich added, while noting they weren’t necessarily all “monster waves of extreme size.” A small comfort, perhaps.

Until the new study, many experts believed the majority of rogue waves formed when two waves combined into a single, massive mountain of water. Based on the new equation, however, it appears the biggest influence is “linear superposition.” First documented in the 1700s, such situations occur when two wave systems cross paths and reinforce one another, instead of combining. This increases the likelihood of forming massive waves' high crests and deep troughs. Although the phenomenon has been understood to exist for hundreds of years, the new dataset offers concrete support for it and its effects on wave patterns.

[Related: How Tonga’s volcanic eruption can help predict tsunamis.]

And while it's probably disconcerting to imagine an eight-story-tall wave occurring somewhere in the world every single day, the new algorithmic equation can at least help vessels steer well clear of locations where rogue waves are most likely to occur at any given time. This won't often come in handy for the average person, but for the estimated 50,000 cargo ships sailing across the world each day, integrating the equation into forecasting tools could save lives.

Knowing this, Häfner's team has already made its algorithm, research, and amassed data available as open-source information, so that weather services and public agencies can start identifying—and avoiding—rogue wave-prone areas.

The post An equation co-written with AI reveals monster rogue waves form ‘all the time’ appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Controversial ‘robotaxi’ startup loses CEO https://www.popsci.com/technology/cruise-ceo-resign/ Mon, 20 Nov 2023 20:00:00 +0000 https://www.popsci.com/?p=590754
Cruise robotaxi action shot at night
GM suspended all Cruise robotaxi services across the US earlier this month. Tayfun Coskun/Anadolu Agency via Getty Images

General Motors suspended Cruise's driverless fleet nationwide earlier this month.

The post Controversial ‘robotaxi’ startup loses CEO appeared first on Popular Science.

]]>
Cruise robotaxi action shot at night
GM suspended all Cruise robotaxi services across the US earlier this month. Tayfun Coskun/Anadolu Agency via Getty Images

Cruise CEO Kyle Vogt announced his resignation from the controversial robotaxi startup on Sunday evening. The co-founder’s sudden departure arrives after months of public and political backlash relating to the autonomous vehicle fleet’s safety, and hints at future issues for the company purchased by General Motors in 2016 for over $1 billion.

Vogt’s resignation follows months of documented hazardous driving behaviors from Cruise’s autonomous vehicle fleet, including injuring pedestrians, delaying emergency responders, and failing to detect children. Cruise’s Golden State tenure itself lasted barely two months following a California Public Utilities Commission greenlight on 24/7 robotaxi services in August. Almost immediately, residents and city officials began documenting instances of apparent traffic pileups, blocked roadways, and seemingly reckless driving involving Cruise and Google-owned Waymo robotaxis. Meanwhile, Cruise representatives including Vogt aggressively campaigned against claims of an unsafe vehicle fleet.

[Related: San Francisco is pushing back against the rise of robotaxis.]

“Anything that we do differently than humans is being sensationalized,” Vogt told The Washington Post in September.

On October 2, a Cruise robotaxi hit a pedestrian who had just been struck by another car, then dragged her 20 feet down the road. GM issued a San Francisco moratorium on Cruise operations three weeks later, followed by a nationwide expansion of the suspension on November 6.

But even with Cruise on an indefinite hiatus, competitors like Waymo and Zoox continue testing autonomous taxis across San Francisco, Los Angeles, Phoenix, Austin, and elsewhere to varying degrees of success. As The New York Times reports, Waymo’s integration into Phoenix continues to progress smoothly. Meanwhile, Austin accidents became so concerning that city officials felt the need to establish an internal task force over the summer to help log and process autonomous vehicle incidents.

[Related: Self-driving taxis allegedly blocked an ambulance and the patient died.]

In a thread posted to X over the weekend, Vogt called his experience helming Cruise “amazing,” and expressed gratitude to the company and its employees while telling them to “remember why this work matters.”

“The status quo on our roads sucks, but together we’ve proven there is something far better around the corner,” wrote Vogt before announcing his plans to spend time with his family and explore new ideas.

“Thanks for the great ride!” Vogt concluded.

The post Controversial ‘robotaxi’ startup loses CEO appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
OpenAI chaos explained: What it could mean for the future of artificial intelligence https://www.popsci.com/technology/sam-altman-fired-openai-microsoft/ Mon, 20 Nov 2023 19:00:00 +0000 https://www.popsci.com/?p=590725
On Friday, founder and OpenAI CEO Sam Altman was fired by the board of directors. Chaos ensued.
On Friday, founder and OpenAI CEO Sam Altman was fired by the board of directors. Chaos ensued. Getty Images

The firing of CEO Sam Altman, the threat of employee exodus, and more.

The post OpenAI chaos explained: What it could mean for the future of artificial intelligence appeared first on Popular Science.

]]>
On Friday, founder and OpenAI CEO Sam Altman was fired by the board of directors. Chaos ensued.
On Friday, founder and OpenAI CEO Sam Altman was fired by the board of directors. Chaos ensued. Getty Images

Update November 22, 2023, 10:06am: Actually, never mind, Sam Altman is back as OpenAI’s CEO.

OpenAI, the company behind ChatGPT, has had a wild weekend. On Friday, founder and CEO Sam Altman was fired by its board of directors, kickstarting an employee revolt that’s still ongoing. The company has now had three CEOs in as many days. The shocking shakeup at one of the most important companies driving artificial intelligence research could have far-reaching ramifications for how the technology continues to develop. For better or worse, OpenAI has always claimed to work for the good of humanity, not for profit—with the drama this weekend, a lot of AI researchers could end up at private companies, answerable only to shareholders and not society. Things are still changing fast, but here’s what we know so far, and how things might play out.

[ Related: A simple guide to the expansive world of artificial intelligence ]

‘Too far, too fast’

November should have been a great month for OpenAI. On November 6, the company hosted its first developer conference, where it unveiled GPT-4 Turbo, its latest large language model (LLM), and GPTs, customizable ChatGPT-based chatbots that can be trained to perform specific tasks. While OpenAI is best known for the text-based ChatGPT and DALL·E, the AI-powered image generator, the company's ambitions include the development of artificial general intelligence, in which a computer matches or exceeds human capabilities. The industry is still debating the broad definition of AGI, and OpenAI plays a large role in that conversation. This tumult has the potential to resonate well beyond the company's own hierarchy.

[ Related: What happens if AI grows smarter than humans? The answer worries scientists. ]

The recent upheaval stems from OpenAI's complicated corporate structure, which was intended to ensure that OpenAI developed artificial intelligence that “benefits all of humanity,” rather than allowing the desire for profitability to enable technology that could potentially harm us. The AI venture started as a non-profit in 2015, but later spun out a for-profit company in 2019 so it could take on outside investment, including a huge deal with Microsoft. The quirk is that the board of directors of the non-profit still has complete control over the for-profit company, and its members are barred from having a financial interest in OpenAI.

However, the six-member board of directors had unchecked power to remove Altman—which it exercised late last week, to the surprise of almost everyone, including major investors. Microsoft CEO Satya Nadella was reportedly “blindsided” and “furious” at how Altman was fired, as were many of OpenAI's staff, who took to Twitter/X to post heart emojis in support of Altman.

Initially, the board claimed that Altman was let go because “he was not consistently candid in his communications”; later accounts, however, cite differing opinions on the speed and safety with which OpenAI's research was being commercialized. According to The Information, Ilya Sutskever, the company's chief scientist and a board member, told an emergency all-hands meeting, “This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds [artificial general intelligence] that benefits all of humanity.” Sutskever apparently felt that Altman was “pushing too far, too fast,” and convinced the board to fire him, with chief technology officer Mira Murati taking over as interim CEO. According to The Atlantic, the issues stemmed from the pace at which ChatGPT was deployed over the past year. The chatbot initially served as a “low-key research preview,” but it exploded in popularity, and with that, features have rolled out faster than the more cautious board members were comfortable with.

Alongside Altman's ouster, OpenAI president Greg Brockman resigned in protest, which really kicked off the chaotic weekend.

Three CEOs in three days and the threat of an exodus

Following internal pushback from employees over the weekend, Altman was reportedly in talks to resume his role as CEO. The extended will-they-won't-they eventually fizzled. To make things more dramatic, Murati was then replaced as interim CEO by Emmett Shear, co-founder of the streaming site Twitch, bringing the company to three CEOs in three days. Shear reportedly believes that AI has somewhere between a 5 percent and 50 percent chance of wiping out human life, and has advocated for slowing down the pace of its development, which aligns with the board's reported views.

Of course, as one of the biggest names in AI, Altman landed on his feet—both he and Brockman have already joined Microsoft, one of OpenAI's biggest partners. On Twitter/X late last night, Microsoft CEO Satya Nadella announced that he was “extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.”

This morning, more than 500 of OpenAI’s 750 employees signed an open letter demanding that the board step down and Altman be reinstated as CEO. If they don’t, Microsoft has apparently assured them that there are positions available for every OpenAI employee. Shockingly, even Sutskever signed the letter and also posted on Twitter/X that he regretted his “participation in the board’s actions.”

Turbulent aftermath

As of now, things are still developing. Unless something radical shifts at OpenAI, it seems like Microsoft has pulled off an impressive coup. Not only does the company continue to have access to OpenAI's research and development, but it suddenly has its own advanced AI research unit. If the OpenAI employees do walk, Microsoft will have essentially acquired much of the $86 billion company for free.

Whatever happens, we’ve just seen a dramatic shift in the AI industry. For all the chaos of the last few days, the non-profit OpenAI was founded with laudable goals and the board seems to have seriously felt that their role was to ensure that AI—particularly, artificial general intelligence or AGI—was developed safely. With an AI advocate like Altman now working for a for-profit company unrestrained by any such lofty charter, who’s to say that it will? 

Similarly, OpenAI's credibility is in serious doubt. Whatever its charter says, if the majority of the employees want to plow ahead with AGI development, it has a major problem on its hands. Either the board is going to have to fire a lot more people (or let them walk over to Microsoft) and totally remake itself, or it's going to cave to the pressure and change its trajectory. And even if Altman does somehow rejoin OpenAI, which looks less and less likely, it's hard to imagine how the non-profit's total control of the for-profit company stays in place. Somehow, the trajectory of AI seems considerably less predictable than it was just a week ago.

Update November 20, 2023, 2:11pm: Shear, OpenAI's current CEO, has said he will launch an independent investigation into the circumstances around Altman's firing. While it might be too little, too late for some employees, he says the investigation will allow him to “drive changes in the organization,” up to and including “significant governance changes.”

Update November 21, 2023, 2:30pm: In an interview with CNN Monday evening, Microsoft CEO Satya Nadella reiterated the possibility that Altman could still return to his previous role at OpenAI. Nadella added he was “open to both possibilities” of Altman working for either OpenAI, or Microsoft.

The post OpenAI chaos explained: What it could mean for the future of artificial intelligence appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Some people think white AI-generated faces look more real than photographs https://www.popsci.com/technology/ai-white-human-bias/ Wed, 15 Nov 2023 17:05:00 +0000 https://www.popsci.com/?p=589787
Research paper examples of AI and human faces against blurry crowd background
Faces judged most often as (a) human and (b) AI. The stimulus type (AI or human; male or female), the stimulus ID (Nightingale & Farid, 2022), and the percentage of participants who judged the face as (a) human or (b) AI are listed below each face. Deposit Photos / Miller et al. / PopSci

At least to other white people, thanks to what researchers are dubbing ‘AI hyperealism.’

The post Some people think white AI-generated faces look more real than photographs appeared first on Popular Science.

]]>
Research paper examples of AI and human faces against blurry crowd background
Faces judged most often as (a) human and (b) AI. The stimulus type (AI or human; male or female), the stimulus ID (Nightingale & Farid, 2022), and the percentage of participants who judged the face as (a) human or (b) AI are listed below each face. Deposit Photos / Miller et al. / PopSci

As technology evolves, AI-generated images of human faces are becoming increasingly indistinguishable from real photos. But our ability to separate the real from the artificial may come down to personal biases—both our own, as well as that of AI’s underlying algorithms.

According to a new study recently published in the journal Psychological Science, certain people misidentify AI-generated white faces as real more often than they correctly identify actual photos of white people. More specifically, it's white participants who can't distinguish between real and AI-generated white faces.

[Related: Tom Hanks says his deepfake is hawking dental insurance.]

In a series of trials conducted by researchers collaborating across universities in Australia, the Netherlands, and the UK, 124 white adults were tasked with classifying a series of faces as artificial or real, then rating their confidence in each decision on a 100-point scale. The team matched white participants with white face examples in an attempt to mitigate potential own-race recognition bias—the tendency for racial and cultural populations to more poorly remember unfamiliar faces from different demographics.

“Remarkably, white AI faces can convincingly pass as more real than human faces—and people do not realize they are being fooled,” researchers write in their paper.

This was by no slim margin, either. Participants mistakenly classified a full 66 percent of the AI-generated images as photographed humans, while correctly judging barely half of the real photos to be actual people. Meanwhile, the same white participants' ability to discern real from artificial faces of people of color was roughly 50-50. In a second experiment, 610 participants rated the same images using 14 attributes contributing to what made them look human, without knowing that some photos were fake. Of those attributes, the faces' proportionality, familiarity, memorability, and the perception of lifelike eyes ranked highest for test subjects.

Pie graph of 14 attributes to describe human and AI generated face pictures
Qualitative responses from Experiment 1: percentage of codes (N = 546) in each theme. Subthemes are shown at the outside edge of the main theme. Credit: Miller et al., 2023

The team dubbed this newly identified tendency to misattribute artificially generated faces—specifically, white faces—as real “AI hyperrealism.” The stark statistical differences are believed to stem from well-documented algorithmic biases within AI development. AI systems are trained on far more white subjects than people of color, making them better both at generating convincing white faces and at accurately identifying them using facial recognition techniques.

This disparity’s ramifications can ripple through countless scientific, social, and psychological situations—from identity theft, to racial profiling, to basic privacy concerns.

[Related: AI plagiarism detectors falsely flag non-native English speakers.]

“Our results explain why AI hyperrealism occurs and show that not all AI faces appear equally realistic, with implications for proliferating social bias and for public misidentification of AI,” the team writes in their paper, adding that the AI hyperrealism phenomenon “implies there must be some visual differences between AI and human faces, which people misinterpret.”

It's worth noting the new study's test pool was both small and extremely limited, so more research is undoubtedly necessary to further understand the extent and effects of such biases. It also remains true that very little is known about what AI hyperrealism might mean for different populations, or how it affects judgment in day-to-day life. In the meantime, humans may receive some help in discernment from an extremely ironic source: during the trials, the research team also built a machine learning program tasked with separating real from fake human faces—which it proceeded to accomplish accurately 94 percent of the time.

The post Some people think white AI-generated faces look more real than photographs appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Google DeepMind’s AI forecasting is outperforming the ‘gold standard’ model https://www.popsci.com/environment/ai-weather-forecast-graphcast/ Tue, 14 Nov 2023 22:10:00 +0000 https://www.popsci.com/?p=589666
Storm coming in over farm field
GraphCast accurately predicted Hurricane Lee's Nova Scotia landfall nine days before it happened. Deposit Photos

GraphCast's 10-day weather predictions reveal how meteorology may benefit from AI and machine learning.

The post Google DeepMind’s AI forecasting is outperforming the ‘gold standard’ model appeared first on Popular Science.

]]>
Storm coming in over farm field
GraphCast accurately predicted Hurricane Lee's Nova Scotia landfall nine days before it happened. Deposit Photos

No one can entirely predict where the artificial intelligence industry is taking everyone, but at least AI is poised to reliably tell you what the weather will be like when you get there. (Relatively.) According to a paper published on November 14 in Science, a new, AI-powered 10-day weather forecasting program called GraphCast is already outperforming existing prediction tools nearly every time. The open-source technology is even showing promise for identifying and charting potentially dangerous weather events—all while using a fraction of the “gold standard” system's computing power.

“Weather prediction is one of the oldest and most challenging scientific endeavors,” GraphCast team member Remi Lam said in a statement on Tuesday. “Medium range predictions are important to support key decision-making across sectors, from renewable energy to event logistics, but are difficult to do accurately and efficiently.”

[Related: Listen to ‘Now and Then’ by The Beatles, a ‘new’ song recorded using AI.]

Developed by Lam and colleagues at Google DeepMind, the tech company's AI research division, GraphCast is trained on decades of historical weather information, including roughly 40 years of reanalyzed satellite, weather station, and radar data. This stands in sharp contrast to what are known as numerical weather prediction (NWP) models, which traditionally rely on massive calculations grounded in thermodynamics, fluid dynamics, and other atmospheric physics. All that number-crunching requires intense computing power, which in turn requires costly energy. On top of all that, NWPs are slow—taking hours for hundreds of machines within a supercomputer to produce their 10-day forecasts.

GraphCast, meanwhile, offers highly accurate, medium-range weather predictions in less than a minute, all on a single one of Google's tensor processing unit (TPU) machines.

Global Warming photo

During a comprehensive performance evaluation against the industry-standard NWP system—the High-Resolution Forecast (HRES)—GraphCast proved more accurate in over 90 percent of tests. When the scope was limited to the Earth's troposphere, the lowest layer of the atmosphere and home to most noticeable weather events, GraphCast beat HRES on an astounding 99.7 percent of test variables. The Google DeepMind team was particularly impressed by the new program's ability to spot dangerous weather events without receiving any training to look for them. After the researchers applied a hurricane-tracking algorithm to GraphCast's existing outputs, the AI-powered program was immediately able to identify and predict storm paths more accurately.
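The headline comparison is straightforward to express in code. The sketch below is hypothetical, with random numbers standing in for real forecasts and observations, and it uses a simple error metric (root-mean-square error) rather than the paper's full scoring methodology; it just shows how one might count the share of variables on which one model beats another.

```python
# Hypothetical sketch of the head-to-head scoring described above: for each
# forecast variable, compare each model's error against observations and
# count how often model A wins. Random data stands in for real forecasts.
import numpy as np

rng = np.random.default_rng(42)
n_variables, n_points = 200, 1000
observations = rng.normal(size=(n_variables, n_points))
forecast_a = observations + rng.normal(scale=0.8, size=(n_variables, n_points))
forecast_b = observations + rng.normal(scale=1.0, size=(n_variables, n_points))

rmse_a = np.sqrt(((forecast_a - observations) ** 2).mean(axis=1))
rmse_b = np.sqrt(((forecast_b - observations) ** 2).mean(axis=1))

share_a_wins = (rmse_a < rmse_b).mean()
print(f"Model A more accurate on {share_a_wins:.0%} of variables")
```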

In September, GraphCast made its public debut through the organization behind HRES, the European Centre for Medium-Range Weather Forecasts (ECMWF). During that time, GraphCast accurately predicted Hurricane Lee's trajectory nine days ahead of its Nova Scotia landfall. Existing forecast programs were not only less accurate, but also only pinned down Lee's Nova Scotia destination six days in advance.

[Related: Atlantic hurricanes are getting stronger faster than they did 40 years ago.]

“Pioneering the use of AI in weather forecasting will benefit billions of people in their everyday lives,” wrote Lam on Tuesday, noting GraphCast's potentially vital importance amid the increasingly devastating events stemming from climate collapse.

“[P]redicting extreme temperatures is of growing importance in our warming world,” Lam continued. “GraphCast can characterize when the heat is set to rise above the historical top temperatures for any given location on Earth. This is particularly useful in anticipating heat waves, disruptive and dangerous events that are becoming increasingly common.”

Google DeepMind's GraphCast is already available as open-source code, and ECMWF plans to continue experimenting with integrating the AI-powered system into its future forecasting efforts.

The post Google DeepMind’s AI forecasting is outperforming the ‘gold standard’ model appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How do chatbots work? https://www.popsci.com/science/how-does-chatgpt-work/ Fri, 10 Nov 2023 16:00:00 +0000 https://www.popsci.com/?p=588439
a person's hands typing on a laptop keyboard
Chatbots might seem like a new trend, but they're sort of based on an old concept. DepositPhotos

Although they haven’t been taught the rules of grammar, they often make grammatical sense.

The post How do chatbots work? appeared first on Popular Science.

]]>
a person's hands typing on a laptop keyboard
Chatbots might seem like a new trend, but they're sort of based on an old concept. DepositPhotos

If you remember chatting with SmarterChild on AOL Instant Messenger back in the day, you know how far ChatGPT and Google Bard have come. But how do these so-called chatbots work—and what's the best way to use them to our advantage?

Chatbots are AI programs that respond to questions in a way that makes them seem like real people. That sounds pretty sophisticated, right? And these bots are. But when it comes down to it, they’re doing one thing really well: predicting one word after another.

Chatbots like ChatGPT and Google Bard are based on what are called large language models. That's a kind of algorithm that gets trained on what are basically fill-in-the-blank, Mad Libs-style questions. The result is a program that can take your prompt and spit out an answer in phrases or sentences.
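Real large language models are trained on billions of examples and use far more sophisticated math, but the core fill-in-the-blank idea can be sketched in a toy form. The example below is emphatically not how ChatGPT or Bard is built; it just counts which word tends to follow which in a tiny text, then "completes" a prompt one word at a time.

```python
# Toy illustration of next-word prediction: count which word follows which
# in a tiny corpus, then repeatedly pick the most common follower. Real
# LLMs work on a vastly larger scale, but the core loop is the same idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def complete(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # most likely next word
    return " ".join(words)

print(complete("the cat"))  # prints something like "the cat sat on the cat sat"
```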

But it's important to remember that while they might appear pretty human-like, they are most definitely not—they're only imitating us. They don't have common sense, and they weren't taught the rules of grammar like you or I were in school. They are also only as good as the material they were schooled on—and they can produce a lot of nonsense.

To hear all about the nuts and bolts of how chatbots work, and the potential danger (legal or otherwise) in using them, you can subscribe to PopSci+ and read the full story by Charlotte Hu, in addition to listening to our new episode of Ask Us Anything.

The post How do chatbots work? appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How to use Bard AI for Gmail, YouTube, Google Flights, and more https://www.popsci.com/diy/bard-extension-guide/ Thu, 09 Nov 2023 13:30:11 +0000 https://www.popsci.com/?p=588290
A person holding a phone in a very dark room, with Google Bard on the screen, and the Google Bard logo illuminated in the background.
Bard can be inside your Google apps, if you let it. Mojahid Mottakin / Unsplash

You can use Google's AI assistant in other Google apps, as long as you're cool with it reading your email.

The post How to use Bard AI for Gmail, YouTube, Google Flights, and more appeared first on Popular Science.

]]>
A person holding a phone in a very dark room, with Google Bard on the screen, and the Google Bard logo illuminated in the background.
Bard can be inside your Google apps, if you let it. Mojahid Mottakin / Unsplash

There’s a new feature in the Google Bard AI assistant: connections to your other Google apps, primarily Gmail and Google Drive, called Bard Extensions. It means you can use Bard to look up and analyze the information you have stored in documents and emails, as well as data aggregated from the web at large.

Bard can access other Google services besides Gmail and Google Drive as well, including YouTube, Google Maps, and Google Flights. However, this access doesn’t extend to personal data yet, so you can look up driving directions to a place on Google Maps, but not get routes to the last five restaurants you went to.

If that sets alarm bells ringing in your head, Google promises that your data is “not seen by human reviewers, used by Bard to show you ads, or used to train the Bard model,” and you can disconnect the app connections at any time. In terms of exactly what is shared between Bard and other apps, Google isn’t specific.

[Related: The best apps and gadgets for a Google-free life]

Should you decide you’re happy with that trade-off, you’ll be able to do much more with Bard, from looking up flight times to hunting down emails in your Gmail archive.

How to set up Bard Extensions, and what Google can learn about you

Google Bard extensions in a Chrome browser window.
You can enable Bard Extensions one by one. Screenshot: Google

If you decide you want to use Bard Extensions, open up Google Bard on the web, then click the new extensions icon in the top right corner (it looks like a jigsaw piece). The next screen shows all the currently available extensions—turn the toggle switches on for the ones you want to give Bard access to. To revoke access, turn the switches off.

Some prompts (asking about today’s weather, for instance) require access to your location. This is actually handled as a general Google search permission in your browser, and you can grant or revoke access in your privacy settings. In Chrome, though, you can open google.com, then click the site information button on the left end of the address bar (it looks like two small sliders—or a padlock if you haven’t updated your browser to Chrome 119).

From the popup dialog that appears, you can turn the Location toggle switch off. This means Google searches (for restaurants and bars, for example) won’t know where you are searching from, and nor will Bard.

Google Bard settings, showing how to delete your Bard history.
You can have Google automatically delete your Bard history, just like you can with other Google apps. Screenshot: Google

As with other Google products, you can see activity that’s been logged with Bard. To do so, head to your Bard activity page in a web browser to review and delete specific prompts that you’ve sent to the AI. Click Choose an auto-delete option, and you can have this data automatically wiped after three, 18, or 36 months. You can also stop Bard from logging data in the first place by clicking Turn off.

There’s more information on the Bard Privacy Help Hub. Note that by using Bard at all, you’re accepting that human reviewers may see and check some of your prompts, so Google can improve the response accuracy of its AI. The company specifically warns against putting confidential information into Bard, and any reviewed prompts won’t have your Google Account details (like your name) attached to them.

Prompts reviewed by humans can be retained by Google for up to three years, even if you delete your Bard activity. Even with Bard activity-logging turned off, conversations are kept in Bard’s memory banks for 72 hours, in case you want to add related questions.

Tips for using Bard Extensions

A browser window displaying a Google Bard prompt related to YouTube, and the AI assistant's response.
In some cases, Bard Extensions aren’t too different from regular searches. Screenshot: Google

Extensions are naturally integrated into Bard, and in a lot of cases, the AI bot will know which extension to look up. Ask about accommodation prices for the weekend, for example, and it’ll use Google Hotels. Whenever Bard calls upon an extension, you’ll see the extension’s name appear while the AI is working out the answer.

Sometimes, you need to be pretty specific. A prompt such as “what plans have I made over email with <contact name> about <event>?” will invoke a Gmail search, but only if you include the “over email” bit. At the end of the response, you’ll see the emails (or documents) that Bard has used to give you an answer. You can also ask Bard to use specific extensions by tagging them in your prompt with the @ symbol—so @Gmail or @Google Maps.

[Related: All the products Google has sent to the graveyard]

Bard can look up information from emails or documents, and can read inside PDFs in your Google Drive. For example, tell it to summarize the contents of the most recent PDF in your Google Drive, or the contents of recent emails from your kid’s school, and it will do just that. Again, the more specific you can be, the better.

A browser window showing a Google Bard prompt related to Gmail, and the AI bot's response.
Bard can analyze the tone of emails and documents. Screenshot: Google

In terms of YouTube, Google Maps, Google Flights, and Google Hotels, Bard works more like a regular search engine—though you can combine searches with other prompts. If you’re preparing a wedding speech, for example, you can ask Bard for an outline as well as some YouTube videos that will give you inspiration. If you’re heading off on a road trip, you could combine a prompt about ideas on what to pack with Google Maps driving directions.

We’ve found that some Bard Extensions answers are a bit hit or miss—but so are AI chatbots in general. At certain times, Bard will analyze the wrong emails or documents, or will miss information it should’ve found, so it’s not (yet) something you can fully rely on. In some situations, you’ll get better answers if you switch over to Google Drive or YouTube and run a normal search from there instead—file searches based on dates, for instance, or video searches limited to a certain channel.

At other times, Bard is surprisingly good at picking out information from stacks of messages or documents. You can ask Bard “what’s the most cheerful email I got yesterday?” for example, which is something you can’t do with a standard, or even an advanced Gmail search. It’s well worth trying Bard Extensions out, at least briefly, to see if they prove useful for the kinds of information retrieval you need.

The post How to use Bard AI for Gmail, YouTube, Google Flights, and more appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Waze will start warning drivers about the most dangerous roads https://www.popsci.com/technology/waze-crash-prone-road-ai/ Tue, 07 Nov 2023 20:00:00 +0000 https://www.popsci.com/?p=587343
waze app on phone on car dashboard
Sean D / Unsplash

A new feature uses AI to combine historical crash data with current route information.

Today, Waze announced a new feature called crash history alerts that will warn drivers about upcoming accident black spots on their route. If you are approaching a crash-prone section of road, like a series of tight turns or a difficult merge, the Google-owned navigation app will show a warning so you can take extra care.

Waze has long allowed users to report live traffic information, like speed checks and crashes, as they use the app to navigate. This crowdsourced information is used to warn other users about upcoming hazards, and now will apparently also be used to identify crash-prone roads. According to Google, an AI will use these community reports combined with historical crash data and key route information, like “typical traffic levels, whether it’s a highway or local road, elevation, and more,” to assess the danger of your upcoming route. If it includes a dangerous section, it will tell you just before you reach it. 

To minimize distractions, Waze says it will limit the number of alerts it shows to drivers. Presumably, if you are navigating a snowy mountain pass, it won’t send you an alert as you approach each and every corner. The feature seems designed to let you know when you’re approaching an unexpectedly dangerous bit of road, rather than blasting you with notifications every time you take a rural road in winter.

[Related: Apple announces car crash detection and satellite SOS]

Similarly, Waze won’t show alerts on roads you travel frequently. The app apparently trusts that you know the hazardous sections of your commute already. 
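
Waze hasn’t published its model, but the basic idea of combining historical crash counts with route features into a single risk score, and only alerting above a threshold on unfamiliar roads, can be sketched in a few lines of Python. The features, weights, and threshold below are all hypothetical:

```python
# Hypothetical crash-risk scorer: not Waze's actual model.
from dataclasses import dataclass

@dataclass
class Segment:
    crash_reports_per_year: int   # crowdsourced + historical reports
    is_highway: bool
    elevation_change_m: float
    typical_traffic_level: float  # 0 (empty) .. 1 (jammed)

def risk_score(seg: Segment) -> float:
    """Combine segment features into a single relative risk number."""
    score = 0.1 * seg.crash_reports_per_year
    score += 0.5 * seg.typical_traffic_level
    score += 0.01 * abs(seg.elevation_change_m)
    score -= 0.3 if seg.is_highway else 0.0  # limited-access roads score lower in this sketch
    return score

ALERT_THRESHOLD = 1.5  # arbitrary cutoff for this sketch

def should_alert(seg: Segment, driver_knows_road: bool) -> bool:
    # Mirror the article: suppress alerts on roads the driver travels frequently.
    return (not driver_knows_road) and risk_score(seg) >= ALERT_THRESHOLD

tricky_merge = Segment(crash_reports_per_year=14, is_highway=False,
                       elevation_change_m=25, typical_traffic_level=0.8)
print(should_alert(tricky_merge, driver_knows_road=False))  # True
```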

Google claims this is all part of Waze’s aim of “helping every driver make smart decisions on the road,” and it is right that driving is one of the riskiest things many people do on a daily basis. According to a CDC report that Google cites in its announcement, road traffic crashes are a leading cause of death in the US for people aged 1 to 54, and almost 3,700 people are killed worldwide every day in crashes “involving cars, buses, motorcycles, bicycles, trucks, or pedestrians.” Road design and driving culture are both part of the problem.

[Related: Pete Buttigieg on how to improve the deadly track record of US drivers]

Waze isn’t the first company to think up such an idea. Many engineers have developed similar routing algorithms that suggest the safest drives possible based on past driving and accident data. 

While one small pop-up obviously can’t save the 1.35 million people who die on the world’s roads each year, it could certainly help some of them. Google is running other AI-related traffic projects outside of Waze, too. For example, one Google Maps project aims to use traffic flow data to decide which intersections to route drivers through, ideally reducing gridlock at the busiest ones. If you’re driving somewhere unfamiliar, maybe give Waze a try. An extra warning to take care as you approach a tricky stretch of road might be just what you need to stay safe.

The post Waze will start warning drivers about the most dangerous roads appeared first on Popular Science.

Listen to ‘Now and Then’ by The Beatles, a ‘new’ song recorded using AI https://www.popsci.com/technology/beatles-now-and-then-ai-listen/ Thu, 02 Nov 2023 15:45:00 +0000 https://www.popsci.com/?p=585589
The Beatles, English music group
Attempts to record 'Now and Then' date back to the 1990s. Roger Viollet Collection/Getty Images

John Lennon's voice received a boost from a neural network program named MAL to help record the lost track, released today.

The Beatles have released their first new song in decades, produced in part using artificial intelligence. Based on a demo cassette tape recorded by John Lennon at his New York City home in 1978, “Now and Then” will be the last track ever to feature original contributions from all four members of the band. Check it out below:

AI photo

The Beatles dominated pop culture throughout the ’60s before parting ways in 1970 following their final full-length album, Let It Be. After John Lennon’s murder in 1980, two additional lost songs, “Real Love” and “Free as a Bird,” were recorded and released in the mid-1990s using old demos of Lennon’s vocals. Paul McCartney and Ringo Starr are the two surviving members after George Harrison’s death from lung cancer in 2001.

Beatles fans have anticipated the release of the seminal band’s “final” song with a mix of excitement and caution ever since Sir Paul McCartney revealed the news back in June. Unlike other groups’ “lost” tracks or recording sessions, the new single featured John Lennon’s vocals “extracted” and enhanced using an AI program. In this case, a neural network designed to isolate individual voices identified Lennon’s voice, then set about “re-synthesizing them in a realistic way that matched trained samples of those instruments or voices in isolation,” explained Ars Technica earlier this year.

[Related: New Beatles song to bring John Lennon’s voice back, with a little help from AI.]

By combining the isolated tape audio with existing vocal samples, the AI ostensibly fills in weaker segments of the recording with synthesized approximations of the voice. “It’s not quite Lennon, but it’s about as close as you can get,” PopSci explained at the time.
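
The software used on Lennon’s demo is proprietary, but most neural source-separation systems share one core operation: predict a time-frequency mask and apply it to the mixture’s spectrogram. The sketch below shows only that masking step, with a placeholder standing in for the trained network (`predict_vocal_mask` is hypothetical):

```python
# Spectrogram masking, the core step in neural vocal isolation.
# `predict_vocal_mask` stands in for a trained separation network (hypothetical).
import numpy as np
from scipy.signal import stft, istft

def predict_vocal_mask(magnitude: np.ndarray) -> np.ndarray:
    """Placeholder for a neural net that outputs a 0..1 mask per time-frequency bin."""
    return np.clip(magnitude / (magnitude.max() + 1e-8), 0.0, 1.0)

def isolate_vocals(mixture: np.ndarray, sample_rate: int = 44100) -> np.ndarray:
    # Short-time Fourier transform of the mono mixture.
    _, _, spec = stft(mixture, fs=sample_rate, nperseg=2048)
    mask = predict_vocal_mask(np.abs(spec))
    # Keep the mixture's phase; scale magnitudes by the predicted mask.
    vocals_spec = mask * spec
    _, vocals = istft(vocals_spec, fs=sample_rate, nperseg=2048)
    return vocals

# Toy usage: a second of noise standing in for a demo-tape transfer.
demo = np.random.randn(44100)
print(isolate_vocals(demo).shape)
```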

The Beatles’ surviving members, McCartney and Ringo Starr, first learned of the AI software during the production of Peter Jackson’s 2021 documentary project, The Beatles: Get Back. Dubbed MAL, the program performed similar vocal isolation on whispered or otherwise muddied conversations between band members, producers, and friends across the hours of footage captured during Get Back’s recording sessions.

AI photo

[Related: Scientists made a Pink Floyd cover from brain scans]

Attempts to record “Now and Then” date as far back as the 1990s. In a past interview, McCartney explained that George Harrison refused to contribute to the project at the time, due to Lennon’s vocal recordings sounding like, well, “fucking rubbish.” His words.

And listening to the track, it’s easy to understand Harrison’s point of view. While compositionally fine, “Now and Then” feels more like a B-side than a beloved new single from The Beatles. Even with AI’s help, Lennon’s “vocals” contrast starkly with the modern instrumentation, and occasionally still sound warbly and low-quality. Still, if nothing else, it is certainly an interesting use of rapidly proliferating AI technology, and certainly a sign of divisive creative projects to come.

The post Listen to ‘Now and Then’ by The Beatles, a ‘new’ song recorded using AI appeared first on Popular Science.

Here’s what to know about President Biden’s sweeping AI executive order https://www.popsci.com/technology/white-house-ai-executive-order/ Mon, 30 Oct 2023 16:27:14 +0000 https://www.popsci.com/?p=584409
Photo of President Biden in White House Press Room
The executive order seems to focus on both regulating and investing in AI technology. Anna Moneymaker/Getty Images

'AI policy is like running a decathlon, where we don’t get to pick and choose which events we do,' says White House Advisor for AI, Ben Buchanan.

Today, President Joe Biden signed a new, sweeping executive order outlining plans for governmental oversight and corporate regulation of artificial intelligence. Released on October 30, the order is aimed at addressing widespread issues such as privacy concerns, bias, and misinformation enabled by a multibillion-dollar industry increasingly entrenching itself within modern society. Though the solutions so far remain largely conceptual, the White House’s Executive Order Fact Sheet makes clear that US regulatory bodies intend both to regulate and to benefit from the wide range of emerging and re-branded “artificial intelligence” technologies.

[Related: Zoom could be using your ‘content’ to train its AI.]

In particular, the administration’s executive order seeks to establish new standards for AI safety and security. Harnessing the Defense Production Act, the order instructs companies to make their safety test results and other critical information available to US regulators whenever designing AI that could pose “serious risk” to national economic, public, and military security, though it is not immediately clear who would be assessing such risks and on what scale. However, safety standards soon to be set by the National Institute of Standards and Technology must be met before public release of any such AI programs.

Drawing the map along the way 

“I think in many respects AI policy is like running a decathlon, where we don’t get to pick and choose which events we do,” Ben Buchanan, the White House Senior Advisor for AI, told PopSci via phone call. “We have to do safety and security, we have to do civil rights and equity, we have to do worker protections, consumer protections, the international dimension, government use of AI, [while] making sure we have a competitive ecosystem here.”

“Probably some of [the order’s] most significant actions are [setting] standards for AI safety, security, and trust. And then require that companies notify us of large-scale AI development, and that they share the tests of those systems in accordance with those standards,” says Buchanan. “Before it goes out to the public, it needs to be safe, secure, and trustworthy.”

Too little, too late?

Longtime critics of the still largely unregulated AI tech industry, however, claim the Biden administration’s executive order is too little, too late.

“A lot of the AI tools on the market are already illegal,” Albert Fox Cahn, executive director for the tech privacy advocacy nonprofit, Surveillance Technology Oversight Project, said in a press release. Cahn contended the “worst forms of AI,” such as facial recognition, deserve bans instead of regulation.

“[M]any of these proposals are simply regulatory theater, allowing abusive AI to stay on the market,” he continued, adding that, “the White House is continuing the mistake of over-relying on AI auditing techniques that can be easily gamed by companies and agencies.”

Buchanan tells PopSci the White House already has a “good dialogue” with companies such as OpenAI, Meta, and Google, although they are “certainly expecting” them to “hold up their end of the bargain on the voluntary commitments that they made” earlier this year.

A long road ahead

In Monday’s announcement, President Biden also urged Congress to pass bipartisan data privacy legislation “to protect all Americans, especially kids,” from the risks of AI technology. Although some states, including Massachusetts, California, Virginia, and Colorado, have proposed or passed their own legislation, the US currently lacks comprehensive legal safeguards akin to the EU’s General Data Protection Regulation (GDPR). In effect since 2018, the GDPR heavily restricts companies’ access to consumers’ private data and allows regulators to issue large fines if businesses are found to violate the law.

[Related: Your car could be capturing data on your sex life.]

The White House’s newest calls for data privacy legislation, however, “are unlikely to be answered,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, tells PopSci via email. “… [B]oth parties agree that there should be action but can’t agree on what it should look like.”

A federal hiring push is now underway to help staff the numerous announced projects alongside additional funding opportunities, all of which can be found via the new governmental website portal, AI.gov.

The post Here’s what to know about President Biden’s sweeping AI executive order appeared first on Popular Science.

Watch what happens when AI teaches a robot ‘hand’ to twirl a pen https://www.popsci.com/technology/nvidia-eureka-ai-training/ Fri, 20 Oct 2023 19:10:00 +0000 https://www.popsci.com/?p=581803
Animation of multiple robot hands twirling pens in computer simulation
You don't even need humans to help train some AI programs now. NVIDIA Research

The results are better than what most humans can manage.

Researchers are training robots to perform an ever-growing number of tasks through trial-and-error reinforcement learning, which is often laborious and time-consuming. To help out, humans are now enlisting large language model AI to speed up the training process. In a recent experiment, this resulted in some incredibly dexterous albeit simulated robots.

A team at NVIDIA Research directed an AI protocol powered by OpenAI’s GPT-4 to teach a simulation of a robotic hand nearly 30 complex tasks, including tossing a ball, pushing blocks, pressing switches, and some seriously impressive pen-twirling abilities.

AI photo

[Related: These AI-powered robot arms are delicate enough to pick up Pringles chips.]

NVIDIA’s new Eureka “AI agent” uses GPT-4 by asking the large language model (LLM) to write its own reward-based reinforcement learning code. According to the company, Eureka doesn’t need intricate prompting or even pre-written templates; instead, it simply begins honing a program, then incorporates any subsequent external human feedback.

In the company’s announcement, Linxi “Jim” Fan, a senior research scientist at NVIDIA, described Eureka as a “unique combination” of LLMs and GPU-accelerated simulation programming. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists,” Fan added.

Judging from NVIDIA’s demonstration video, a Eureka-trained robotic hand can pull off pen-spinning tricks to rival, if not beat, extremely dexterous humans.

After testing its training protocol within an advanced simulation program, Eureka then analyzes its collected data and directs the LLM to further improve upon its design. The end result is a virtually self-iterative AI protocol capable of successfully encoding a variety of robotic hand designs to manipulate scissors, twirl pens, and open cabinets within a physics-accurate simulated environment.
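
NVIDIA’s implementation is far more sophisticated, but the outer loop (ask an LLM for candidate reward code, score each candidate in simulation, keep the best, and feed the results back) can be sketched compactly. In the toy version below, both the “LLM” and the “simulator” are stand-in functions, so treat it as the shape of the algorithm rather than Eureka itself:

```python
# Sketch of a Eureka-style loop: an LLM writes reward functions, simulation scores them.
# Both the "LLM" and the "simulator" below are toy stand-ins.
import random

def propose_reward_sources(feedback: str, n: int = 3) -> list[str]:
    """Stand-in for a GPT-4 call that would return reward-function source code."""
    templates = [
        "def reward(dist, upright): return -dist",
        "def reward(dist, upright): return -dist + 0.5 * upright",
        "def reward(dist, upright): return -2 * dist + upright",
    ]
    return random.sample(templates, n)

def rollout_score(reward_fn) -> float:
    """Stand-in for training a policy in simulation and reporting task success."""
    # Pretend success correlates with rewarding both covered distance and staying upright.
    return sum(reward_fn(dist=d, upright=1.0) for d in (0.9, 0.5, 0.1))

best_src, best_score, feedback = None, float("-inf"), "initial task description"
for generation in range(5):
    for src in propose_reward_sources(feedback):
        namespace: dict = {}
        exec(src, namespace)                      # compile the proposed reward function
        score = rollout_score(namespace["reward"])
        if score > best_score:                    # keep only improvements
            best_src, best_score = src, score
    feedback = f"best candidate so far scored {best_score:.2f}"  # fed to the next 'LLM' call

print(best_src, best_score)
```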

Eureka’s alternatives to human-written trial-and-error learning programs aren’t just effective; in most cases, they’re actually better than those authored by humans. According to the team’s open-source research paper, Eureka-designed reward programs outperformed human-written code in more than 80 percent of the tasks, amounting to an average performance improvement of more than 50 percent in the robotic simulations.

[Related: How researchers trained a budget robot dog to do tricks.]

“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” Anima Anandkumar, NVIDIA’s senior director of AI research and one of the Eureka paper’s co-authors, said in the company’s announcement. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”

The post Watch what happens when AI teaches a robot ‘hand’ to twirl a pen appeared first on Popular Science.

Finally, a smart home for chickens https://www.popsci.com/technology/smart-home-for-chickens-coop/ Thu, 19 Oct 2023 22:00:00 +0000 https://www.popsci.com/?p=581394
rendering of coop structure in grass
Coop

This startup uses an "AI guardian" named Albert Eggstein to count eggs and keep an eye on nearby predators.

For most Americans, eggs matter a lot. The average American is estimated to eat almost 300 eggs a year, whether on their own or in egg-based products like baked goods. We truly are living in what some researchers have called the Age of the Chicken: at least geologically, the humble chicken will be one of our civilization’s most notable leftovers.

Food systems in the US are fairly centralized. That means small disruptions can ratchet up to become large disturbances. Just take the exorbitant egg prices from earlier this year as one example. 

To push back against supply chain issues, some households have taken the idea of farm to table a step further. Demand for backyard chickens rose both during the pandemic, and at the start of the year in response to inflation. But raising a flock can come with many unseen challenges and hassles. A new startup, Coop, is hatching at exactly the right time. 

[Related: 6 things to know before deciding to raise backyard chickens]

Coop was founded by AJ Forsythe and Jordan Barnes in 2021, and it packages all of the software essentials of a smart home into a backyard chicken coop. 

Agriculture photo
Coop

Barnes says she can’t resist an opportunity to use a chicken pun; they’re peppered into the copy on the company’s website, as well as the names of its products, and even baked into her title at the company (CMO, she notes, stands for chief marketing officer, but also chicken marketing officer). She and co-founder Forsythe invited Popular Science to a rooftop patio on the Upper East Side to see a fully set-up Coop and have a “chick-chat” about the company’s tech.

In addition to spending time getting to know the chickens, they’ve put more than 10,000 hours into the design of the Coop. Fred Bould, who had previously worked on Google’s Nest products, helped them conceptualize the Coop of the future.

The company’s headquarters in Austin has around 30 chickens, and both Barnes and Forsythe keep chickens at home, too. In the time that they’ve spent with the birds, they’ve learned a lot about them, and have both become “chicken people.” 

An average chicken will lay about five eggs a week, depending on weather conditions and its ranking in the pecking order. Birds at the top of the pecking order get more food, so they tend to lay more eggs. “They won’t break rank on anything. Pecking order is set,” says Barnes.

Besides laying eggs, chickens can be used for composting dinner scraps. “Our chickens eat like queens. They’re having sushi, Thai food, gourmet pizza,” Barnes adds.  

Agriculture photo
Coop

For the first generation smart Coop, which comes with a chicken house, a wire fence, lights that can be controlled remotely, and a set of cameras, all a potential owner needs to get things running on the ground are Wifi and about 100 square feet of grass. “Chickens tend to stick together. You want them to roam around and graze a little bit, but they don’t need sprawling plains to have amazing lives,” says Barnes. “We put a lot of thought into the hardware design and the ethos of the design. But it’s all infused with a very high level of chicken knowledge—the circumference of the roosting bars, the height of everything, the ventilation, how air flows through it.” 

[Related: Artificial intelligence is helping scientists decode animal languages]

They spent four weeks designing a compostable, custom-fit poop tray because they learned through market research that cleaning the coop was one of the big barriers for people who wanted chickens but decided against getting them. And right before the Coop was supposed to go into production a few months ago, they halted it because they realized that the lower level bars on the wire cage were wide enough for a desperate raccoon to sneak their tiny paws through. They redesigned the bars with a much closer spacing. 

The goal of the company is to create a tech ecosystem that makes raising chickens easy for beginners and the “chicken-curious.” Currently, 56 percent of its customers have never raised chickens before, the founders say.

Agriculture photo
Coop

Key to Coop’s offering is its brain: AI software named Albert Eggstein that can detect both the chickens and any potential predators that might be lurking around. “This is what makes the company valuable,” says Barnes. Not only can the camera pick up that there are four chickens in the frame, but it can tell the chickens apart from one another. It uses these learnings to provide insights through an accompanying app, much like Amazon’s Ring does.

[Related: Do all geese look the same to you? Not to this facial recognition software.]

As seasoned chicken owners will tell newbies, being aware of predators is the name of the game. And Coop’s software can categorize nearby predators, from muskrats to hawks to dogs, with 98 percent accuracy.

“We developed a ton of software on the cameras, we’re doing a bunch of computer vision work and machine learning on remote health monitoring and predator detection,” Forsythe says. “We can say, hey, raccoons detected outside, the automatic door is closed, all four chickens are safe.”

Agriculture photo
Coop

The system runs off two cameras, one stationed outside in the run and one inside the roost. In the morning, the door to the roost is raised automatically 20 minutes after sunrise, and at night, a feature called nest mode can tell owners if all their chickens have come home to roost. The computer vision software is trained on a database of about 7 million images. There is also sound-detection software, which can infer chicken moods and behaviors from the pitch and pattern of their clucks, chirps, and alerts.

[Related: This startup wants to farm shrimp in computer-controlled cargo containers]

The software can also condense the activity into weekly summary sheets, sending a note telling owners that a raccoon has been a frequent visitor for the past three nights, for example, and it can alert them to social events, like when eggs are ready to be collected.

A feature the team created, called “Cluck talk,” can measure the decibels of chicken sounds to make a general assessment of whether the birds are hungry, happy, broody (which is when they just want to sit on their eggs), or in danger.
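
Coop hasn’t detailed how Cluck talk works, but the loudness half of such a feature is plain signal processing: convert a chunk of microphone samples to decibels and compare the level against thresholds. A minimal sketch, with entirely made-up thresholds (real mood inference would also need pitch and timing features):

```python
# Minimal loudness check of the kind a "Cluck talk"-style feature might start from.
# Thresholds are invented for illustration; real mood inference would also use
# pitch and temporal patterns, not loudness alone.
import numpy as np

def rms_dbfs(samples: np.ndarray) -> float:
    """Root-mean-square level of audio samples (range -1..1), in dB full scale."""
    rms = np.sqrt(np.mean(np.square(samples))) + 1e-12
    return 20 * np.log10(rms)

def rough_flock_state(samples: np.ndarray) -> str:
    level = rms_dbfs(samples)
    if level > -10:
        return "alarm: loud, sustained calls"
    if level > -25:
        return "active: normal daytime chatter"
    return "quiet: roosting or broody"

clip = 0.05 * np.random.randn(48000)  # one second of faint coop noise at 48 kHz
print(rough_flock_state(clip))
```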

Agriculture photo
Coop

There are a lot of chicken-specific behaviors that they can build models around. “Probably in about 6 to 12 months we’re going to roll out remote health monitoring. So it’ll say, chicken Henrietta hasn’t drank water in the last six hours and is a little lethargic,” Forsythe explains. That will be part of a plan to develop a telehealth offering that could connect owners with vets they can message and share videos with.

The company started full-scale production of its first-generation Coops last week. It’s manufacturing the structures in Ohio through a specialized process called rotomolding, which is similar to how Yeti coolers are made. Fifty beta customers have signed up to get Coops at an early-bird price of $1,995. Like Peloton and Nest, customers will also have to pay a monthly subscription fee of $19.95 for app features like the AI tools. In addition to the Coops, the company also offers services like chicken-sitting (aptly named chicken Tenders).

For the second generation Coops, Forsythe and Barnes have been toying with new ideas. They’re definitely considering making a bigger version (the one right now can hold four to six chickens), or maybe one that comes with a water gun for deterring looming hawks. The chickens are sold separately.

The post Finally, a smart home for chickens appeared first on Popular Science.

How this programmer and poet thinks we should tackle racially biased AI https://www.popsci.com/technology/racial-bias-artificial-intelligence-buolamwini/ Tue, 17 Oct 2023 13:00:00 +0000 https://www.popsci.com/?p=568750
row of people undergoing body scan show by having grids projected onto them
AI-generated illustration by Dan Saelinger

The research and poetry of Joy Buolamwini shines a light on a major problem in artificial intelligence.

THE FIRST TIME Joy Buolamwini ran into the problem of racial bias in facial recognition technology, she was an undergraduate at the Georgia Institute of Technology trying to teach a robot to play peekaboo. The artificial intelligence system couldn’t recognize Buolamwini’s dark-skinned face, so she borrowed her white roommate to complete the project. She didn’t stress too much about it—after all, in the early 2010s, AI was a fast-developing field, and that type of problem was sure to be fixed soon.

It wasn’t. As a graduate student at the Massachusetts Institute of Technology in 2015, Buolamwini encountered a similar issue. Facial recognition technology once again didn’t detect her features—until she started coding while wearing a white mask. AI, as impressive as it can be, has a long way to go at one simple task: It can fail, disastrously, to read Black faces and bodies. Addressing this, Buolamwini says, will require reimagining how we define successful software, train our algorithms, and decide for whom specific AI programs should be designed.

While studying at MIT, the programmer confirmed that computers’ bias wasn’t limited to the inability to detect darker faces. Through her Gender Shades project, which evaluated AI products’ ability to classify gender, she found that software that designated a person’s gender as male or female based on a photo was much worse at correctly gendering women and darker-skinned people. For example, although an AI developed by IBM correctly identified the gender of 88 percent of images overall, it classified only 67 percent of dark-skinned women as female compared to correctly noting the gender of nearly 100 percent of light-skinned men. 
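
The methodological point of Gender Shades is easy to demonstrate: report accuracy per subgroup instead of a single aggregate number. Here is a small sketch of that disaggregated evaluation, using a handful of fabricated predictions rather than the study’s data:

```python
# Disaggregated accuracy: the evaluation approach behind Gender Shades.
# The records below are fabricated for illustration, not the study's data.
from collections import defaultdict

records = [
    # (true_gender, skin_tone, predicted_gender)
    ("female", "darker", "male"),
    ("female", "darker", "female"),
    ("female", "lighter", "female"),
    ("male", "darker", "male"),
    ("male", "lighter", "male"),
    ("male", "lighter", "male"),
]

totals, correct = defaultdict(int), defaultdict(int)
for true_label, tone, predicted in records:
    group = (true_label, tone)
    totals[group] += 1
    correct[group] += int(predicted == true_label)

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.0%}")
for group in sorted(totals):
    print(f"{group}: {correct[group] / totals[group]:.0%}")
# A healthy-looking overall number can hide a much weaker subgroup score.
```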

“Our metrics of success themselves are skewed,” Buolamwini says. IBM’s Watson Visual Recognition AI seemed useful for facial recognition, but when skin tone and gender were considered, it quickly became apparent that the “supercomputer” was failing some demographics. The project leaders responded within a day of receiving the Gender Shades study results in 2018 and released a statement detailing how IBM had been working to improve its product, including by updating training data and recognition capabilities and evaluating its newer software for bias. The company improved Watson’s accuracy in identifying dark-skinned women, shrinking the error rate to about 4 percent. 

Prejudiced AI-powered identification software has major implications. At least four innocent Black men and one woman have been arrested in the US in recent years after facial recognition technology incorrectly identified them as criminals, mistaking them for other Black people. Housing units that use similar automated systems to let tenants into buildings can leave dark-skinned and female residents stranded outdoors. That’s why Buolamwini, who is also founder and artist-in-chief of the Algorithmic Justice League, which aims to raise public awareness about the impacts of AI and support advocates who prevent and counteract its harms, merges her ethics work with art in a way that humanizes very technical problems. She has mastered both code and words. “Poetry is a way of bringing in more people into these urgent and necessary conversations,” says Buolamwini, who is the author of the book Unmasking AI

portrait of Dr. Joy Buolamwini
Programmer and poet Joy Buolamwini wants us to reimagine how we train software and measure its success. Naima Green

Perhaps Buolamwini’s most famous work is her poem “AI, Ain’t I a Woman?” In an accompanying video, she demonstrates Watson and other AIs misidentifying famous Black women such as Ida B. Wells, Oprah Winfrey, and Michelle Obama as men. “Can machines ever see my queens as I view them?” she asks. “Can machines ever see our grandmothers as we knew them?” 

This type of bias has long been recognized as a problem in the burgeoning field of AI. But even if developers knew that their product wasn’t good at recognizing dark-skinned faces, they didn’t necessarily address the problem. They realized fixing it would take great investment—without much institutional support, Buolamwini says. “It turned out more often than not to be a question of priority,” especially with for-profit companies focused on mass appeal. 

Hiring more people of diverse races and genders to work in tech can lend perspective, but it can’t solve the problem on its own, Buolamwini adds. Much of the bias derives from data sets required to train computers, which might not include enough information, such as a large pool of images of dark-skinned women. Diverse programmers alone can’t build an unbiased product using a biased data set.

In fact, it’s impossible to fully rid AI of bias because all humans have biases, Buolamwini says, and their beliefs make their way into code. She wants AI developers to be aware of those mindsets and strive to make systems that do not propagate discrimination.

This involves being deliberate about which computer programs to use, and recognizing that specific ones may be needed for different services in different populations. “We have to move away from a universalist approach of building one system to rule them all,” Buolamwini explains. She gave the example of a healthcare AI: A model trained mainly on data from male patients could miss signs of disease in female patients. But that doesn’t mean the model is useless, as it could still benefit healthcare for one sex. Instead, developers should also consider building a female-specific model.

But even if it were possible to create unbiased algorithms, they could still perpetuate harm. For example, a theoretically flawless facial recognition AI could fuel state surveillance if it were rolled out across the US. (The Transportation Security Administration plans to try voluntary facial recognition checks in place of manual screening in more than 400 airports in the next several years. The new process might become mandatory in the more distant future.) “Accurate systems can be abused,” Buolamwini says. “Sometimes the solution is to not build a tool.”

The post How this programmer and poet thinks we should tackle racially biased AI appeared first on Popular Science.

AI revealed the colorful first word of an ancient scroll torched by Mount Vesuvius https://www.popsci.com/technology/ai-scroll-scan-vesuvius/ Fri, 13 Oct 2023 18:10:00 +0000 https://www.popsci.com/?p=579577
Charred scroll from Herculaneum undergoing laser scan
A scroll similar to this one revealed its long-lost first word: 'Purple.'. University of Kentucky

The carbonized scrolls are too delicate for human hands, but AI analysis found 'purple' amid the charred papyrus.

The eruption of Mount Vesuvius in 79 CE is one of the most dramatic natural disasters in recorded history, yet so many of the actual records from that moment in time are inaccessible. Papyrus scrolls located in nearby Pompeii and Herculaneum, for example, were almost instantly scorched by the volcanic blast, then promptly buried under pumice and ash. In 1752, excavators uncovered around 800 such carbonized scrolls, but researchers have since largely been unable to read any of them due to their fragile conditions.

On October 12, however, organizers behind the Vesuvius Challenge—an ongoing machine learning project to decode the physically inaccessible library—offered a major announcement: an AI program uncovered the first word in one of the relics after analyzing and identifying its incredibly tiny residual ink elements. That word? Πορφύραc, or porphyras… or “purple,” for those who can’t speak Greek.

[Related: A fresco discovered in Pompeii looks like ancient pizza—but it’s likely focaccia.]

Identifying the word for an everyday color may not sound groundbreaking, but the discovery of “purple” already has experts intrigued. Speaking to The Guardian on Thursday, University of Kentucky computer scientist and Vesuvius Challenge co-founder Brent Seales explained that the particular word isn’t terribly common to find in such documents.

“This word is our first dive into an unopened ancient book, evocative of royalty, wealth, and even mockery,” said Seales. “Pliny the Elder explores ‘purple’ in his ‘natural history’ as a production process for Tyrian purple from shellfish. The Gospel of Mark describes how Jesus was mocked as he was clothed in purple robes before crucifixion. What this particular scroll is discussing is still unknown, but I believe it will soon be revealed. An old, new story that starts for us with ‘purple’ is an incredible place to be.”

The visualization of porphyras is thanks in large part to a 21-year-old computer science student named Luke Farritor, who subsequently won $40,000 as part of the Vesuvius Challenge after identifying an additional 10 letters on the same scroll. Meanwhile, Seales believes that the entire scroll should be recoverable, even though scans indicate certain areas may be missing words due to its nearly 2,000-year interment.

As The New York Times notes, the AI-assisted analysis could also soon be applied to the hundreds of remaining carbonized scrolls. Given that these scrolls appear to have been part of a larger library amassed by Philodemus, an Epicurean philosopher, it stands to reason that a wealth of new information may emerge alongside long-lost titles, such as the poems of Sappho.

“Recovering such a library would transform our knowledge of the ancient world in ways we can hardly imagine,” one papyrus expert told The New York Times. “The impact could be as great as the rediscovery of manuscripts during the Renaissance.”

The post AI revealed the colorful first word of an ancient scroll torched by Mount Vesuvius appeared first on Popular Science.

AI design for a ‘walking’ robot is a squishy purple glob https://www.popsci.com/technology/ai-robot-blob/ Fri, 13 Oct 2023 15:30:00 +0000 https://www.popsci.com/?p=579501
AI-designed multi-legged robots on table
They may not look like much, but they skipped past billions of years' of evolution to get those little legs. Northwestern University

During testing, the creation could walk half its body length per second—roughly half as fast as the average human stride.

Sam Kriegman and his colleagues made headlines a few years back with their “xenobots,” synthetic robots designed by AI and built from biological tissue samples. While experts continue to debate how best to classify such a creation, Kriegman’s team at Northwestern University has been hard at work on a similarly mind-bending project meshing artificial intelligence, evolutionary design, and robotics.

[Related: Meet xenobots, tiny machines made out of living parts.]

As detailed in a new paper published earlier this month in the Proceedings of the National Academy of Sciences, researchers recently tasked an AI model with a seemingly straightforward prompt: Design a robot capable of walking across a flat surface. Although the program delivered original, working examples within literal seconds, the new robots “[look] nothing like any animal that has ever walked the earth,” Kriegman said in Northwestern’s October 3 writeup.

And judging from video footage of the purple multi-“legged” blob-bots, it’s hard to disagree:

Evolution photo

After offering their prompt to the AI program, the researchers simply watched it analyze and iterate upon a total of nine designs. Within just 26 seconds, the artificial intelligence managed to fast-forward past billions of years of natural evolutionary biology to settle on legged movement as the most effective method of mobility. From there, Kriegman’s team imported the final schematics into a 3D printer, which then molded a jiggly, soap-bar-sized block of silicone imbued with pneumatically actuated musculature and three “legs.” Repeatedly pumping air in and out of the musculature caused the robot’s limbs to expand and contract, moving it along. During testing, the robot could walk half its body length per second, roughly half as fast as the average human stride.
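
The team’s actual optimizer works inside a soft-body physics simulator, which is well beyond a quick sketch, but the overall iterate, evaluate, and keep-the-best pattern can be shown with a toy design space. Everything below, from the two design parameters to the pretend simulator, is invented for illustration:

```python
# Toy design-iteration loop: evaluate a candidate walker, tweak it, keep improvements.
# The design space and "simulator" are invented; the real work used soft-body physics.
import random

def simulate_speed(leg_count: int, air_pressure: float) -> float:
    """Stand-in for a physics simulation returning walking speed (body lengths/s)."""
    # Pretend three legs at moderate pressure is the sweet spot.
    return max(0.0, 1.0 - 0.2 * abs(leg_count - 3) - abs(air_pressure - 0.6))

design = {"leg_count": 1, "air_pressure": 0.2}
best_speed = simulate_speed(**design)

for iteration in range(9):  # mirrors the paper's nine design iterations
    candidate = {
        "leg_count": max(1, design["leg_count"] + random.choice([-1, 0, 1])),
        "air_pressure": min(1.0, max(0.0, design["air_pressure"] + random.uniform(-0.1, 0.1))),
    }
    speed = simulate_speed(**candidate)
    if speed > best_speed:       # keep the candidate only if it walks faster
        design, best_speed = candidate, speed

print(design, round(best_speed, 2))
```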

“It’s interesting because we didn’t tell the AI that a robot should have legs,” Kriegman said. “It rediscovered that legs are a good way to move around on land. Legged locomotion is, in fact, the most efficient form of terrestrial movement.”

[Related: Disney’s new bipedal robot could have waddled out of a cartoon.]

If all this weren’t impressive enough, the process, dubbed “instant evolution” by Kriegman and colleagues, all took place on a “lightweight personal computer,” not a massive, energy-intensive supercomputer requiring huge datasets. According to Kriegman, previous AI-generated evolutionary bot designs could take weeks of trial and error using high-powered computing systems.

“If combined with automated fabrication and scaled up to more challenging tasks, this advance promises near-instantaneous design, manufacture, and deployment of unique and useful machines for medical, environmental, vehicular, and space-based tasks,” Kriegman and co-authors wrote in their abstract.

“When people look at this robot, they might see a useless gadget,” Kriegman said. “I see the birth of a brand-new organism.”

The post AI design for a ‘walking’ robot is a squishy purple glob appeared first on Popular Science.

AI could consume as much energy as Argentina annually by 2027 https://www.popsci.com/technology/ai-energy-use-study/ Thu, 12 Oct 2023 17:00:00 +0000 https://www.popsci.com/?p=579119
Computer server stacks in dark room
AI programs like ChatGPT could annually require as much as 134 TWh by 2027. Deposit Photos

A new study adds 'environmental stability' to the list of AI industry concerns.

Artificial intelligence programs’ impressive (albeit often problematic) abilities come at a cost—all that computing power requires, well, power. And as the world races to adopt sustainable energy practices, the rapid rise of AI integration into everyday lives could complicate matters. New expert analysis now offers estimates of just how energy hungry the AI industry could become in the near future, and the numbers are potentially concerning.

According to a commentary published October 10 in Joule, Vrije Universiteit Amsterdam Business and Economics PhD candidate Alex de Vries argues that global AI-related electricity consumption could top 134 TWh annually by 2027. That’s roughly comparable to the annual consumption of nations like Argentina, the Netherlands, and Sweden.

[Related: NASA wants to use AI to study unidentified aerial phenomenon.]

Although de Vries notes that data center electricity usage between 2010 and 2018 (excluding resource-guzzling cryptocurrency mining) increased by only roughly 6 percent, “[t]here is increasing apprehension that the computation resources necessary to develop and maintain AI models and applications could cause a surge in data centers’ contribution to global electricity consumption.” Given countless industries’ embrace of AI over the last year, it’s not hard to imagine such a hypothetical surge becoming reality. For example, if Google, already a major AI adopter, integrated technology akin to ChatGPT into its 9 billion daily searches, the company could burn through 29.2 TWh of power annually, or as much electricity as all of Ireland.

de Vries, who also founded the digital trend watchdog research company Digiconomist, believes such an extreme scenario is somewhat unlikely, mainly due to AI server costs alongside supply chain bottlenecks. But the AI industry’s energy needs will undoubtedly continue to grow as the technologies become more prevalent, and that alone necessitates a careful review of where and when to use such products.

This year, for example, NVIDIA is expected to deliver 100,000 AI servers to customers. Operating at full capacity, the servers’ combined power demand would measure between 650 and 1,020 MW, annually amounting to 5.7-8.9 TWh of electricity consumption. Compared to annual consumption rates of data centers, this is “almost negligible.” 

By 2027, however, NVIDIA could be (and currently is) on track to ship 1.5 million AI servers per year. Estimates using similar electricity consumption rates put their combined demand between 85-134 TWh annually. “At this stage, these servers could represent a significant contribution to worldwide data center electricity consumption,” writes de Vries.
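
Those headline figures are easy to reproduce from the numbers quoted in this article: a constant power draw multiplied by the 8,760 hours in a year. The short script below is a back-of-the-envelope check, not de Vries’ full analysis:

```python
# Back-of-the-envelope check of the consumption figures quoted above.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_twh(total_power_mw: float) -> float:
    """Convert a constant power draw in MW to annual energy in TWh."""
    return total_power_mw * HOURS_PER_YEAR / 1_000_000  # MWh -> TWh

# 100,000 servers drawing a combined 650-1,020 MW (figures cited above).
print(annual_twh(650), annual_twh(1020))             # ~5.7 and ~8.9 TWh

# Scale to 1.5 million servers per year by 2027 (a 15x larger fleet).
print(annual_twh(650 * 15), annual_twh(1020 * 15))   # ~85 and ~134 TWh
```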

As de Vries’ own site argues, AI is not a “miracle cure for everything,” and still must deal with privacy concerns, discriminatory biases, and hallucinations. “Environmental sustainability now represents another addition to this list of concerns.”

The post AI could consume as much energy as Argentina annually by 2027 appeared first on Popular Science.

Titanium-fused bone tissue connects this bionic hand directly to a patient’s nerves https://www.popsci.com/technology/bionic-hand-phantom-pain/ Thu, 12 Oct 2023 15:00:00 +0000 https://www.popsci.com/?p=579098
Patient wearing a highly integrated bionic hand in between many others
The breakthrough bionic limb relies on osseointegration to attach to its wearer. Ortiz-Catalan et al., Sci. Rob., 2023

Unlike other prosthetics, a new model connects directly to a patient's limb via both bone and nerves.

Adjusting to prosthetic limbs isn’t as simple as merely finding one that fits your particular body type and needs. Physical control and accuracy are major issues despite proper attachment, and sometimes patients’ bodies reject even the most high-end options available. Such was repeatedly the case for a Swedish patient after losing her right arm in a farming accident over two decades ago. For years, the woman suffered from severe pain and stress issues, likening the sensation to “constantly [having] my hand in a meat grinder.”

Phantom pain is an unfortunately common affliction for amputees, and is believed to originate from nervous system signal confusions between the spinal cord and brain. Although a body part is amputated, the peripheral nerve endings remain connected to the brain, and can thus misread that information as pain.

[Related: We’re surprisingly good at surviving amputations.]

With a new, major breakthrough in prosthetics, however, her severe phantom pains are dramatically alleviated thanks to an artificial arm built on titanium-fused bone tissue alongside rearranged nerves and muscles. As detailed in a new study published via Science Robotics, the remarkable advancements could provide a potential blueprint for many other amputees to adopt such technology in the coming years.

The patient’s procedure started in 2018 when she volunteered to test a new kind of bionic arm designed by a multidisciplinary team of engineers and surgeons led by Max Ortiz Catalan, head of neural prosthetics research at Australia’s Bionics Institute and founder of the Center for Bionics and Pain Research. Using osseointegration, a process infusing titanium into bone tissue to provide a strong mechanical connection, the team was able to attach their prototype to the remaining portion of her right limb.

Accomplishing even this step proved especially difficult because of the need to precisely align the volunteer’s radius and ulna. The team also needed to account for the small amount of space available to house the system’s components. Meanwhile, the limb’s nerves and muscles needed rearrangement to better direct the patient’s neurological motor control information into the prosthetic attachment.

“By combining osseointegration with reconstructive surgery, implanted electrodes, and AI, we can restore human function in an unprecedented way,” Rickard Brånemark, an MIT research affiliate and associate professor at Gothenburg University who oversaw the surgery, said via an update from the Bionics Institute. “The below elbow amputation level has particular challenges, and the level of functionality achieved marks an important milestone for the field of advanced extremity reconstructions as a whole.”

The patient said her breakthrough prosthetic can be comfortably worn all day, is highly integrated with her body, and has even relieved her chronic pain. According to Catalan, this reduction can be attributed to the team’s “integrated surgical and engineering approach” that allows [her] to use “somewhat the same neural resources” as she once did for her biological hand.

“I have better control over my prosthesis, but above all, my pain has decreased,” the patient explained. “Today, I need much less medication.” 

The post Titanium-fused bone tissue connects this bionic hand directly to a patient’s nerves appeared first on Popular Science.

A new Google AI project wants to improve the timing of traffic lights https://www.popsci.com/technology/google-project-green-light/ Wed, 11 Oct 2023 19:00:00 +0000 https://www.popsci.com/?p=578746
monitor displaying a traffic intersection
Google

Data from Maps can show where drivers are getting stuck.

Traffic lights are the worst—not only do they put stops in your journey, but all those stopped cars pollute the local environment. According to one paper, pollution can be 29 times worse at city intersections than on open roads, with half the emissions coming from cars accelerating after having to stop. Many companies are developing tech that can make intersections “smarter” or help drivers navigate around jams. Google, though, has an AI-powered system-level plan to fix things.

Called Project Green Light, Google Research is using Google Maps data and AI to make recommendations to city planners on how specific traffic light controlled intersections can be optimized for better traffic flow—and reduced emissions. 

Green Light relies on Google Maps driving trends data, which Google claims is “one of the strongest understandings of global road networks.” Apparently, the information it has gathered from its years of mapping cities around the world allows it to infer data about specific traffic light controlled junctions, including “cycle length, transition time, green split (i.e. right-of-way time and order), coordination and sensor operation (actuation).”

From that, Google is able to create a virtual model of how traffic flows through a given city’s intersections. This allows it to understand the normal traffic patterns, like how much cars have to stop and start, the average wait time at each set of lights, how coordinated nearby intersections are, and how things change throughout the day. Crucially, the model also allows Google to use AI to identify potential adjustments to traffic light timing at specific junctions that could improve traffic flow. 
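
Google hasn’t published Green Light’s model, but the kind of question it answers, such as whether a longer green phase would mean fewer stops for the observed arrival pattern, can be illustrated with a tiny fixed-time signal simulation. The arrival rate, cycle length, and candidate green times below are all invented:

```python
# Toy fixed-time signal model: how many arriving cars have to stop for a given
# green split? Purely illustrative; not Green Light's actual model.
def stops_per_cycle(arrivals_per_s: float, cycle_s: int, green_s: int,
                    departures_per_s: float = 0.5) -> float:
    queue, stops = 0.0, 0.0
    for second in range(cycle_s):
        is_green = second < green_s
        if not is_green or queue > 0:
            stops += arrivals_per_s          # cars arriving at a red light or a standing queue
        queue += arrivals_per_s
        if is_green:
            queue = max(0.0, queue - departures_per_s)  # discharge during green
    return stops

# Sweep candidate green times for one approach of a 60-second cycle.
for green in (20, 30, 40):
    print(f"{green}s green -> ~{stops_per_cycle(0.2, 60, green):.1f} stopped cars per cycle")
# A real recommendation would also weigh the delay imposed on the cross street.
```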

[Related: Google’s new pollen mapping tool aims to reduce allergy season suffering]

And this isn’t just some theoretical research project. According to Google, Green Light is now operating in 70 intersections across 12 cities around the world. City planners are provided with a dashboard where they can see Green Light’s recommendation, and accept or reject them. (Though they have to implement any changes with their existing traffic control systems, which Google claims takes “as little as five minutes.”) 

Once the changes are implemented, Green Light analyzes the new data to see if they had the intended impact on traffic flow. All the info is displayed in the city planner’s dashboard, so they can see how things are paying off. 

AI photo
Google

A big part of Green Light is that it doesn’t require much extra effort or expense from cities. While city planners have always attempted to optimize traffic patterns, developing models of traffic flow has typically required manual surveys or dedicated hardware, like cameras or car sensors. With Green Light, city planners don’t need to install anything—Google is gathering the data from its Maps users.

Although Google hasn’t published official numbers, it claims that the early results in its 12 test cities “indicate a potential for up to 30 percent reduction in stops and 10 percent reduction in greenhouse gas emissions” across 30 million car journeys per month. 

And city planners seem happy too, at least according to Google’s announcement. David Atkin from Transport for Greater Manchester in the UK is quoted as saying, “Green Light identified opportunities where we previously had no visibility and directed engineers to where there were potential benefits in changing signal timings.”

Similarly, Rupesh Kumar, Kolkata’s Joint Commissioner of Police, says, “Green Light has become an essential component of Kolkata Traffic Police. It serves several valuable purposes which contribute to safer, more efficient, and organized traffic flow and has helped us to reduce gridlock at busy intersections.”

Right now, Green Light is still in its testing phase. If you’re in Seattle, USA; Rio de Janeiro, Brazil; Manchester, UK; Hamburg, Germany; Budapest, Hungary; Haifa, Israel; Abu Dhabi, UAE; Bangalore, Hyderabad, and Kolkata, India; and Bali and Jakarta, Indonesia, there’s a chance you’ve already driven through a Green Light optimized junction.

However, if you’re a member of a city government, traffic engineer, or city planner and want to sign your metropolis up for Green Light, you can join the waiting list. Just fill out this Google Form.

The post A new Google AI project wants to improve the timing of traffic lights appeared first on Popular Science.

5 surprising stats about AI-generated art’s takeover https://www.popsci.com/technology/artificial-intelligence-art-statistics/ Tue, 10 Oct 2023 13:00:58 +0000 https://www.popsci.com/?p=568790
robot approaches bob-ross-looking artist in front of easel, with large landscape painting forming background
AI-generated illustration by Dan Saelinger

In seconds, a computer may be able to generate pieces similar to what a human artist could spend hours working on.

HANDMADE ART can be an enchanting expression of the world, whether it’s displayed above a roaring fireplace, hung inside a chic gallery, or seen by millions in a museum. But new works don’t always require a human touch. Computer-generated art has been around since British painter Harold Cohen engineered a system, named AARON, to automatically sketch freehand-like drawings in the early 1970s. But in the past 50 years, and especially in the past decade, artificial intelligence programs have used neural networks and machine learning to accomplish much more than pencil lines. Here are some of the numbers behind the automated art boom. 

Six-figure bid

In 2018, a portrait of a blurred man created by Paris-based art collective Obvious sold for a little more than $400,000, which is about the average sale price of a home in Connecticut. Christie’s auctioned off Edmond de Belamy, from La Famille de Belamy, at nearly 45 times the estimated value—making it the most expensive work of AI art to date.

A giant database 

While an artist’s inspiration can come from anything in the world, AI draws from databases that collect digitized works of human creativity. LAION-5B, an online set of nearly 6 billion pictures, has enabled computer models like Stable Diffusion to make derivative images, such as the headshot avatars remixed into superheroic or anime styles that went viral on Twitter in 2022.

Mass production

A caricaturist on the sidewalk of a busy city can whip up a cheeky portrait within a few minutes and a couple dozen drawings a day. Compare that to popular image generators like DALL-E, which can make millions of unique images daily. But all that churn comes at a cost. By some estimates, a single generative AI prompt has a carbon footprint four to five times higher than that of a search engine query.

The new impressionism

Polish painter Greg Rutkowski is known for using his classical technique and style to depict fantastical landscapes and characters such as dragons. Now AI is imitating it—much to Rutkowski’s displeasure. Stable Diffusion users have submitted his name as a prompt tens of thousands of times, according to Lexica, a database of generated art. The painter has joined other artists in a lawsuit against Midjourney, DeviantArt, and Stability AI, arguing that those companies violated human creators’ copyrights.

Art critics 

Only about one-third of Americans consider AI generators that can produce “visual images from keywords” to be a major advance, and fewer than half think they’re even a minor one, according to a 2022 Pew Research Center survey. More people say the technology is better suited to boosting biology, medicine, and other fields. But respondents rated AI even lower at one skill: writing informative news articles like this one.

The post 5 surprising stats about AI-generated art’s takeover appeared first on Popular Science.

Watch robot dogs train on obstacle courses to avoid tripping https://www.popsci.com/technology/dog-robot-vine-course/ Fri, 06 Oct 2023 18:00:00 +0000 https://www.popsci.com/?p=577508
Better navigation of complex environments could help robots walk in the wild.
Better navigation of complex environments could help robots walk in the wild. Carnegie Mellon University

Four-legged robots have a tough time traipsing through heavy vegetation, but a new stride pattern could help.

The post Watch robot dogs train on obstacle courses to avoid tripping appeared first on Popular Science.

]]>
Better navigation of complex environments could help robots walk in the wild.
Better navigation of complex environments could help robots walk in the wild. Carnegie Mellon University

Four-legged robots can pull off a lot of complex tasks, but there’s a reason you don’t often see them navigating “busy” environments like forests or vine-laden overgrowth. Despite all their abilities, most on-board AI systems remain pretty bad at responding to all those physical variables in real-time. It might feel like second nature to us, but it only takes the slightest misstep in such situations to send a quadrupedal robot tumbling.

After subjecting their own dog bot to a barrage of obstacle course runs, however, a team at Carnegie Mellon University’s College of Engineering is now offering a solid step forward, so to speak, for robots deployed in the wild. According to the researchers, teaching a quadrupedal robot to reactively retract its legs while walking provides the best gait for both navigating around obstacles and untangling itself from the ones it does hit.

[Related: How researchers trained a budget robot dog to do tricks.]

“Real-world obstacles might be stiff like a rock or soft like a vine, and we want robots to have strategies that prevent tripping on either,” Justin Yim, a University of Illinois Urbana-Champaign engineering professor and project collaborator, said in CMU’s recent highlight.

The engineers compared multiple stride strategies on a quadrupedal robot while it tried to walk across a short distance interrupted by multiple, low-hanging ropes. The robot quickly entangled itself while high-stepping, or walking with its knees angled forward, but retracting its limbs immediately after detecting an obstacle allowed it to smoothly cross the stretch of floor.
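
The CMU controller itself isn’t published in this article, but the reactive idea can be sketched as a simple control loop. Everything below (the torque threshold, the sensor reading, and the gait functions) is an invented illustration, not the team’s actual software:

```python
# Toy sketch of the reactive leg-retraction idea described above. This is not
# the CMU controller: the threshold and the callables are hypothetical and
# exist only to show the shape of the decision made on every stride.

OBSTACLE_TORQUE_THRESHOLD = 2.5  # N*m; arbitrary illustrative value

def step_cycle(leg, read_joint_torque, swing_forward, retract_and_lift):
    """Advance one leg through a single stride, retracting if it snags."""
    torque = read_joint_torque(leg)           # proprioceptive feedback
    if torque > OBSTACLE_TORQUE_THRESHOLD:    # leg appears caught on something
        retract_and_lift(leg)                 # pull the limb back and up
    else:
        swing_forward(leg)                    # normal stride
```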

AI photo

“When you take robots outdoors, the entire problem of interacting with the environment becomes exponentially more difficult because you have to be more deliberate in everything that you do,” David Ologan, a mechanical engineering master’s student, told CMU. “Your system has to be robust enough to handle any unforeseen circumstances or obstructions that you might encounter. It’s interesting to tackle that problem that hasn’t necessarily been solved yet.”

[Related: This robot dog learned a new trick—balancing like a cat.]

Although wheeled robots may still prove more suited for urban environments, where the ground is generally flatter and infrastructures such as ramps are more common, walking bots could hypothetically prove much more useful in outdoor settings. Researchers believe integrating their reactive retraction response into existing AI navigation systems could help robots during outdoor search-and-rescue missions. The newly designed daintiness might also help quadrupedal robots conduct environmental surveying without damaging their surroundings.

“The potential for legged robots in outdoor, vegetation-based environments is interesting to see,” said Ologan. “If you live in a city, a wheeled platform is probably a better option… There is a trade-off between being able to do more complex actions and being efficient with your movements.”

The post Watch robot dogs train on obstacle courses to avoid tripping appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
DARPA wants to modernize how first responders do triage during disasters https://www.popsci.com/technology/darpa-triage-challenge/ Thu, 05 Oct 2023 13:00:00 +0000 https://www.popsci.com/?p=576638
mass-casualty triage occurring via different technologies
Ard Su for Popular Science

The Pentagon is looking for new ways to handle mass casualty events, and hopes that modern tech can help save more lives.

The post DARPA wants to modernize how first responders do triage during disasters appeared first on Popular Science.

]]>
mass-casualty triage occurring via different technologies
Ard Su for Popular Science

In Overmatched, we take a close look at the science and technology at the heart of the defense industry—the world of soldiers and spies.

IF A BUILDING COLLAPSES or a bomb goes off, there are often more people who need medical treatment than there are people who can help them. That mismatch is what defines a mass casualty incident. The military’s most famous R&D agency, DARPA, wants to figure out how to better handle those situations, so more people come out of them alive.

That’s the goal of what the agency is calling the DARPA Triage Challenge, a three-year program that kicks off November 6 and will bring together medical knowledge, autonomous vehicles, noninvasive sensors, and algorithms to prioritize and plan patient care when there are too many patients and not enough care—a process typically called triage. Teams, yet to be named, will compete to see if their systems can categorize injured people in large, complex situations and determine their need for treatment.

A sorting hat for disasters

Triage is no simple task, even for people who make it part of their profession, says Stacy Shackelford, the trauma medical director for the Defense Health Agency’s Colorado Springs region. Part of the agency’s mandate is to manage military hospitals and clinics. “Even in the trauma community, the idea of triage is somewhat of a mysterious topic,” she says. 

The word triage comes from the French, and it means, essentially, “sorting casualties.” When a host of humans get injured at the same time, first responders can’t give them all equal, simultaneous attention. So they sort them into categories: minimal, minorly injured; delayed, seriously injured but not in an immediately life-threatening way; immediate, severely injured in such a way that prompt treatment would likely be lifesaving; and expectant, dead or soon likely to be. “It really is a way to decide who needs lifesaving interventions and who can wait,” says Shackelford, “so that you can do the greatest good for the greatest number of people.”
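
Those four buckets map naturally onto a simple sorting routine. The toy sketch below only illustrates the vocabulary Shackelford describes; the decision rules are deliberately simplified stand-ins, not any real clinical protocol:

```python
# Illustrative only: a toy sorter using the four triage categories named above.
# The rules here are simplified placeholders, not clinical guidance.

def triage_category(walking: bool, breathing: bool, severe_bleeding: bool) -> str:
    if not breathing:
        return "expectant"   # dead or soon likely to be
    if severe_bleeding:
        return "immediate"   # prompt treatment would likely be lifesaving
    if not walking:
        return "delayed"     # seriously injured, but can wait
    return "minimal"         # minor injuries

casualties = [
    {"walking": True,  "breathing": True,  "severe_bleeding": False},
    {"walking": False, "breathing": True,  "severe_bleeding": True},
]
for c in casualties:
    print(triage_category(**c))  # minimal, immediate
```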

The question of whom to treat when and how has always been important, but it’s come to the fore for the Defense Department as the nature of global tensions changes, and as disasters that primarily affect civilians do too. “A lot of the military threat currently revolves around what would happen if we went towards China or we went to war with Russia, and there’s these types of near-peer conflicts,” says Shackelford. The frightening implication is that there would be more injuries and deaths than in other recent conflicts. “Just the sheer number of possible casualties that could occur.” Look, too, at the war in Ukraine. 

The severity, frequency, and unpredictability of some nonmilitary disasters—floods, wildfires, and more—is also shifting as the climate changes. Meanwhile, mass shootings occur far too often; a damaged nuclear power plant could pose a radioactive risk; earthquakes topple buildings; poorly maintained buildings topple themselves. Even the pandemic, says Jeffrey Freeman, director of the National Center for Disaster Medicine and Public Health at the Uniformed Services University, has been a kind of slow-moving or rolling disaster. It’s not typically thought of as a mass casualty incident. But, says Freeman, “The effects are similar in some ways, in that you have large numbers of critically ill patients in need of care, but dissimilar in that those in need are not limited to a geographic area.” In either sort of scenario, he continues, “Triage is critical.”

Freeman’s organization is currently managing an assessment, mandated by Congress, of the National Disaster Medical System, which was set up in the 1980s to manage how the Department of Defense, military treatment facilities, Veterans Affairs medical centers, and civilian hospitals under the Department of Health and Human Services respond to large-scale catastrophes, including combat operations overseas. He sees the DARPA Triage Challenge as highly relevant to dealing with incidents that overwhelm the existing system—a good goal now and always. “Disasters or wars themselves are sort of unpredictable, seemingly infrequent events. They’re almost random in their occurrence,” he says. “The state of disaster or the state of catastrophe is actually consistent. There are always disasters occurring, there are always conflicts occurring.” 

He describes the global state of disaster as “continuous,” which makes the Triage Challenge, he says, “timeless.”

What’s more, the concept of triage, Shackelford says, hasn’t really evolved much in decades, which means the potential fruits of the DARPA Triage Challenge—if it pans out—could make a big difference in what the “greatest good, greatest number” approach can look like. With DARPA, though, research is always a gamble: The agency takes aim at tough scientific and technological goals, and often misses, a model called “high-risk, high-reward” research.

Jean-Paul Chretien, the Triage Challenge program manager at DARPA, does have some specific hopes for what will emerge from this risk—like the ability to identify victims who are more seriously injured than they seem. “It’s hard to tell by looking at them that they have these internal injuries,” he says. The typical biosignatures people check to determine a patient’s status are normal vital signs: pulse, blood pressure, respiration. “What we now know is that those are really lagging indicators of serious injury, because the body’s able to compensate,” Chretien says. But when it can’t anymore? “They really fall off a cliff,” he says. In other words, a patient’s pulse or blood pressure may seem OK, but a major injury may still be present, lurking beneath that seemingly good news. He hopes the Triage Challenge will uncover more timely physiological indicators of such injuries—indicators that can be detected before a patient is on the precipice.

Assessment from afar

The DARPA Triage Challenge could yield that result, as it tasks competitors—some of whom DARPA is paying to participate in the competition, and some of whom will fund themselves—with two separate goals. The first addresses the primary stage of triage (the sorting of people in the field) while the second deals with what to do once they’re in treatment. 

For the first stage, Triage Challenge competitors have to develop sensor systems that can assess victims at a distance, gathering data on physiological signatures of injury. Doing this from afar could keep responders from encountering hazards, like radioactivity or unstable buildings, during that process. The aim is to have the systems move autonomously by the end of the competition.

The signatures such systems seek may include, according to DARPA’s announcement of the project, things like “ability to move, severe hemorrhage, respiratory distress, and alertness.” Competitors could equip robots or drones with computer-vision or motion-tracking systems, instruments that use light to measure changes in blood volume, lasers that analyze breathing or heart activity, or speech recognition capabilities. Or all of the above. Algorithms the teams develop must then extract meaningful conclusions from the data collected—like who needs lifesaving treatment right now.

The second focus of the DARPA Triage Challenge is the period after the most urgent casualties have received treatment—the secondary stage of triage. For this part, competitors will develop technology to dig deeper into patients’ statuses and watch for changes that are whispering for help. The real innovations for this stage will come from the algorithmic side: software that, for instance, parses the details of an electrocardiogram—perhaps using a noninvasive electrode in contact with the skin—looking at the whole waveform of the heart’s activity and not just the beep-beep of a beat, or software that does a similar stare into a pulse oximeter’s output to monitor the oxygen carried in red blood cells. 

For her part, Shackelford is interested in seeing teams incorporate a sense of time into triage—which sounds obvious but has been difficult in practice, in the chaos of a tragedy. Certain conditions are extremely chronologically limiting. Something fell on you and you can’t breathe? Responders have three minutes to fix that problem. Hemorrhaging? Five to 10 minutes to stop the bleeding, 30 minutes to get a blood transfusion, an hour for surgical intervention. “All of those factors really factor into what is going to help a person at any given time,” she says. And they also reveal what won’t help, and who can’t be helped anymore.

Simulating disasters

DARPA hasn’t announced the teams it plans to fund yet, and self-funded teams also haven’t revealed themselves. But whoever they are, over the coming three years, they will face a trio of competitions—one at the end of each year, each of which will address both the primary and secondary aspects of triage.

The primary triage stage competitions will be pretty active. “We’re going to mock up mass-casualty scenes,” says Chretien. There won’t be people with actual open wounds or third-degree burns, of course, but actors pretending to have been part of a disaster. Mannequins, too, will be strewn about. The teams will bring their sensor-laden drones and robots. “Those systems will have to, on their own, find the casualties,” he says. 

These competitions will feature three scenarios teams will cycle through, like a very stressful obstacle course. “We’ll score them based on how quickly they complete the test,” Chretien says, “how good they are at actually finding the casualties, and then how accurately they assess their medical status.” 

But it won’t be easy: The agency’s description of the scenarios says they might involve both tight spaces and big fields, full light and total darkness, “dust, fog, mist, smoke, talking, flashing light, hot spots, and gunshot and explosion sounds.” Victims may be buried under debris, or overlapping with each other, challenging sensors to detect and individuate them.

DARPA is also building a virtual world that mimics the on-the-ground scenarios, for a virtual version of the challenge. “This will be like a video-game-type environment but [with the] same idea,” he says. Teams that plan to do the concrete version can practice digitally, and Chretien also hopes that teams without all the hardware they need to patrol the physical world will still try their hands digitally. “It should be easier in terms of actually having the resources to participate,” he says. 

The secondary stage’s competitions will be a little less dramatic. “There’s no robotic system, no physical simulation going on there,” says Chretien. Teams will instead get real clinical trauma data, from patients hospitalized in the past, gathered from the Maryland Shock Trauma Center and the University of Pittsburgh. Their task is to use that anonymized patient data to determine each person’s status and whether and what interventions would have been called for when. 

At stake is $7 million in total prize money over three years, and for the first two years, only teams that DARPA didn’t already pay to participate are eligible to collect. 

Also at stake: a lot of lives. “What can we do, technologically, that can make us more efficient, more effective,” says Freeman, “with the limited amount of people that we have?” 

Read more PopSci+ stories.

The post DARPA wants to modernize how first responders do triage during disasters appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
An ‘electronic tongue’ could help robots taste food like humans https://www.popsci.com/technology/electronic-tongue-ai-robot/ Wed, 04 Oct 2023 20:00:00 +0000 https://www.popsci.com/?p=577156
Electronic artificial tongue sensor
The sensor could one day help AI develop their own versions of taste palates. Das Research Lab/Penn State

A combination of ultra-thin sensors marks the first step in machines being able to mimic our tastes.

The post An ‘electronic tongue’ could help robots taste food like humans appeared first on Popular Science.

]]>
Electronic artificial tongue sensor
The sensor could one day help AI develop their own versions of taste palates. Das Research Lab/Penn State

AI programs can already respond to sensory stimuli like touch, sight, smell, and sound—so why not taste? Engineering researchers at Penn State hope to one day accomplish just that, in the process designing an “electronic tongue” capable of detecting gas and chemical molecules with components that are only a few atoms thick. Although not capable of “craving” a late-night snack just yet, the team is hopeful their new design could one day pair with robots to help create AI-influenced diets, curate restaurant menus, and even train people to broaden their own palates.

Unfortunately, human eating habits aren’t based solely on what we nutritionally require; they are also determined by flavor preferences. This comes in handy when our taste buds tell our brains to avoid foul-tasting, potentially poisonous foods, but it also is the reason you sometimes can’t stop yourself from grabbing that extra donut or slice of cake. This push-and-pull requires a certain amount of psychological cognition and development—something robots currently lack.

[Related: A new artificial skin could be more sensitive than the real thing]

“Human behavior is easy to observe but difficult to measure, and that makes it difficult to replicate in a robot and make it emotionally intelligent. There is no real way right now to do that,” Saptarshi Das, an associate professor of engineering science and mechanics, said in an October 4 statement. Das is a corresponding author of the team’s findings, which were published last month in the journal Nature Communications, and helped design the robotic system capable of “tasting” molecules.

To create their flat, square “electronic gustatory complex,” the team combined chemitransistors—graphene-based sensors that detect gas and chemical molecules—with molybdenum disulfide memtransistors capable of simulating neurons. The two components worked in tandem, capitalizing on their respective strengths to simulate the ability to “taste” molecular inputs.

“Graphene is an excellent chemical sensor, [but] it is not great for circuitry and logic, which is needed to mimic the brain circuit,” said Andrew Pannone, an engineering science and mechanics grad student and study co-author, in a press release this week. “For that reason, we used molybdenum disulfide… By combining these nanomaterials, we have taken the strengths from each of them to create the circuit that mimics the gustatory system.”

When analyzing salt, for example, the electronic tongue detected the presence of sodium ions, thereby “tasting” the sodium chloride input. The design is reportedly flexible enough to apply to all five major taste profiles: salty, sour, bitter, sweet, and umami. Hypothetically, researchers could arrange similar graphene device arrays that mirror the approximately 10,000 different taste receptors located on a human tongue.
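
As a loose software analogy for that idea (the real device works in analog hardware; the channel names and readings below are hypothetical), the “report the strongest-responding taste channel” step might look like this:

```python
# A rough software analogy for the "gustatory complex" described above: each
# sensor channel responds to a class of molecules, and the strongest response
# is reported as the dominant taste. Values here are invented for illustration.

def dominant_taste(channel_response: dict[str, float]) -> str:
    return max(channel_response, key=channel_response.get)

sample = {"salty": 0.82, "sour": 0.10, "bitter": 0.03, "sweet": 0.04, "umami": 0.01}
print(dominant_taste(sample))  # -> "salty" (e.g., sodium ions detected)
```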

[Related: How to enhance your senses of smell and taste]

“The example I think of is people who train their tongue and become a wine taster. Perhaps in the future we can have an AI system that you can train to be an even better wine taster,” Das said in the statement.

The post An ‘electronic tongue’ could help robots taste food like humans appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The first AI started a 70-year debate https://www.popsci.com/technology/the-first-ai-logic-theorist/ Tue, 03 Oct 2023 13:00:00 +0000 https://www.popsci.com/?p=568784
old-style classroom with robot taking shape in front of blackboard with many drawings while man stands at desk
AI-generated illustration by Dan Saelinger

The Logic Theorist started a discussion that continues today—can a machine be intelligent like us?

The post The first AI started a 70-year debate appeared first on Popular Science.

]]>
old-style classroom with robot taking shape in front of blackboard with many drawings while man stands at desk
AI-generated illustration by Dan Saelinger

IN THE SUMMER of 1956, a small group of computer science pioneers convened at Dartmouth College to discuss a new concept: artificial intelligence. The vision, in the meeting’s proposal, was that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Ultimately, they presented just one operational program, stored on computer punch cards: the Logic Theorist.

Many have called the Logic Theorist the first AI program, though that description was debated then—and still is today. The Logic Theorist was designed to mimic human skills, but there’s disagreement about whether the invention actually mirrored the human mind and whether a machine really can replicate the insightfulness of our intelligence. But science historians view the Logic Theorist as the first program to simulate how humans use reason to solve complex problems, and as among the first made for a digital processor. It was created in a new system, the Information Processing Language, and coding it meant strategically punching holes in pieces of paper to be fed into a computer. In just a few hours, the Logic Theorist proved 38 of 52 theorems in Principia Mathematica, a foundational text of mathematical reasoning. 

The Logic Theorist’s design reflects its historical context and the mind of one of its creators, Herbert Simon, who was not a mathematician but a political scientist, explains Ekaterina Babintseva, a historian of science and technology at Purdue University. Simon was interested in how organizations could enhance rational decision-making. Artificial systems, he believed, could help people make more sensible choices. 

“The type of intelligence the Logic Theorist really emulated was the intelligence of an institution,” Babintseva says. “It’s bureaucratic intelligence.” 

But Simon also thought there was something fundamentally similar between human minds and computers, in that he viewed them both as information-processing systems, says Stephanie Dick, a historian and assistant professor at Simon Fraser University. While consulting at the RAND Corporation, a nonprofit research institute, Simon encountered computer scientist and psychologist Allen Newell, who became his closest collaborator. Inspired by the heuristic teachings of mathematician George Pólya, who taught problem-solving, they aimed to replicate Pólya’s approach to logical, discovery-oriented decision-making with more intelligent machines.

This stab at human reasoning was written into a program for JOHNNIAC, an early computer built by RAND. The Logic Theorist proved Principia’s mathematical theorems through what its creators claimed was heuristic deductive methodology: It worked backward, making minor substitutions to possible answers until it reached a conclusion equivalent to what had already been proven. Before this, computer programs mainly solved problems by following linear step-by-step instructions. 
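
The Logic Theorist’s actual code isn’t reproduced here, but the working-backward idea can be illustrated with a generic backward-chaining search. The axioms and rewrite rules below are invented toy placeholders, not formulas from Principia Mathematica:

```python
# A generic backward-chaining sketch: start from the goal formula and apply
# rewrite rules until reaching a formula already known to be true. The rules
# and axioms are toy stand-ins used only to show the search strategy.

from collections import deque

AXIOMS = {"p -> p"}
REWRITES = {
    "q -> q": "p -> p",         # substitution: rename q to p
    "(q or q) -> q": "q -> q",  # toy simplification step
}

def prove_backward(goal: str) -> bool:
    """Search backward from the goal toward a known axiom."""
    frontier, seen = deque([goal]), {goal}
    while frontier:
        formula = frontier.popleft()
        if formula in AXIOMS:
            return True
        nxt = REWRITES.get(formula)
        if nxt and nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

print(prove_backward("(q or q) -> q"))  # True: reduces to the axiom p -> p
```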

The Logic Theorist was a breakthrough, says Babintseva, because it was the first program in symbolic AI, which uses symbols or concepts, rather than data, to train AI to think like a person. It was the predominant approach to artificial intelligence until the 1990s, she explains. More recently, researchers have revived another approach considered at the 1950s Dartmouth conference: mimicking our physical brains through machine-learning algorithms and neural networks, rather than simulating how we reason. Combining both methods is viewed by some engineers as the next phase of AI development.  

The Logic Theorist’s contemporary critics argued that it didn’t actually channel heuristic thinking, which includes guesswork and shortcuts, and instead showed precise trial-and-error problem-solving. In other words, it could approximate the workings of the human mind but not the spontaneity of its thoughts. The debate over whether this kind of program can ever match our brainpower continues. “Artificial intelligence is really a moving target,” Babintseva says, “and many computer scientists would tell you that artificial intelligence doesn’t exist.”

Read more about life in the age of AI: 

Or check out all of our PopSci+ stories.

The post The first AI started a 70-year debate appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Watch Chipotle’s latest robot prototype plunk ingredients into a burrito bowl https://www.popsci.com/technology/chipotle-burrito-bowl-salad-robot/ Tue, 03 Oct 2023 12:00:00 +0000 https://www.popsci.com/?p=576646
Chipotle automated makeline station
Chipotle also announced an avocado-pitting robot earlier this year. Chipotle

Human workers will still have to add the guacamole.

The post Watch Chipotle’s latest robot prototype plunk ingredients into a burrito bowl appeared first on Popular Science.

]]>
Chipotle automated makeline station
Chipotle also announced an avocado-pitting robot earlier this year. Chipotle

Back in July, Chipotle revealed the “Autocado”—an AI-guided avocado-pitting robot prototype meant to help handle America’s insatiable guacamole habit while simultaneously reducing food waste. Today, the fast casual chain announced its next automated endeavor—a prep station capable of assembling entrees on its own.

[Related: Chipotle is testing an avocado-pitting, -cutting, and -scooping robot.]

According to the company’s official reveal this morning, its newest robotic prototype—a collaboration with the food service automation startup Hyphen—can assemble virtually any combination of available base ingredients for Chipotle’s burrito bowls and salads underneath human employees’ workspace. Meanwhile, staff are reportedly freed up to focus on making other, presumably more structurally complex and involved dishes such as burritos, quesadillas, tacos, and kid’s meals. Watch the robot prototype plop food into little piles in the bowl under the workspace here: 

AI photo

As orders arrive via Chipotle’s website, app, or another third-party service like UberEats, burrito bowls and salads are automatically routed within the makeline, where an assembly system passes dishes beneath the various ingredient containers. Precise portions are then doled out accordingly, after which the customer’s order surfaces via a small elevator system on the machine’s left side. Chipotle employees can then add any additional chips, salsas, and guacamole, as well as an entree lid before sending off the orders for delivery.
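
Chipotle hasn’t published the makeline’s software, but the routing rule described above can be sketched in a few lines. The item names, channels, and field names below are invented for illustration only:

```python
# Hypothetical sketch of the routing logic described above: digital orders for
# bowls and salads go to the automated makeline; everything else stays with
# human staff. Fields and menu items are illustrative, not Chipotle's system.

AUTOMATED_ITEMS = {"burrito bowl", "salad"}
DIGITAL_CHANNELS = {"app", "web", "delivery"}

def route_order(order: dict) -> str:
    if order["item"] in AUTOMATED_ITEMS and order["channel"] in DIGITAL_CHANNELS:
        return "automated makeline"   # portions dispensed beneath the counter
    return "human makeline"           # burritos, quesadillas, tacos, kid's meals

print(route_order({"item": "burrito bowl", "channel": "app"}))  # automated makeline
print(route_order({"item": "quesadilla", "channel": "app"}))    # human makeline
```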

[Related: What robots can and can’t do for a restaurant.]

Chipotle estimates around 65 percent of all its digital orders are salads and burrito bowls, so their so-called “cobot” (“collaborative” plus “robot”) could hypothetically handle a huge portion of existing kitchen prep. The automated process may also potentially offer more accurate orders, the company states. 

Worker advocates frequently voice concerns about automation and its effect on human jobs. And Chipotle isn’t the only chain in question—companies like Wendy’s and Panera continue to experiment with their own automation plans. Curt Garner, Chipotle’s Chief Customer and Technology Officer, described the company’s long-term goal of having the automated digital makeline “be the centerpiece of all our restaurants’ digital kitchens.”

For now, however, the new burrito bowl bot can only be found at the Chipotle Cultivate Center in Irvine, California—presumably alongside the Autocado.

The post Watch Chipotle’s latest robot prototype plunk ingredients into a burrito bowl appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Tom Hanks says his deepfake is hawking dental insurance https://www.popsci.com/technology/celebrity-deepfake-tom-hanks/ Mon, 02 Oct 2023 18:10:00 +0000 https://www.popsci.com/?p=576583
Tom Hanks smiling
A real photo of Tom Hanks taken in 2021. Deposit Photos

The iconic American actor recently warned of an AI-generated advertisement featuring 'his' voice.

The post Tom Hanks says his deepfake is hawking dental insurance appeared first on Popular Science.

]]>
Tom Hanks smiling
A real photo of Tom Hanks taken in 2021. Deposit Photos

Take it from Tom Hanks—he is not interested in peddling dental plans.

“BEWARE!! [sic] There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” the actor wrote via an Instagram post to his account over the weekend.

Hanks’ warning was superimposed over a screenshot of the deepfaked dental imposter in question, and subsequently highlighted by Variety on Sunday afternoon. According to Gizmodo, the simulated celebrity appears to be based on an image owned by the Los Angeles Times from at least 2014.

The latest example of generative AI’s continued foray into uncharted legal and ethical territory seems to confirm the Oscar-winning actor’s fears, first voiced barely five months ago. During an interview on The Adam Buxton Podcast, Hanks explained his concerns about AI tech’s implications for actors, especially after their deaths.

[Related: This fictitious news show is entirely produced by AI and deepfakes.]

“Anybody can now recreate themselves at any age they are by way of AI or deepfake technology. I could be hit by a bus tomorrow and that’s it, but performances can go on and on and on and on,” Hanks said in May. “Outside the understanding of AI and deepfake, there’ll be nothing to tell you that it’s not me and me alone. And it’s going to have some degree of lifelike quality. That’s certainly an artistic challenge, but it’s also a legal one.”

Hanks’ warnings come as certain corners of the global entertainment industry are already openly embracing the technology, with or without performers’ consent. In China, for example, AI companies are now offering deepfake services to clone popular online influencers to hawk products ostensibly 24/7 using their own “livestreams.”

According to a report last month from MIT Technology Review, Chinese startups only require a few minutes’ worth of source video alongside roughly $1,000 to replicate human influencers for as long as a client wants. Those fees vary with an AI clone’s complexity and abilities, but they are often significantly cheaper than employing human livestream labor. A report from Chinese analytics firm iiMedia Research, for example, estimates companies could cut costs by as much as 70 percent by switching to AI talking heads. Combined with other economic and labor challenges, earnings for human livestream hosts in the country have dropped as much as 20 percent since 2022.

[Related: Deepfake videos may be convincing enough to create false memories.]

Apart from the financial concerns, deepfaking celebrities poses ethical issues, especially for the families of deceased entertainers. Also posting to Instagram over the weekend, Zelda Williams—daughter of the late Robin Williams—offered her thoughts after encountering deepfaked audio of her father’s voice.

“I’ve already heard AI used to get his ‘voice’ to say whatever people want and while I find it personally disturbing, the ramifications go far beyond my own feelings,” wrote Williams, as reported via Rolling Stone on October 2. “These recreations are, at their very best, a poor facsimile of greater people, but at their worst, a horrendous Frankensteinian monster, cobbled together from the worst bits of everything this industry is, instead of what it should stand for.”

AI is currently a major focal point for ongoing labor negotiations within Hollywood. Last week, the Writers Guild of America reached an agreement with industry executives following a five-month strike, settling on a contract that offers specific guidelines protecting writers’ livelihoods and art against AI outsourcing. Meanwhile, members of the Screen Actors Guild remain on strike while seeking their own guarantees against AI in situations such as background actor generation and posthumous usages of their likeness.

The post Tom Hanks says his deepfake is hawking dental insurance appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
AI narrators will read classic literature to you for free https://www.popsci.com/technology/ai-reads-audiobooks/ Mon, 02 Oct 2023 11:00:00 +0000 https://www.popsci.com/?p=576188
old books in a pile
Deposit Photos

Synthetic voices can take old texts such as "Call of the Wild" and narrate them on platforms like Spotify. Here's how it works—and how to listen.

The post AI narrators will read classic literature to you for free appeared first on Popular Science.

]]>
old books in a pile
Deposit Photos

Recording an audiobook is no easy task, even for experienced voice actors. But demand for audiobooks is on the rise, and major streaming platforms like Spotify are making dedicated spaces for them to grow into. To fuse innovation with frenzy, MIT and Microsoft researchers are using AI to create audiobooks from online texts. In an ambitious new project, they are collaborating with Project Gutenberg, the world’s oldest and probably largest online repository of open-license ebooks, to make 5,000 AI-narrated audiobooks. This collection includes classic titles in literature like Pride and Prejudice, Madame Bovary, Call of the Wild, and Alice’s Adventures in Wonderland. The trio published an arXiv preprint on their efforts in September. 

“What we wanted to do was create a massive amount of free audiobooks and give them back to the community,” Mark Hamilton, a PhD student at the MIT Computer Science & Artificial Intelligence Laboratory and a lead researcher on the project, tells PopSci. “Lately, there’s been a lot of advances in neural text to speech, which are these algorithms that can read text, and they sound quite human-like.”

The magic ingredient that makes this possible is a neural text-to-speech algorithm, which is trained on millions of examples of human speech and then tasked with mimicking it. It can generate different voices with different accents in different languages, and can create custom voices with only five seconds of audio. “They can read any text you give them and they can read them incredibly fast,” Hamilton says. “You can give it eight hours of text and it will be done in a few minutes.”

Importantly, this algorithm can pick up on subtleties like tone and the modifications humans add when reading words, like how a phone number or a website is read, what gets grouped together, and where the pauses are. The algorithm is based on previous work from some of the paper’s co-authors at Microsoft. 

Like large language models, this algorithm relies heavily on machine learning and neural networks. “It’s the same core guts, but different inputs and outputs,” Hamilton explains. Large language models take in text and fill in gaps. They use that basic functionality to build chat applications. Neural text-to-speech algorithms, on the other hand, take in text and pump it through the same kinds of algorithms, but instead of spitting out text, they spit out sound, Hamilton says.
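
As a rough illustration of that text-in, sound-out flow, neural TTS systems are often organized into a text front end, an acoustic model, and a vocoder. The function below simply wires hypothetical stand-ins for those stages together; it is a schematic, not the Microsoft/MIT system:

```python
# Schematic of a text-to-speech pipeline: text is converted to tokens, the
# acoustic model predicts a mel spectrogram (optionally conditioned on a short
# speaker sample), and a vocoder turns the spectrogram into audio samples.
# The stage objects passed in here are hypothetical placeholders.

def synthesize(text: str, frontend, acoustic_model, vocoder, speaker_embedding=None):
    tokens = frontend(text)                          # text -> phonemes/tokens
    mel = acoustic_model(tokens, speaker_embedding)  # tokens -> mel spectrogram
    waveform = vocoder(mel)                          # spectrogram -> audio
    return waveform
```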

[Related: Internet Archive just lost a federal lawsuit against big book publishers]

“They’re trying to generate sounds that are faithful to the text that you put in. That also gives them a little bit of leeway,” he adds. “They can spit out the kind of sound they feel is necessary to solve the task well. They can change, group, or alter the pronunciation to make it sound more humanlike.” 

A tool called a loss function can then be used to evaluate whether a model did a good job or a bad job. Implementing AI in this way can speed up the efforts of projects like Librivox, which currently uses human volunteers to make audiobooks of public domain works.
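
For a concrete, if generic, example of what such a loss can look like when comparing a generated spectrogram against a reference recording (assuming NumPy arrays of the same shape, and not the specific loss used by this project):

```python
import numpy as np

# Minimal example of a loss function: mean absolute error between a generated
# mel spectrogram and a reference one. Lower values mean the model did a
# "better job." This is a generic illustration, not the team's actual loss.

def l1_loss(generated: np.ndarray, reference: np.ndarray) -> float:
    return float(np.mean(np.abs(generated - reference)))

print(l1_loss(np.ones((80, 100)), np.zeros((80, 100))))  # 1.0
```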

The work is far from done. The next steps are to improve the quality. Since Project Gutenberg ebooks are created by human volunteers, every single person who makes the ebook does it slightly differently. They may include random text in unexpected places, and where ebook makers place page numbers, the table of contents, or illustrations might change from book to book. 

“All these different things just result in strange artifacts for an audiobook and stuff that you wouldn’t want to listen to at all,” Hamilton says. “The north star is to develop more and more flexible solutions that can use good human intuition to figure out what to read and what not to read in these books.” Once they get that down, their hope is to use that, along with the most recent advances in AI language technology to scale the audiobook collection to all the 60,000 on Project Gutenberg, and maybe even translate them.

For now, all the AI-voiced audiobooks can be streamed for free on platforms such as Spotify, Google Podcasts, Apple Podcasts, and the Internet Archive.

There are a variety of applications for this type of algorithm. It can read plays, and assign distinct voices to each character. It can mock up a whole audiobook in your voice, which could make for a nifty gift. However, even though there are many fairly innocuous ways to use this tech, experts have previously voiced their concerns about the drawbacks of artificially generated audio, and its potential for abuse

Listen to Call of the Wild, below.

The post AI narrators will read classic literature to you for free appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The CIA is building its version of ChatGPT https://www.popsci.com/technology/cia-chatgpt-ai/ Wed, 27 Sep 2023 16:00:00 +0000 https://www.popsci.com/?p=575174
CIA headquarters floor seal logo
The CIA believes such a tool could help parse vast amounts of data for analysts. CIA

The agency's first chief technology officer confirms a chatbot based on open-source intelligence will soon be available to its analysts.

The post The CIA is building its version of ChatGPT appeared first on Popular Science.

]]>
CIA headquarters floor seal logo
The CIA believes such a tool could help parse vast amounts of data for analysts. CIA

The Central Intelligence Agency confirmed it is building a ChatGPT-style AI for use across the US intelligence community. Speaking with Bloomberg on Tuesday, Randy Nixon, director of the CIA’s Open-Source Enterprise, described the project as a logical technological step forward for a vast 18-agency network that includes the CIA, NSA, FBI, and various military offices. The large language model (LLM) chatbot will reportedly provide summations of open-source materials alongside citations, as well as chat with users, according to Bloomberg

“Then you can take it to the next level and start chatting and asking questions of the machines to give you answers, also sourced. Our collection can just continue to grow and grow with no limitations other than how much things cost,” Nixon said.

“We’ve gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,” Nixon continued, adding, “We have to find the needles in the needle field.”
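
The CIA hasn’t described how its tool is built, but the “sourced answers” Nixon describes resemble a standard retrieve-then-summarize pattern. The sketch below is only a toy version of that shape, using naive keyword matching and a placeholder summarize function standing in for an LLM call:

```python
# Generic retrieve-then-summarize-with-citations sketch. The scoring is naive
# keyword overlap, and `summarize` is a placeholder for a language-model call;
# none of this reflects the CIA's actual (unpublished) system.

def retrieve(query: str, documents: list[dict], k: int = 3) -> list[dict]:
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query: str, documents: list[dict], summarize) -> str:
    hits = retrieve(query, documents)
    summary = summarize(query, [d["text"] for d in hits])  # LLM call goes here
    citations = ", ".join(d["source"] for d in hits)
    return f"{summary} [sources: {citations}]"
```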

[Related: ChatGPT can now see, hear, and talk to some users.]

The announcement comes as China makes known its ambition to become the global leader in AI technology by the decade’s end. In August, new Chinese government regulations went into effect requiring makers of publicly available AI services to submit regular security assessments. As Reuters noted in July, the oversight will likely restrict at least some technological advancements in favor of ongoing national security crackdowns. The laws are also far more stringent than those currently within the US, as regulators struggle to adapt to the industry’s rapid advancements and societal consequences.

Nixon has yet to discuss the overall scope and capabilities of the proposed system, and would not confirm what AI model forms the basis of its LLM assistant. For years, however, the US intelligence community has explored how to best leverage AI’s vast data analysis capabilities alongside private partnerships. The CIA even hosted a “Spies Supercharged” panel during this year’s SXSW in the hopes of recruiting tech workers across sectors such as quantum computing, biotech, and AI. During the event, CIA deputy director David Cohen reiterated concerns regarding AI’s unpredictable effects for the intelligence community.

“To defeat that ubiquitous technology, if you have any good ideas, we’d be happy to hear about them afterwards,” Cohen said at the time.

[Related: The CIA hit up SXSW this year—to recruit tech workers.]

Similar criticisms arrived barely two weeks ago via the CIA’s first-ever chief technology officer, Nand Mulchandani. Speaking at the Billington Cybersecurity Summit, Mulchandani contended that while some AI-based systems are “absolutely fantastic” for tasks such as vast data trove pattern analysis, “in areas where it requires precision, we’re going to be incredibly challenged.” 

Mulchandani also conceded that AI’s often seemingly “hallucinatory” offerings could still be helpful to users.

“AI can give you something so far outside of your range, that it really then opens up the vista in terms of where you’re going to go,” he said at the time. “[It’s] what I call the ‘crazy drunk friend.’” 

The post The CIA is building its version of ChatGPT appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Mysterious ‘fairy circles’ may appear on three different continents https://www.popsci.com/science/fairy-circles-desert-ai/ Wed, 27 Sep 2023 14:00:00 +0000 https://www.popsci.com/?p=575087
Aerial view of a hot air balloon over the Namib desert. The circular “fairy circles” are devoid of any vegetation and surrounded by tall grass.
Aerial view of a hot air balloon over the Namib desert. The circular “fairy circles” are devoid of any vegetation and surrounded by tall grass. Getty Images

Researchers used AI to comb the world's deserts for the natural phenomena, but debate continues.

The post Mysterious ‘fairy circles’ may appear on three different continents appeared first on Popular Science.

]]>
Aerial view of a hot air balloon over the Namib desert. The circular “fairy circles” are devoid of any vegetation and surrounded by tall grass.
Aerial view of a hot air balloon over the Namib desert. The circular “fairy circles” are devoid of any vegetation and surrounded by tall grass. Getty Images

The natural circles that pop up on the soil in the planet’s arid regions are the subject of an enduring scientific debate and mystery. These “fairy circles” are circular patterns of bare soil surrounded by plants and vegetation. Until very recently, the unique phenomena had only been described in the vast Namib desert and the Australian outback. While their origins and distribution are hotly debated, a study with satellite imagery published on September 25 in the journal Proceedings of the National Academy of Sciences (PNAS) indicates that fairy circles may be more common than once realized. They are potentially found in 15 countries across three continents and in 263 different sites. 

[Related: A new study explains the origin of mysterious ‘fairy circles’ in the desert.]

These soil shapes occur in arid areas of the Earth, where nutrients and water are generally scarce. Their signature circular pattern and hexagonal spacing is believed to be the best way that the plants have found to survive in that landscape. Ecologist Ken Tinley observed the circles in Namibia in 1971, and the story goes that he borrowed the name fairy circles from a naturally occurring ring of mushrooms that is generally found in Europe.

By 2017, Australian researchers found the debated western desert fairy circles, and proposed that the mechanisms of biological self-organization and pattern formation proposed by mathematician Alan Turing were behind them. In the same year, Aboriginal knowledge linked those fairy circles to a species of termites. This “termite theory” of fairy circle origin continues to be a focus of research—a team from the University of Hamburg in Germany published a study seeming to confirm that termites are behind these circles in July.

In this new study, a team of researchers from Spain used artificial intelligence-based models to look at the fairy circles from Australia and Namibia and directed them to look for similar patterns elsewhere. The AI scoured the images for months and expanded the areas where these fairy circles could exist. These locations include the circles in Namibia, Western Australia, the western Sahara Desert, the Sahel region that separates the African savanna from the Sahara Desert, the Horn of Africa to the East, the island of Madagascar, southwestern Asia, and Central Australia.

Fairy circles on a Namibian plain. CREDIT: Audi Ekandjo.

The team then cross-checked the results of the AI system with a different AI program trained to study the environments and ecology of arid areas to find out what factors govern the appearance of these circular patterns. 

“Our study provides evidence that fairy-circle[s] are far more common than previously thought, which has allowed us, for the first time, to globally understand the factors affecting their distribution,” study co-author and Institute of Natural Resources and Agrobiology of Seville soil ecologist Manuel Delgado Baquerizo said in a statement

[Related: The scientific explanation behind underwater ‘Fairy Circles.’]

According to the team, these circles generally appear in arid regions where the soil is mainly sandy, water is scarce, annual rainfall is between 4 and 12 inches, and nutrient content in the soil is low.
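
Those conditions read like a simple screening checklist. The sketch below only illustrates that checklist with invented field names and the thresholds quoted above; it is not the study’s actual model, which relied on trained AI systems rather than hand-written rules:

```python
# Illustrative filter based on the conditions listed above: sandy soil,
# 4-12 inches of annual rain, and low soil nutrients. Field names and the
# categorical values are simplifications made up for this example.

def fairy_circle_candidate(site: dict) -> bool:
    return (
        site["soil"] == "sandy"
        and 4 <= site["annual_rainfall_in"] <= 12
        and site["soil_nutrients"] == "low"
    )

print(fairy_circle_candidate(
    {"soil": "sandy", "annual_rainfall_in": 6, "soil_nutrients": "low"}))  # True
```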

“Analyzing their effects on the functioning of ecosystems and discovering the environmental factors that determine their distribution is essential to better understand the causes of the formation of these vegetation patterns and their ecological importance,” study co-author and  University of Alicante data scientist Emilio Guirado said in a statement

More research is needed to determine the role of insects like termites in fairy circle formation, but Guirado told El País that “their global importance is low,” and that they may play an important role in local cases like those in Namibia, “but there are other factors that are even more important.”

The images are now included in a global atlas of fairy circles and a database that could help determine if these patterns demonstrate resilience to climate change. 

“We hope that the unpublished data will be useful for those interested in comparing the dynamic behavior of these patterns with others present in arid areas around the world,” said Guirado.

The post Mysterious ‘fairy circles’ may appear on three different continents appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Microsoft wants small nuclear reactors to power its AI and cloud computing services https://www.popsci.com/technology/microsoft-nuclear-power/ Tue, 26 Sep 2023 21:00:00 +0000 https://www.popsci.com/?p=574761
The NuScale VOYGR™ SMR power plant. The first NRC certified U.S. small modular reactor design. It hopes to be operational by 2029.
The NuScale VOYGR™ SMR power plant. The first NRC certified U.S. small modular reactor design. It hopes to be operational by 2029. NuScale VOYGR™ via Office of Nuclear Energy

The company posted a job opening for a 'principal program manager' for nuclear technology.

The post Microsoft wants small nuclear reactors to power its AI and cloud computing services appeared first on Popular Science.

]]>
The NuScale VOYGR™ SMR power plant. The first NRC certified U.S. small modular reactor design. It hopes to be operational by 2029.
The NuScale VOYGR™ SMR power plant. The first NRC certified U.S. small modular reactor design. It hopes to be operational by 2029. NuScale VOYGR™ via Office of Nuclear Energy

Bill Gates is a staunch advocate for nuclear energy, and although he no longer oversees day-to-day operations at Microsoft, its business strategy still mirrors the sentiment. According to a new job listing first spotted on Tuesday by The Verge, the tech company is currently seeking a “principal program manager” for nuclear technology tasked with “maturing and implementing a global Small Modular Reactor (SMR) and microreactor energy strategy.” Once established, the nuclear energy infrastructure overseen by the new hire will help power Microsoft’s expansive plans for both cloud computing and artificial intelligence.

Among the many, many, (many) concerns behind AI technology’s rapid proliferation is the amount of energy required to power such costly endeavors—a worry exacerbated by ongoing fears pertaining to climate collapse. Microsoft believes nuclear power is key to curtailing the massive amounts of greenhouse emissions generated by fossil fuel industries, and has made that belief extremely known in recent months.

[Related: Microsoft thinks this startup can deliver on nuclear fusion by 2028.]

Unlike traditional nuclear reactor designs, an SMR is meant to be far more cost-effective, easier to construct, and smaller, all the while still capable of generating massive amounts of energy. Earlier this year, the US Nuclear Regulatory Commission approved a first-of-its-kind SMR design; judging from Microsoft’s job listing, the company anticipates many more are to come. Among the position’s many responsibilities is the expectation that the principal program manager will “[l]iaise with engineering and design teams to ensure technical feasibility and optimal integration of SMR and microreactor systems.”

But as The Verge explains, making those nuclear ambitions a reality faces a host of challenges. First off, SMRs demand HALEU, a more highly enriched uranium than traditional reactors need. For years, the world’s largest HALEU supplier has been Russia, whose ongoing invasion of Ukraine is straining the supply chain. Meanwhile, nuclear waste storage is a perpetual concern for the industry, as well as the specter of disastrous, unintended consequences.

Microsoft is obviously well aware of such issues—which could factor into why it is also investing in moonshot energy solutions such as nuclear fusion. Not to be confused with current reactors’ fission capabilities, nuclear fusion involves forcing atoms together at extremely high temperatures, thus producing a new, smaller atom alongside massive amounts of energy. Back in May, Microsoft announced an energy purchasing partnership with the nuclear fusion startup called Helion, which touts an extremely ambitious goal of bringing its first generator online in 2028.

Fission or fusion, Microsoft’s nuclear aims require at least one new job position—one with a starting salary of $133,600.

The post Microsoft wants small nuclear reactors to power its AI and cloud computing services appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This AI program could teach you to be better at chess https://www.popsci.com/technology/artificial-intelligence-chess-program/ Tue, 26 Sep 2023 13:00:00 +0000 https://www.popsci.com/?p=568779
child and robot sit at chess table playing game
AI-generated illustration by Dan Saelinger

‘Learn Chess with Dr. Wolf’ critiques—or praises—your moves as you make them.

The post This AI program could teach you to be better at chess appeared first on Popular Science.

]]>
child and robot sit at chess table playing game
AI-generated illustration by Dan Saelinger

YOU ARE NEVER going to beat the world’s best chess programs. After decades of training and studying, you might manage a checkmate or two against Stockfish, Komodo, or another formidable online foe. But if you tally up every match you ever play against an artificial intelligence, the final score will land firmly on the side of the machine.

Don’t feel bad. The same goes for the entire human race. Computer vs. chess master has been a losing prospect since 1997, when IBM’s Deep Blue beat legendary grandmaster Garry Kasparov in a historic tournament. The game is now firmly in artificial intelligence’s domain—but these chess overlords can also improve your game by serving as digital coaches.

That’s where Learn Chess with Dr. Wolf comes into play. Released in 2020, the AI program from Chess.com is a remarkably effective tutor, able to adapt to your skill level, offer tips and hints, and help you review past mistakes as you learn new strategies, gambits, and defenses. It’s by no means the only chess platform designed to teach—Lichess, Shredder Chess, and Board Game Arena are all solid options. Magnus Carlsen, a five-time World Chess Championship winner, even has his own tutoring app, Magnus Trainer.

Dr. Wolf, however, approaches the game a bit differently. “The wish that we address is to have not just an [AI] opponent, but a coach who will praise your good moves and explain what they’re doing while they’re doing it,” says David Joerg, Chess.com’s head of special projects and the developer behind Dr. Wolf.

The program is similar to the language-learning app Duolingo in some ways—it makes knowledge accessible and rewards nuances. Players pull up the interface and begin a game against the AI, which offers real-time text analysis of both sides’ strategies and movements.

If you make a blunder, the bot points out the error, maybe offers up a pointer or two, and asks if you want to give it another shot. “Are you certain?” Dr. Wolf politely asks after my rookie mistake of opening up my undefended pawn on e4 for capture. From there, I can choose either to play on or to take back my move. A corrected do-over results in a digital pat on the back from the esteemed doctor, while repeated errors may push it to course-correct.
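
Dr. Wolf’s internals aren’t public, but the blunder-spotting step it performs can be approximated with the open-source python-chess library and a chess engine: compare the engine’s evaluation of the position before and after your move. The sketch below assumes a Stockfish binary is available on your PATH, and the 150-centipawn cutoff is an arbitrary illustrative choice:

```python
import chess
import chess.engine

# Generic blunder check, not Dr. Wolf's actual method: if the evaluation from
# the mover's point of view drops by more than a threshold after the move,
# flag it. Requires python-chess and a Stockfish executable on the PATH.

BLUNDER_DROP_CP = 150  # centipawn swing treated as a blunder (arbitrary)

def is_blunder(engine, board, move):
    mover = board.turn
    limit = chess.engine.Limit(depth=12)
    before = engine.analyse(board, limit)["score"].pov(mover).score(mate_score=100000)
    board.push(move)
    after = engine.analyse(board, limit)["score"].pov(mover).score(mate_score=100000)
    board.pop()
    return before - after >= BLUNDER_DROP_CP

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
board = chess.Board()
for san in ["e4", "e5", "Qh5"]:
    board.push_san(san)
# 3...g6?? walks into Qxe5+, forking king and rook, so this should be flagged.
print(is_blunder(engine, board, board.parse_san("g6")))
engine.quit()
```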

“The best teachers in a sport already do [actively train you], and AI makes it possible for everyone to experience that,” Joerg says. He adds that Dr. Wolf’s users have something in common with professional chess players too—they use AI opponents in their daily training regimens. Experts often rely on the ChessBase platform, which runs its ever-growing algorithms off powerful computers, feeding them massive historical match archives. Dr. Wolf, however, isn’t coded for grandmasters like Carlsen or Hikaru Nakamura; rather, it’s designed to remove amateur players’ hesitancy about diving into a complex game that’s become even more imposing thanks to AI dominance.

“I see it not as a playing-field leveler as much as an on-ramp,” says Joerg. “It makes it possible for people to get in and get comfortable without the social pressure.” While machines may have a permanent upper hand in chess, Dr. Wolf shows us, as any good challenger would, that it all comes down to how you see the board in front of you.

Read more about life in the age of AI: 

Or check out all of our PopSci+ stories.

The post This AI program could teach you to be better at chess appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
ChatGPT can now see, hear, and talk to some users https://www.popsci.com/technology/chatgpt-voice-pictures/ Mon, 25 Sep 2023 15:00:00 +0000 https://www.popsci.com/?p=573907
chatgpt shown on a mobile phone
Examples included creating and reading its own children's bedtime story. Deposit Photos

OpenAI's program can analyze pictures and speak with premium subscribers.

The post ChatGPT can now see, hear, and talk to some users appeared first on Popular Science.

]]>
chatgpt shown on a mobile phone
Examples included creating and reading its own children's bedtime story. Deposit Photos

ChatGPT has a voice—or, rather, five voices. On Monday, OpenAI announced its buzzworthy, controversial large language model (LLM) can now verbally converse with users, as well as parse uploaded photos and images.

In video demonstrations, ChatGPT is shown offering an extemporaneous children’s bedtime story based on the guided prompt, “Tell us a story about a super-duper sunflower hedgehog named Larry.” ChatGPT then describes its hedgehog protagonist, and offers details about its home and friends. In another example, the photo of a bicycle is uploaded via ChatGPT’s smartphone app alongside the request “Help me lower my bike seat.” ChatGPT then offers a step-by-step process alongside tool recommendations via a combination of user-uploaded photos and user text inputs. The company also describes situations such as ChatGPT helping craft dinner recipes based on ingredients identified within photographs of a user’s fridge and pantry, conversing about landmarks seen in pictures, and helping with math homework—although numbers aren’t necessarily its strong suit.

[Related: School district uses ChatGPT to help remove library books.]

According to OpenAI, the initial five audio voices are based on a new text-to-speech model that can create lifelike audio from nothing but input text and a “few seconds” of sample speech. The current voice options were developed in collaboration with professional voice actors.
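OpenAI hasn’t exposed that few-seconds voice-cloning step publicly, but generating speech from a preset voice can be sketched with the company’s Python SDK. The model and voice names below refer to the text-to-speech endpoint OpenAI later made available to developers and are assumptions for illustration, not the ChatGPT app’s internal pipeline.

# Illustrative text-to-speech sketch; not the in-app voice feature itself.
# Assumes the `openai` Python package (v1 or later) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

story_text = (
    "Once upon a time, a super-duper sunflower hedgehog named Larry "
    "lived at the sunny edge of a wildflower meadow."
)

# Synthesize the text with a preset voice; no custom voice cloning happens here.
speech = client.audio.speech.create(
    model="tts-1",   # assumed model name
    voice="alloy",   # one of the preset voices
    input=story_text,
)

with open("larry_bedtime_story.mp3", "wb") as f:
    f.write(speech.read())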

Unlike the LLM’s previous under-the-hood developments, OpenAI’s newest advancements focus squarely on users’ direct experiences with the program, as the company seeks to expand ChatGPT’s scope and utility into a more complete virtual assistant. The audio and visual add-ons could also meaningfully improve accessibility for disabled users.

“This approach has been informed directly by our work with Be My Eyes, a free mobile app for blind and low-vision people, to understand uses and limitations,” OpenAI explains in its September 25 announcement. “Users have told us they find it valuable to have general conversations about images that happen to contain people in the background, like if someone appears on TV while you’re trying to figure out your remote control settings.”

For years, popular voice AI assistants such as Siri and Alexa have offered particular abilities and services based on programmable databases of specific commands. As The New York Times notes, updating and altering those databases often proves time-consuming, while LLM alternatives can be much speedier, more flexible, and more nuanced. As such, companies like Amazon and Apple are investing in retooling their AI assistants to use LLMs of their own.

OpenAI is threading a very narrow needle to ensure its visual identification ability is as helpful as possible while also respecting third parties’ privacy and safety. The company first demonstrated its visual ID function earlier this year, but said it would not release any version of it to the public before gaining a more comprehensive understanding of how it could be misused. OpenAI states its developers took “technical measures to significantly limit ChatGPT’s ability to analyze and make direct statements about people” given the program’s well-documented issues involving accuracy and privacy. Additionally, the current model is only “proficient” with tasks in English—its capabilities significantly degrade with other languages, particularly those employing non-Roman scripts.

OpenAI plans to roll out ChatGPT’s new audio and visual upgrades over the next two weeks, but only for premium subscribers to its Plus and Enterprise plans. The company says the capabilities will become available to other users and developers “soon after.”

The post ChatGPT can now see, hear, and talk to some users appeared first on Popular Science.

Neuralink’s human trials volunteers ‘should have serious concerns,’ say medical experts https://www.popsci.com/technology/neuralink-monkey-abuse/ Thu, 21 Sep 2023 18:00:00 +0000 https://www.popsci.com/?p=573344
Elon Musk in suit
New reports cite horrific, deadly medical complications for Neuralink's test monkey subjects. Chesnot/Getty Images

A medical ethics committee responded after Elon Musk's brain-interface startup issued an open call for patients yesterday.

On Tuesday, Elon Musk’s controversial brain-computer interface startup Neuralink announced it received an independent review board’s approval to begin a six-year-long human clinical trial. Neuralink’s application for quadriplegic volunteers, particularly those with spinal cord injuries and ALS, is now open. Less than a day later, however, a Wired investigation revealed grisly details surrounding the deaths of the monkeys used in Neuralink’s experiments–deaths that Elon Musk has denied were directly caused by the implants.

Almost simultaneously, a medical ethics organization focused on animal rights filed a complaint with the Securities and Exchange Commission urging the agency to investigate Neuralink for alleged “efforts to mislead investors about the development history and safety of the device.” In a Thursday email to PopSci, the committee urged potential Neuralink volunteers to reconsider their applications.

[Related: Neuralink is searching for its first human test subjects]

“Patients should have serious concerns about the safety of Neuralink’s device,” wrote Ryan Merkley, director of research advocacy for the committee, which was founded in 1985 and has over 17,000 doctor members. “There are well-documented reports of company employees conducting rushed, sloppy experiments in monkeys and other animals.”

According to Merkley and Wired’s September 20 report, Neuralink experiments on as many as 12 macaque monkeys resulted in chronic infections, paralysis, brain swelling, and other adverse side effects that eventually required the animals to be euthanized. The FDA previously denied Neuralink’s requests to begin human clinical trials, citing concerns about the implant’s electrodes migrating within the brain, as well as potential complications in removing the device without causing brain damage. The agency ultimately granted approval in May 2023.

[Related: Neuralink human brain-computer implant trials finally get FDA approval]

On September 10, Elon Musk first acknowledged that some Neuralink test monkeys had died during the company’s experiments, but denied their deaths were due to the experimental brain-computer interface implants. He did not offer causes of death, instead claiming that all monkeys chosen for testing were “close to death already.”

Wired’s investigation—based on public records, as well as interviews with former Neuralink employees and others—offers darker and often horrific accounts of the complications allegedly suffered by a dozen rhesus macaque test subjects between 2017 and 2020. Beyond the neurological, psychological, and physical issues stemming from the implants themselves, some devices reportedly failed purely because of the mechanical installation of titanium plates and bone screws. In these instances, the cranial openings allegedly often grew infected and were immensely painful to the animals, and some implants became so loose they could be easily dislodged.

In his email to PopSci, Merkley reiterated the FDA’s past concerns regarding the Neuralink prototypes’ potential electrode migrations and removal procedures, and urged Musk’s company to “shift to developing a noninvasive brain-computer interface, where other researchers have already made progress.”

As Wired also notes, if the SEC takes action, it would be at least the third federal investigation into Neuralink’s animal testing procedures. In December 2022, Reuters detailed “internal staff complaints” regarding “hack job” operations on test pigs; last February, the US Department of Transportation opened its own investigation of Neuralink over allegations that the company unsafely transported antibiotic-resistant pathogens via “unsafe packaging and movement of implants removed from the brains of monkeys.”

During a Neuralink presentation last year, Musk claimed the company’s animal testing was never “exploratory,” but was instead used only to confirm fully informed decisions. He repeatedly emphasized test animals’ safety, stressing that Neuralink is “not cavalier about putting devices into animals.” At one point, he contended that a monkey shown in a video operating a computer keyboard via a Neuralink implant “actually likes doing the demo, and is not strapped to the chair or anything.”

“We are extremely careful,” he reassured his investors and audience at the time.

The post Neuralink’s human trials volunteers ‘should have serious concerns,’ say medical experts appeared first on Popular Science.

Why AI could be a big problem for the 2024 presidential election https://www.popsci.com/technology/ai-2024-election/ Tue, 19 Sep 2023 13:05:00 +0000 https://www.popsci.com/?p=568764
robot approaches voting booth next to person who is voting
AI-generated illustration by Dan Saelinger

Easy access to platforms like ChatGPT enhances the risks to democracy.

A DYSTOPIAN WORLD fills the frame of the 32-second video. China’s armed forces invade Taiwan. The action cuts to shuttered storefronts after a catastrophic banking collapse and San Francisco in a military lockdown. “Who’s in charge here? It feels like the train is coming off the tracks,” a narrator says as the clip ends.

Anyone who watched the April ad on YouTube could be forgiven for seeing echoes of current events in the scenes. But the spliced news broadcasts and other footage came with a small disclaimer in the top-left corner: “Built entirely with AI imagery.” Not dramatized or enhanced with special effects, but all-out generated by artificial intelligence. 

The ad spot, produced by the Republican National Committee in response to President Joe Biden’s reelection bid, was an omen. Ahead of the next American presidential election, in 2024, AI is storming into a political arena still warped by foreign states’ online interference in 2016 and 2020.

Experts believe its influence will only worsen as voting draws near. “We are witnessing a pivotal moment where the adversaries of democracy possess the capability to unleash a technological nuclear explosion,” says Oren Etzioni, the former CEO of and current advisor to the nonprofit AI2, a US-based research institute focusing on AI and its implications. “Their weapons of choice are misinformation and disinformation, wielded with unparalleled intensity to shape and sway the electorate like never before.”

Regulatory bodies have begun to worry too. Although both major US parties have embraced AI in their campaigns, Congress has held several hearings on the tech’s uses and its potential oversight. This summer, as part of a crackdown on Russian disinformation, the European Union asked Meta and Google to label content made by AI. In July, those two companies, plus Microsoft, Amazon, and others, agreed to the White House’s voluntary guardrails, which include flagging media produced in the same way.

It’s possible to defend oneself against misinformation (inaccurate or misleading claims) and targeted disinformation (malicious and objectively false claims designed to deceive). Voters should consider moving away from social media to traditional, trusted sources for information on candidates during the election season. Using sites such as FactCheck.org will help counter some of the strongest distortion tools. But to truly bust a myth, it’s important to understand who—or what—is creating the fables.

A trickle to a geyser

As misinformation from past election seasons shows, political interference campaigns thrive at scale—which is why the volume and speed of AI-fueled creation worries experts. OpenAI’s ChatGPT and similar services have made generating written content easier than ever. These software tools can create ad scripts as well as bogus news stories and opinions that pull from seemingly legitimate sources. 

“We’ve lowered the barriers of entry to basically everybody,” says Darrell M. West, a senior fellow at the Brookings Institution who writes regularly about the impacts of AI on governance. “It used to be that to use sophisticated AI tools, you had to have a technical background.” Now anyone with an internet connection can use the technology to generate or disseminate text and images. “We put a Ferrari in the hands of people who might be used to driving a Subaru,” West adds.

Political campaigns have used AI since at least the 2020 election cycle to identify fundraising audiences and support get-out-the-vote efforts. An increasing concern is that more advanced iterations could also be used to automate robocalls that impersonate the candidate supposedly on the other end of the line.

At a US congressional hearing in May, Sen. Richard Blumenthal of Connecticut played an audio deepfake his office made—using a script written by ChatGPT and audio clips from his public speeches—to illustrate AI’s efficacy and argue that it should not go unregulated. 

At that same hearing, OpenAI’s own CEO, Sam Altman, said misinformation and targeted disinformation, aimed at manipulating voters, were what alarmed him most about AI. “We’re going to face an election next year and these models are getting better,” Altman said, agreeing that Congress should institute rules for the industry.

Monetizing bots and manipulation

AI may appeal to campaign managers because it’s cheap labor. Virtually anyone can be a content writer—as in the case of OpenAI, which trained its models by using underpaid workers in Kenya. The creators of ChatGPT wrote in 2019 that they worried about the technology lowering the “costs of disinformation campaigns” and supporting “monetary gain, a particular political agenda, and/or a desire to create chaos or confusion,” though that didn’t stop them from releasing the software.

Algorithm-trained systems can also assist in the spread of disinformation, helping code bots that bombard voters with messages. Though the AI programming method is relatively new, the technique as a whole is not: A third of pro-Trump Twitter traffic during the first presidential debate of 2016 was generated by bots, according to an Oxford University study from that year. A similar tactic was also used days before the 2017 French presidential election, with social media imposters “leaking” false reports about Emmanuel Macron.

Such fictitious reports could include fake videos of candidates committing crimes or making made-up statements. In response to the recent RNC political ad against Biden, Sam Cornale, the Democratic National Committee’s executive director, wrote on X (formerly Twitter) that reaching for AI tools was partly a consequence of the decimation of the Republican “operative class.” But the DNC has also sought to develop AI tools to support its candidates, primarily for writing fundraising messages tailored to voters by demographic.

The fault in our software

Both sides of the aisle are poised to benefit from AI—and abuse it—in the coming election, continuing a tradition of political propaganda and smear campaigns that can be traced back to at least the 16th century and the “pamphlet wars.” But experts believe that modern dissemination strategies, if left unchecked, are particularly dangerous and can hasten the demise of representative governance and fair elections free from intimidation. 

“What I worry about is that the lessons we learned from other technologies aren’t going to be integrated into the way AI is developed,” says Alice E. Marwick, a principal investigator at the Center for Information, Technology, and Public Life at the University of North Carolina at Chapel Hill. 

AI often has biases—especially against marginalized genders and people of color—that can echo the mainstream political talking points that already alienate those communities. AI developers could learn from the ways humans misuse their tools to sway elections and then use those lessons to build algorithms that can be held in check. Or they could create algorithmic tools to verify and fight the false-info generators. OpenAI predicted the fallout. But it may also have the capacity to lessen it.

The post Why AI could be a big problem for the 2024 presidential election appeared first on Popular Science.

NASA wants to use AI to study unidentified aerial phenomenon https://www.popsci.com/technology/nasa-uap-report-findings/ Thu, 14 Sep 2023 15:00:00 +0000 https://www.popsci.com/?p=570329
A weather balloon against blue sky
Relax, it's just a weather balloon over Cape Canaveral, Florida. NASA

'We don't know what these UAP are, but we're going to find out. You bet your boots,' says NASA Administrator Bill Nelson.

This post has been updated.

A new NASA-commissioned independent study report recommends leveraging NASA’s expertise and public trust alongside artificial intelligence to investigate unidentified aerial phenomena (UAP) on Earth. As such, NASA Administrator Bill Nelson announced today the appointment of a NASA Director of UAP Research to develop and oversee the implementation of those investigation efforts.

“The director of UAP Research is a pivotal addition to NASA’s team and will provide leadership, guidance and operational coordination for the agency and the federal government to use as a pipeline to help identify the seemingly unidentifiable,” Nicola Fox, associate administrator of the Science Mission Directorate at NASA, said in a release.

Although NASA officials repeated multiple times that the study found no evidence of extraterrestrial origin, they conceded they still “do not know” the explanation behind at least some documented UAP sightings. Nelson stressed the agency’s aim to minimize the public stigma surrounding UAP events and to shift the subject “from sensationalism to science.” In keeping with this strategy, the panel relied solely on unclassified, open-source UAP data to ensure all findings could be shared openly and freely with the public.

[Related: Is the truth out there? Decoding the Pentagon’s latest UFO report.]

“We don’t know what these UAP are, but we’re going to find out,” Nelson said at one point. “You bet your boots.”

According to today’s public announcement, the study team additionally recommends NASA utilize its “open-source resources, extensive technological expertise, data analysis techniques, federal and commercial partnerships, and Earth-observing assets to curate a better and robust dataset for understanding future UAP.”

Composed of 16 community experts across various disciplines, the UAP study team was first announced in June of last year and began its work in October. In May 2023, representatives from the study team expressed frustration with the fragmentary nature of available UAP data.

“The current data collection efforts regarding UAPs are unsystematic and fragmented across various agencies, often using instruments uncalibrated for scientific data collection,” study chair David Spergel, an astrophysicist and president of the nonprofit science organization the Simons Foundation, said at the time. “Existing data and eyewitness reports alone are insufficient to provide conclusive evidence about the nature and origin of every UAP event.”

Today’s report notes that although AI and machine learning have become “essential tools” for identifying rare occurrences and outliers within vast datasets, “UAP analysis is more limited by the quality of data than by the availability of techniques.” After reviewing how neural networks are used in astronomy, particle physics, and other sciences, the panel determined that the same techniques could be adapted to UAP research—but only if the quality of the underlying datasets improves and data collection is standardized. Encouraging rigorous data collection standards and methodologies will be crucial to ensuring reliable, evidence-based UAP analysis.
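The panel’s point about machine learning excelling at spotting rare events describes a standard anomaly-detection setup. As a minimal illustration, the sketch below runs scikit-learn’s IsolationForest over made-up per-sighting features; the feature names and numbers are invented for demonstration and come from no real UAP dataset.

# Toy anomaly-detection sketch; the data is synthetic, not real UAP telemetry.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical per-sighting features: apparent speed (m/s), altitude (m), observation duration (s).
ordinary = rng.normal(loc=[60.0, 3000.0, 20.0], scale=[15.0, 800.0, 5.0], size=(500, 3))
oddballs = np.array([
    [900.0, 150.0, 2.0],    # implausibly fast, very low, very brief
    [5.0, 18000.0, 600.0],  # nearly stationary, very high, very long
])
observations = np.vstack([ordinary, oddballs])

# Fit an isolation forest and flag the rows it considers outliers (labeled -1).
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(observations)

print("Flagged as outliers:", np.flatnonzero(labels == -1))

As the report stresses, a model like this is only as useful as the calibration and consistency of the measurements fed into it.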

[Related: You didn’t see a UFO. It was probably one of these things.]

And although no evidence suggests extraterrestrial intelligence is behind documented UAP sightings, Nelson made his own inclination clear during NASA’s press conference: “Do I believe there is life in the universe? My personal opinion is, yes.”

The post NASA wants to use AI to study unidentified aerial phenomenon appeared first on Popular Science.
