Personal technology is getting a bad rap these days. It keeps getting more addictive: Notifications keep us glued to our phones. Autoplaying episodes lure us into Netflix binges. Social awareness cues—like the "seen-by" list on Instagram Stories—enslave us to obsessive, ouroboric usage patterns. (Blink twice if you've ever closed Instagram, only to re-open it reflexively.) Our devices, apps, and platforms, experts increasingly warn, have been engineered to capture our attention and ingrain habits that are (it seems self-evident) less than healthy.

Unless, that is, you're talking about fitness trackers. For years, the problem with Fitbits, Garmins, Apple Watches, and their ilk has been that they aren't addictive enough. About one third of people who buy fitness trackers stop using them within six months, and more than half eventually abandon them altogether.

As for that guy at work whose Fitbit appears to be bionically integrated with his wrist, it's unclear whether wearing the thing actually makes him more fit. Most studies on the effectiveness of fitness trackers have produced weak or inconclusive findings (blame short investigation windows and small, homogeneous sample sizes). In fact, two of the most well-designed studies to date have turned up less-than-stellar results.

The first, a randomized controlled trial involving 800 test subjects, was conducted between June 2013 and August 2014. The results, which were published last year in The Lancet Diabetes & Endocrinology, found that, after one year of use, a clip-on activity tracker had no effect on test subjects' overall health and fitness—even when it was combined with a financial incentive. (In a perverse twist, volunteers whose incentives were removed six months into the study fared worse, in the long run, than those who were never offered them at all.) The second, an RCT out of the University of Pittsburgh conducted between October 2010 and October 2012, examined whether combining a weight loss program with a fitness tracker, worn on the upper arm, could help test subjects lose more weight or improve their overall health. The results, published last year in the Journal of the American Medical Association, showed that subjects without fitness trackers lost more weight than their gadget-wearing counterparts—a difference of about eight pounds. And while it's true that weight is not a great proxy for health, the findings also showed that the test subjects with fitness trackers were no more active or fit than those without.

All of which is, frankly, pretty embarrassing for companies that manufacture fitness devices—not to mention disquieting for the people who wear them.

And yet, none of this means you should ditch your fancy new fitness tracker. Have companies like Fitbit and Garmin been slow to incorporate sticky features into their products? Yes. Unequivocally. By 2013—the year Apple brought attention-enslaving push notifications to its phones’ lock screens, and around the time the Lancet study was getting off the ground—fitness trackers and their accompanying apps had only just begun to leverage theories from psychology and behavioral economics. But today's products are different.

The fact is, most existing studies on fitness trackers—including the two I cited above—hinge on devices that are several years old. (Think glorified pedometers that don't connect seamlessly with the supercomputer in your pocket.) And while peer-reviewed research on the latest wave of workout gadgets is still sparse, signs suggest newer wearables are finally becoming more addictive.

For starters, wearable fitness trackers themselves have turned into wildly capable machines. It's no longer enough to measure steps and active minutes; features like sleep-tracking and 24/7 heart rate monitoring have also become table stakes. So, too, have the beefy batteries necessary to make features like continuous heart-rate detection worth a damn. Fitbit's newest "motivating timepiece," the Ionic, can go four days between charges. The Fenix 5, Garmin's flagship fitness watch, can last up to two weeks.

"If it's comfortable, it's waterproof, the display's always readable, and it's got a long battery life, there's less excuse for people to take it off," says Phil McClendon, Garmin's lead product manager. For technology companies, few metrics matter more than engagement. Application developers call it time in app. Online publishers (like WIRED!) call it time on site. Wearable manufacturers are all about that time on wrist.

The software's gotten better, too, along with user experience. Collecting information is one thing. Presenting it in a way people find comprehensible, motivating, and actionable is another. Consider something as simple as a reminder to move—another feature ubiquitous among newer fitness watches. Buzzing people once an hour, regardless of their current activity, is annoying (if my device tells me to get up and move while I'm on a hike, it's going off a cliff). Instead, most wearables now tell you to move only if you've been sedentary for more than a predetermined period of time. And according to Fitbit, at least, those reminders work. "People who would get six reminders to move a day, on average, after a few months, they get about 40 percent fewer reminders to move," says Shelton Yuen, Fitbit's vice president of research. "That’s a very detailed example, but I feel like it’s such an important one, because it means the user's innate behavior is changing."
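
As a rough illustration of that logic, here is a minimal sketch; the idle threshold, step counts, and function names are assumptions for the example, not Fitbit's actual implementation:

```python
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(minutes=50)   # assumed "sedentary too long" window
MIN_STEPS = 250                      # assumed steps needed to count as having moved

def should_remind(step_log, now):
    """Buzz only if the wearer has barely moved during the idle window.

    step_log: list of (timestamp, steps_in_that_minute) tuples.
    """
    recent = [steps for ts, steps in step_log if now - ts <= IDLE_LIMIT]
    return sum(recent) < MIN_STEPS

# Example: no steps logged in the past hour, so the watch would buzz.
now = datetime(2018, 4, 2, 14, 0)
log = [(now - timedelta(minutes=m), 0) for m in range(60)]
print(should_remind(log, now))  # True
```

A real device would presumably also suppress the buzz when it detects an ongoing workout, which addresses the mid-hike scenario above.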

    More on Fitness

  • Erin Griffith

    When Your Activity Tracker Becomes a Personal Medical Device

  • Nick Stockton

    What's Up With That: Why Running Hurts Every Part of Your Body

  • Robbie Gonzalez

    Why You'll Never Run a Sub 2 Hour Marathon—But the Pros Might

Of course, Fitbit would say that. But outside experts agree that fitness tech is improving. "There are two things, specifically, that apps and devices are actually getting better at," says University of Pennsylvania researcher Mitesh Patel, who studies whether and how wearable devices can facilitate improvements in health. The first is leveraging social networks to stoke competition or foster support. Researchers led by Penn State psychologist Liza Rovniak recently showed support networks to be highly effective at increasing physical activity in unmotivated adults, but Patel suspects the leaderboard format, a popular way of promoting competition by ranking users, fails to inspire anyone but those people at the top of the charts (who probably need the least encouragement anyway).

The second is goal setting. "We know that people need to strive for an achievable goal in order to change their behavior," Patel says, the operative word there being "achievable." The problem with early fitness trackers was that they all used the same goal (step count) and they all set the bar way too high (10,000 steps). But the average American takes just 5,000 steps a day. Asking her to double that figure isn't just unrealistic—it can actually be discouraging.

But today's fitness wearables tailor their feedback to users' individual habits. Rather than tell you to take 10,000 steps, Garmin's Insights feature will nudge you if it senses you're moving less than you usually do on a given day of the week. Fitbit now allows users to set and track personalized goals related to things like weight and cardiovascular fitness.
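
A toy version of that kind of personalized nudge might look like the following; Garmin has not published how Insights actually works, so the baseline logic and the numbers here are invented for illustration:

```python
from statistics import mean

def weekday_baseline(history, weekday):
    """Average daily steps for a given weekday (0 = Monday) over past weeks."""
    totals = [steps for day, steps in history if day == weekday]
    return mean(totals) if totals else None

def should_nudge(history, weekday, steps_today, fraction=0.7):
    """Nudge when today is tracking well below the user's own norm for this weekday."""
    baseline = weekday_baseline(history, weekday)
    return baseline is not None and steps_today < fraction * baseline

# Example: someone who usually logs ~9,000 steps on Tuesdays is at 4,000 late in the day.
history = [(1, 9200), (1, 8800), (1, 9100), (1, 8900)]
print(should_nudge(history, weekday=1, steps_today=4000))  # True
```

The point of pegging the threshold to the user's own history, rather than a universal 10,000 steps, is that the goal stays achievable for the person being nudged.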

These are just some of the ways wearable manufacturers have begun borrowing theories from psychology and behavioral economics to motivate users in recent years—and there will be more to come. "They're constantly adding features," says Brandeis University psychologist Alycia Sullivan, a researcher at the Boston Roybal Center for Active Lifestyle Interventions and coauthor of a recent review of fitness tracker motivation strategies. Now that these devices are small, powerful, and packed with sensors, she says, expect most of those features to show up on the software side of things. "That's where these companies are most able to leverage the data they're accumulating toward interactive, personalized information you'll actually use."

It may have taken them a while to catch up with the Facebooks and Netflixes of the world, but our fitness devices are finally poised to hijack our brains—and bodies—for good.

The Case of the Missing Dark Matter

March 20, 2019

Physicists don’t know much about dark matter. They can’t agree on what it’s made of, how much a single particle weighs, or the best way to construct a Play-Doh diorama of it. (How would you do it? Dark matter is invisible—light doesn’t interact with it at all.) Nobody has ever caught a dark matter particle on Earth.

But after 30-plus years of telescope observations, most researchers do agree on one thing: The universe contains a lot of it. Astrophysicists think dark matter outweighs ordinary matter in the universe by more than five to one, because galaxies rotate too fast for their visible star-stuff to handle. Without the extra dark matter holding them together, the laws of physics say that these galaxies would fall apart—the Milky Way, for example, rotates so fast that it must contain 30 times more dark matter than ordinary matter. In fact, every galaxy that astronomers have ever studied contains dark matter.
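
The back-of-the-envelope version of that argument is a standard textbook estimate rather than anything specific to the new study: a star orbiting at radius r with speed v needs gravity to supply its centripetal acceleration, which fixes the mass enclosed within its orbit.

```latex
\frac{v^2}{r} = \frac{G\,M(<r)}{r^2}
\qquad\Longrightarrow\qquad
M(<r) = \frac{v^2\, r}{G}
```

If the measured speeds stay high far from a galaxy's center, the enclosed mass keeps climbing long after the visible stars and gas have thinned out, and the surplus is attributed to dark matter.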

Until now.

An international team of astrophysicists has discovered a galaxy 65 million light years away with so little dark matter that it may contain none at all. To arrive at this conclusion, they measured the speeds of 10 twinkly blobs in the galaxy, called globular clusters, that each contain millions of stars. Their measurements showed that this galaxy’s stars can handle its rotational speed. Compared to other galaxies of the same brightness, “it has at least 400 times less dark matter than what we expected,” says astrophysicist Pieter van Dokkum of Yale University.

This is weird—and it could change what astrophysicists think dark matter is, in addition to upending their understanding of how galaxies form, says van Dokkum. Right now, they think that galaxies form around a scaffolding made of dark matter. The stars only take shape on top of the dark matter that is already there. “Dark matter accumulates; ordinary gas falls into it; it turns into stars, and then you get a galaxy,” says astrophysicist Jeremiah Ostriker of Columbia University, who was not involved in the work.

“Finding a galaxy without dark matter is an oxymoron,” says van Dokkum. It’s like finding a body without a skeleton. “How do you form such a thing? How do you create a galaxy without dark matter first?”

However, it’s still too early to throw out the old rules, says astrophysicist James Bullock of the University of California, Irvine. He points out that the galaxy, memorably named NGC1052-DF2, is orbiting another one. It’s possible that this galaxy formed on top of dark matter just like any other, and the neighboring galaxy stripped the dark matter away, he says.

To imagine this process, you can visualize dark matter as a diffuse collection of individual particles—unlike ordinary matter, which clumps into stars and planets. “It’s better to think of it as a fluid, like a sea of dark matter,” says Bullock. The leading dark matter theory predicts that this “sea” of particles moves around a galaxy in deep, plunging orbits like comets around the sun. Bullock thinks that as the dark matter particles reached the extremes of their orbits, forces from the neighboring galaxy could have ripped them away.

The next step is to figure out whether this galaxy is an exception or the norm, says Ostriker. If astrophysicists find more similar galaxies, they’ll have to revise their current theories about dark matter. The leading theory—that dark matter consists of so-called weakly interacting massive particles, each slightly heavier than a proton—would not be able to explain the existence of many dark matter-less galaxies.

Other theories might work better. For example, Ostriker has proposed a theory in which dark matter particles are more than 10^30 times lighter than WIMPs, and that theory does predict that certain galaxies should have extremely low amounts of dark matter.

If the current theory is wrong, that will also affect the strategies of the experiments trying to catch dark matter particles on Earth, says Bullock. These collaborations, such as the LUX-ZEPLIN experiment in South Dakota, the XENON1T experiment in Italy, and the ADMX experiment in Washington, are trying to figure out what dark matter is actually made of, and they look to astronomical observations to guide their detector designs. LUX-ZEPLIN and XENON1T both use liquid xenon to hunt for WIMPs. ADMX looks for another candidate known as an axion, which is lighter than a WIMP and requires a different type of detector.

Van Dokkum and his team plan to keep searching for similar galaxies—or just any other weird thing that challenges the current understanding of dark matter. In 2016, they found the opposite of this galaxy—one that was rotating so fast that they concluded it was 99.99 percent dark matter. “That object was a surprise in the other direction,” he says. They don’t know how that galaxy formed, either.

They’re hoping that these weird objects will help guide theorists like Ostriker and Bullock to better understand what dark matter is. “We know so little about dark matter that any new constraint is welcome,” says van Dokkum. Even if it means throwing away what little they have.

Dark Matter

  • To build their dark matter detectors, physicists wade into the wild, speculative xenon gas market

  • After discovering gravitational waves, physicists gird themselves for the next discovery

  • Turns out, dark matter detectors can be re-purposed for nuclear security

For 20 years, an experiment in Italy known as DAMA has detected an oscillating signal that could be coming from dark matter—the fog of invisible particles that ostensibly fill the cosmos, sculpting everything else with their gravity.

One of the oldest and biggest experiments hunting for dark matter particles, DAMA is alone in claiming to see them. It purports to pick up on rare interactions between the hypothesized particles and ordinary atoms. But if these dalliances between the visible and invisible worlds really do produce DAMA’s data, several other experiments would probably also have detected dark matter by now. They have not.

Late last month, Rita Bernabei of the University of Rome Tor Vergata, DAMA’s longtime leader, presented the results of an additional six years of measurements. She reported that DAMA’s signal looks as strong as ever. But researchers not involved with the experiment have since raised serious arguments against dark matter as the signal’s source.

DAMA searches for popular dark matter candidates called WIMPs, or “weakly interacting massive particles.” The scientists monitor an array of sodium iodide crystals kept deep under Gran Sasso Mountain in the Apennines, looking for flashes of radiation that could be caused by dark matter particles colliding with atomic nuclei in the crystals. As the solar system hurtles through the galaxy, “it looks like a wind of WIMPs coming at you,” explained Katherine Freese, a physicist at the University of Michigan who in 1986 co-developed the idea for such an experiment, “in the same way that when you’re driving it looks like the rain is coming into your windshield.”

Exactly in line with this hypothesis, the DAMA scientists find that nuclear activity in their crystals varies throughout the year. The signal always peaks in June, when Earth is moving fastest through the dark-matter-filled galaxy, and troughs in December, when the planet curves into the leg of its orbit that opposes the sun’s motion around the galaxy, slowing us relative to the dark matter wind.
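
In the standard analysis (the conventional way such a signal is parameterized, not something unique to the new data), the event rate in a given energy window is fit to a constant plus a cosine with a one-year period:

```latex
R(t) = S_0 + S_m \cos\!\left(\frac{2\pi\,(t - t_0)}{T}\right),
\qquad T = 1\ \text{year}, \quad t_0 \approx \text{June 2}
```

A dark matter signal should appear as a nonzero modulation amplitude S_m with exactly that period and phase, riding on top of the unmodulated rate S_0.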

The latest run of the experiment, called DAMA/LIBRA-phase 2, began in 2011. After taking data for six earthly orbits, the team reports that they continue to see a seasonal signal consistent with dark matter. As Bernabei told Quanta by email, “The annual modulation signature and the adopted procedures assure sensitivity to a plethora of possible dark matter candidates.”

Outside experts saw otherwise. In a paper posted April 4 to the physics preprint site arxiv.org, three physicists showed that a standard dark-matter WIMP cannot produce the new DAMA signal. “The vanilla one that everybody loves—that one’s gone,” said Freese, who coauthored the new paper with her student Sebastian Baum and Chris Kelso of the University of North Florida.

Freese and colleagues focused on a new feature of the DAMA data. As part of the DAMA/LIBRA-phase 2 upgrade, the team at Gran Sasso switched out hardware to make their detectors sensitive to lower-energy excitations inside the sodium iodide crystals. Bernabei reported an annual modulation in lower-energy nuclear recoils that was broadly similar to the signal for higher-energy recoils.

But if a vanilla WIMP were really the source of the annual modulation, the low-energy recoils should change relative to the high-energy recoils, wrote Freese and her coauthors. They found that nuclear activity should vary between June and December either much more dramatically, or much less so, at low energies than at high energies, depending on whether dark matter particles are lightweight or heavy. If WIMPs are light, DAMA should see them colliding with light sodium atoms at low energies much more often than with heavy iodine atoms. Overall, DAMA’s signal should be strongest for the very-lowest-energy recoils. Alternatively, heavy WIMPs will interact almost exclusively with iodine atoms at low energies and very little with sodium. Overall, in that case, the signal will weaken as you look at the lowest-energy events.

Instead, neither shift is seen in the DAMA/LIBRA-phase 2 data, “which is difficult to explain with dark matter,” said Jonathan Davis, a theoretical physicist at King’s College London.

In their paper, Baum, Freese and Kelso show that WIMPs can still generate the observed annual modulation if they have a twist: an innate preference for protons over neutrons that will lead them to interact more often with sodium than iodine (which has more neutrons). However, several physicists said this special “isospin-violating” property probably would have affected the results of other dark matter experiments, such as XENON1T, a 3.2-ton liquid xenon detector also located under Gran Sasso, which has seen no such effect.

The eerie silence in XENON1T, and in other catchily named dark matter detectors like LUX and PICO, had already dimmed many experts’ hopes about DAMA. These experiments, which look for telltale nuclear activity in different types of materials, have published a string of null results, ruling out large classes of WIMPs that would be compatible with DAMA’s signal. (Other dark matter candidates, such as axions, can’t be tested by these experiments.)

It had been possible, however, to think dark matter might just have an unexplained affinity for sodium iodide. But the April 4 analysis changes that. “What they show in this paper nicely is that … you can exclude DAMA with itself—not with reference to other experiments,” said Laura Baudis, a physicist at the University of Zurich and a member of the collaboration that runs XENON1T.

Hard as it is to account for the DAMA signal using dark matter, it’s equally difficult to understand it any other way. For decades, experts have mulled over more mundane explanations. “Several have been put forward and rapidly dispelled,” said Juan Collar, a physicist at the University of Chicago who leads the CoGeNT dark matter experiment. “I personally cannot come up with a good explanation.”

Davis argued in Physical Review Letters in 2014 that the annual modulation comes from a combination of muons bombarding Earth most heavily in July and solar neutrinos peaking in January. But other physicists quickly showed that the latter seasonal effect is too small to produce the signal, at least in the way he had proposed. In a new paper that’s causing some buzz, Daniel McKinsey, a physicist at the University of California, Berkeley, contends that the signal could come from argon contamination. Certain isotopes of argon radioactively decay more or less depending on the season. Yet this explanation works only if the nitrogen DAMA uses in one step of their procedure contains argon, which is unknown.

Many researchers said that a lack of transparency by Bernabei and the DAMA team has slowed efforts to understand what’s going on. For example, one limitation of Freese and coauthors’ analysis is that DAMA hasn’t released information about whether background effects amplify or diminish at lower energies, leaving outside researchers to assume that these effects have been corrected for already.

“I am certain that if they completely opened up to the community,” such as by sharing their data, “we would get to the bottom of what is causing the annual modulation,” Davis said. But the DAMA scientists present only their finalized data plots, argue strongly that their signal is evidence for dark matter, and take a “combative approach” to anyone who suggests otherwise, he said.

Other groups have had to step up. In the next few years, three new sodium iodide crystal experiments will start yielding results: ANAIS, COSINE-100 and SABRE, which has locations at Gran Sasso and at an underground lab in Australia. By replicating the experiment in the Southern Hemisphere, where summer and winter flip, SABRE will eliminate seasonal effects.

“For as long as physics remains an experimental science, the only way to advance knowledge will be through improved measurements,” Collar said. The complementary experiments either won’t see a modulation, in which case “we will have to turn the page and write the DAMA anomaly off,” he said. Or they’ll see something akin to it, in which case “we will have to work hard on a dark matter model that also explains all null observations with other detector materials.”

“Experience teaches that it will most likely not be dark matter,” said Neal Weiner, a theoretical physicist at New York University. “But I’m certainly persuaded enough that there’s a real chance it’s something interesting.”

Ultimately, said Freese, the new data release and her group’s analysis don’t change the big picture. “You still have to go build detectors made of the same material, but with different people doing it, and in different locations in the world … to figure out what the hell’s going on with DAMA,” she said. “Because nobody understands DAMA.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Update: SpaceX successfully launched its Falcon 9 rocket on a mission to resupply the International Space Station.

Another day, another SpaceX launch. On Monday at 4:30 pm ET, the commercial space company is slated to propel a previously flown Dragon cargo ship into low Earth orbit aboard a used Falcon 9 rocket. The Dragon spacecraft, which is carrying food and supplies for the International Space Station, is scheduled to dock with the orbital outpost on Wednesday.

The cargo run will mark SpaceX’s 14th resupply mission to the ISS, its seventh Falcon 9 launch this year, and its second such launch in less than a week. Last Friday, one year to the day after launching and landing a used rocket for the first time, the company deployed a used Falcon 9 rocket to send 10 Iridium telecommunications satellites into orbit from Vandenberg Air Force Base in California. (SpaceX has now launched five missions for Iridium, but it has only used three boosters—the power of recycling!)

On the other side of the country, the Dragon capsule currently awaiting liftoff from Pad 40 at Florida's Cape Canaveral Air Force Station is loaded with close to three tons of grub, gear, and research equipment. Among these are the Atmosphere-Space Interactions Monitor—a suite of optical cameras, photometers, and an X- and gamma-ray detector designed to study upper-atmospheric lightning and its relationship to Earth's climate, and a nice complement to the Geostationary Lightning Mappers aboard NASA's next-generation weather satellites. Also aboard are the Veggie Passive Orbital Nutrient Delivery System (an experimental method for growing food in microgravity), and the Multi-use Variable-g Platform, aka "MVP," a temperature- and humidity-controlled artificial gravity machine about the size of a microwave, with space inside for a range of biological samples, from cells to fish to flatworms.

About 10 minutes after Monday's launch, the Dragon will deploy its solar arrays and begin firing its thrusters with its sights set on the ISS. From aboard the space station, NASA astronaut Scott Tingle will assist Japanese astronaut Norishige Kanai in capturing the Dragon capsule with Canadarm2, the 58-foot-long robotic grappling arm, and mating it to the station's Harmony module. There, it will spend approximately one month before detaching and returning to Earth.

But the most noteworthy thing about Monday's resupply mission isn’t the launch itself. It won't be SpaceX's first time using a recycled rocket in a resupply mission. Neither will it be its first time lofting a previously flown Dragon capsule, let alone its first time reusing both major components simultaneously.

In truth, the most significant thing about the day's cargo run may be that there is little novelty to it whatsoever. Which is, of course, SpaceX's grand vision: To make rocket launches repeatable, reliable, quotidian. To achieve that vision, it'll need to up its launch cadence, which it appears to be doing: In 2017, the company launched 18 rockets. If Monday's liftoff goes as planned, it'll have seven on the year already, putting it well on track to exceed last year's high-water mark.

Up, Up, and Away

  • Elon Musk's long-term plan for SpaceX is to get humans off of Earth and on to Mars—but what does the company's recent progress say about that goal?

  • His plan to launch thousands of small satellites for faster internet is ambitious, too. But that may not be all SpaceX wants to use them for.

  • Oh, and then there's the kookoobananas plan to transport humans between Earth cities on rockets. Sure, man, sure.

Half a century ago, the pioneers of chaos theory discovered that the “butterfly effect” makes long-term prediction impossible. Even the smallest perturbation to a complex system (like the weather, the economy or just about anything else) can touch off a concatenation of events that leads to a dramatically divergent future. Unable to pin down the state of these systems precisely enough to predict how they’ll play out, we live under a veil of uncertainty.

But now the robots are here to help.

In a series of results reported in the journals Physical Review Letters and Chaos, scientists have used machine learning—the same computational technique behind recent successes in artificial intelligence—to predict the future evolution of chaotic systems out to stunningly distant horizons. The approach is being lauded by outside experts as groundbreaking and likely to find wide application.

“I find it really amazing how far into the future they predict” a system’s chaotic evolution, said Herbert Jaeger, a professor of computational science at Jacobs University in Bremen, Germany.

The findings come from veteran chaos theorist Edward Ott and four collaborators at the University of Maryland. They employed a machine-learning algorithm called reservoir computing to “learn” the dynamics of an archetypal chaotic system called the Kuramoto-Sivashinsky equation. The evolving solution to this equation behaves like a flame front, flickering as it advances through a combustible medium. The equation also describes drift waves in plasmas and other phenomena, and serves as “a test bed for studying turbulence and spatiotemporal chaos,” said Jaideep Pathak, Ott’s graduate student and the lead author of the new papers.

After training itself on data from the past evolution of the Kuramoto-Sivashinsky equation, the researchers’ reservoir computer could then closely predict how the flamelike system would continue to evolve out to eight “Lyapunov times” into the future, eight times further ahead than previous methods allowed, loosely speaking. The Lyapunov time represents how long it takes for two almost-identical states of a chaotic system to exponentially diverge. As such, it typically sets the horizon of predictability.
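
Concretely, and as a standard definition rather than something introduced in these papers: start two copies of a chaotic system a tiny distance apart and the gap between them grows roughly exponentially,

```latex
|\delta(t)| \;\approx\; |\delta_0|\, e^{\lambda t}
```

where λ is the largest Lyapunov exponent. The Lyapunov time is the reciprocal timescale 1/λ, so after a handful of Lyapunov times even a minuscule measurement error comes to dominate the forecast.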

“This is really very good,” Holger Kantz, a chaos theorist at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, said of the eight-Lyapunov-time prediction. “The machine-learning technique is almost as good as knowing the truth, so to say.”

The algorithm knows nothing about the Kuramoto-Sivashinsky equation itself; it only sees data recorded about the evolving solution to the equation. This makes the machine-learning approach powerful; in many cases, the equations describing a chaotic system aren’t known, crippling dynamicists’ efforts to model and predict them. Ott and company’s results suggest you don’t need the equations—only data. “This paper suggests that one day we might be able perhaps to predict weather by machine-learning algorithms and not by sophisticated models of the atmosphere,” Kantz said.

Besides weather forecasting, experts say the machine-learning technique could help with monitoring cardiac arrhythmias for signs of impending heart attacks and monitoring neuronal firing patterns in the brain for signs of neuron spikes. More speculatively, it might also help with predicting rogue waves, which endanger ships, and possibly even earthquakes.

Ott particularly hopes the new tools will prove useful for giving advance warning of solar storms, like the one that erupted across 35,000 miles of the sun’s surface in 1859. That magnetic outburst created aurora borealis visible all around the Earth and blew out some telegraph systems, while generating enough voltage to allow other lines to operate with their power switched off. If such a solar storm lashed the planet unexpectedly today, experts say it would severely damage Earth’s electronic infrastructure. “If you knew the storm was coming, you could just turn off the power and turn it back on later,” Ott said.

He, Pathak and their colleagues Brian Hunt, Michelle Girvan and Zhixin Lu (who is now at the University of Pennsylvania) achieved their results by synthesizing existing tools. Six or seven years ago, when the powerful algorithm known as “deep learning” was starting to master AI tasks like image and speech recognition, they started reading up on machine learning and thinking of clever ways to apply it to chaos. They learned of a handful of promising results predating the deep-learning revolution. Most importantly, in the early 2000s, Jaeger and fellow German chaos theorist Harald Haas made use of a network of randomly connected artificial neurons—which form the “reservoir” in reservoir computing—to learn the dynamics of three chaotically coevolving variables. After training on the three series of numbers, the network could predict the future values of the three variables out to an impressively distant horizon. However, when there were more than a few interacting variables, the computations became impossibly unwieldy. Ott and his colleagues needed a more efficient scheme to make reservoir computing relevant for large chaotic systems, which have huge numbers of interrelated variables. Every position along the front of an advancing flame, for example, has velocity components in three spatial directions to keep track of.

It took years to strike upon the straightforward solution. “What we exploited was the locality of the interactions” in spatially extended chaotic systems, Pathak said. Locality means variables in one place are influenced by variables at nearby places but not by places far away. “By using that,” Pathak explained, “we can essentially break up the problem into chunks.” That is, you can parallelize the problem, using one reservoir of neurons to learn about one patch of a system, another reservoir to learn about the next patch, and so on, with slight overlaps of neighboring domains to account for their interactions.

Parallelization allows the reservoir computing approach to handle chaotic systems of almost any size, as long as proportionate computer resources are dedicated to the task.
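
A minimal sketch of that decomposition, with the patch size, overlap, and variable names chosen arbitrarily for illustration rather than taken from the papers:

```python
import numpy as np

def split_into_patches(state, patch_size, overlap):
    """Split a 1-D spatial state into overlapping patches.

    Each patch would get its own reservoir; the overlap lets neighboring
    reservoirs see the boundary values they need to capture local interactions.
    """
    patches = []
    n = len(state)
    for start in range(0, n, patch_size):
        lo = max(0, start - overlap)
        hi = min(n, start + patch_size + overlap)
        patches.append(state[lo:hi])
    return patches

# Example: a 512-point flame front split into 8 patches of 64 points,
# each padded with 8 neighboring points on either side.
front = np.random.rand(512)
patches = split_into_patches(front, patch_size=64, overlap=8)
print([len(p) for p in patches])
```

Each chunk is small enough for a single reservoir to learn, and the slight overlaps carry the information that couples one patch to the next.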

Ott explained reservoir computing as a three-step procedure. Say you want to use it to predict the evolution of a spreading fire. First, you measure the height of the flame at five different points along the flame front, continuing to measure the height at these points on the front as the flickering flame advances over a period of time. You feed these data streams into randomly chosen artificial neurons in the reservoir. The input data triggers the neurons to fire, triggering connected neurons in turn and sending a cascade of signals throughout the network.

The second step is to make the neural network learn the dynamics of the evolving flame front from the input data. To do this, as you feed data in, you also monitor the signal strengths of several randomly chosen neurons in the reservoir. Weighting and combining these signals in five different ways produces five numbers as outputs. The goal is to adjust the weights of the various signals that go into calculating the outputs until those outputs consistently match the next set of inputs—the five new heights measured a moment later along the flame front. “What you want is that the output should be the input at a slightly later time,” Ott explained.

To learn the correct weights, the algorithm simply compares each set of outputs, or predicted flame heights at each of the five points, to the next set of inputs, or actual flame heights, increasing or decreasing the weights of the various signals each time in whichever way would have made their combinations give the correct values for the five outputs. From one time-step to the next, as the weights are tuned, the predictions gradually improve, until the algorithm is consistently able to predict the flame’s state one time-step later.

“In the third step, you actually do the prediction,” Ott said. The reservoir, having learned the system’s dynamics, can reveal how it will evolve. The network essentially asks itself what will happen. Outputs are fed back in as the new inputs, whose outputs are fed back in as inputs, and so on, making a projection of how the heights at the five positions on the flame front will evolve. Other reservoirs working in parallel predict the evolution of height elsewhere in the flame.
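
Here is a minimal, self-contained sketch of those three steps for a single low-dimensional series. It is illustrative only: the reservoir size, spectral scaling, and ridge parameter are arbitrary choices, the toy data is a Lorenz-system coordinate rather than the Kuramoto-Sivashinsky equation, and the real work used far larger, parallelized reservoirs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chaotic data: the x-coordinate of the Lorenz system (Euler integration).
def lorenz_x(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xyz = np.array([1.0, 1.0, 1.0])
    out = np.empty(n)
    for i in range(n):
        x, y, z = xyz
        xyz = xyz + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        out[i] = xyz[0]
    return out

data = lorenz_x(6000)
train, test = data[:5000], data[5000:]

# Step 1: drive a fixed, random reservoir with the measured data.
N = 400                                    # number of reservoir neurons
W_in = rng.uniform(-0.5, 0.5, N)           # random input weights (never trained)
W = rng.uniform(-0.5, 0.5, (N, N))         # random recurrent weights (never trained)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the spectral radius below 1

def step(r, u):
    """One reservoir update: new state from old state r and scalar input u."""
    return np.tanh(W @ r + W_in * u)

states = np.zeros((len(train), N))
r = np.zeros(N)
for t, u in enumerate(train):
    r = step(r, u)
    states[t] = r

# Step 2: train only the output weights (ridge regression) so the readout of
# the state at time t reproduces the measurement at time t + 1.
X, Y = states[:-1], train[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

# Step 3: predict by feeding each output back in as the next input.
preds = []
for _ in range(len(test)):
    u = r @ W_out          # readout: predicted next value
    preds.append(u)
    r = step(r, u)         # the prediction becomes the new input

print("mean error over the first 50 predicted steps:",
      np.mean(np.abs(np.array(preds[:50]) - test[:50])))
```

Only W_out is ever fitted; the reservoir itself stays random, which is what keeps the training step a cheap linear regression rather than full backpropagation.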

In a plot in their PRL paper, which appeared in January, the researchers show that their predicted flamelike solution to the Kuramoto-Sivashinsky equation exactly matches the true solution out to eight Lyapunov times before chaos finally wins, and the actual and predicted states of the system diverge.

The usual approach to predicting a chaotic system is to measure its conditions at one moment as accurately as possible, use these data to calibrate a physical model, and then evolve the model forward. As a ballpark estimate, you’d have to measure a typical system’s initial conditions 100,000,000 times more accurately to predict its future evolution eight times further ahead.
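
The arithmetic behind that ballpark, with the error-growth factor per Lyapunov time taken to be 10 (an assumption chosen to match the figure quoted here): if measurement error gets multiplied by a factor k every Lyapunov time, then buying eight extra Lyapunov times of predictability means shrinking the initial error by k raised to the eighth power.

```latex
\underbrace{k \cdot k \cdots k}_{8\ \text{Lyapunov times}} = k^{8},
\qquad k = 10 \;\Rightarrow\; k^{8} = 10^{8} = 100{,}000{,}000
```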

That’s why machine learning is “a very useful and powerful approach,” said Ulrich Parlitz of the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, who, like Jaeger, also applied machine learning to low-dimensional chaotic systems in the early 2000s. “I think it’s not only working in the example they present but is universal in some sense and can be applied to many processes and systems.” In a paper soon to be published in Chaos, Parlitz and a collaborator applied reservoir computing to predict the dynamics of “excitable media,” such as cardiac tissue. Parlitz suspects that deep learning, while being more complicated and computationally intensive than reservoir computing, will also work well for tackling chaos, as will other machine-learning algorithms. Recently, researchers at the Massachusetts Institute of Technology and ETH Zurich achieved similar results as the Maryland team using a “long short-term memory” neural network, which has recurrent loops that enable it to store temporary information for a long time.

Since the work in their PRL paper, Ott, Pathak, Girvan, Lu and other collaborators have come closer to a practical implementation of their prediction technique. In new research accepted for publication in Chaos, they showed that improved predictions of chaotic systems like the Kuramoto-Sivashinsky equation become possible by hybridizing the data-driven, machine-learning approach and traditional model-based prediction. Ott sees this as a more likely avenue for improving weather prediction and similar efforts, since we don’t always have complete high-resolution data or perfect physical models. “What we should do is use the good knowledge that we have where we have it,” he said, “and if we have ignorance we should use the machine learning to fill in the gaps where the ignorance resides.” The reservoir’s predictions can essentially calibrate the models; in the case of the Kuramoto-Sivashinsky equation, accurate predictions are extended out to 12 Lyapunov times.

The duration of a Lyapunov time varies for different systems, from milliseconds to millions of years. (It’s a few days in the case of the weather.) The shorter it is, the touchier or more prone to the butterfly effect a system is, with similar states departing more rapidly for disparate futures. Chaotic systems are everywhere in nature, going haywire more or less quickly. Yet strangely, chaos itself is hard to pin down. “It’s a term that most people in dynamical systems use, but they kind of hold their noses while using it,” said Amie Wilkinson, a professor of mathematics at the University of Chicago. “You feel a bit cheesy for saying something is chaotic,” she said, because it grabs people’s attention while having no agreed-upon mathematical definition or necessary and sufficient conditions. “There is no easy concept,” Kantz agreed. In some cases, tuning a single parameter of a system can make it go from chaotic to stable or vice versa.

Wilkinson and Kantz both define chaos in terms of stretching and folding, much like the repeated stretching and folding of dough in the making of puff pastries. Each patch of dough stretches horizontally under the rolling pin, separating exponentially quickly in two spatial directions. Then the dough is folded and flattened, compressing nearby patches in the vertical direction. The weather, wildfires, the stormy surface of the sun and all other chaotic systems act just this way, Kantz said. “In order to have this exponential divergence of trajectories you need this stretching, and in order not to run away to infinity you need some folding,” where folding comes from nonlinear relationships between variables in the systems.

The stretching and compressing in the different dimensions correspond to a system’s positive and negative “Lyapunov exponents,” respectively. In another recent paper in Chaos, the Maryland team reported that their reservoir computer could successfully learn the values of these characterizing exponents from data about a system’s evolution. Exactly why reservoir computing is so good at learning the dynamics of chaotic systems is not yet well understood, beyond the idea that the computer tunes its own formulas in response to data until the formulas replicate the system’s dynamics. The technique works so well, in fact, that Ott and some of the other Maryland researchers now intend to use chaos theory as a way to better understand the internal machinations of neural networks.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

The robot arm hovers over a pile of products before it makes its move, snagging a toothbrush with its suction cup. It holds the product up, waits for the red flash of a barcode scanner, then turns and drops the toothbrush in a cubby hole. Next the arm suction-cups a box of Goldfish crackers, turns, and files it, too.

At a startup called Kindred in San Francisco, technicians are teaching robots how to precisely manipulate objects like these. Why? Because somebody's got one hell of an online shopping habit. The idea is to get robots so good at picking and placing products that they make human workers look like sloths on sedatives, thus supercharging order fulfillment centers. And how these researchers are trying to do it has big implications for robots beyond the warehouse.

If you want to teach a robot to pick up an object, you could do it the classical way and program it with line after line of code. Or, as Kindred says its system does, you can use more modern approaches from artificial intelligence: reinforcement learning and imitation learning.

According to Kindred, its robots start with the former. With reinforcement learning, the robots practice manipulating products on their own with trial and error. When they do something right, they “score,” hence the reinforcement. “The goal is to maximize the score over time,” says George Babu, cofounder of Kindred. “When you do something correctly, then you explore actions similar to the one that gave you a correct response.”
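
Here is a cartoon of that trial-and-error loop. It is a hypothetical sketch, not Kindred's system: the grasp candidates, success rates, and exploration rate are all invented for illustration.

```python
import random

GRASPS = ["top_suction", "side_suction", "pinch_left", "pinch_right"]

def attempt(grasp):
    """Stand-in for a real pick attempt: returns 1 for a successful grab, else 0."""
    success_rate = {"top_suction": 0.7, "side_suction": 0.4,
                    "pinch_left": 0.2, "pinch_right": 0.1}[grasp]
    return 1 if random.random() < success_rate else 0

scores = {g: 0.0 for g in GRASPS}   # running average reward ("score") per grasp
counts = {g: 0 for g in GRASPS}

for trial in range(1000):
    # Mostly exploit the best-scoring grasp so far, but keep exploring alternatives.
    if random.random() < 0.1:
        grasp = random.choice(GRASPS)
    else:
        grasp = max(scores, key=scores.get)
    reward = attempt(grasp)
    counts[grasp] += 1
    scores[grasp] += (reward - scores[grasp]) / counts[grasp]   # update the running mean

print(scores)   # after enough trials, the most reliable grasp has the highest score
```

The robot's version of this runs against physical toothbrushes and cracker boxes rather than a simulated success rate, which is exactly why the trial-and-error stage is slow.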

Reinforcement learning has its limitations, though. For one, it’s slow. In a purely digital environment, a simulator could rapidly try and fail, over and over and over—but with a robot in the real world, that iteration is constrained by the laws of the physical universe.

    More from the hardwired series

  • Matt Simon

    What Is a Robot?

  • Matt Simon

    This Robot Tractor Is Ready to Disrupt Construction

  • Matt Simon

    Inside SynTouch, the Mad Lab Giving Robots the Power to Feel

And two, Kindred’s robots can only teach themselves so much; there are simply too many scenarios that play out in the real world. So a human operator steps in to initiate the second of Kindred’s approaches: so-called imitation learning, looking through the robot’s eyes and guiding its arms. “Some of our algorithms are imitating where the human picked the object,” says Babu, “some of our algorithms are imitating how the human is moving through space to get the objects.”

This builds on what the robot learned through reinforcement, showing it what constitutes a good or bad grip. Essentially, it fills in knowledge gaps by creating lessons that the robot couldn’t practice on its own. Thus a robot learns to more precisely manipulate products like boxes of drugs and toothbrushes.

Which will be essential in an ecommerce environment (Gap is currently testing Kindred’s system), where a robot may encounter objects that are hard or soft or floppy or fragile. And with a human in the loop, the robot will have a tutor to guide it remotely if it comes across something novel. “If something changes, our algorithms say, Wait, I don't recognize this object. I don't feel confident doing this,” says Babu. “We quickly kick in the human to help the robot do the task and then we can learn from that and we can improve our algorithms.”

The power to easily teach robots will make for highly adaptable machines far beyond an order fulfillment center. “Long term, it'll likely mean you don't necessarily think of robots just doing one specific thing, like buying a robot for X or Y or Z,” says UC Berkeley roboticist Pieter Abbeel, whose own startup Embodied Intelligence is using VR controls to teach robots skills. “But you buy a robot that can help you with anything, assuming you can give a few demonstrations.”

Sure, the education of the robots has just begun—even boxes of allergy medicine still give them pause. But soon enough they’ll be running laps around us, all thanks to the good old human touch.

You enter the University of Colorado Boulder's newest research laboratory through the side entrance. The door—which is heavy and white, with a black, jug-style handle—slides open from right to left. Crammed inside are a plain wooden dresser, two chairs, and a small desk, above which someone has taped a mediocre landscape print (mountains, trees, clouds, etc.). A kaleidoscopic purple tapestry hangs from the far wall. The ceiling is so low that it forces some visitors to duck, and the flooring is made of wood. Well, wood laminate.

The modest setup occupies just a few dozen square feet of space—a tight but necessary fit, given that CU Boulder's newest research laboratory is located not in a building on the university's campus, but the back of a Ram ProMaster cargo van.

The lab is mobile because it has to be. Researchers at CU Boulder’s Change Lab built it to study marijuana’s effects on human test subjects. But even in states like Colorado, where recreational marijuana has been legal since 2014, federal law prohibits scientists from experimenting with anything but government-grown pot.

And Uncle Sam’s weed is weak.

Cultivated by the University of Mississippi with funding from the National Institute on Drug Abuse, federally sanctioned cannabis is less potent and less chemically diverse than the range of cannabis products available for purchase at dispensaries. According to findings published in the journal Nature Scientific Reports earlier this year, the weed that researchers use in clinical cannabis studies is very different from the weed people actually use.

CU Boulder's mobile lab (aka the CannaVan, aka the Mystery Machine) lets researchers drive around that problem. "The idea is: If we can’t bring real-world cannabis into the lab, let’s bring the lab to the people," says neurobiologist Cinnamon Bidwell, a coauthor on the aforementioned Nature study and head of the CannaVan research team.

It works like this: CannaVan researchers first meet with test subjects on the CU Boulder campus, where they assign study participants specific commercial cannabis products with known potency and chemical makeups (including edibles and concentrates). Once the test subjects leave, they purchase their assigned cannabis from a local dispensary. Later, CannaVan researchers drive to the subjects' homes. Participants enter the van sober, and researchers perform blood draws and establish test subjects' baseline mental and physical states. Then they go back into their homes; eat, smoke, vape, or dab their product as they please; and return to the van, where researchers draw the subjects' blood again, perform interviews, and evaluate things like memory and motor control.

Bidwell's team is currently using the van to investigate the potential risks of high-potency cannabis concentrates, like dabs, and the potential benefits of cannabis use among medical patients with anxiety and chronic pain. The researchers use the lab to evaluate the drugs' acute effects, track usage and quality of life, monitor symptoms, and investigate how patients titrate their doses. "Basically, we're looking at whether people can have pain relief without walking around feeling stoned all the time," Bidwell says.

Crucially, all of this happens without any CU researchers buying, touching, or even seeing commercial cannabis themselves. "As Colorado citizens, we can purchase and use these products. But as researchers, we can't legally bring them into our lab and directly test their effects, or directly analyze them," Bidwell says. The CannaVan studies are less precise than those her team could perform in a traditional lab (where they'd have greater influence over things like dosage, timing, and chemical makeup), but more controlled than a pure observational study. Plus, these studies are actually legal. “We’ve worked very closely with CU Boulder administration, our legal team, research compliance officers—the list goes on—to see that everything is above board,” Bidwell says.

The upshot: Randomized controlled trials these are not, but these first observational investigations from CU Boulder's CannaVan are liable to be some of the most relevant behavioral and therapeutic studies on cannabis in 2018, and—it seems likely—several years to come.

That's because weak government weed isn't the only thing holding back medical marijuana research. Even as California, Nevada, Massachusetts, and Maine this year join the list of states where recreational weed is legal, in a country where 93 percent of voters support some form of legal pot, cannabis retains its designation under federal law as a Schedule I narcotic. That's a classification on par with heroin and ecstasy, and one that seems unlikely to change in the current political climate.

Attorney General Jeff Sessions' aversion to medical marijuana has been well documented. In April, he directed a Justice Department task force to review and recommend changes to the Cole Memo, which, since 2013, has enabled states to implement their own medical marijuana laws with minimal intervention by the US government. A month later, Sessions asked Congress to undo the protections afforded by the Rohrabacher-Blumenauer amendment, which also shields state-legal medical marijuana programs from federal interference.

"He hasn't yet, but if Sessions prevails at rolling these protections back, everything becomes harder for everybody, and that scares me" says geneticist Reggie Gaudino, chief science officer of marijuana analytics company Steep Hill. "I think it would have a chilling effect on the entire field—sales, medical research, genetic studies, chemical analyses. All of it."

And experts agree a chilling effect is the opposite of what cannabis research needs. "There needs to be an enormous amount of work done not just on the compounds present in various cannabis products, but on the best ways to characterize exposure to those compounds," says Harvard pediatrician and public health researcher Marie McCormick. Earlier this year, she chaired a review by the National Academies of Sciences, Engineering, and Medicine of existing marijuana research—the most thorough evaluation of its kind to date. The report found strong evidence for marijuana's therapeutic potential, but gaping holes in foundational research that could guide its medical and recreational use. "It's not terribly sexy work. It's slow and methodological. But it's critical to understanding the effects of cannabis exposure, its potential risks, and its potential remedies," McCormick says. That's not all going to happen in 2018, she adds, "but developing a solid research agenda would go a long way toward moving things forward, and a big thing that would help would be the removal of marijuana's Schedule I status."

    More on Marijuana

  • Nick Stockton

    Scientists Map the Receptor That Makes Weed Work

  • Katie M. Palmer

    A New Crop of Marijuana Geneticists Sets Out to Build Better Weed

  • Megan Molteni

    Jeff Sessions' War on Medical Marijuana Gets Public Health All Wrong

In Colorado, for example, rescheduling marijuana could embolden CU Boulder's legal team to allow locally grown, non-NIDA weed on campus. This summer, state lawmakers passed House Bill 1367, a law which, when it goes into effect in July of 2018, will allow licensed Colorado cultivators and researchers to grow and study marijuana for clinical investigations. "But it’s still up to the university to say whether they’ll go with state or federal laws," Bidwell says. CU Boulder researchers receive hundreds of millions of dollars in federal funding every year; adhering to local laws over federal ones could put some of that money at risk. "We don't know how the university will come on that," Bidwell says. "But the institution is, understandably, pretty risk averse, and we have no sense of a timeline on when they might decide."

In the meantime, Bidwell and her team will continue cruising Colorado in the CannaVan, conducting observational studies of real-world pot usage. And if you're in the Boulder area, the researchers are looking for study participants. Just … do be sure any vans you climb into are university-affiliated. Look for the CU-Boulder insignia, the chintzy purple tapestry, and the fake wood floors.

It will start with a flash of light brighter than any words of any human language can describe. When the bomb hits, its thermal radiation, released in just 300 hundred-millionths of a second, will heat up the air over K Street to about 18 million degrees Fahrenheit. It will be so bright that it will bleach out the photochemicals in the retinas of anyone looking at it, causing people as far away as Bethesda and Andrews Air Force Base to go instantly, if temporarily, blind. In a second, thousands of car accidents will pile up on every road and highway in a 15-mile radius around the city, making many impassable.

That’s what scientists know for sure about what would happen if Washington, DC, were hit by a nuke. But few know what the people—those who don’t die in the blast or the immediate fallout—will do. Will they riot? Flee? Panic? Chris Barrett, though, he knows.

When the computer scientist began his career at Los Alamos National Laboratory, the birthplace of the atomic bomb, the Cold War was trudging into its fifth decade. It was 1987, still four years before the collapse of the Soviet Union. Researchers had made projections of the blast radius and fallout blooms that would result from a 10-kiloton bomb landing in the nation’s capital, but they mostly calculated the immediate death toll. They weren’t used for much in the way of planning for rescue and recovery, because back then, the most likely scenario was mutually assured destruction.

But in the decades since, the world has changed. Nuclear threats come not from world powers but from rogue nation states and terrorist organizations. The US now has a $40 billion missile interception system; total annihilation is not presupposed.

The science of prediction has changed a lot, too. Now, researchers like Barrett, who directs the Biocomplexity Institute of Virginia Tech, have access to an unprecedented level of data from more than 40 different sources, including smartphones, satellites, remote sensors, and census surveys. They can use it to model synthetic populations of the whole city of DC—and make these unfortunate, imaginary people experience a hypothetical blast over and over again.

That knowledge isn’t simply theoretical: The Department of Defense is using Barrett’s simulations—projecting the behavior of survivors in the 36 hours post-disaster—to form emergency response strategies they hope will make the best of the worst possible situation.

You can think of Barrett’s system as a series of virtualized representation layers. On the bottom is a series of datasets that describe the physical landscape of DC—buildings, roads, the electrical grid, water lines, hospital systems. On top of that is dynamic data, like how traffic flows around the city, surges in electrical usage, and telecommunications bandwidth. Then there’s the synthetic human population. The makeup of these e-peeps is determined by census information, mobility surveys, tourism statistics, social media networks, and smartphone data, which is calibrated down to a single city block.

So say you’re a parent in a two-person working household with two kids under the age of 10 living on the corner of First and Adams Streets. The synthetic family that lives at that address inside the simulation may not travel to the actual office or school or daycare buildings that your family visits every day, but somewhere on your block a family of four will do something similar at similar times of day. “They’re not you, they’re not me, they’re people in aggregate,” Barrett says. “But it’s just like the block you live in; same family structures, same activity structures, everything.”
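To make that concrete, here is a minimal sketch of how one such synthetic household might be drawn from block-level statistics. The distributions, field names, and activity templates below are invented placeholders, not Barrett's data or code; the point is only that agents are sampled to match a block's aggregate makeup rather than copied from real residents.

```python
import random

# Hypothetical block-level marginals of the kind census tables provide.
# None of these numbers come from the article; they are placeholders.
BLOCK_MARGINALS = {
    "household_size": {1: 0.30, 2: 0.35, 3: 0.20, 4: 0.15},
    "workers_per_household": {0: 0.15, 1: 0.40, 2: 0.45},
}

DAILY_TEMPLATES = ["home-work-home", "home-school-home", "home-errands-home"]

def sample(dist):
    """Draw one value from a {value: probability} distribution."""
    values, weights = zip(*dist.items())
    return random.choices(values, weights=weights, k=1)[0]

def synthesize_household(block_id):
    """Create one synthetic household statistically consistent with its block."""
    size = sample(BLOCK_MARGINALS["household_size"])
    workers = min(size, sample(BLOCK_MARGINALS["workers_per_household"]))
    members = []
    for i in range(size):
        members.append({
            "id": f"{block_id}-{i}",
            "role": "worker" if i < workers else "dependent",
            # An activity template, not any real person's schedule.
            "schedule": random.choice(DAILY_TEMPLATES),
        })
    return {"block": block_id, "members": members}

print(synthesize_household("first-and-adams"))
```

Run over every block in the city, a routine like this yields a population that is statistically faithful at the block level while containing no actual person.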

Fusing together the 40-plus databases to get this single snapshot requires tremendous computing power. Blowing it all up with a hypothetical nuclear bomb and watching things unfold for 36 hours takes exponentially more. When Barrett’s group at Virginia Tech simulated what would happen if the populations exhibited six different kinds of behaviors—like healthcare-seeking vs. shelter-seeking—it took more than a day to run and produced 250 terabytes of data. And that was taking advantage of the institute’s new 8,600-core cluster, recently donated by NASA. Last year, the US Defense Threat Reduction Agency awarded them $27 million to speed up the pace of their analysis, so it could be run in something closer to real time.

The system takes advantage of existing destruction models, ones that have been well-characterized for decades. So simulating the first 10 or so minutes after impact doesn’t chew up much in the way of CPUs. By that time, successive waves of heat and radiation and compressed air and geomagnetic surge will have barreled through every building within five miles of 1600 Pennsylvania Avenue. These powerful pulses will have winked out the electrical grid, crippled computers, disabled phones, burned thread patterns into human flesh, imploded lungs, perforated eardrums, collapsed residences, and made shrapnel of every window in the greater metro area. Some 90,000 people will be dead; nearly everyone else will be injured. And the nuclear fallout will be just beginning.

That’s where Barrett’s simulations really start to get interesting. In addition to information about where they live and what they do, each synthetic Washingtonite is also assigned a number of characteristics following the initial blast—how healthy they are, how mobile, what time they made their last phone call, whether they can receive an emergency broadcast. And most important, what actions they’ll take.

These are based on historical studies of how humans behave in disasters. Even if people are told to shelter in place until help arrives, for example, they’ll usually only follow those orders if they can communicate with family members. They’re also more likely to go toward a disaster area than away from it—either to search for family members or help those in need. Barrett says he learned that most keenly in seeing how people responded in the hours after 9/11.

Inside the model, each artificial citizen can track family members’ health states; this knowledge is updated whenever they either successfully place a call or meet them in person. The simulation runs like an unfathomably gnarled decision tree. The model asks each agent a series of questions over and over as time moves forward: Is your household together? If so, go to the closest evacuation location. If not, call all household members. That gets paired with the likelihood that the avatar’s phone is working at that moment, that their family members are still alive, and that they haven’t accumulated so much radiation that they’re too sick to move. And on and on and on until the 36-hour clock runs out.
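A toy version of that loop, heavily simplified and with made-up thresholds and probabilities (this is a sketch of the general idea, not the institute's model), might look like this:

```python
import random
from dataclasses import dataclass

@dataclass
class Survivor:
    # Post-blast state of one synthetic agent (illustrative fields only).
    health: float              # 1.0 = unhurt, 0.0 = dead
    phone_works: bool
    household_together: bool
    evacuated: bool = False
    dose: float = 0.0          # accumulated radiation, arbitrary units

def fallout_at(hour):
    # Placeholder decay curve; the real model uses physics-based fallout plumes.
    return 0.05 / (1 + hour)

def step(agent, hour):
    """One pass through the decision tree for one agent at one simulated hour."""
    if agent.health <= 0 or agent.evacuated:
        return
    agent.dose += fallout_at(hour)
    if agent.dose > 1.0:                       # too sick to move
        agent.health = max(0.0, agent.health - 0.2)
        return
    if agent.household_together:
        agent.evacuated = True                 # head for the nearest evacuation point
    elif agent.phone_works and random.random() < 0.3:
        agent.household_together = True        # a call got through; regroup next hour
    # otherwise: keep trying, or wander toward family (the behavior that kills)

random.seed(0)
agents = [Survivor(health=1.0,
                   phone_works=random.random() < 0.5,
                   household_together=random.random() < 0.4)
          for _ in range(10_000)]
for hour in range(36):
    for a in agents:
        step(a, hour)
print(sum(a.evacuated for a in agents), "of", len(agents), "evacuated within 36 hours")
```

The real simulation layers physics-based fallout plumes, degraded communications, and road networks on top of this skeleton, which is why it needs a supercomputing cluster rather than a laptop.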

Then Barrett’s team can run experiments to see how different behaviors result in different mortality rates. The thing that leads to the worst outcomes? If people miss or disregard messages that tell them to delay their evacuation, they may be exposed to more of the fallout—the residual radioactive dust and ash that “falls out” of the atmosphere. About 25,000 more people die if everyone tries to be a hero, encountering lethal levels of radiation when they approach within a mile of ground zero.

Those scenarios give clues about how the government might minimize lethal behaviors and encourage other kinds. Like dropping in temporary cell phone networks, or broadcasting connectivity from drones. “If phones can work even marginally, then people are empowered with information to make better choices,” Barrett says. Then they'll be part of the solution rather than a problem to be managed. “Survivors can provide first-hand accounts of conditions on the ground—they can become human sensors.”

Not everyone is convinced that massive simulations are the best basis for formulating national policy. Lee Clarke, a sociologist at Rutgers who studies calamities, calls these sorts of preparedness plans "fantasy documents," designed to give the public a sense of comfort, but not much else. "They pretend that really catastrophic events can be controlled," he says, "when the truth of the matter is, we know that either we can't control it or there's no way to know."

Maybe not, but someone still has to try. For the next five years, Barrett’s team will be using its high-throughput modeling system to help the Defense Threat Reduction Agency grapple not just with nuclear bombs but with infectious disease epidemics and natural disasters too. That means they’re updating the system to respond in real time to whatever data they slot in. But when it comes to atomic attacks, they’re hoping to stick to planning.

Going Nuclear

  • As the probability of nuclear war changes, the so-called doomsday clock keeps track—and it just ticked closer to midnight.

  • Though bombs aren't the only nuclear threats; last year, hackers targeted a US nuclear plant.

  • If the worst does happen, know at least that the US has poured millions of dollars into technologies and treatments to help you survive a nuclear event.

Related Video

Science

Rare Films of Nuclear Bomb Tests Reveal Their True Power

Nuclear physicists are using film scanners and computer analysis on old bomb test footage to uncover the weapons' secrets.

Do you like a planet that hasn’t yet melted? Do you like sushi? How about breathing? Then you’re secretly in love with plankton, tiny marine organisms that float around at the mercy of currents. They sequester carbon dioxide, provide two-thirds of the oxygen in our atmosphere, and sacrifice themselves as baby food for the young fish that eventually end up on your plate.

Yet science knows little about the complex dynamics of plankton on ocean-wide scales. So researchers are asking the machines for help, developing clever robots that use AI to examine and classify plankton, the pivotal organisms at the base of our oceanic food chain. That kind of work will be critical as Earth’s oceans continue to transform, potentially throwing ecosystems into chaos.

Take IBM’s ocean-going microscopes—which, conveniently, leverage the same technology sitting in your pocket right now. Two LEDs sit a few inches above the same kind of image sensor you'd find in a smartphone. When plankton pass over the sensor, they cast two shadows. “So by taking two pictures, one with each LED, you can get the 3-D position of all the plankton in a drop of water on the image sensor,” says Tom Zimmerman, a researcher at IBM.
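The geometry behind that trick is simple enough to sketch: with two light sources at a known height and spacing, the farther apart a particle's two shadows land on the sensor, the higher above the chip it must be floating. The numbers below are arbitrary, and this is only the idealized pinhole-shadow math, not IBM's actual calibration code.

```python
def particle_height(shadow_separation, led_baseline, led_height):
    """Height of a particle above the sensor, from the spacing of its two shadows.

    A particle at height z between the LEDs and the chip casts shadows spread
    apart by s = baseline * z / (height - z); solving for z gives this formula.
    All lengths are in the same unit (say, millimeters).
    """
    return shadow_separation * led_height / (led_baseline + shadow_separation)

def particle_x(shadow_x, led_x, led_height, z):
    """Lateral position, projecting one shadow back along the ray toward its LED."""
    return led_x + (shadow_x - led_x) * (led_height - z) / led_height

# Toy numbers: LEDs 20 mm apart and 50 mm above the sensor; a plankter whose
# shadows land 1.0 mm apart is floating about 2.4 mm above the chip.
z = particle_height(shadow_separation=1.0, led_baseline=20.0, led_height=50.0)
x = particle_x(shadow_x=3.2, led_x=0.0, led_height=50.0, z=z)
print(round(z, 2), round(x, 2))
```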

So you’ve got an image of some plankton, which could be one of two types: zooplankton are animals like fish larvae, and phytoplankton are marine algae. The old way of identifying them—there are over 4,000 species of phytoplankton alone—was to sort through samples with the eyeballs of a human expert. But now researchers have artificial intelligence: IBM is working to integrate AI into the system to automatically quantify and identify the specks. The idea is to create a floating instrument that dangles hoses of different lengths so it can sample plankton concentrates at different depths. A network of these microscopes could then alert scientists to anomalies as they unfold in real time.
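As a rough illustration of what integrating AI means here, a small convolutional network can map each detected particle's image crop to a class label. The architecture and the two-class setup below are placeholders for illustration only; IBM's system would need to cover far more categories and train on large labeled image sets.

```python
import torch
import torch.nn as nn

# Toy two-class classifier (zooplankton vs. phytoplankton). A real system
# would distinguish thousands of species and train on labeled datasets.
class PlanktonNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                # x: (batch, 1, 64, 64) grayscale crops
        return self.classifier(self.features(x).flatten(1))

model = PlanktonNet()
crops = torch.randn(8, 1, 64, 64)        # stand-ins for detected particle images
print(model(crops).argmax(dim=1))        # one predicted class per crop
```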

Take, for example, the misadventures of a zooplankton called a copepod. It eats algae, which can contain a toxin that gets it drunk. “Now, you think that would be fun for the copepods, but it isn't, because usually copepods dart around in random directions which helps them avoid being eaten by their predators,” says Zimmerman. “But when they get drunk they go straight and fast, which makes it really easy for them to get picked off by their predators.”

So the local copepod population starts to crash, and the algae population in turn explodes, the phytoplankton poisoning themselves with all their waste products. They die and release toxins that poison other organisms, and suck all the oxygen out of the water as they decay. Now you’ve got a whole lot of dead critters on your hands. “That's a case where watching the behavior [of plankton] would indicate that there's some imbalance,” says Zimmerman. “That's the kind of stuff we have to monitor.”

At the moment, the system can track plankton concentrations. But it’s not just about quantifying the amount of plankton in a given area—it’s about decoding the balance between the zooplankton and the phytoplankton they eat, and how the organisms behave individually and as part of a group. IBM eventually wants to track things like drunken copepod movements in real time; it's still building a library of plankton, but hopes to have a system of devices in the wild within five years.

Scientists have to consider shape, too. A giant single-celled organism called a stentor, for example, is normally trumpet-shaped, but will ball up when exposed to too much sugar. “So behavior, shape, these are all things that with AI we can definitely track to understand if something is going wrong,” says Simone Bianco, a researcher at IBM.

IBM isn’t the first to enlist AI in the quest to better understand plankton. The excellently named FlowCytobot sticks to piers and sucks in water, which passes through a laser. Particles like plankton scatter this light, which triggers an imager.

The system judges the images based on some 250 features, like symmetry. “Then through manual classification, where the user creates an image training set of hundreds of images at a time, the neural net learns to identify those plankton without user input,” says Ivory Engstrom, director of special projects at McLane Research Laboratories, a scientific instrument company that makes the FlowCytobot.
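In spirit, that pipeline looks something like the sketch below: a feature vector per particle, a hand-labeled training set, and a small neural network that then classifies new images on its own. The feature values, class names, and model size are all stand-ins, not McLane's actual implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-ins for the instrument's ~250 per-image features (symmetry, size,
# texture, and so on) across a few hundred manually labeled particles.
rng = np.random.default_rng(0)
features = rng.normal(size=(600, 250))     # 600 labeled images x 250 features
labels = rng.integers(0, 3, size=600)      # e.g. diatom / dinoflagellate / detritus

# Train on the hand-labeled set, then classify new particles without user input.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(features, labels)
print(net.predict(rng.normal(size=(5, 250))))
```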

The FlowCytobot alerts scientists, like these studying algae blooms in Texas, to events like a toxic outbreak, but it’s tethered in one place. Over at the Monterey Bay Aquarium Research Institute, scientists are working on a more mobile platform for monitoring plankton: the Wave Glider. Think of it as a very expensive surfboard loaded with solar-powered instruments.

MBARI researcher Thom Maughan is developing his own microscope that’ll allow the Wave Glider to sniff out plankton. This data will be made publicly available through MBARI’s Oceanographic Decision Support System. “When we show the Wave Glider in its position out there, you'll be able to hover your mouse over it and get some idea of the size distribution of the microorganisms that the microscope is seeing,” says Maughan. “Then you should be able to drill down and see what types of organisms are being identified.”

This kind of automation isn’t just about convenience—it’s about necessity. “It's getting to be a rare person that can identify the plankton,” says Maughan. “Those are the old-school traditional microbiologists. Apparently they're getting to be fewer and fewer of those folks who are really intimate with that plankton world.”

With the oceans undergoing rapid transformation, science can’t afford to lose this knowledge. Plankton are all too important, and still all too mysterious. Leave it to the machines, though, to help make sense of a confounding ocean kingdom.

More ocean robotics

  • Over at MIT, researchers have developed a hypnotic fish robot for studying coral reefs.

  • This mermaid robot, on the other hand, isn't quite so elegant. Still useful, though.

  • Here's more on MBARI's extensive drone program.

Related Video

Science

Watch MIT’s Hypnotic Robot Fish Swim a Coral Reef

Researchers detail the evolution of the world’s strangest fish, and describe how it could be a potentially powerful tool for scientists to study ocean life.

Lyft Delivers Carbon-Neutral Rides

March 20, 2019 | Story | No Comments

This story originally appeared on CityLab and is part of the Climate Desk collaboration.

Over the years, John Zimmer, the co-founder and president of Lyft, has often pointed to a class he took as an undergraduate as the source of his ideas about environmental sustainability—and by extension, Lyft’s goals to create greener transportation options.

The class at Cornell University was called “Green Cities.” The professor, Robert Young, opened the first lecture by describing how roads and transit systems built decades ago weren’t designed to sustain the rapid growth of urban populations today, Zimmer recalled. “If we don’t fix the infrastructure problem, we’re going to have a major economic and environmental problem,” Zimmer told a roundtable of reporters in Washington, DC, in late March.

Founded in 2012, Lyft is now an $11 billion ride-hailing company, second in the industry to Uber alone. Its concept of ride-hailing has long been founded on reducing the need for personal car ownership. But today, the company made perhaps its most meaningful move yet towards reducing carbon emissions: Lyft is promising to offset the carbon emissions of every ride around the world, making all rides “carbon neutral.” From now on, Zimmer and his co-founder Logan Green wrote in a Medium post, “your decision to ride with Lyft will support the fight against climate change.”

According to the post, Lyft’s total annual investment will amount to over a million metric tons of carbon, “equivalent to planting tens of millions of trees or taking hundreds of thousands of cars off the road,” which will make Lyft one of the largest voluntary purchasers of carbon offsets in the world. Scott Coriell, a Lyft communications officer, said the company does not have a specific estimate for the cost of the investment, but that it will be in the millions of dollars. According to a 2015 report by the NGO Ecosystem Marketplace, General Motors, Barclays bank, and PG&E were the top three voluntary buyers of offsets between 2012 and 2013, respectively scooping up 4.6 million, 2.1 million, and 1.4 million carbon offsets, which are measured in metric tons, during that period.
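Those equivalences roughly check out against rule-of-thumb figures (assumptions on my part, not numbers from Lyft or Ecosystem Marketplace): on the order of 4.6 metric tons of CO2 emitted per passenger car per year, and roughly 20 kilograms absorbed per tree per year.

```python
# Back-of-envelope check on the stated equivalences; rough rule-of-thumb
# figures, not anything from Lyft's post.
OFFSET_TONNES = 1_000_000            # "over a million metric tons"
CO2_PER_CAR_TONNES = 4.6             # typical passenger car, per year
CO2_PER_TREE_TONNES = 0.02           # roughly 20 kg absorbed per tree per year

print(f"~{OFFSET_TONNES / CO2_PER_CAR_TONNES:,.0f} cars off the road for a year")
print(f"~{OFFSET_TONNES / CO2_PER_TREE_TONNES:,.0f} trees absorbing for a year")
```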

Carbon offsets have been the subject of some scrutiny and scandal; some companies that take money promising to plant trees and capture emissions have been exposed as worthless or scams. Coriell noted that Lyft will become carbon neutral by investing in offset projects that would not have happened without its backing. These projects will all be US-based and close to Lyft’s largest markets, Coriell said, and will include investments in a manufacturing emissions-reduction project in Michigan, oil recycling in Ohio, and a wind energy farm in Oklahoma. These projects are verified under the American Carbon Registry, Climate Action Reserve, or Verified Carbon Standard—all rigorous third-party standards bodies.

The announcement is not Lyft’s first gesture towards environmental sustainability. In 2017, it signed “We Are Still In,” joining hundreds of states, cities, and corporations (including Uber) in pledging to uphold the US carbon emissions reduction goals set forth by the Paris climate accord, after President Donald Trump announced plans to withdraw the country’s commitment. At the time, Lyft also outlined plans to make the majority of its fleet autonomous and electric by 2025. “Bringing more electric vehicles onto the platform in the future will help us reduce the need for offsets,” Coriell wrote.

As part of its own efforts to reduce car ownership, Uber has recently pivoted to become a multi-modal mobility provider, building car- and bike-sharing services into its app. It has not announced any plans to offset its carbon emissions. An Uber representative declined to comment on Lyft’s announcement.

Lyft’s commitment to carbon-neutrality is especially meaningful, because one irony of the ride-hailing industry is that, so far, it’s likely creating more vehicle miles traveled, not less. Though some studies have suggested that ride-hailing users are more likely to give up personal car ownership, more and more research shows that the convenience and relatively low cost of on-demand rides are leading travelers to take trips and generate pollution that they wouldn’t have otherwise. (Plus, all of those deadheading drivers.) As these services lure passengers off of public transit systems, it has become hard to argue that there’s anything particularly environmentally friendly about hailing an Uber or Lyft. This announcement changes that.

Lyft is hardly a perfect citizen, planet-saving-wise. Alongside Uber, it lobbies state legislators to preempt local regulations, which may limit cities' ability to organize road space in the most environmentally efficient way possible. And from a sustainability perspective, it would probably be better for Lyft to pair carbon neutrality with investments in bike-sharing, as Uber is doing on that front. Even renewably powered electric cars have a sizeable carbon footprint. If customers take a Lyft instead of walking or biking because they think these options are all equally green, they’re wrong.

Still, over the past year, Lyft has made genuine efforts to grow into its image as the “woke” alternative to scandal-ridden Uber, to borrow Zimmer’s term. Donations to the ACLU and free rides to anti-gun rallies have bought it credibility among progressives. Going carbon neutral is probably its most significant step in that direction: It is a lasting delivery of one of the company’s most fundamental promises. That really matters, especially as car manufacturers dial back their Obama-era eco-friendly branding efforts and push to weaken environmental regulations. Lyft seems to have real faith in the notion that there’s a market value in socially conscious transportation—that riders will choose Lyft over other apps, or their own vehicles, because they know it’s a better choice.

“We’re aggressively pursuing a set of values because one, we think it’s the right thing to do and two, it’s good for business,” Zimmer said last month. “That’s what we’re out to prove.”

Related Video

Transportation

A Timeline of Uber's Unfolding

Uber's crises have come so fast that they've piled on top of one another. We've laid out some of the company's most infamous milestones.