Author: GETAWAYTHEBERKSHIRES

The Uneven Gains of Energy Efficiency

March 20, 2019 | Story

This story originally appeared on CityLab and is part of the Climate Desk collaboration.

On a rainy day in New Orleans, people file into a beige one-story building on Jefferson Davis Parkway to sign up for the Low Income Home Energy Assistance Program (LIHEAP), a federal grant that helps people keep up with their utility bills. New Orleans has one of the highest energy burdens in the country, meaning that residents must dedicate a large portion of their income to their monthly energy bills. That is due in part to its being one of the least energy-efficient cities in the country.

For many city residents, these bills eat up 20 percent of the money they take in, and the weight of the burden can be measured in the length of the line.
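The "energy burden" here is a simple ratio: monthly energy spending divided by monthly income. A minimal sketch, using hypothetical household figures (only the roughly 20 percent burden comes from the article):

```python
# Hypothetical household figures for illustration; only the ~20 percent
# burden cited in the article is taken from the source.
def energy_burden(monthly_bill, monthly_income):
    """Share of income spent on energy, as a fraction."""
    return monthly_bill / monthly_income

# A household earning $2,000 a month with a $400 utility bill:
print(energy_burden(400, 2000))  # 0.2, i.e. 20 percent of income
```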

“We’ve got folks wrapped around the block,” said Andreanecia Morris, the executive director of a housing advocacy non-profit called HousingNOLA. “There are people here paying 300, 400, 500 dollars a month. Some are paying utility bills that are as much as their mortgage.”

These bills, as indispensable as rent or healthcare, have exacerbated the affordability crisis as cities become increasingly inhospitable to all but the affluent. Energy costs increased at three times the rate of rent between 2000 and 2010. This rise, paralleling a dramatic stratification of wealth in some American cities, has widened the disparity in energy burdens between low-income and well-off households.

A 2016 study by the American Council for an Energy-Efficient Economy (ACEEE) and Energy Efficiency for All (EEFA) set out to quantify what many already assumed: that low-income, black, and Hispanic communities spend a much higher share of their income on energy. The results were unsurprising, but stark. The researchers found that median energy burdens for low-income households are more than three times higher than among the rest of the population.

Utility bills are the primary reason why people resort to payday loans, and play an outsized role in the perpetuation of poverty. But the impacts of soaring energy bills go beyond finances. Living in under-heated homes puts occupants at a higher risk of respiratory problems, heart disease, arthritis, and rheumatism, according to ACEEE and EEFA. Then there are the tragedies, like that of Rodney Todd, a University of Maryland kitchen worker who died of carbon monoxide poisoning, along with his seven children, while using a gas generator to power his home after his electricity was shut off by Delmarva Power.

One reason for the energy-burden gap is that the energy bills of the rich and poor aren’t in fact very different. “Energy is not discretionary,” said Anne Evens, CEO of Elevate Energy, an urban sustainability non-profit. No matter our income level, “we need energy to refrigerate our food, to heat our homes.”

Another cause, the 2016 study found, is that low-end housing is significantly less energy-efficient than other housing stock. People with less money aren’t just paying a greater proportion of their income for energy—they’re paying more per square foot. “Far from being an intractable problem related to persistent income disparity, the excess energy burdens [that low-income communities] face are directly related to the inefficiency of their homes,” the study authors concluded.

“What you’ll see is people finding cheaper rents in buildings because they’re older,” Morris said. “But their savings are offset, because their homes are so energy inefficient.”

There is great potential for energy savings in these older buildings. ACEEE and EEFA found that 97 percent of the excess energy burdens for renting households could be eliminated by bringing their homes up to median efficiency standards. And a 2015 study by the U.S. Department of Energy found that the value of energy upgrades is 2.2 times their cost. This figure is even higher for the most inefficient homes.
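The DOE finding amounts to a benefit-cost ratio. A back-of-the-envelope sketch; the $5,000 upgrade cost is an invented example, and only the 2.2 ratio comes from the study as reported:

```python
# Only the 2.2 benefit-cost ratio is from the cited DOE study;
# the upgrade cost is a made-up example.
upgrade_cost = 5_000          # hypothetical cost of efficiency upgrades, $
benefit_cost_ratio = 2.2      # value of upgrades relative to their cost
lifetime_value = upgrade_cost * benefit_cost_ratio
print(round(lifetime_value))  # 11000: roughly $11,000 of value per $5,000 spent
```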

The question is how to find the capital to realize those gains, and whether the benefits can reach those who need relief.

Energy efficiency for some

Energy efficiency programs can go a long way toward closing the energy-burden gap, but they often do just the opposite.

A revolution in efficiency programs and home weatherization has opened the door to the world’s cheapest energy source: avoided energy waste. But for the most part, it is only accessible to people who can afford an upfront investment. Think of someone who’s renovating their kitchen and decides to replace the appliances with more energy-efficient ones, or a person who puts solar panels on the roof of his house, motivated less by cost savings and more by a bumptious desire to be the chief environmentalist on the block.

“Energy inequity is about the energy system as a whole,” said Evens. “As we make this transition to cleaner energy, who is really benefiting? As we become more energy efficient, is that benefiting all people? Who’s being left behind?”

Even programs that subsidize efficiency upgrades may be inaccessible to, or underutilized by, low-income households because they still require upfront investment and won’t yield benefits for years. For many, the need for aid is immediate.

A growing network of programs, both private and public, is trying to correct the imbalance. Local housing authorities all over the country have upgraded their public housing units and designed affordable-housing tax credits that ensure a high degree of energy efficiency. Non-profits and utility companies are helping homeowners make upgrades to their homes by deferring upfront costs and using energy savings to pay down the debt.

But for all the good they do, many of these initiatives sideline a large and vulnerable group of low-income people: renters. Americans who use HUD vouchers in the private market greatly outnumber the residents of public housing. And the number of urban renters is only increasing as home prices soar out of reach.

Renters are left out of the efficiency boom because they’re subject to the whims of their landlords’ investment decisions. If a tenant pays their own utility bill, there isn’t much incentive for the landlord to make improvements. And renters are unlikely to make long-term efficiency improvements themselves, uncertain whether they’ll stay in the home long enough to reap the benefits.

Shrinking resources

Policymakers will continue to experiment with new forms of incentives and targeted funding. Whatever solutions they construct, advocates agree that success will require a bigger pot of money than currently exists. Unfortunately, funding for low-income energy efficiency is shrinking.

“There are so many different programs that have been cut, rolled back, or attacked,” said Michelle Romero, the deputy director of Green For All, a non-profit founded by Van Jones. “Without programs that invest in helping low-income communities afford energy efficiency, you’re going to see the disparity increase.”

LIHEAP is the government’s largest grant focused on low-income energy affordability. But it’s been cut by a third since 2009. Trump has threatened to eliminate LIHEAP entirely, along with similar programs like the Department of Energy’s Weatherization Assistance Program. For now, the programs are still funded, but advocates remain uneasy. “We don’t know what’s going to happen,” said Evens. “Predictability has kind of gone out the window. So we have to be really, really vigilant.”

As funding contracts, efficiency initiatives are the first to go. Only 14 percent of LIHEAP dollars go to energy-efficiency investment. The rest is used for direct bill assistance for those whose needs are too immediate to focus on long-term efficiency.

“You can’t tell someone, ‘We’re not going to help you pay your light bill this month, but in a year we can guarantee your apartment will be energy efficient.’ Well, they may not make it through the year,” Morris said. But prioritizing short-term fixes isn’t a real solution: “We can’t end up in these positions where we’re spending all this money on direct assistance so we can’t do anything else.”

Don't Just Lecture Robots—Make Them Learn

March 20, 2019 | Story

The robot apocalypse is nigh. Boston Dynamics’ robots are doing backflips and opening doors for their friends. Oh, and these 7-foot-long robot arms can lift 500 pounds each, which means they could theoretically crush, like, six humans at once.

The robot apocalypse is also laughable. Watch a robot attempt a task it hasn’t been explicitly trained to do, and it’ll fall flat on its face or just give up and catch on fire. And teaching a robot to do something new is exhausting, requiring line after line of code and joystick tutorials in say, picking up an apple.

But new research out of UC Berkeley is making learning way easier on both the human and machine: By drawing on prior experience, a humanoid-ish robot called PR2 can watch a human pick up an apple and drop it in a bowl, then do the same itself in one try, even if it’s never seen an apple before. It’s not the most complex of tasks, but it’s a big step toward making machines rapidly adapt to our needs, fruit-related or otherwise.

Consider the toothbrush. You know how to brush your teeth because your parents showed you how—put water and paste on the bristles, put the thing in your mouth, scrub, then spit. You could then draw on that experience to learn how to floss. You know where your teeth are, you know there are gaps between them, and you know you have to use an instrument to clean them. Same principle, but kinda different.

To teach a traditional robot to brush its teeth and floss, you’d have to program two sets of distinct commands—it can’t use the context of prior experience like we can. “A lot of machine learning systems have focused on learning completely from scratch,” says Chelsea Finn, a machine learning researcher at UC Berkeley. “While that is very valuable, that means we don't bake in any knowledge. Essentially, these systems are starting with a blank mind every time they learn every single task if they want to learn.”

Finn’s system instead provides the humanoid-ish robot with valuable experience. “We collected videos of humans doing a number of different tasks,” she says. “We collected demonstrations of robots doing the same tasks via teleoperation, and we trained it such that after it sees a video of a human doing one thing, the robot can learn to imitate that thing as well.”

Take a look at the GIF below. A human demonstrates by pushing the container, not the box of tissues, toward the robot’s left arm, as the robot observes through its camera. When presented with the container and the box, only arranged differently, the robot can recognize the correct object and make a similar sweeping motion, pushing the container with its right arm into its left arm. It’s drawing from “experience”—how it’s been teleoperated previously to manipulate various objects on a table, combined with watching videos of humans doing the same. Thus the machine can generalize to manipulate novel objects.

“One of the really nice things about this approach is you don't need to very precisely track the human hand and the objects in the scene,” says Finn. “You really just need to infer what the human was doing and the goal of the task, then have the robot do that.” Precisely tracking the human hand, you see, is prone to failure—parts of the hand can be occluded and things can move too fast for a machine to read in detail. “It's much more challenging than just trying to infer what the human was doing, irrespective of their precise hand pose.”

It’s a robot being less robotic and more human. When you learned to brush your teeth, you didn’t mirror every single move your parent made, brushing the top molars first before moving to the bottom molars and then the front teeth. You inferred, taking the general goal of scrubbing each tooth and then taking your own path. That meant first of all that it was a simpler task to learn, and second of all it gave you context for taking some of the principles of toothbrushing and applying them to flossing. It’s about flexibility, not hard-coding behavior.

Which will be pivotal for the advanced robots that will soon labor in our homes. I mean, do you want to have to teach a robot to manipulate every object in your home? “Part of the hope with this project is we can make it very easy for the average person to show robots what to do,” says Finn. “It takes a lot of effort to joystick around, and if we can just show robots what to do it would be much easier to have robots learning from humans in very natural environments.”

To do things like chores, for instance. To that end, researchers at MIT are working on a similar system that teaches robots in a simulation to do certain household tasks, like making a cup of coffee. A set of commands produces a video of a humanoid grabbing a mug and using the coffee machine and such. The researchers are working on getting this to run in reverse—show the system a video of someone doing chores on YouTube and it could not only identify what’s going on, but learn from it. Finn, too, wants her system to eventually learn from more “unconstrained” videos (read: not in a lab) like you’d find on YouTube.

Let’s just be sure to keep the machines away from the comment section. Wouldn’t want to give them a reason to start the robot apocalypse.

Hunter Williams used to be an English teacher. Then, three years into that job, he started reading the book The Moon Is a Harsh Mistress. The 1966 novel by Robert Heinlein takes place in the 2070s, on the moon, which, in this future, hosts a subterranean penal colony. Like all good sci-fi, the plot hinges on a rebellion and a computer that gains self-awareness. But more important to Williams were two basic fictional facts: First, people lived on the moon. Second, they mined the moon. “I thought, ‘This is it. This is what we really could be doing,’” he says.

Today, that vision is closer than ever. And Williams is taking steps to make it reality. This year, he enrolled in a class called Space Resources Fundamentals, the pilot course for the first-ever academic program specializing in space mining. It's a good time for such an education, given that companies like Deep Space Industries and Planetary Resources are planning prospecting missions, NASA's OSIRIS-REx is on its way to get a sample of an asteroid and bring it back to Earth, and there's international and commercial talk of long-term living in space.

Williams had grown up with the space-farers on Star Trek, but he found Heinlein’s vision more credible: a colony that dug into and used the resources of its celestial body. That's the central tenet of the as-yet-unrealized space mining industry: You can't take everything with you, and, even if you can, it's a whole lot cheaper not to—to mine water to make fuel, for instance, rather than launching it on overburdened rockets. “I saw a future that wasn't a hundred or a thousand years away but could be happening now,” says Williams.

So in 2012, he adjusted trajectory and went to school for aerospace engineering. Then he worked at Cape Canaveral in Florida, doing ground support for Lockheed Martin. His building, on that cosmic coast, was right next to one of SpaceX's spots. “Every day when I came to work, I would see testaments to new technology,” he says. “It was inspiring.”

A few years later, he still hadn't let go of the idea that humans could work with what they found in space. Like in his book. So he started talking to Christopher Dreyer, a professor at the Colorado School of Mines’ Center for Space Resources, a research and technology development center that's existed within the school for more than a decade.

It was good timing. Because this summer, Mines announced its intention to found the world’s first graduate program in Space Resources—the science, technology, policy, and politics of prospecting, mining, and using those resources. The multidisciplinary program would offer post-baccalaureate certificates and Master of Science degrees. Although it's still pending approval for a 2018 start date, the school is running its pilot course, taught by Dreyer, this semester.

Williams has committed fully: He left his Canaveral job this summer and moved to Colorado to do research for Dreyer, and hopefully start the grad program in 2018.

Williams wasn't the only one interested in the future of space mining. People from all over, many of them non-traditional students, wanted to take Space Resources Fundamentals. And so Dreyer and Center for Space Resources director Angel Abbud-Madrid decided to run it remotely, ending up with about 15 enrollees who log in every Tuesday and Thursday night for the whole semester. Dreyer has a special setup in his office for his virtual lectures: a laptop stand, a wall of books behind him, a studio-type light that shines evenly.

In the minutes before the Thanksgiving-week class started, students' heads popped up on Dreyer's screen as they logged in. Some are full-time students at Mines; some work in industry; some work for the government. There was the employee from the FAA’s Office of Commercial Space Transportation, an office tasked, in part, with making sure the US obeys international treaties as it explores beyond the planet. Then there’s Justin Cyrus, the CEO of a startup called Lunar Outpost. Cyrus isn’t mining any moons yet, but Lunar Outpost has partnered with Denver’s Department of Environmental Health to deploy real-time air-quality sensors, of the kind it hopes to develop for moony use.

Cyrus was a Mines graduate, with a master’s in electrical and electronics engineering; he sought out Dreyer and Abbud-Madrid when he needed advice for his nascent company. When the professors announced the space resources program, Cyrus decided to get in on this pilot class. He, and the other attendees, seem to see the class not just as an educational opportunity but also as a networking one: Their classmates, they say, are the future leaders of this industry.

Cyrus, the FAA employee, and Williams all smiled from their screens in front of benign backgrounds. About a dozen other students—all men—joined in by the time class started. The day's lesson, about resources on the moon, came courtesy of scientist Paul Spudis, who live-broadcasted from a few states away. Spudis, a guest lecturer, showed charts and maps and data about resources the moon might harbor, and where, and their worth. He's bullish on the prospects of prospecting. Toward the end of his talk, he said, "I think we'll have commercial landings on the moon in the next year or so." Indeed, the company Moon Express is planning to land there in 2018, in a bid to win the Google Lunar X Prize.

Back during Halloween week, the class covered the Outer Space Treaty, a creation of the United Nations that governs outer-space actions and (in some people's interpretations) makes the legality of space mining dubious. The lecture was full of policy detail, but the students drove the ensuing Q&A toward the sociological. Space mining would disproportionately help already-wealthy countries, some thought, despite talk in the broader community about how space mining lowers the barrier to space entry.

In this realism, and this thoughtfulness, Dreyer's class is refreshing. The PR talk of big would-be space mining companies like Planetary Resources and Deep Space Industries can be slick, uncomplicated, and (sometimes) unrealistic. It often skips over the many steps between here and self-sustaining space societies—not to mention the companies' own long-term viability.

But in Space Resources Fundamentals, the students seem grounded. Student Nicholas Proctor, one of few with a non-engineering background, appreciates the pragmatism. Proctor studied accounting as an undergrad and enrolled at Mines in mineral economics. After he received a NASA grant to study space-based solar power and its applications to the mining industry, Abbud-Madrid sent him an email telling him about the class. The professor thought it would be a good fit—and Proctor obviously agreed.

After Thanksgiving-week class was over, students logged off, waving one-handed goodbyes. Williams had been watching from the lab downstairs, in a high-tech warehouse-garage combo. There, he and other students work among experiments about how dust moves in space, and what asteroids are actually like. Of course, they're also interested in how to get stuff—resources—out of them. An old metal chamber dominates the room, looking like an unpeopled iron lung. "The big Apollo-era chamber is currently for asteroid mining," Williams explained, "breaking apart rocks with sunlight and extracting the water and even precious metals."

While Williams closed up class shop downstairs, Dreyer and Abbud-Madrid hung out in Dreyer's office for a few minutes. Dreyer, leaning back in his well-lit chair, talked bemusedly about some of the communications they receive. “We get interest from people to find out what they can mine and bring back to Earth and become a trillionaire,” he said.

That’s not really what the Space Resources program is about, in part because it’s not clear that’s possible—it’s expensive to bring the precious (to bring anything) back to Earth. The class focus—and, not coincidentally, the near-term harvest—is the H2O, which will stay in space, for space-use. “No matter how complex our society becomes, it always comes back to water,” said Abbud-Madrid. He laughed. “We’re going to the moon,” he continued. “For water.”

Personal technology is getting a bad rap these days. It keeps getting more addictive: Notifications keep us glued to our phones. Autoplaying episodes lure us into Netflix binges. Social awareness cues—like the "seen-by" list on Instagram Stories—enslave us to obsessive, ouroboric usage patterns. (Blink twice if you've ever closed Instagram, only to re-open it reflexively.) Our devices, apps, and platforms, experts increasingly warn, have been engineered to capture our attention and ingrain habits that are (it seems self-evident) less than healthy.

Unless, that is, you're talking about fitness trackers. For years, the problem with Fitbits, Garmins, Apple Watches, and their ilk has been that they aren't addictive enough. About one third of people who buy fitness trackers stop using them within six months, and more than half eventually abandon them altogether.

As for that guy at work whose Fitbit appears to be bionically integrated with his wrist, it's unclear whether wearing the thing actually makes him more fit. Most studies on the effectiveness of fitness trackers have produced weak or inconclusive findings (blame short investigation windows and small, homogeneous sample sizes). In fact, two of the best-designed studies to date have turned up less-than-stellar results.

The first, a randomized controlled trial involving 800 test subjects, was conducted between June 2013 and August 2014. The results, which were published last year in The Lancet Diabetes & Endocrinology, found that, after one year of use, a clip-on activity tracker had no effect on test subjects' overall health and fitness—even when it was combined with a financial incentive. (In a perverse twist, volunteers whose incentives were removed six months into the study fared worse, in the long run, than those who were never offered them at all.)

The second, an RCT out of the University of Pittsburgh conducted between October 2010 and October 2012, examined whether combining a weight-loss program with a fitness tracker, worn on the upper arm, could help test subjects lose more weight or improve their overall health. The results, published last year in the Journal of the American Medical Association, showed that subjects without fitness trackers lost more weight than their gadget-wearing counterparts—a difference of about eight pounds. And while it's true that weight is not a great proxy for health, the findings also showed that the test subjects with fitness trackers were no more active or fit than those without.

All of which is, frankly, pretty embarrassing for companies that manufacture fitness devices—not to mention disquieting for the people who wear them.

And yet, none of this means you should ditch your fancy new fitness tracker. Have companies like Fitbit and Garmin been slow to incorporate sticky features into their products? Yes. Unequivocally. By 2013—the year Apple brought attention-enslaving push notifications to its phones’ lock screens, and around the time the Lancet study was getting off the ground—fitness trackers and their accompanying apps had only just begun to leverage theories from psychology and behavioral economics. But today's products are different.

The fact is, most existing studies on fitness trackers—including the two I cited above—hinge on devices that are several years old. (Think glorified pedometers that don't connect seamlessly with the supercomputer in your pocket.) And while peer-reviewed research on the latest wave of workout gadgets is still sparse, signs suggest newer wearables are finally becoming more addictive.

For starters, wearable fitness trackers themselves have turned into wildly capable machines. It's no longer enough to measure steps and active minutes; features like sleep-tracking and 24/7 heart rate monitoring have also become table stakes. So, too, have the beefy batteries necessary to make features like continuous heart-rate detection worth a damn. Fitbit's newest "motivating timepiece," the Ionic, can go four days between charges. The Fenix 5, Garmin's flagship fitness watch, can last up to two weeks.

"If it's comfortable, it's waterproof, the display's always readable, and it's got a long battery life, there's less excuse for people to take it off," says Phil McClendon, Garmin's lead product manager. For technology companies, few metrics matter more than engagement. Application developers call it time in app. Online publishers (like WIRED!) call it time on site. Wearable manufacturers are all about that time on wrist.

The software's gotten better, too, along with user experience. Collecting information is one thing. Presenting it in a way people find comprehensible, motivating, and actionable is another. Consider something as simple as a reminder to move—another feature ubiquitous among newer fitness watches. Buzzing people once an hour, regardless of their current activity, is annoying (if my device tells me to get up and move while I'm on a hike, it's going off a cliff). Instead, most wearables now tell you to move only if you've been sedentary for more than a predetermined period of time. And according to Fitbit, at least, those reminders work. "People who would get six reminders to move a day, on average, after a few months, they get about 40 percent fewer reminders to move," says Shelton Yuen, Fitbit's vice president of research. "That’s a very detailed example, but I feel like it’s such an important one, because it means the user's innate behavior is changing."
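The sedentary-only reminder described above is a small piece of logic. A sketch under assumed numbers; the 250-step threshold is illustrative, not any vendor's documented value:

```python
# Sketch of a sedentary-aware reminder; the threshold is an assumption,
# not Fitbit's or Garmin's actual parameter.
def should_remind(steps_this_hour, minimum_steps=250):
    """Buzz only if the wearer has been sedentary, not on a fixed schedule."""
    return steps_this_hour < minimum_steps

print(should_remind(10))   # True: nearly no movement this hour, nudge the user
print(should_remind(800))  # False: already active, stay quiet
```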

Of course, Fitbit would say that. But outside experts agree that fitness tech is improving. "There are two things, specifically, that apps and devices are actually getting better at," says University of Pennsylvania researcher Mitesh Patel, who studies whether and how wearable devices can facilitate improvements in health. The first is leveraging social networks to stoke competition or foster support. Researchers led by Penn State psychologist Liza Rovniak recently showed support networks to be highly effective at increasing physical activity in unmotivated adults, but Patel suspects the leaderboard format, a popular way of promoting competition by ranking users, fails to inspire anyone but those people at the top of the charts (who probably need the least encouragement anyway).

The second is goal setting. "We know that people need to strive for an achievable goal in order to change their behavior," Patel says, the operative word there being "achievable." The problem with early fitness trackers was that they all used the same goal (step count) and they all set the bar way too high (10,000 steps). But the average American takes just 5,000 steps a day. Asking her to double that figure isn't just unrealistic—it can actually be discouraging.

But today's fitness wearables tailor their feedback to users' individual habits. Rather than tell you to take 10,000 steps, Garmin's Insights feature will nudge you if it senses you're moving less than you usually do on a given day of the week. Fitbit now allows users to set and track personalized goals related to things like weight and cardiovascular fitness.
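A rough sketch of that kind of day-of-week baseline nudge; the function, the baseline numbers, and the 20 percent margin are all illustrative assumptions, not Garmin's or Fitbit's actual algorithm:

```python
# Illustrative only: compares today's steps to the user's own historical
# norm for that weekday, rather than a fixed 10,000-step target.
def needs_nudge(steps_so_far, typical_steps_today, margin=0.2):
    """Nudge if the count trails the user's own norm by more than the margin."""
    return steps_so_far < typical_steps_today * (1 - margin)

weekly_baseline = {"Mon": 6000, "Sat": 9000}  # per-user historical averages
print(needs_nudge(4000, weekly_baseline["Mon"]))  # True: well below a typical Monday
print(needs_nudge(8000, weekly_baseline["Sat"]))  # False: within the normal range
```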

These are just some of the ways wearable manufacturers have begun borrowing theories from psychology and behavioral economics to motivate users in recent years—and there will be more to come. "They're constantly adding features," says Brandeis University psychologist Alycia Sullivan, a researcher at the Boston Roybal Center for Active Lifestyle Interventions and coauthor of a recent review of fitness tracker motivation strategies. Now that these devices are small, powerful, and packed with sensors, she says, expect most of those features to show up on the software side of things. "That's where these companies are most able to leverage the data they're accumulating toward interactive, personalized information you'll actually use."

It may have taken them a while to catch up with the Facebooks and Netflixes of the world, but our fitness devices are finally poised to hijack our brains—and bodies—for good.

The Case of the Missing Dark Matter

March 20, 2019 | Story

Physicists don’t know much about dark matter. They can’t agree on what it’s made of, how much a single particle weighs, or the best way to construct a Play-Doh diorama of it. (How would you do it? Dark matter is invisible—light doesn’t interact with it at all.) Nobody has ever caught a dark matter particle on Earth.

But after 30-plus years of telescope observations, most researchers do agree on one thing: The universe contains a lot of it. Astrophysicists calculate that dark matter outweighs ordinary matter in the universe by more than five to one, in part because galaxies rotate too fast for their visible star-stuff alone to hold them together. Without the extra dark matter, the laws of physics say that these galaxies would fly apart—the Milky Way, for example, rotates so fast that it must contain 30 times more dark matter than ordinary matter. In fact, every galaxy that astronomers have ever studied contains dark matter.

Until now.

An international team of astrophysicists has discovered a galaxy 65 million light years away with so little dark matter that it may contain none at all. To arrive at this conclusion, they measured the speeds of 10 twinkly blobs in the galaxy, called globular clusters, that each contain millions of stars. Their measurements showed that this galaxy’s stars can handle its rotational speed. Compared to other galaxies of the same brightness, “it has at least 400 times less dark matter than what we expected,” says astrophysicist Pieter van Dokkum of Yale University.

This is weird—and it could change what astrophysicists think dark matter is, in addition to upending their understanding of how galaxies form, says van Dokkum. Right now, they think that galaxies form around a scaffolding made of dark matter. The stars only take shape on top of the dark matter that is already there. “Dark matter accumulates; ordinary gas falls into it; it turns into stars, and then you get a galaxy,” says astrophysicist Jeremiah Ostriker of Columbia University, who was not involved in the work.

“Finding a galaxy without dark matter is an oxymoron,” says van Dokkum. It’s like finding a body without a skeleton. “How do you form such a thing? How do you create a galaxy without dark matter first?”

However, it’s still too early to throw out the old rules, says astrophysicist James Bullock of the University of California, Irvine. He points out that the galaxy, memorably named NGC1052-DF2, is orbiting another one. It’s possible that this galaxy formed on top of dark matter just like any other, and the neighboring galaxy stripped the dark matter away, he says.

To imagine this process, you can visualize dark matter as a diffuse collection of individual particles—unlike ordinary matter, which clumps into stars and planets. “It’s better to think of it as a fluid, like a sea of dark matter,” says Bullock. The leading dark matter theory predicts that this “sea” of particles moves around a galaxy in deep, plunging orbits like comets around the sun. Bullock thinks that as the dark matter particles reached the extremes of their orbits, forces from the neighboring galaxy could have ripped them away.

The next step is to figure out whether this galaxy is an exception or the norm, says Ostriker. If astrophysicists find more similar galaxies, they’ll have to revise their current theories about dark matter. The leading theory—that dark matter consists of so-called weakly interacting massive particles, each slightly heavier than a proton—would not be able to explain the existence of many dark matter-less galaxies.

Other theories might work better. For example, Ostriker has proposed a theory in which dark matter particles are more than 10^30 times lighter than WIMPs, and which does predict that certain galaxies should have extremely low amounts of dark matter.

If the current theory is wrong, that will also affect the strategies of the experiments trying to catch dark matter particles on Earth, says Bullock. These collaborations, such as the LUX-ZEPLIN experiment in South Dakota, the XENON1T experiment in Italy, and the ADMX experiment in Washington, are trying to figure out what dark matter is actually made of, and they look to astronomical observations to guide their detector designs. LUX-ZEPLIN and XENON1T both use liquid xenon to hunt for WIMPs. ADMX looks for another candidate known as an axion, which is lighter than a WIMP and requires a different type of detector.

Van Dokkum and his team plan to keep searching for similar galaxies—or just any other weird thing that challenges the current understanding of dark matter. In 2016, they found the opposite of this galaxy—one that was rotating so fast that they concluded it was 99.99 percent dark matter. “That object was a surprise in the other direction,” he says. They don’t know how that galaxy formed, either.

They’re hoping that these weird objects will help guide theorists like Ostriker and Bullock to better understand what dark matter is. “We know so little about dark matter that any new constraint is welcome,” says van Dokkum. Even if it means throwing away what little they have.



For 20 years, an experiment in Italy known as DAMA has detected an oscillating signal that could be coming from dark matter—the fog of invisible particles that ostensibly fill the cosmos, sculpting everything else with their gravity.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

One of the oldest and biggest experiments hunting for dark matter particles, DAMA is alone in claiming to see them. It purports to pick up on rare interactions between the hypothesized particles and ordinary atoms. But if these dalliances between the visible and invisible worlds really do produce DAMA’s data, several other experiments would probably also have detected dark matter by now. They have not.

Late last month, Rita Bernabei of the University of Rome Tor Vergata, DAMA’s longtime leader, presented the results of an additional six years of measurements. She reported that DAMA’s signal looks as strong as ever. But researchers not involved with the experiment have since raised serious arguments against dark matter as the signal’s source.

DAMA searches for popular dark matter candidates called WIMPs, or “weakly interacting massive particles.” The scientists monitor an array of sodium iodide crystals kept deep under Gran Sasso Mountain in the Apennines, looking for flashes of radiation that could be caused by dark matter particles colliding with atomic nuclei in the crystals. As the solar system hurtles through the galaxy, “it looks like a wind of WIMPs coming at you,” explained Katherine Freese, a physicist at the University of Michigan who in 1986 co-developed the idea for such an experiment, “in the same way that when you’re driving it looks like the rain is coming into your windshield.”

Exactly in line with this hypothesis, the DAMA scientists find that nuclear activity in their crystals varies throughout the year. The signal always peaks in June, when Earth is moving fastest through the dark-matter-filled galaxy, and troughs in December, when the planet curves into the leg of its orbit that opposes the sun’s motion around the galaxy, slowing us relative to the dark matter wind.
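The expected seasonal pattern is commonly modeled as a constant rate plus a small cosine term peaking around June 2. A minimal sketch, with the mean rate and modulation amplitude set to arbitrary illustrative values rather than DAMA's fitted ones:

```python
# A sketch of the annually modulated event rate an experiment like DAMA
# looks for: constant rate plus a cosine that peaks when Earth's orbital
# velocity adds to the Sun's motion through the galaxy. Amplitude, mean
# rate, and units here are illustrative assumptions.
import math

T = 365.25           # modulation period, days
t0 = 152.5           # phase: day of year of the peak (~June 2)
R0, Sm = 1.0, 0.02   # mean rate and modulation amplitude (arbitrary units)

def rate(t_days):
    """Expected event rate on day-of-year t_days."""
    return R0 + Sm * math.cos(2 * math.pi * (t_days - t0) / T)

june, december = rate(152.5), rate(335.0)
print(f"June rate: {june:.4f}, December rate: {december:.4f}")
```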

The latest run of the experiment, called DAMA/LIBRA-phase 2, began in 2011. After taking data for six earthly orbits, the team reports that they continue to see a seasonal signal consistent with dark matter. As Bernabei told Quanta by email, “The annual modulation signature and the adopted procedures assure sensitivity to a plethora of possible dark matter candidates.”

Outside experts see it differently. In a paper posted April 4 to the physics preprint site arxiv.org, three physicists showed that a standard dark-matter WIMP cannot produce the new DAMA signal. “The vanilla one that everybody loves—that one’s gone,” said Freese, who coauthored the new paper with her student Sebastian Baum and Chris Kelso of the University of North Florida.

Freese and colleagues focused on a new feature of the DAMA data. As part of the DAMA/LIBRA-phase 2 upgrade, the team at Gran Sasso switched out hardware to make their detectors sensitive to lower-energy excitations inside the sodium iodide crystals. Bernabei reported an annual modulation in lower-energy nuclear recoils that was broadly similar to the signal for higher-energy recoils.

But if a vanilla WIMP were really the source of the annual modulation, the low-energy recoils should change relative to the high-energy recoils, wrote Freese and her coauthors. They found that nuclear activity should vary between June and December either much more dramatically, or much less so, at low energies than at high energies, depending on whether dark matter particles are lightweight or heavy. If WIMPs are light, DAMA should see them colliding with light sodium atoms at low energies much more often than with heavy iodine atoms. Overall, DAMA’s signal should be strongest for the very-lowest-energy recoils. Alternatively, heavy WIMPs will interact almost exclusively with iodine atoms at low energies and very little with sodium. Overall, in that case, the signal will weaken as you look at the lowest-energy events.

Instead, neither shift is seen in the DAMA/LIBRA-phase 2 data, “which is difficult to explain with dark matter,” said Jonathan Davis, a theoretical physicist at King’s College London.

In their paper, Baum, Freese and Kelso show that WIMPs can still generate the observed annual modulation if they have a twist: an innate preference for protons over neutrons that will lead them to interact more often with sodium than iodine (which has more neutrons). However, several physicists said this special “isospin-violating” property probably would have affected the results of other dark matter experiments, such as XENON1T, a 3.2-ton liquid xenon detector also located under Gran Sasso, which has seen no such effect.

The eerie silence in XENON1T, and in other catchily named dark matter detectors like LUX and PICO, had already dimmed many experts’ hopes about DAMA. These experiments, which look for telltale nuclear activity in different types of materials, have published a string of null results, ruling out large classes of WIMPs that would be compatible with DAMA’s signal. (Other dark matter candidates, such as axions, can’t be tested by these experiments.)

It had been possible, however, to think dark matter might just have an unexplained affinity for sodium iodide. But the April 4 analysis changes that. “What they show in this paper nicely is that … you can exclude DAMA with itself—not with reference to other experiments,” said Laura Baudis, a physicist at the University of Zurich and a member of the collaboration that runs XENON1T.

Hard as it is to account for the DAMA signal using dark matter, it’s equally difficult to understand it any other way. For decades, experts have mulled over more mundane explanations. “Several have been put forward and rapidly dispelled,” said Juan Collar, a physicist at the University of Chicago who leads the CoGeNT dark matter experiment. “I personally cannot come up with a good explanation.”

Davis argued in Physical Review Letters in 2014 that the annual modulation comes from a combination of muons bombarding Earth most heavily in July and solar neutrinos peaking in January. But other physicists quickly showed that the latter seasonal effect is too small to produce the signal, at least in the way he had proposed. In a new paper that’s causing some buzz, Daniel McKinsey, a physicist at the University of California, Berkeley, contends that the signal could come from argon contamination. Certain isotopes of argon radioactively decay more or less depending on the season. Yet this explanation works only if the nitrogen DAMA uses in one step of their procedure contains argon, which is unknown.

Many researchers said that a lack of transparency by Bernabei and the DAMA team has slowed efforts to understand what’s going on. For example, one limitation of Freese and coauthors’ analysis is that DAMA hasn’t released information about whether background effects amplify or diminish at lower energies, leaving outside researchers to assume that these effects have been corrected for already.

“I am certain that if they completely opened up to the community,” such as by sharing their data, “we would get to the bottom of what is causing the annual modulation,” Davis said. But the DAMA scientists present only their finalized data plots, argue strongly that their signal is evidence for dark matter, and take a “combative approach” to anyone who suggests otherwise, he said.

Other groups have had to step up. In the next few years, three new sodium iodide crystal experiments will start yielding results: ANAIS, COSINE-100 and SABRE, which has locations at Gran Sasso and at an underground lab in Australia. By replicating the experiment in the Southern Hemisphere, where summer and winter flip, SABRE will eliminate seasonal effects.

“For as long as physics remains an experimental science, the only way to advance knowledge will be through improved measurements,” Collar said. The complementary experiments either won’t see a modulation, in which case “we will have to turn the page and write the DAMA anomaly off,” he said. Or they’ll see something akin to it, in which case “we will have to work hard on a dark matter model that also explains all null observations with other detector materials.”

“Experience teaches that it will most likely not be dark matter,” said Neal Weiner, a theoretical physicist at New York University. “But I’m certainly persuaded enough that there’s a real chance it’s something interesting.”

Ultimately, said Freese, the new data release and her group’s analysis don’t change the big picture. “You still have to go build detectors made of the same material, but with different people doing it, and in different locations in the world … to figure out what the hell’s going on with DAMA,” she said. “Because nobody understands DAMA.”



Update: SpaceX successfully launched its Falcon 9 rocket on a mission to resupply the International Space Station.

Another day, another SpaceX launch. On Monday at 4:30 pm ET, the commercial space company is slated to propel a previously-flown Dragon cargo ship into low Earth orbit aboard a used Falcon 9 rocket. The Dragon spacecraft, which is carrying food and supplies for the International Space Station, is scheduled to dock with the orbital outpost on Wednesday.

The cargo run will mark SpaceX’s 14th resupply mission to the ISS, its seventh Falcon 9 launch this year, and its second such launch in less than a week. Last Friday, one year to the day after launching and landing a used rocket for the first time, the company deployed a used Falcon 9 rocket to send 10 Iridium telecommunications satellites into orbit from Vandenberg Air Force Base in California. (SpaceX has now launched five missions for Iridium, but it has only used three boosters—the power of recycling!)

On the other side of the country, the Dragon capsule currently awaiting liftoff from Pad 40 at Florida's Cape Canaveral Air Force Station is loaded with close to three tons of grub, gear, and research equipment. Among these are the Atmosphere-Space Interactions Monitor—a suite of optical cameras, photometers, and an X- and gamma-ray detector designed to study upper-atmospheric lightning and its relationship to Earth's climate, and a nice complement to the Geostationary Lightning Mappers aboard NASA's next-generation weather satellites. Also aboard are the Veggie Passive Orbital Nutrient Delivery System (an experimental method for growing food in microgravity) and the Multi-use Variable-g Platform, aka "MVP," a temperature- and humidity-controlled artificial gravity machine about the size of a microwave, with space inside for a range of biological samples, from cells to fish to flatworms.

About 10 minutes after Monday's launch, the Dragon will deploy its solar arrays and begin firing its thrusters with its sights set on the ISS. From aboard the space station, NASA astronaut Scott Tingle will assist Japanese astronaut Norishige Kanai in capturing the Dragon capsule with Canadarm2, the 58-foot-long robotic grappling arm, and mating it to the station's Harmony module. There, it will spend approximately one month before detaching and returning to Earth.

But the most noteworthy thing about Monday's resupply mission isn’t the launch itself. It won't be SpaceX's first time using a recycled rocket in a resupply mission. Neither will it be its first time lofting a previously flown Dragon capsule, let alone its first time reusing both major components simultaneously.

In truth, the most significant thing about the day's cargo run may be that there is little novelty to it whatsoever. Which is, of course, SpaceX's grand vision: To make rocket launches repeatable, reliable, quotidian. To achieve that vision, it'll need to up its launch cadence, which it appears to be doing: In 2017, the company launched 18 rockets. If Monday's liftoff goes as planned, it'll have seven on the year already, putting it well on track to exceed last year's high-water mark.



Half a century ago, the pioneers of chaos theory discovered that the “butterfly effect” makes long-term prediction impossible. Even the smallest perturbation to a complex system (like the weather, the economy or just about anything else) can touch off a concatenation of events that leads to a dramatically divergent future. Unable to pin down the state of these systems precisely enough to predict how they’ll play out, we live under a veil of uncertainty.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

But now the robots are here to help.

In a series of results reported in the journals Physical Review Letters and Chaos, scientists have used machine learning—the same computational technique behind recent successes in artificial intelligence—to predict the future evolution of chaotic systems out to stunningly distant horizons. The approach is being lauded by outside experts as groundbreaking and likely to find wide application.

“I find it really amazing how far into the future they predict” a system’s chaotic evolution, said Herbert Jaeger, a professor of computational science at Jacobs University in Bremen, Germany.

The findings come from veteran chaos theorist Edward Ott and four collaborators at the University of Maryland. They employed a machine-learning algorithm called reservoir computing to “learn” the dynamics of an archetypal chaotic system called the Kuramoto-Sivashinsky equation. The evolving solution to this equation behaves like a flame front, flickering as it advances through a combustible medium. The equation also describes drift waves in plasmas and other phenomena, and serves as “a test bed for studying turbulence and spatiotemporal chaos,” said Jaideep Pathak, Ott’s graduate student and the lead author of the new papers.

After training itself on data from the past evolution of the Kuramoto-Sivashinsky equation, the researchers’ reservoir computer could then closely predict how the flamelike system would continue to evolve out to eight “Lyapunov times” into the future, eight times further ahead than previous methods allowed, loosely speaking. The Lyapunov time represents how long it takes for two almost-identical states of a chaotic system to exponentially diverge. As such, it typically sets the horizon of predictability.
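The definition is easy to see on a toy chaotic system. A sketch using the logistic map x → 4x(1−x), a standard stand-in rather than the Kuramoto-Sivashinsky system itself: start two states a hair apart, count the steps until the gap blows up, and back out the divergence rate.

```python
# Two almost-identical states of a chaotic map diverge exponentially; the
# Lyapunov time is the time for their separation to grow by a factor of e.
import math

def f(x):
    return 4.0 * x * (1.0 - x)   # chaotic logistic map (exponent ~ ln 2)

d0 = 1e-12                       # tiny initial separation
x, y = 0.4, 0.4 + d0
n = 0
while abs(y - x) < 1e-3:         # iterate until the gap grows a billionfold
    x, y = f(x), f(y)
    n += 1

lam = math.log(1e-3 / d0) / n    # average divergence rate per step
print(f"gap exploded after {n} steps; Lyapunov time ~ {1 / lam:.1f} steps")
```

With a divergence rate near ln 2 per step, the Lyapunov time comes out near 1/ln 2 ≈ 1.4 iterations here; predicting further ahead costs exponentially more initial accuracy.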

“This is really very good,” Holger Kantz, a chaos theorist at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, said of the eight-Lyapunov-time prediction. “The machine-learning technique is almost as good as knowing the truth, so to say.”

The algorithm knows nothing about the Kuramoto-Sivashinsky equation itself; it only sees data recorded about the evolving solution to the equation. This makes the machine-learning approach powerful; in many cases, the equations describing a chaotic system aren’t known, crippling dynamicists’ efforts to model and predict them. Ott and company’s results suggest you don’t need the equations—only data. “This paper suggests that one day we might be able perhaps to predict weather by machine-learning algorithms and not by sophisticated models of the atmosphere,” Kantz said.

Besides weather forecasting, experts say the machine-learning technique could help with monitoring cardiac arrhythmias for signs of impending heart attacks and monitoring neuronal firing patterns in the brain for signs of neuron spikes. More speculatively, it might also help with predicting rogue waves, which endanger ships, and possibly even earthquakes.

Ott particularly hopes the new tools will prove useful for giving advance warning of solar storms, like the one that erupted across 35,000 miles of the sun’s surface in 1859. That magnetic outburst created aurora borealis visible all around the Earth and blew out some telegraph systems, while generating enough voltage to allow other lines to operate with their power switched off. If such a solar storm lashed the planet unexpectedly today, experts say it would severely damage Earth’s electronic infrastructure. “If you knew the storm was coming, you could just turn off the power and turn it back on later,” Ott said.

He, Pathak and their colleagues Brian Hunt, Michelle Girvan and Zhixin Lu (who is now at the University of Pennsylvania) achieved their results by synthesizing existing tools. Six or seven years ago, when the powerful algorithm known as “deep learning” was starting to master AI tasks like image and speech recognition, they started reading up on machine learning and thinking of clever ways to apply it to chaos. They learned of a handful of promising results predating the deep-learning revolution. Most importantly, in the early 2000s, Jaeger and fellow German chaos theorist Harald Haas made use of a network of randomly connected artificial neurons—which form the “reservoir” in reservoir computing—to learn the dynamics of three chaotically coevolving variables. After training on the three series of numbers, the network could predict the future values of the three variables out to an impressively distant horizon. However, when there were more than a few interacting variables, the computations became impossibly unwieldy. Ott and his colleagues needed a more efficient scheme to make reservoir computing relevant for large chaotic systems, which have huge numbers of interrelated variables. Every position along the front of an advancing flame, for example, has velocity components in three spatial directions to keep track of.

It took years to strike upon the straightforward solution. “What we exploited was the locality of the interactions” in spatially extended chaotic systems, Pathak said. Locality means variables in one place are influenced by variables at nearby places but not by places far away. “By using that,” Pathak explained, “we can essentially break up the problem into chunks.” That is, you can parallelize the problem, using one reservoir of neurons to learn about one patch of a system, another reservoir to learn about the next patch, and so on, with slight overlaps of neighboring domains to account for their interactions.

Parallelization allows the reservoir computing approach to handle chaotic systems of almost any size, as long as proportionate computer resources are dedicated to the task.
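In miniature, the chunking step might look like the following sketch. The patch and halo sizes are arbitrary, and the reservoirs themselves are omitted; this is just the bookkeeping that locality makes possible.

```python
# Split a spatially extended state into overlapping patches, so each
# reservoir sees only its own patch plus a small "halo" of neighboring
# sites that carries the local interactions. Sizes are illustrative.
def split_with_overlap(state, patch, halo):
    """Return (start, chunk) pairs: each chunk is a patch plus `halo`
    extra sites on each side, clipped at the boundaries."""
    chunks = []
    for start in range(0, len(state), patch):
        lo = max(0, start - halo)
        hi = min(len(state), start + patch + halo)
        chunks.append((start, state[lo:hi]))
    return chunks

state = list(range(12))   # 12 spatial sites of a flame front, say
chunks = split_with_overlap(state, patch=4, halo=1)
# One reservoir would train on each chunk; the overlapping halo sites
# account for interactions between neighboring patches.
print(chunks)
```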

Ott explained reservoir computing as a three-step procedure. Say you want to use it to predict the evolution of a spreading fire. First, you measure the height of the flame at five different points along the flame front, continuing to measure the height at these points as the flickering flame advances over a period of time. You feed these data streams into randomly chosen artificial neurons in the reservoir. The input data triggers the neurons to fire, triggering connected neurons in turn and sending a cascade of signals throughout the network.

The second step is to make the neural network learn the dynamics of the evolving flame front from the input data. To do this, as you feed data in, you also monitor the signal strengths of several randomly chosen neurons in the reservoir. Weighting and combining these signals in five different ways produces five numbers as outputs. The goal is to adjust the weights of the various signals that go into calculating the outputs until those outputs consistently match the next set of inputs—the five new heights measured a moment later along the flame front. “What you want is that the output should be the input at a slightly later time,” Ott explained.

To learn the correct weights, the algorithm simply compares each set of outputs, or predicted flame heights at each of the five points, to the next set of inputs, or actual flame heights, increasing or decreasing the weights of the various signals each time in whichever way would have made their combinations give the correct values for the five outputs. From one time-step to the next, as the weights are tuned, the predictions gradually improve, until the algorithm is consistently able to predict the flame’s state one time-step later.

“In the third step, you actually do the prediction,” Ott said. The reservoir, having learned the system’s dynamics, can reveal how it will evolve. The network essentially asks itself what will happen. Outputs are fed back in as the new inputs, whose outputs are fed back in as inputs, and so on, making a projection of how the heights at the five positions on the flame front will evolve. Other reservoirs working in parallel predict the evolution of height elsewhere in the flame.
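The three steps can be caricatured in pure Python, with a sine wave standing in for the flame heights, a single small reservoir, and incremental output-weight tuning of the kind described above. Everything here (sizes, rates, the normalized update rule) is an illustrative assumption, not the Maryland group's implementation:

```python
import math, random

random.seed(0)
N = 60                                        # reservoir neurons (toy size)
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-1.0, 1.0) if random.random() < 0.1 else 0.0
      for _ in range(N)] for _ in range(N)]   # sparse random recurrent links
scale = 0.9 / max(sum(abs(w) for w in row) for row in W)  # keep dynamics stable
w_out = [0.0] * N                             # the only weights that get trained
r = [0.0] * N                                 # reservoir state

def step(state, u):
    """Step 1: drive the fixed random network with scalar input u."""
    return [math.tanh(W_in[i] * u +
                      scale * sum(W[i][j] * state[j] for j in range(N)))
            for i in range(N)]

signal = [math.sin(0.3 * t) for t in range(600)]

# Step 2: nudge the output weights until output(t) matches input(t+1).
for t in range(len(signal) - 1):
    r = step(r, signal[t])
    err = signal[t + 1] - sum(w * x for w, x in zip(w_out, r))
    norm = sum(x * x for x in r) + 1e-6
    w_out = [w + 0.5 * err * x / norm for w, x in zip(w_out, r)]

# Step 3: close the loop: feed each prediction back in as the next input.
u, preds = signal[-1], []
for _ in range(20):
    r = step(r, u)
    u = sum(w * x for w, x in zip(w_out, r))
    preds.append(u)
```

Reservoir computing implementations typically fit the output weights in one shot with linear regression; the per-step nudging here just mirrors the step-by-step description above.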

In a plot in their PRL paper, which appeared in January, the researchers show that their predicted flamelike solution to the Kuramoto-Sivashinsky equation exactly matches the true solution out to eight Lyapunov times before chaos finally wins, and the actual and predicted states of the system diverge.

The usual approach to predicting a chaotic system is to measure its conditions at one moment as accurately as possible, use these data to calibrate a physical model, and then evolve the model forward. As a ballpark estimate, you’d have to measure a typical system’s initial conditions 100,000,000 times more accurately to predict its future evolution eight times further ahead.

That’s why machine learning is “a very useful and powerful approach,” said Ulrich Parlitz of the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, who, like Jaeger, also applied machine learning to low-dimensional chaotic systems in the early 2000s. “I think it’s not only working in the example they present but is universal in some sense and can be applied to many processes and systems.” In a paper soon to be published in Chaos, Parlitz and a collaborator applied reservoir computing to predict the dynamics of “excitable media,” such as cardiac tissue. Parlitz suspects that deep learning, while being more complicated and computationally intensive than reservoir computing, will also work well for tackling chaos, as will other machine-learning algorithms. Recently, researchers at the Massachusetts Institute of Technology and ETH Zurich achieved similar results as the Maryland team using a “long short-term memory” neural network, which has recurrent loops that enable it to store temporary information for a long time.

Since the work in their PRL paper, Ott, Pathak, Girvan, Lu and other collaborators have come closer to a practical implementation of their prediction technique. In new research accepted for publication in Chaos, they showed that improved predictions of chaotic systems like the Kuramoto-Sivashinsky equation become possible by hybridizing the data-driven, machine-learning approach and traditional model-based prediction. Ott sees this as a more likely avenue for improving weather prediction and similar efforts, since we don’t always have complete high-resolution data or perfect physical models. “What we should do is use the good knowledge that we have where we have it,” he said, “and if we have ignorance we should use the machine learning to fill in the gaps where the ignorance resides.” The reservoir’s predictions can essentially calibrate the models; in the case of the Kuramoto-Sivashinsky equation, accurate predictions are extended out to 12 Lyapunov times.

The duration of a Lyapunov time varies for different systems, from milliseconds to millions of years. (It’s a few days in the case of the weather.) The shorter it is, the touchier or more prone to the butterfly effect a system is, with similar states departing more rapidly for disparate futures. Chaotic systems are everywhere in nature, going haywire more or less quickly. Yet strangely, chaos itself is hard to pin down. “It’s a term that most people in dynamical systems use, but they kind of hold their noses while using it,” said Amie Wilkinson, a professor of mathematics at the University of Chicago. “You feel a bit cheesy for saying something is chaotic,” she said, because it grabs people’s attention while having no agreed-upon mathematical definition or necessary and sufficient conditions. “There is no easy concept,” Kantz agreed. In some cases, tuning a single parameter of a system can make it go from chaotic to stable or vice versa.
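That single-parameter flip is easy to demonstrate on the logistic map x → r·x(1−x): the sign of the Lyapunov exponent, estimated here by averaging the log of the map's local stretching factor along an orbit, switches from negative (a stable cycle) to positive (chaos) as r is tuned.

```python
# Tuning one parameter of the logistic map flips it between a stable cycle
# (negative Lyapunov exponent) and chaos (positive exponent).
import math

def lyapunov_exponent(r, x=0.4, burn=500, n=2000):
    """Average log|d/dx (r*x*(1-x))| = log|r*(1-2x)| along an orbit."""
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(f"r=3.2: {lyapunov_exponent(3.2):+.3f}  (stable two-cycle)")
print(f"r=4.0: {lyapunov_exponent(4.0):+.3f}  (chaotic; theory: ln 2)")
```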

Wilkinson and Kantz both define chaos in terms of stretching and folding, much like the repeated stretching and folding of dough in the making of puff pastries. Each patch of dough stretches horizontally under the rolling pin, separating exponentially quickly in two spatial directions. Then the dough is folded and flattened, compressing nearby patches in the vertical direction. The weather, wildfires, the stormy surface of the sun and all other chaotic systems act just this way, Kantz said. “In order to have this exponential divergence of trajectories you need this stretching, and in order not to run away to infinity you need some folding,” where folding comes from nonlinear relationships between variables in the systems.

The stretching and compressing in the different dimensions correspond to a system’s positive and negative “Lyapunov exponents,” respectively. In another recent paper in Chaos, the Maryland team reported that their reservoir computer could successfully learn the values of these characterizing exponents from data about a system’s evolution. Exactly why reservoir computing is so good at learning the dynamics of chaotic systems is not yet well understood, beyond the idea that the computer tunes its own formulas in response to data until the formulas replicate the system’s dynamics. The technique works so well, in fact, that Ott and some of the other Maryland researchers now intend to use chaos theory as a way to better understand the internal machinations of neural networks.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Related Video

Science

Fear Not the Robot Singularity

The robot revolution we’re in the midst of is way more interesting and way less murder-y than science fiction. Call it the multiplicity.

The robot arm hovers over a pile of products before it makes its move, snagging a toothbrush with its suction cup. It holds the product up, waits for the red flash of a barcode scanner, then turns and drops the toothbrush in a cubby hole. Next the arm suction-cups a box of Goldfish crackers, turns, and files it, too.

At a startup called Kindred in San Francisco, technicians are teaching robots how to precisely manipulate objects like these. Why? Because somebody's got one hell of an online shopping habit. The idea is to get robots so good at picking and placing products that they make human workers look like sloths on sedatives, thus supercharging order fulfillment centers. And how these researchers are trying to do it has big implications for robots beyond the warehouse.

If you want to teach a robot to pick up an object, you could do it the classical way and program it with line after line of code. Or, as Kindred says its system works, you can use more modern approaches from artificial intelligence: reinforcement learning and imitation learning.

According to Kindred, its robots start with the former. With reinforcement learning, the robots practice manipulating products on their own with trial and error. When they do something right, they “score,” hence the reinforcement. “The goal is to maximize the score over time,” says George Babu, cofounder of Kindred. “When you do something correctly, then you explore actions similar to the one that gave you a correct response.”
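Babu's "maximize the score over time" loop can be sketched with the simplest reinforcement learner there is, an epsilon-greedy bandit. This is a hypothetical illustration of the idea, not Kindred's actual algorithm; the action names, success rates, and the `learn_best_grasp` function are all invented for the example:

```python
import random

def learn_best_grasp(success_rates, trials=2000, epsilon=0.1):
    """Toy reinforcement learning (an epsilon-greedy bandit): try grasp
    actions, score successes, and gravitate toward high-scoring actions."""
    n = len(success_rates)
    counts = [0] * n
    values = [0.0] * n                    # running average score per action
    for _ in range(trials):
        if random.random() < epsilon:
            action = random.randrange(n)  # explore a random action
        else:                             # exploit the best action so far
            action = max(range(n), key=values.__getitem__)
        reward = 1.0 if random.random() < success_rates[action] else 0.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return max(range(n), key=values.__getitem__)

random.seed(0)
# Action 2 (say, a suction grip) succeeds 80% of the time.
print(learn_best_grasp([0.2, 0.5, 0.8]))  # prints 2, the 80% action
```

The "explore actions similar to the one that gave you a correct response" behavior Babu describes is the exploit branch; the occasional random action is what keeps the robot from settling too early on a mediocre grip.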

Reinforcement learning has its limitations, though. For one, it’s slow. In a purely digital environment, a simulator could rapidly try and fail, over and over and over—but with a robot in the real world, that iteration is constrained by the laws of the physical universe.

    More from the hardwired series

  • Matt Simon

    What Is a Robot?

  • Matt Simon

    This Robot Tractor Is Ready to Disrupt Construction

  • Matt Simon

    Inside SynTouch, the Mad Lab Giving Robots the Power to Feel

And two, Kindred’s robots can only teach themselves so much; there are simply too many scenarios that play out in the real world. So a human operator steps in to initiate the second of Kindred’s approaches: so-called imitation learning, looking through the robot’s eyes and guiding its arms. “Some of our algorithms are imitating where the human picked the object,” says Babu, “some of our algorithms are imitating how the human is moving through space to get the objects.”

This builds on what the robot learned through reinforcement, showing it what constitutes a good or bad grip. Essentially, it fills in knowledge gaps by creating lessons that the robot couldn’t practice on its own. Thus a robot learns to more precisely manipulate products like boxes of drugs and toothbrushes.
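Imitating "where the human picked the object" can be caricatured as looking up the closest human demonstration. The sketch below is a deliberately simplified stand-in (a nearest-neighbor lookup on one made-up feature, object width), not Kindred's system; the demonstrations and grip names are hypothetical:

```python
def imitate(demonstrations, width_cm):
    """Toy imitation learning: copy the grip used in the most similar
    human demonstration (nearest neighbor on a single feature)."""
    obs, grip = min(demonstrations, key=lambda d: abs(d[0] - width_cm))
    return grip

# Hypothetical demonstrations: (object width in cm, grip the human chose).
demos = [(2.0, "pinch"), (10.0, "suction"), (25.0, "two-hand")]
print(imitate(demos, 9.0))  # → suction
```

Real imitation learning generalizes from demonstrations rather than memorizing them, but this captures the "fills in knowledge gaps" role: any object the robot never practiced on still maps to the nearest thing a human showed it.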

Which will be essential in an ecommerce environment (Gap is currently testing Kindred’s system), where a robot may encounter objects that are hard or soft or floppy or fragile. And with a human in the loop, the robot will have a tutor to guide it remotely if it comes across something novel. “If something changes, our algorithms say, Wait, I don't recognize this object. I don't feel confident doing this,” says Babu. “We quickly kick in the human to help the robot do the task and then we can learn from that and we can improve our algorithms.”

The power to easily teach robots will make for highly adaptable machines far beyond an order fulfillment center. “Long term, it'll likely mean you don't necessarily think of robots just doing one specific thing, like buying a robot for X or Y or Z,” says UC Berkeley roboticist Pieter Abbeel, whose own startup Embodied Intelligence is using VR controls to teach robots skills. “But you buy a robot that can help you with anything, assuming you can give a few demonstrations.”

Sure, the education of the robots has just begun—even boxes of allergy medicine still give them pause. But soon enough they’ll be running laps around us, all thanks to the good old human touch.

Related Video

Science

Bossa Nova robot

This company wants its robot to take on the tedious and time-consuming job of scanning inventory at stores. Please don't kick it while it is working.

You enter the University of Colorado Boulder's newest research laboratory through the side entrance. The door—which is heavy and white, with a black, jug-style handle—slides open from right to left. Crammed inside are a plain wooden dresser, two chairs, and a small desk, above which someone has taped a mediocre landscape print (mountains, trees, clouds, etc.). A kaleidoscopic purple tapestry hangs from the far wall. The ceiling hangs so low that it forces some visitors to duck, and the flooring is made of wood. Well, wood laminate.

The modest setup occupies just a few dozen square feet of space—a tight but necessary fit, given that CU Boulder's newest research laboratory is located not in a building on the university's campus, but the back of a Ram ProMaster cargo van.

The lab is mobile because it has to be. Researchers at CU Boulder’s Change Lab built it to study marijuana’s effects on human test subjects. But even in states like Colorado, where recreational marijuana has been legal since 2014, federal law prohibits scientists from experimenting with anything but government-grown pot.

And Uncle Sam’s weed is weak.

Cultivated by the University of Mississippi with funding from the National Institute on Drug Abuse, federally sanctioned cannabis is less potent and less chemically diverse than the range of cannabis products available for purchase at dispensaries. According to findings published earlier this year in Scientific Reports, a Nature Research journal, the weed that researchers use in clinical cannabis studies is very different from the weed people actually use.

CU Boulder's mobile lab (aka the CannaVan, aka the Mystery Machine) lets researchers drive around that problem. "The idea is: If we can’t bring real-world cannabis into the lab, let’s bring the lab to the people," says neurobiologist Cinnamon Bidwell, a coauthor on the aforementioned Nature study and head of the CannaVan research team.

It works like this: CannaVan researchers first meet with test subjects on CU Boulder campus, where they assign study participants specific commercial cannabis products with known potency and chemical makeups (including edibles and concentrates). Once the test subjects leave, they purchase their assigned cannabis from a local dispensary. Later, CannaVan researchers drive to the subjects' homes. Participants enter the van sober, and researchers perform blood draws and establish test subjects' baseline mental and physical states. Then they go back into their homes; eat, smoke, vape, or dab their product as they please; and return to the van, where researchers draw the subjects' blood again, perform interviews, and evaluate things like memory and motor control.

Bidwell's team is currently using the van to investigate the potential risks of high-potency cannabis concentrates, like dabs, and the potential benefits of cannabis use among medical patients with anxiety and chronic pain. The researchers use the lab to evaluate the drugs' acute effects, track usage and quality of life, monitor symptoms, and investigate how patients titrate their doses. "Basically, we're looking at whether people can have pain relief without walking around feeling stoned all the time," Bidwell says.

Crucially, all of this happens without any CU researchers buying, touching, or even seeing commercial cannabis themselves. "As Colorado citizens, we can purchase and use these products. But as researchers, we can't legally bring them into our lab and directly test their effects, or directly analyze them," Bidwell says. The CannaVan studies are less precise than those her team could perform in a traditional lab (where they'd have greater influence over things like dosage, timing, and chemical makeup), but more controlled than a pure observational study. Plus, these studies are actually legal. “We’ve worked very closely with CU Boulder administration, our legal team, research compliance officers—the list goes on—to see that everything is above board,” Bidwell says.

The upshot: Randomized controlled trials these are not, but these first observational investigations from CU Boulder's CannaVan are liable to be some of the most relevant behavioral and therapeutic studies on cannabis in 2018, and—it seems likely—several years to come.

That's because weak government weed isn't the only thing holding back medical marijuana research. Even as California, Nevada, Massachusetts, and Maine this year join the list of states where recreational weed is legal, in a country where 93 percent of voters support some form of legal pot, cannabis retains its designation under federal law as a Schedule I narcotic. That's a classification on par with heroin and ecstasy, and one that seems unlikely to change in the current political climate.

Attorney General Jeff Sessions' aversion to medical marijuana has been well documented. In April, he directed a Justice Department task force to review and recommend changes to the Cole Memo, which, since 2013, has enabled states to implement their own medical marijuana laws with minimal intervention by the US government. A month later, Sessions asked Congress to undo the protections afforded by the Rohrabacher-Blumenauer amendment, which also shields state-legal medical marijuana programs from federal interference.

"He hasn't yet, but if Sessions prevails at rolling these protections back, everything becomes harder for everybody, and that scares me," says geneticist Reggie Gaudino, chief science officer of marijuana analytics company Steep Hill. "I think it would have a chilling effect on the entire field—sales, medical research, genetic studies, chemical analyses. All of it."

And experts agree a chilling effect is the opposite of what cannabis research needs. "There needs to be an enormous amount of work done not just on the compounds present in various cannabis products, but on the best ways to characterize exposure to those compounds," says Harvard pediatrician and public health researcher Marie McCormick. Earlier this year, she chaired a review by the National Academies of Sciences, Engineering, and Medicine of existing marijuana research—the most thorough evaluation of its kind to date. The report found strong evidence for marijuana's therapeutic potential, but gaping holes in foundational research that could guide its medical and recreational use. "It's not terribly sexy work. It's slow and methodological. But it's critical to understanding the effects of cannabis exposure, its potential risks, and its potential remedies," McCormick says. That's not all going to happen in 2018, she adds, "but developing a solid research agenda would go a long way toward moving things forward, and a big thing that would help would be the removal of marijuana's Schedule I status."

    More on Marijuana

  • Nick Stockton

    Scientists Map the Receptor That Makes Weed Work

  • Katie M. Palmer

    A New Crop of Marijuana Geneticists Sets Out to Build Better Weed

  • Megan Molteni

    Jeff Sessions' War on Medical Marijuana Gets Public Health All Wrong

In Colorado, for example, rescheduling marijuana could embolden CU Boulder's legal team to allow locally grown, non-NIDA weed on campus. This summer, state lawmakers passed House Bill 1367, a law which, when it goes into effect in July of 2018, will allow licensed Colorado cultivators and researchers to grow and study marijuana for clinical investigations. "But it’s still up to the university to say whether they’ll go with state or federal laws," Bidwell says. CU Boulder researchers receive hundreds of millions of dollars in federal funding every year; adhering to local laws over federal ones could put some of that money at risk. "We don't know how the university will come down on that," Bidwell says. "But the institution is, understandably, pretty risk averse, and we have no sense of a timeline on when they might decide."

In the meantime, Bidwell and her team will continue cruising Colorado in the CannaVan, conducting observational studies of real-world pot usage. And if you're in the Boulder area, the researchers are looking for study participants. Just … do be sure any vans you climb into are university-affiliated. Look for the CU-Boulder insignia, the chintzy purple tapestry, and the fake wood floors.

Related Video

Science

A New Crop of Marijuana Geneticists Build Better Weed

There are thousands of strains of weed. Cracking their genetic codes may be the key to transforming pot from a budding business into a high-flying industry, and a cannabis analytics lab is trying to unlock the true potential of weed. Pictures by Preston Gannaway.