Tag Archive: SCIENCE

The cook, complete with hair net, lays the red patty down on the grill and gives it a press with a spatula. And there, that unmistakable sizzle and smell. She flips the patty and gives it another press, lets it sit, presses it, and pulls it off the grill and onto a bun.

This is no diner, and this is no ordinary cook. She's wearing not an apron, but a lab coat and safety goggles, standing in a lab-kitchen hybrid in a Silicon Valley office park. Here a company called Impossible Foods has over the last six years done something not quite impossible, but definitely unlikely: Engineering a plant-based burger that smells, tastes, looks, and even feels like ground beef.

There are other veggie burgers on the market, of course, but Impossible Foods wants to sell consumers a real meat analog—one that requires a very different kind of engineering than your Boca or black bean burgers. So WIRED wants to take you on the deepest dive yet into the science behind the Impossible Burger.

To bite into an Impossible Burger is to bite into a future in which humanity has to somehow feed an exploding population without further imperiling the planet with ever more livestock. Because livestock, and cows in particular, go through unfathomable amounts of food and water (up to 11,000 gallons a year per cow) and take up vast stretches of land. And their gastrointestinal methane emissions aren’t doing the fight against global warming any favors either (cattle gas makes up 10 percent of greenhouse gas emissions worldwide).

This is the inside story of the engineering of the Impossible Burger, the fake meat on a mission to change the world with one part soy plant, one part genetically engineered yeast—and one part activism. As it happens, though, you can’t raise hell in the food supply without first raising a few eyebrows.

The Lean, Mean Heme Machine

What makes a burger a burger? The smell, for one, and taste and texture, all working in concert to create something animal. It’s loaded with all manner of proteins that interact with each other in unique ways, creating a puzzle of sorts. But Impossible Foods thinks the essence of meat lies in a compound called heme, which gives ground beef its color and vaguely metallic taste—thanks to iron in the heme molecule. In blood, heme lives in a protein called hemoglobin; in muscle, it's in myoglobin.

Interestingly, you’ll find globins (a class of proteins) not just across the animal kingdom, but in plants as well. Soy roots, for example, carry a version called leghemoglobin, which also carries heme. Leghemoglobin in soy and myoglobin in meat share a similar 3-D structure consisting of what's known as an alpha helical globin fold, which wraps around the heme.

So what if you could extract the heme from a plant to obtain that secret ingredient in ground beef? Well, the main problem, Impossible Foods found, is that you'd need a heck of a lot of soy: One acre of soybeans would yield just a kilogram of soy leghemoglobin.

Impossible Foods founder and CEO Pat Brown figured out how to hack together a better way. Technicians take genes that code for the soy leghemoglobin protein and insert them into a species of yeast called Pichia pastoris. They then feed the modified yeast sugar and minerals, prompting it to grow and replicate and manufacture heme with a fraction of the footprint of field-grown soy. With this process, Impossible Foods claims it produces a fake burger that uses a 20th of the land required for feeding and raising livestock and uses a quarter of the water, while producing an eighth of the greenhouse gases (based on a metric called a life cycle assessment).

Now, engineering a “beef” burger from scratch is of course about more than just heme, which Impossible Foods bills as its essential ingredient. Ground beef features a galaxy of different compounds that interact with each other, transforming as the meat cooks. To piece together a plant-based burger that’s indistinguishable from the real thing, you need to identify and recreate as many of those flavors as possible.

To do this, Impossible Foods is using what's known as a gas chromatography mass spectrometry system. This heats a sample of beef, releasing aromas that bind to a piece of fiber. The machine then isolates and identifies the individual compounds responsible for those aromas. “So we will now have kind of a fingerprint of every single aroma that is in beef,” says Celeste Holz-Schietinger, principal scientist at Impossible Foods. “Then we can say, How close is the Impossible Burger? Where can we make improvements and iterate to identify how to make each of those particular flavor compounds?”
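
That "how close is it" question comes down to comparing two aroma fingerprints: lists of volatile compounds and their relative abundances, one for beef and one for the plant-based patty. The sketch below is a rough illustration of that comparison, not Impossible Foods' actual pipeline; the compound names and abundances are invented, and cosine similarity is just one simple way to score how close the profiles are.

```python
# Hedged sketch: comparing two GC-MS aroma "fingerprints" as vectors of relative
# compound abundances. Compound names and numbers are invented for illustration.
import math

beef = {"2-methylfuran": 0.31, "methional": 0.22, "1-octen-3-ol": 0.18, "hexanal": 0.29}
plant_patty = {"2-methylfuran": 0.25, "methional": 0.12, "1-octen-3-ol": 0.20,
               "hexanal": 0.35, "nonanal": 0.08}

def cosine_similarity(a, b):
    """Cosine similarity between two {compound: abundance} profiles."""
    compounds = set(a) | set(b)
    dot = sum(a.get(c, 0.0) * b.get(c, 0.0) for c in compounds)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

score = cosine_similarity(beef, plant_patty)
print(f"Aroma profile similarity: {score:.2f}")  # 1.0 would be an identical fingerprint

# The compounds with the biggest gaps point to where the next iteration should focus.
gaps = sorted(set(beef) | set(plant_patty),
              key=lambda c: abs(beef.get(c, 0.0) - plant_patty.get(c, 0.0)),
              reverse=True)
print("Largest gaps:", gaps[:3])
```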

This sort of deconstruction is common in food science, a way to understand exactly how different compounds produce different flavors and aromas. "In theory, if you knew everything that was there in the right proportions, you could recreate from the chemicals themselves that specific flavor or fragrance," says Staci Simonich, a chemist at Oregon State University.

Then there’s the problem of texture. Nothing feels quite like ground beef. So Impossible Foods isolates individual proteins in the meat. “Then as we identify what those particular protein properties are, we go and look at plants for plant proteins that have those same properties,” says Holz-Schietinger. Plant proteins tend to taste more bitter, so Impossible Foods has to develop proteins with a cleaner taste.

What they’ve landed on in the current iteration is a surprising mix. Ingredients include wheat protein, to give the burger that firmness and chew. And potato protein, which allows the burger to hold water and transition from a softer state to a more solid state during cooking. For fat, Impossible Foods uses coconut with the flavor sucked out. And then of course you need the leghemoglobin for heme, which drives home the flavor of “meat.”

For something that so accurately mimics the taste and look and feel and smell of meat (and trust us, it does), the Impossible Burger is actually not all that complex. “Earlier iterations were much more complex because we didn't fully understand it,” says Holz-Schietinger (experiments with cucumber and the famously smelly durian fruit didn't … pan out, nor did trying to replicate the different connective tissues of a cow). “Now we understand which component drives each sensory experience.”

At the moment, the Impossible Burger is only available in select restaurants, though Impossible Foods just opened a plant with the idea of increasing production from 300,000 pounds a month to a million. But as they focus on expansion, some critics are raising questions about the burger of tomorrow.

Government, Meet the Future. The Future, Government

In 2014, Impossible Foods filed what’s known as a GRAS notice, or “generally recognized as safe,” with the FDA. In it, the company listed the reasons it considered soy leghemoglobin safe for humans to consume. Leghemoglobin, they argued, is chemically similar to other globins considered safe, so it should carry the same confidence with consumers. Food companies aren’t required to tell the FDA when they’re introducing new ingredients, and filing this sort of self GRAS determination is not mandatory, but Impossible Foods says it did so in the name of transparency.

“Leghemoglobin is structurally similar to proteins that we consume all the time,” says Impossible Foods’ chief science officer David Lipman. "But we did the toxicity studies anyway and they showed that that was safe.” They compared the protein to known allergens, for instance, and found no matches. The company also got the OK from a panel of experts, including food scientist Michael Pariza at the University of Wisconsin, Madison.

But the company didn't get the blessing it was looking for from the FDA. As detailed in documents FOIA'ed by environmental groups and published by The New York Times in August, the FDA questioned the company’s conclusions. “FDA believes that the arguments presented, individually and collectively, do not establish the safety of SLH [soy leghemoglobin] for consumption, nor do they point to a general recognition of safety…,” the FDA wrote in a memo. That is not to say the FDA concluded leghemoglobin to be unsafe, just that it had questions.

The FDA also noted that the company's engineered yeast doesn't just produce leghemoglobin—it also produces 40 other normally occurring yeast proteins that end up in the burger, which "raises further question on how the safety argument could be made based solely on SLH." Impossible Foods insists these proteins are safe, and notes that the yeast it has engineered is non-toxic, and that its toxicity studies examined the whole leghemoglobin ingredient.

Impossible Foods withdrew its GRAS notice in November 2015 to perform a new study. They fed rats more than 200 times the amount of the leghemoglobin ingredient that the average American would consume if the ground beef in their diet—an average of 25 grams a day—was replaced with Impossible's fake meat (adjusted for weight). They found no adverse effects.
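
The dose arithmetic behind that study is straightforward, though the article only gives the 25 grams a day and "200 times" figures. The sketch below fills in the rest with assumed values (the leghemoglobin fraction of the burger and the reference body weight are hypothetical) just to show how a weight-adjusted rat dose would be derived.

```python
# Hedged sketch of the dose arithmetic behind the rat feeding study.
# Only the 25 g/day and "200 times" figures come from the article; the rest are assumptions.

GROUND_BEEF_REPLACED_G_PER_DAY = 25.0   # from the article: average US intake replaced
LEGHEMOGLOBIN_FRACTION = 0.008          # hypothetical ~0.8% of the burger by weight
HUMAN_WEIGHT_KG = 70.0                  # assumed reference adult
SAFETY_MULTIPLE = 200                   # "more than 200 times" the human exposure

human_dose_mg_per_kg = (GROUND_BEEF_REPLACED_G_PER_DAY * LEGHEMOGLOBIN_FRACTION * 1000
                        / HUMAN_WEIGHT_KG)
rat_dose_mg_per_kg = SAFETY_MULTIPLE * human_dose_mg_per_kg

print(f"Estimated human exposure: {human_dose_mg_per_kg:.2f} mg/kg/day")
print(f"Weight-adjusted rat dose: {rat_dose_mg_per_kg:.1f} mg/kg/day")
```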

Meanwhile, the Impossible Burger is on the market, which has some environmental groups peeved. That, and there's the larger question of whether GRAS notifications should be voluntary or mandatory. “The generally recognized as safe exception was meant for common food ingredients, not for the leading-edge products, especially the innovative like the leghemoglobin,” says Tom Neltner, chemicals policy director at the Environmental Defense Fund, which was not involved in the FOIA. “We don't think it should be a voluntary review, we don't think the law allows it.” Accordingly, the group is suing the FDA over the agency’s GRAS process.

Others are concerned that leghemoglobin—again, a new ingredient in the food supply, since humans don't typically eat soy roots—hasn’t gone through enough testing to prove it’s safe, and agree with the FDA that Impossible Foods’ GRAS notification came up short. “The point of some of us that are being critical of this is not that everything that's engineered is unsafe or anything like that,” says Michael Hansen, senior staff scientist at the Consumers Union, which was also not involved in the FOIA. “It's like, look, any new food ingredient, some new food additive, of course it should go through a safety assessment process.”

Hansen takes issue with the idea that leghemoglobin is similar to other edible globins and therefore safe. “As the FDA pointed out in their response, just because proteins have similar functions or similar three-dimensional structures, doesn't mean that they're similar," Hansen says. "They can have a very different amino acid sequence, and just slight changes can have impacts."

This is what happens when the future of food lands on the government’s plate. The central question: Should Americans trust companies to do their own food safety testing, or should that always be the job of the feds?

The reality is, different kinds of modified foods attract different levels of regulatory attention. "It is a patchwork system with little rhyme or reason," says crop scientist Wayne Parrott of the University of Georgia. "It depends on what is done, how it is done, and its intended use." You hear plenty about the crops, and most certainly about the long hullabaloo over that GM salmon. But not engineered microorganisms, which are extremely common. Why?

"Out of sight, out of mind," says Parrott. "And people also get more emotional over animals than they do over other things. With the salmon it was political. Very, very political."

Really, there's no inherent danger in genetically modifying a food. After all, the FDA wasn't raising its voice about soy leghemoglobin because it comes from genetically engineered yeast. The agency's job is to determine the safety of foods. "Any risk that's associated comes from traits," Parrott says. "It doesn't come from the way you put those traits in there."

This is only the beginning of a new era of high-tech, genetically engineered foods. Because if we want to feed a rapidly expanding species on a planet that stays the same size, we’re going to need to hack the food supply. Our crops will have to weather a climate in chaos. "We want to improve efficiency so we can feed 9 billion people without more land, without more water, without more fertilizer or pesticides," says Parrott.

And humanity will sure as hell have to cut back on its meat consumption. “We'll change the world more dramatically than any company possibly in history has ever done it,” says Impossible Foods founder Brown. “Because when you look at the impact of the system we're replacing, almost half of the land area of Earth is being occupied by the animal farming industry, grazing, or feed crop production.” That system, of course, will not give up ground quietly.

But who knows. Maybe shocking the system isn’t so impossible after all.

Related Video

Culture

October 2013 Issue: The Joy of Cooking with Science!

Curious what it took to create the Doritos Locos Taco? Need recipes for a vegan 'meat' feast? We've got the answers to that and more in the October 2013 issue of WIRED. Analyze the fifth taste and explore the world of bug sushi; chefs and researchers are engineering the cuisine of tomorrow. Also in this issue: Beyond – Two Souls, Cuarón, and a special tablet video with Bon Appétit!
Online and on Tablets: 9.17.2013
On Newsstands: 9.24.2013

It took a decade for British biotech firm Oxitec to program a self-destruct switch into mosquitoes. Perfecting that genetic technology, timed to kill the insects before they could spread diseases like Zika and dengue fever, was supposed to be the hard part. But getting the modified mosquitoes cleared to battle public health scares in the US has been just as tough. It’s been six years since the company first applied for regulatory approval—and it has zero mosquito releases to show for it.

That’s because square-shaped technologies like Oxitec’s don’t neatly fit into the round tangle of rules that govern US biotechnology. To federal regulators, mosquitoes are pests. But also animals. And disease vectors. Mosquitoes that lead to fewer mosquitoes are also, technically, a pesticide. So a handful of federal agencies can all claim the right to decide their fate.

But at least for Oxitec, that’s now changed. Earlier this month, the US Food and Drug Administration, which had been evaluating the company’s skeeters since 2011, transferred approval power to the Environmental Protection Agency. In short, Oxitec’s bugs have been deemed more like a pesticide—used to suppress wild mosquito populations—than a drug used to prevent disease. And that promises to speed up how quickly the company can get its product on the ground, especially to hurricane-prone areas, where big storms can exacerbate mosquito-borne diseases.

The switch from FDA to EPA oversight means an end to Oxitec’s endless waiting. That’s because the EPA is required by federal law to review new pesticides “as expeditiously as possible,” which the statute defines as within 12 months after the submission of an application. “Every year since 2011 I’ve been saying we’re going ahead with a pilot project that year,” says Derric Nimmo, an Oxitec scientist who leads the company’s work in the US. “Up until now that was just me being optimistic. The FDA had no timelines it had to hold to whatsoever. Now we can actually be confident about when we’ll get a decision.”

Nimmo hopes to get permission to go ahead with releases in the next six months, just in time for next year’s mosquito season. The need is especially urgent in places hit by this hurricane season. Before Harvey covered the Houston area in 33 trillion gallons of rain, Harris County public health officials had been in talks with Oxitec about a possible field trial. Monroe County, where Irma destroyed more than 600 homes and left a dozen dead, is the site of the company’s long-stalled first experimental release—the FDA approved a 22-month-long trial, but no neighborhood wanted to be the site of the mosquito dump. Both locations are now waiting to see what the EPA says before resuming negotiations with Oxitec.

It’s still too early to say whether residents’ reservations about the safety of genetically modified mosquitoes have been blown away by this year’s hurricanes. The storms left behind acres of debris where rainwater can pool, creating ideal breeding grounds for disease-carrying Aedes aegypti. Without a massive cleanup operation, Houston and the Florida Keys are likely to be smacked with swarms next summer. Instead of blanketing those areas with pesticides, Oxitec’s mosquitoes could tamp down local populations. In 2015, the company published a paper showing its release of GM mosquitoes in the Bahia state of Brazil reduced wild Aedes aegypti by as much as 90 percent, enough to prevent dengue epidemics.

EPA will be looking at those published studies, as well as all the safety and efficacy data Oxitec compiled for the FDA over the years. But it will be asking some different questions. The FDA was interested in how the mosquitoes’ lethal protein acted in the wild—could it harm humans, or other animals? Could it make disease transmission more aggressive? But the EPA will focus on things like how fast the protein degrades in the environment. Oxitec says it has that data; it’s just a matter of presenting it in the way the EPA wants it. If that’s true, and the agency is satisfied, the company could open the door to other applications of genetic sterile insect technologies.

Scientists who work on genetically modified insects to combat other human and animal diseases welcome the change. But they have reason to be skeptical. The FDA still has the power to put the brakes on a mosquito-based technology, according to the guidance, if its goal is to reduce disease transmission. Reminder: That’s pretty much all modified mosquitoes.

“It is still hard for me to understand why they are insisting on regulating mosquitoes at all, given what should be the obvious fact that they are neither food nor drug,” says Zachary Adelman, an entomologist at Texas A&M. He works on a technology called a gene drive—where a detrimental gene is driven through a wild population—to curb disease-carrying mosquitoes. It’s an approach that wouldn’t require releasing millions of sterile male mosquitoes into people’s neighborhoods every few days. But the guidance doesn’t mention gene drives at all, and just about every one of their applications could reasonably fit both the FDA and EPA’s definitions.

Is it better than having nothing at all? “Sure,” says Adelman. “But this creates a lot of uncertainty; basically the opposite of what their purpose was. If the goal is to release something into the environment, one might think the environmental protection agency would be the proper regulatory authority, regardless of intent.” Clarity may have come to Oxitec, but for everyone else it looks like the waiting has just begun.

In Tampa, the conference center’s roof leaked. In Austin, the airport flooded. In Reno, conference organizers had to wait until a motorcycle rally was over before they could do some setup.

During preparation for the SC Conference, a supercomputing meeting, there’s always something getting in the way of networking. But the conference, held annually in November, is perhaps more sensitive to water, delays, and herds of bikes than your average gathering. Because every year, a group of volunteers shows up weeks in advance to build, from the literal ground up, the world’s fastest temporary network. The conference's attendees and exhibitors—from scientific researchers to industry bigwigs—need superfast, reliable connections to stream in the results of their simulations and data analysis. Called SCinet, the network “takes one year to plan, three weeks to build, one week to operate, and less than 24 hours to tear down,” according to its catchphrase.

After all, what good is high-performance computing if its results can’t reach the wider world?

This year, in Denver, one difficulty was elevation—not of the city itself, but of the exhibit hall. The 188 volunteers built up the network's 13 equipment racks on the floor below the big, main space, constructing the infrastructure that could eventually handle around 3.6 terabits per second of traffic. (For reference, that's probably around 400,000 times more powerful than your personal connection.) And then, after construction, they had to move those millions of dollars of delicate equipment—down a hall, into an elevator, up a floor, and across the exhibit hall.
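
The "400,000 times" comparison is back-of-the-envelope arithmetic. Here's a quick check, assuming a personal connection of roughly 9 megabits per second (that home-connection speed is our assumption, not a SCinet figure):

```python
# Quick check of the "around 400,000 times more powerful" comparison.
# The 3.6 Tbps figure is from the article; the home-connection speed is an assumption.
SCINET_CAPACITY_BPS = 3.6e12      # ~3.6 terabits per second of aggregate traffic
HOME_CONNECTION_BPS = 9e6         # assumed ~9 Mbps personal connection

ratio = SCINET_CAPACITY_BPS / HOME_CONNECTION_BPS
print(f"SCinet is roughly {ratio:,.0f}x a {HOME_CONNECTION_BPS / 1e6:.0f} Mbps home connection")
# -> roughly 400,000x, matching the article's back-of-the-envelope figure
```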

On November 8, volunteers moved the equipment on customized racklifts. “Welcome to the crazy,” someone said, unprompted, as he rushed past. The SCinetters moved like tightrope walkers, servers in tow, toward the elevators.

One floor up, a guy wearing a Scooby Doo hat pulled up with a forklift, gingerly skewered one rack, and began to lift it to the central stage. As the rack approached the platform, other volunteers put their hands on it, like digital pallbearers. When they were done, eight racks sat on the stage—the beating, blinking heart of the network. Among other duties, it coordinates with the five other racks scattered strategically around the room, ready for the exhibitors that needed 100 gigabit connections, and those requiring mere 1 or 10 gigabit hookups.

The demonstrations started on November 13. NASA brought out a simulation of how shockwaves from meteorites affect the atmosphere—and then how their effects reach the ground, from impacts to tsunamis. Also on board: a simulation showing how person-transporting drones could work, and a global weather prediction model. The Department of Energy presented on particle accelerators, quantum computing in science, and cancer surveillance.

The company Nyriad Limited, meanwhile, has aligned its stars with the International Centre for Radio Astronomy Research, to develop a "science data processing" operating system for a telescope called the Murchison Widefield Array, which itself is a precursor to the Square Kilometer Array. The Square Kilometer Array will require more computing power than any telescope before it: Its data rate will exceed today's global internet traffic. Nyriad, at the conference, revealed its first commercial offering, spun out of its SKA work: a fast and low-power storage solution useful beyond the world of astronomy.

But their talks would have been all talk were it not for the homebuilt network that let them show and tell. In the weeks leading up to the actual conference, the SCinet volunteers laid 60 miles of fiber and crafted 280 WiFi access points for the nearly 13,000 attendees and their attendant devices. Oh, also, they had to have a network service provider crack open a road to illuminate a dark fiber connection.

SCinet requires lots of physical and mental labor, but people keep coming back because it's their brand of fun—and the kind of professional development they could never get at an individual institution. “They get to touch and play with equipment that they normally wouldn't get to touch and play with in their day jobs,” says Jackie Kern, former general chair of the whole conference and of SCinet. They learn new networking tricks, bring back big-kid versions of their knowledge base, and meet some of the world’s top network types. “It’s a Rolodex moment,” says Jeffrey Schwab, current SCinet chair.

Also, it’s summer camp for people who like to tape fiber to floors. “Everyone wants to be here,” says Schwab.

And the organization is trying to make it more welcoming to a wider range of people. Kate Petersen Mace helps run the Women in IT Networking at SC program, which has fully funded 19 women volunteers' attendance since 2015 (around 22 percent of this year's volunteers were women). In the male-dominated networking world, that kind of professional opportunity can be rare. Mace says she has often been the only woman in a given professional space. “I got kind of used to it and didn’t think about it,” she says. But the differences and the deficits snap into relief once there are more women in the exhibit hall (real and proverbial), watching the blinking lights on a set of server racks together alongside their male colleagues. "You feel more empowered to speak up," says Mace.

A few hours after the first rack lift, Jim Stewart of the Utah Education and Telehealth Network, who co-chairs the architecture team, treks up to the exhibit hall. All of the equipment is on stage, and SCinet volunteers have installed mirrors behind it, so passersby can appreciate the effort in all dimensions. It won’t last long, though. Remember the catchphrase? “…less than 24 hours to tear down.”

Stewart surveys the hall, thinking, apparently, of creation and destruction. “We’re not even done turning it up, and we are talking about getting out,” he says.

Related Video

Science

Inside a Tornado Modeled By a Supercomputer

Leigh Orf, an atmospheric scientist, narrates a simulation of a superstorm tornado created by one of the world's most powerful supercomputers.

This story originally appeared on CityLab and is part of the Climate Desk collaboration.

The West is burning, and there’s no relief in sight. More than 80 large wildfires are raging in an area covering more than 1.4 million acres, primarily in California, Montana, and Oregon, according to the National Interagency Fire Center. Taken together, that’s a wildfire larger than the state of Delaware.

California has declared a state of emergency as wildfires burn outside Los Angeles and threaten giant sequoias in Yosemite National Park. In Oregon, the Eagle Creek fire is tearing through the scenic Columbia River Gorge. Seattle, Boise, and Denver are socked in under a haze of smoky air and ash that experts predict could linger until the first snowfall in the mountains.

But nowhere are the fires more devastating than in Montana, where more than 1 million acres of forest burned this summer, and more than 467,000 acres are currently burning in 26 large fires that line the mountainous western side of the state.

Philip Higuera, a professor of fire ecology at the University of Montana, is used to seeing smoky air from his office window in September, but nothing like the thick smoke filling Missoula Valley right now. He recently spoke to CityLab about the fires raging across the West, what we can do about them, and why this year’s big burn might be the new normal.

Breathing the air in Missoula today feels like chain-smoking Chesterfields. Schools aren’t letting the kids out at recess, and public health authorities are saying active adults and children should avoid outdoor exertion. It’s easy to get the impression that this is an extraordinary and unprecedented fire season. But you study forest fires over a timespan of thousands of years. How unusual or unique is this fire season?

It’s not—even in the context of the 21st century. In the Northern Rockies, we had a very large fire year in 2012, in 2007, in 2000, and to an extent in 2003. In this region, 1910 remains the record-setting fire season. If we surpass that, I would be surprised. Events like these are not common on a year-to-year scale. On the other hand, when you look at the role fire plays in ecosystems, you have to look at a longer timescale, and these rare events are what’s expected every once in a while.

Why is this fire season so dramatic?

The main reason there is so much burning right now is the strong seasonal drought across the region. The term we use is that these fires are “climate enabled.” The drought makes most of the vegetation, live or dead, receptive to burning. In Missoula, we had the driest July and August on record and the third-warmest July and August. With those types of conditions, we expect widespread burning. But people underestimate the role that seasonal climate plays in these events, and we start to grasp at lots of other things to explain it.

Aside from the bad air, are most urban residents in fire-affected parts of the West safe?

Aside from that really important impact, I give a cautious yes. There is a risk. And that risk is highest in the wildland-urban interface. If you are living there, you should know that you are living with a much higher risk for exposure to wildfire. And part of the job of educators and U.S. Forest Service outreach is to make that risk known. Eventually insurance companies will also get on board. Floods are obviously on insurance companies’ radar front and center. Wildfire is still not frequent enough that they design programs around it.

Should people in the fire-prone West be living in places like that—in the suburbs and exurbs out in the forested edges of urban areas?

Every place on our planet has some natural phenomenon that is not friendly to humans. If you live on the East Coast, you are going to experience hurricanes. If you live in the Midwest, you are going to experience tornadoes. If you live across forested regions in the West, you are going to experience wildfires. We need to develop in a way that is cognizant of these processes—that is not ignorant of the way the planet, and the environment you live in, works.

Why are these fires so hard to put out?

This goes back to why the fires are happening. The fuels are extremely dry. And most areas burn during extreme weather conditions—the days when it’s hot, humidity is low and there are high winds. These are the conditions in which fires quickly double in size. They are also the conditions where it’s most dangerous to put people in front of the fire. Also, a lot of these fires start in very remote areas with rugged terrain, and just putting people on the ground comes with some risk.

Montana alone has already spent tens of millions of dollars trying to suppress wildfires this summer, and two firefighters have been killed. Is that having any impact, or is it like driving down the expressway throwing bags of money out the window?

When you say it’s not working, the key question is, What’s the goal? “It’s not working” assumes the goal is to have no fires. We will fail if that is the goal. Most of these ecosystems that are burning have evolved with fire. We expect them to burn. We need them to burn if we want them to continue to exist.

So it’s like trying to stop rain?

It’s like trying to stop an earthquake. Trying to stop a volcano. To me, the goal can’t be to have no fire. That’s gotten us into trouble when we pursued that goal. I think the metric should be how much area has burned that we wanted to burn compared to how much burned that we didn’t want to burn. Or closer to the nugget, how many resources were harmed—how many houses were lost, how many people were either directly or indirectly killed?

You don’t see raging forest fires as a failure of suppression efforts?

No. Knowing how climate enables and drives these large fires, I think that it would be impossible to put these fires out.

There is a school of thought that says we should not suppress wildfire because it allows smaller trees and underbrush to accumulate, which leads to larger, hotter fires later. So why not just let it burn?

I think as soon as you live in these environments you will quickly abandon that too-simplistic view. Maybe when I was a graduate student living in Seattle that seemed more like a possibility, but you can’t just let it burn. That would not be wise. It really comes down to what you can afford to burn and what do you want to protect. If the fire is in the wilderness, that’s great. If it’s burning toward a community, that’s not so good.

There’s good fire and bad fire?

There is a spectrum. On one end of the spectrum would be the wilderness fire that is not going to impact anyone—good fire. The fire that burns down your house or kills people—bad fire.

Another school of thought says we should allow more logging to clear trees and help prevent wildfires. Does that hold water?

I don’t think that holds water. That is based on the assumption that fires are occurring because there is more fuel available to burn than in the past. That’s generally not what’s driving this. It’s the drought. It’s true that if cut, there is less fuel in the forests. But in a lot of cases, there is what’s called slash—woody debris—left on the ground that will carry fire across the forest floor, which is what you need for it to spread.

The simple answer—if you want to eliminate fire, then pave it. There will be no fire.

Is climate change partly to blame for this year’s fires? Are wildfires in the West set to get worse because of it?

That’s what future climate models project. We can’t say this individual fire was because of climate change. We can’t say this year was because of climate change. But these types of years are what we expect to see more frequently. I heard an analogy that I think is useful. If a baseball player is using steroids and hits a home run, can you attribute that home run to steroids? You can’t—but you know that at some point some component of that was brought to you by this artificial input to the system.

There was a study that came out last year, which looked at fire occurrence in the Western United States over the last 40 years using climate modeling. The conclusion was that almost half of the burning we have seen over the past several decades can be attributed to anthropogenic climate change. The fire season has gotten significantly longer across the West, on the order of 30 days or more during the past few decades.

What are you and your family doing to live through the fire season?

Personally, I made the decision to not live in the wildland-urban interface. I live in the urban part of Missoula. We had one HEPA air filter. Last week we ordered two more. That’s our adaptation.

Related Video

Science

NASA Sets a Fire in Space—For Science!

NASA started a blaze aboard the unmanned Orbital ATK Cygnus cargo vehicle. It’s the Spacecraft Fire Experiment. Seriously, that’s exactly what NASA is calling it.

Atlas, the hulking humanoid robot from Boston Dynamics, now does backflips. I’ll repeat that. It’s a hulking humanoid that does backflips.

Check out the video below, because it shows a hulking humanoid doing a backflip. And that’s after it leaps from platform to platform, as if such behavior were becoming of a bipedal robot.

To be clear: Humanoids aren’t supposed to be able to do this. It's extremely difficult to make a bipedal robot that can move effectively, much less kick off a tumbling routine. The beauty of four-legged robots is that they balance easily, both at rest and as they’re moving, but bipeds like Atlas have to balance a bulky upper body on just two legs. Accordingly, you could argue that roboticists can better spend their time on non-human forms that are easier to master.

But there’s a case to be made for Atlas and the other bipeds like Cassie (which walks more like a bird than a human). We live in a world built for humans, so there may be situations where you want to deploy a robot that works like a human. If you have to explore a contaminated nuclear facility, for instance, you’ll want something that can climb stairs and ladders, and turn valves. So a humanoid may be the way to go.

If anything gets there, it’ll be Atlas. Over the years, it’s grown not only more backflippy but lighter and more dextrous and less prone to fall on its face. Even if it does tumble, it can now get back up on its own. So it’s not hard to see a future when Atlas does indeed tread where fleshy humans dare not. Especially now that Boston Dynamics is part of the Japanese megacorporation SoftBank, which may have some cash to spend.

While Atlas doing backflips is full-tilt insane, humanoids still struggle. Manipulation, for one, poses a big obstacle, because good luck replicating the human hand. And battery life is a nightmare, what with all the balancing. But who knows, maybe one day humanoids will flip into our lives, or at the very least at the Olympics.

Related Video

Science

Why Two-Legged Robots Aren't a Total Disaster | HardWIRED

Nothing in robotics is as unintentionally hilarious as watching a biped fall. But roboticists are making progress, like with this machine named Cassie.

In 1891, a New York doctor named William B. Coley injected a mixture of beef broth and Streptococcus bacteria into the arm of a 40-year-old Italian man with an inoperable neck tumor. The patient got terribly sick—developing a fever, chills, and vomiting. But a month later, his cancer had shrunk drastically. Coley would go on to repeat the procedure in more than a thousand patients, with wildly varying degrees of success, before the US Food and Drug Administration shut him down.

Coley’s experiments were the first forays into a field of cancer research known today as immunotherapy. Since his first experiments, the oncology world has mostly moved on to radiation and chemo treatments. But for more than a century, immunotherapy—which encompasses a range of treatments designed to supercharge or reprogram a patient’s immune system to kill cancer cells—has persisted, mostly around the margins of medicine. In the last few years, though, an explosion of tantalizing clinical results has reinvigorated the field and plunged investors and pharma execs into a spending spree.

Though he didn’t have the molecular tools to understand why it worked, Coley’s forced infections put the body’s immune system into overdrive, allowing it to take out cancer cells along the way. While the FDA doesn’t have a formal definition for more modern immunotherapies, in the last few years it has approved at least eight drugs that fit the bill, unleashing a flood of money to finance new clinical trials. (Patients had better come with floods of money too—prices can now routinely top six figures.)

But while the drugs are dramatically improving the odds of survival for some patients, much of the basic science is still poorly understood. And a growing number of researchers worry that the sprint to the clinic offers cancer patients more hype than hope.

When immunotherapy works, it really works. But not for every kind of cancer, and not for every patient—not even, it turns out, for the majority of them. “The reality is immunotherapy is incredibly valuable for the people who can actually benefit from it, but there are far more people out there who don’t benefit at all,” says Vinay Prasad, an Oregon Health and Science University oncologist.

Prasad has come to be regarded as a professional cancer care critic, thanks to his bellicose Twitter style and John Arnold Foundation-backed crusade against medical practices he says are based on belief, not scientific evidence. Using national cancer statistics and FDA approval records, Prasad recently estimated the portion of all patients dying from all types of cancer in America this year who might actually benefit from immunotherapy. The results were disappointing: not even 10 percent.
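
Prasad's estimate is, in outline, simple bookkeeping: for each cancer type with an approved checkpoint inhibitor, multiply annual deaths by the drug's response rate, add them up, and divide by total cancer deaths. The sketch below reproduces that logic with placeholder figures, not Prasad's actual inputs.

```python
# Sketch of the estimation logic (not Prasad's actual numbers, which come from national
# cancer statistics and FDA labels; the figures below are placeholders for illustration).
TOTAL_US_CANCER_DEATHS = 600_000  # approximate annual figure, for scale

# {cancer type: (approximate annual deaths, fraction who respond to a checkpoint inhibitor)}
approved_indications = {
    "melanoma":                   (9_000, 0.40),
    "non-small cell lung cancer": (150_000, 0.20),
    "renal cell carcinoma":       (14_000, 0.25),
    "hodgkin lymphoma":           (1_000, 0.65),
}

benefiting = sum(deaths * response for deaths, response in approved_indications.values())
print(f"Estimated share of dying patients who might benefit: "
      f"{benefiting / TOTAL_US_CANCER_DEATHS:.1%}")  # lands under 10 percent
```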

Now, that’s probably a bit of an understatement. Prasad was only looking at the most widely used class of immunotherapy drugs in a field that is rapidly expanding. Called checkpoint inhibitors, they work by disrupting the immune system’s natural mechanism for reining in T cells, blood-borne sentinels that bind and kill diseased cells throughout the body. The immune cells are turned off most of the time, thanks to proteins that latch on to a handful of receptors on their surface. But scientists designed antibodies to bind to those same receptors, knocking out the regulatory protein and keeping the cells permanently switched to attack mode.

The first checkpoint inhibitors just turned T cells on. But some of the newer ones can work more selectively, using the same principle to jam a signal that tumors use to evade T cells. So far, checkpoint inhibitors have shown near-miraculous results for a few rare, previously incurable cancers like Hodgkin’s lymphoma, renal cell carcinoma, and non-small cell lung cancer. The drugs are only approved to treat those conditions, leaving about two-thirds of terminal cancer patients without an approved immunotherapy option.

But Prasad says that isn’t stopping physicians from prescribing the drugs anyway.

“Hype has encouraged rampant off-label use of checkpoint inhibitors as a last-ditch effort,” he says—even for patients with tumors that show no evidence they’ll respond to the drugs. The antibodies are available off the shelf, but at a list price near $150,000 per year, it’s an investment Prasad says doctors shouldn’t encourage lightly. Especially when there’s no reliable way of predicting who will respond and who won’t. “This thwarts one of the goals of cancer care," says Prasad. "When you run out of helpful responses, how do you help a patient navigate what it means to die well?”

Merck and Bristol-Myers Squibb have dominated this first wave of immunotherapy, selling almost $9 billion worth of checkpoint inhibitors since they went on sale in 2015. Roche, AstraZeneca, Novartis, Eli Lilly, Abbvie, and Regeneron have all since jumped in the game, spending billions on acquiring biotech startups and beefing up in-house pipelines. And 800 clinical trials involving a checkpoint inhibitor are currently underway in the US, compared with about 200 in 2015. “This is not sustainable,” Genentech VP of cancer immunology Ira Mellman told the audience at last year’s annual meeting of the Society for Immunotherapy of Cancer. With so many trials, he said, the industry was throwing every checkpoint inhibitor combination at the wall just to see what would stick.

After more than a decade stretching out the promise of checkpoint inhibitors, patients—and businesses—were ready for something new. And this year, they got it: CAR T cell therapy. The immunotherapy involves extracting a patient’s T cells and genetically rewiring them so they can more efficiently home in on tumors in the body—training a foot soldier as an assassin that can slip behind enemy lines.

In September, the FDA cleared the first CAR-T therapy—a treatment for children with advanced leukemia, developed by Novartis—which made history as the first-ever gene therapy approved for market. A month later the agency approved another live cell treatment, developed by Kite Pharma, for a form of adult lymphoma. In trials for the lymphoma drug, 50 percent of patients saw their cancer disappear completely, and stay gone.

Kite’s ascendance in particular is a stunning indicator of how much money CAR-T therapy has attracted, and how fast. The company staged a $128 million IPO in 2014—when it had only a single late-phase clinical trial to its name—and sold to Gilead Science in August for $11.9 billion. For some context, consider that when Pfizer bought cancer drugmaker Medivation for $14 billion last year—one of the biggest pharma deals of 2016—the company already had an FDA-approved blockbuster tumor-fighter on the market with $2 billion in annual sales, plus two late-stage candidates in the pipeline.

While Kite and Novartis were the only companies to actually launch products in 2017, more than 40 other pharma firms and startups are currently building pipelines. Chief rival Juno Therapeutics went public with a massive $265 million initial offering—the largest biotech IPO of 2014—before forming a $1 billion partnership with Celgene in 2015. In the last few years, at least half a dozen other companies have made similar up-front deals worth hundreds of millions.

These treatments will make up just a tiny slice of the $107 billion cancer drug market. Only about 600 people a year, for example, could benefit from Novartis’ flagship CAR-T therapy. But the company set the price for a full course of treatment at a whopping $475,000. So despite the small clientele, the potential payoff is huge—and the technology is attracting a lot of investor interest. “CAR-T venture financing is still a small piece of total venture funding in oncology, but given that these therapies are curative for a majority of patients that have received them in clinical trials, the investment would appear to be justified,” says Mandy Jackson, a managing editor for research firm Informa Pharma Intelligence.

CAR-T, with its combination of gene and cell therapies, may be the most radical anticancer treatment ever to arrive in clinics. But the bleeding edge of biology can be a dangerous place for patients.

Sometimes, the modified T cells go overboard, secreting huge quantities of molecules called cytokines that lead to severe fevers, low blood pressure, and difficulty breathing. In some patients it gets even worse. Sometimes the blood-brain barrier inexplicably breaks down—and the T cells and their cytokines get inside patients’ skulls. Last year, Juno pulled the plug on its lead clinical trial after five leukemia patients died from massive brain swelling. Other patients have died in CAR-T trials at the National Cancer Institute and the University of Pennsylvania.

Scientists don’t fully understand why some CAR-T patients experience cytokine storms and neurotoxicity and others come out cured. “It’s kind of like the equivalent of getting on a Wright brothers’ airplane as opposed to walking onto a 747 today,” says Wendell Lim, a biophysical chemist and director of the UC San Francisco Center for Systems and Synthetic Biology. To go from bumping along at a few hundred feet to cruise control at Mach 0.85 will mean equipping T cells with cancer-sensing receptors that are more specific than the current offerings.

Take the two FDA-approved CAR-T cell therapies, he says. They both treat blood cancers in which immune responders called B cells become malignant and spread throughout the body. Doctors reprogram patients’ T cells to seek out a B cell receptor called CD-19. When they find it, they latch on and shoot it full of toxins. Thing is, the reprogrammed T cells can’t really tell the difference between cancerous B cells and normal ones. The therapy just takes them all out. Now, you can live without B cells if you receive antibody injections to compensate—so the treatment works out fine most of the time.

But solid tumors are trickier—they’re made up of a mix of cells with different genetic profiles. Scientists have to figure out which tumor cells matter to the growth of the cancer and which ones don’t. Then they have to design T cells with antigens that can target just those ones and nothing else. An ideal signature would involve two to three antigens that your assassin T cells can use to pinpoint the target with a bullet instead of a grenade.
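
The "bullet instead of a grenade" idea is combinatorial: only cells displaying every antigen in the signature should trigger a kill decision. This toy sketch shows that logic with invented antigen names and cells; it is a conceptual illustration, not how engineered T cells are actually programmed.

```python
# Toy illustration of the "two to three antigens" idea: require every antigen in a target
# signature before engaging (an AND gate), so cells sharing only one marker are spared.
# Antigen names and cells are invented for illustration.
TARGET_SIGNATURE = {"antigen_A", "antigen_B", "antigen_C"}

cells = {
    "tumor cell":      {"antigen_A", "antigen_B", "antigen_C"},
    "healthy cell #1": {"antigen_A"},
    "healthy cell #2": {"antigen_B", "antigen_C"},
}

def t_cell_engages(surface_antigens, signature=TARGET_SIGNATURE):
    """Engage only when the full signature is present (bullet, not grenade)."""
    return signature <= surface_antigens  # subset test: all required antigens found

for name, antigens in cells.items():
    print(f"{name}: {'kill' if t_cell_engages(antigens) else 'spare'}")
```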

Last year Lim launched a startup called Cell Design Labs to try to do just that, as well as creating a molecular on-off switch to make treatments more controlled. Only if researchers can gain this type of precise command, says Lim, will CAR-T treatments become as safe and predictable as commercial airline flight.

The field has matured considerably since Coley first shot his dying patient full of dangerous bacteria, crossed his fingers, and hoped for the best. Sure, the guy lived, even making a miraculous full recovery. But many after him didn’t. And that “fingers crossed” approach still lingers over immunotherapy today.

All these years later, the immune system remains a fickle ally in the war on cancer. Keeping the good guys from going double-agent is going to take a lot more science. But at least the revolution will be well-financed.

Related Video

Science

Biologist Explains One Concept in 5 Levels of Difficulty – CRISPR

CRISPR is a new biomedical technique that enables powerful gene editing. WIRED challenged biologist Neville Sanjana to explain CRISPR to 5 different people: a child, a teen, a college student, a grad student, and a CRISPR expert.

The Big Show Journal is no ordinary gun magazine. The print periodical, which appears on newsstands nationwide six times a year, is also, according to its website, "America’s most interesting gun and knife magazine" and "America’s most accurate and complete gun and knife show calendar." Gun enthusiasts may dispute the former claim—but the latter is less subjective than you might think.

In fact, The Big Show Journal might be the closest thing researchers have to a comprehensive record of gun shows in the US.

"There’s no readily compiled, publicly available database of where and when gun shows occur," says UC Berkeley epidemiologist Ellicott Matthay, who recently found herself in want of such a database. That includes the internet. When Matthay used the Wayback Machine to scour archived web pages for the dates and locations of past shows, she found gaps in the historical record; events she knew had happened were nowhere to be found. So she turned to trade magazines instead. The Big Show Journal, true to its claim, proved more comprehensive than competing publications like Gun List Magazine and Gun and Knife Show Calendar.

Matthay needed that data to test a hypothesis about gun violence in America. Gun shows—of which the US sees about 4,000 per year—account for between 4 and 9 percent of firearm sales. As a public health researcher, Matthay knew that gun ownership increases the risk of suicide, homicide, and unintentional casualties in the home, and that firearms acquired from gun shows are disproportionately implicated in crimes. Matthay wanted to know if gun shows could lead to increased rates of gun violence, specifically in her home state of California. And the numbers she needed to find out were buried not in a government database, but in back issues of a big, glossy gun magazine.

But magazines are hard to mine for data. So Matthay set to work, scanning issues of The Big Show Journal published between 2005 and 2013 in the copy room at UC Berkeley's school of public health. She used optical character recognition software to convert the scans into alphanumeric data. Then she trained an algorithm to isolate the dates and locations of gun shows in California and neighboring Nevada, which shares the largest border with the state.
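
As a rough picture of what that extraction step involves, the sketch below runs a regular expression over already-OCR'd text to pull out show dates, cities, and states. The listing format is invented for illustration, and a plain regex pass is a simplification of the algorithm the study actually trained.

```python
# Simplified sketch of the extraction step: the study "trained an algorithm" to isolate
# show dates and locations from OCR'd magazine text; a regex pass like this is a rough
# stand-in, and the listing format below is invented for illustration.
import re

ocr_text = """
JAN 7-8 SACRAMENTO, CA - Cal Expo
JAN 14-15 RENO, NV - Convention Center
FEB 4-5 DALY CITY, CA - Cow Palace
"""

LISTING = re.compile(
    r"(?P<month>[A-Z]{3})\s+(?P<days>\d{1,2}(?:-\d{1,2})?)\s+"
    r"(?P<city>[A-Z .']+),\s*(?P<state>CA|NV)"
)

shows = [m.groupdict() for m in LISTING.finditer(ocr_text)]
for show in shows:
    print(show)
# Each (date, city, state) record can then be located on a map and matched against
# death and hospitalization records for the surrounding weeks.
```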

When Matthay was finished, she cross-referenced her database with death records from the California Department of Public Health, along with ER and inpatient hospitalization records collected by the state. By comparing death and injury rates for the two weeks before and after each gun show, Matthay could see whether firearm casualties increased in nearby California areas in the wake of California and Nevada gun shows.
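
Conceptually, the comparison looks something like the sketch below: tally firearm deaths and injuries in a region for the two weeks on either side of a show date. The records here are invented, and the real analysis worked from state death and hospitalization data across many shows.

```python
# Minimal sketch of the pre/post comparison, with invented records; the real study used
# California death and hospitalization data keyed to regions near each show.
from datetime import timedelta
import pandas as pd

casualties = pd.DataFrame({
    "date": pd.to_datetime(["2012-03-02", "2012-03-10", "2012-03-20", "2012-03-25"]),
    "region": ["near_nv_border"] * 4,
    "count": [2, 1, 4, 3],
})
show_date, region = pd.Timestamp("2012-03-14"), "near_nv_border"

window = timedelta(days=14)
local = casualties[casualties.region == region]
before = local[(local.date >= show_date - window) & (local.date < show_date)]["count"].sum()
after = local[(local.date > show_date) & (local.date <= show_date + window)]["count"].sum()

print(f"Two weeks before: {before}, two weeks after: {after}")
print(f"Relative change: {(after - before) / before:+.0%}")
# Across all the Nevada shows, the study's aggregate finding was a spike of about 70 percent.
```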

Her hypothesis turned out to be half right: California gun shows did not appear to have a significant effect on local gun violence. But in regions near Nevada shows, rates of death and injury due to firearms spiked by 70 percent.

That staggering disparity could boil down to policy differences. California's gun laws are among the most stringent in the country. Nevada, in contrast, has some of the least restrictive—and no explicit regulations on gun shows. Data from the Bureau of Alcohol, Tobacco, Firearms and Explosives—which monitors where guns originate and where law enforcement recovers them—shows that firearms have a knack for flooding into states with tough gun laws from those without. (To cite just one example: Sixty percent of guns used to commit crimes in Chicago between 2009 and 2013 originated outside of Illinois.) A state's gun laws, it seems, can be undermined by those of its neighbors.

But that kind of inter-state analysis isn’t always possible. Many states—Nevada among them—don't require documentation of private gun sales.

"If a gun originated in Pennsylvania, changed hands between private parties at a Nevada gun show, and resulted in a firearm death or injury in California, even if we were to trace it, we'd have no way of knowing it was ever in Nevada," Matthay says. What's more, ATF only tracks the provenance of guns recovered from crime scenes. But not all crimes involving guns result in injury or death. Conversely, not all firearm casualties—particularly unintended injuries—are logged as crimes.

So researchers like Matthay have to use trickier methods to understand the impact of firearms. "The biggest challenge, when it comes to understanding gun violence, is money—but the second biggest is data," says Frederick Rivara, an epidemiologist at the University of Washington and an expert on gun violence. The NRA's efforts to stymie gun research are extensive, palpable, and well documented: When a car kills somebody in the US, the details of the incident go into a massive government database. No such database exists for gun deaths. If you want to study firearms in this country—how they move, the way they're used, how often they murder and maim—you have to get creative. See: scanning old gun magazines by hand.

But with limited data comes limited information. Matthay's study didn't trace any guns, so it's unclear whether California's post-show spike in gun violence is tied to an inundation of Nevada firearms. (It could be due to an influx of ammunition, for example.) Neither did the study examine associations with firearm casualties in Nevada. (“We could do that,” Matthay says, for Nevada and neighbor states like Oregon and Arizona, “but we would need additional funding.”) Likewise, there's no telling if the link between gun shows and gun violence is causal. A randomized controlled trial—the gold standard of evidence in medical and epidemiological circles—could help isolate the signal. But researchers can't exactly go around exposing random populations to gun shows. "That would not be ethical," Matthay says, not to mention unfeasible.

Still, the study does paint a more nuanced picture of the relationship between gun shows and gun violence. By accounting for deaths and injuries ATF's data overlooks, Matthay's research points not only to the effectiveness of California's gun laws, but also to the pitfalls of porous borders—two valuable insights that policy makers can use to inform laws at the state and federal levels.

"That's our job as investigators—to see if those laws help or not and determine whether they should be more widespread, to reduce the total gun violence," Rivara says.

With the help of Congress, researchers like Matthay could do that job much more effectively. Until then, they'll make do with the data they have.

Related Video

Security

The Numbers Don't Lie: America's Got a Gun Addiction

America’s gun addiction is bad. But to understand just how bad it is, you’ve got to see the numbers.

When Rebecca Goldin spoke to a recent class of incoming freshmen at George Mason University, she relayed a disheartening statistic: According to a recent study, 36 percent of college students don’t significantly improve in critical thinking during their four-year tenure. “These students had trouble distinguishing fact from opinion, and cause from correlation,” Goldin explained.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

She went on to offer some advice: “Take more math and science than is required. And take it seriously.” Why? Because “I can think of no better tool than quantitative thinking to process the information that is thrown at me.” Take, for example, the study she had cited. At first glance, it might seem to suggest that a third of college graduates are lazy or ignorant, or that higher education is a waste. But if you look closer, Goldin told her bright-eyed audience, you’ll find a different message: “Turns out, this third of students isn’t taking any science.”

Goldin, a professor of mathematical sciences at George Mason, has made it her life’s work to improve quantitative literacy. In addition to her research and teaching duties, she volunteers as a coach at math clubs for elementary- and middle-school students. In 2004, she became the research director of George Mason’s Statistical Assessment Service, which aimed “to correct scientific misunderstanding in the media resulting from bad science, politics or a simple lack of information or knowledge.” The project has since morphed into STATS (run by the nonprofit Sense About Science USA and the American Statistical Association), with Goldin as its director. Its mission has evolved too: It is now less of a media watchdog and focuses more on education. Goldin and her team run statistics workshops for journalists and have advised reporters at publications including FiveThirtyEight, ProPublica and The Wall Street Journal.

When Quanta first reached out to Goldin, she worried that her dual “hats”—those of a mathematician and a public servant—were too “radically different” to reconcile in one interview. In conversation, however, it quickly became apparent that the bridge between these two selves is Goldin’s conviction that mathematical reasoning and study are not only widely useful, but also pleasurable. Her enthusiasm for logic—whether she’s discussing the manipulation of manifolds in high-dimensional spaces or the meaning of statistical significance—is infectious. “I love, love, love what I do,” she said. It’s easy to believe her—and to want some of that delight for oneself.

Quanta Magazine spoke with Goldin about finding beauty in abstract thought, how STATS is arming journalists with statistical savvy, and why mathematical literacy is empowering. An edited and condensed version of the conversation follows.

Where does your passion for mathematics and quantitative thought come from?

As a young person I never thought I liked math. I absolutely loved number sequences and other curious things that, in retrospect, were very mathematical. At the dinner table, my dad, who is a physicist, would pull out some weird puzzle or riddle that sometimes only took a minute to solve, and other times I’d be like, “Huh, I have no idea how that one works!” But there was an overall framework of joy around solving it.

When did you recognize you could apply that excitement about puzzles to pursuing math professionally?

Actually very late in the game. I was always very strong in math, and I did a lot of math in high school. This gave me the false sense that I knew what math was about: I felt like every next step was a little bit more of the same, just more advanced. It was very clear in my mind that I didn’t want to be a mathematician.

But when I went to college at Harvard, I took a course in topology, which is the study of spaces. It wasn’t like anything I’d seen before. It wasn’t calculus; it wasn’t complex calculations. The questions were really complicated and different and interesting in a way I had never expected. And it was just kind of like I fell in love.

You study primarily symplectic and algebraic geometry. How do you describe what you do to people who aren’t mathematicians?

One way I might describe it is to say that I study symmetries of mathematical objects. This comes about when you’re interested in things like our universe, where the Earth is rotating, and it’s also rotating around the sun, and the sun is in a larger system that is rotating. All those rotations are symmetries. There are a lot of other ways symmetries come up, and they can get really, really complicated. So we use neat mathematical objects to think about them, called groups. This is useful because if you’re trying to solve equations, and you know you have symmetries, you can essentially find a way mathematically to get rid of those symmetries and make your equations simpler.

What motivates you to study these complex symmetries?

I just think they’re really beautiful. A lot of mathematics ultimately is artistic rather than useful. Sometimes you see a picture that’s got a lot of symmetry in it, like an M.C. Escher sketch, and it’s like, “Wow, that’s just so amazing!” But when you study mathematics, you start to “see” things in higher dimensions. You’re not necessarily visualizing them in the same way that you could with a sculpture or piece of art. But you start to feel like this whole system of objects that you’re looking at, and the symmetries it has, are really just beautiful. There’s no other good word.

How did you get involved with STATS?

When I arrived as a professor at George Mason, I knew I wanted to do more than research and mathematics. I love teaching, but I felt like I wanted to do something for the world that was not part of the ivory tower of just solving problems that I thought were really curious and interesting.

When I first joined what became STATS, it was a little bit more “gotcha” work: looking at how the media talks about science and mathematics and pointing out when someone has gotten it wrong. As we’ve evolved, I’ve become more and more interested in how journalists think about quantitative issues and how they process them. We found pretty early in our work that there was this huge gap of knowledge and education: Journalists were writing about things that had quantitative content, but they often didn’t absorb what they were writing about, and didn’t understand it, and didn’t have any way to do better because they were often on really tight timelines with limited resources.

So how has your work at STATS changed?

Our mission at STATS has changed to focus on offering journalists two things. One is to be available to answer quantitative questions. They could be as simple as “I don’t know how to calculate this percentage,” or they could be pretty sophisticated things, like “I’ve got this data, and I want to apply this model to it, and I just want to make sure that I’m handling the outliers correctly.” The other really cool thing that we do is, we go to individual news agencies and offer workshops on things like confidence intervals, statistical significance, p values, and all this highly technical language.

Someone once described to me the advice he gives to journalists. He says, “You should always have a statistician in your back pocket.” That’s what we hope to be.
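
To give a flavor of what those back-pocket calculations look like, here is a minimal sketch in Python with made-up survey numbers (an illustration added here, not material from STATS): one way to turn a simple yes/no poll into a percentage, an approximate 95 percent confidence interval, and a two-sided p-value.

    # Hypothetical example: 612 of 1,000 respondents answered "yes".
    # Normal-approximation formulas; fine for large samples, rough for small ones.
    import math

    def proportion_summary(successes, n, null_p=0.5):
        p_hat = successes / n                        # the percentage, as a fraction
        se = math.sqrt(p_hat * (1 - p_hat) / n)      # standard error of the estimate
        ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)  # approximate 95% confidence interval
        # z-test against the null hypothesis that the true proportion is null_p
        z = (p_hat - null_p) / math.sqrt(null_p * (1 - null_p) / n)
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
        return p_hat, ci, p_value

    print(proportion_summary(612, 1000))

Run on those hypothetical numbers, it reports roughly 61.2 percent, a confidence interval of about 58 to 64 percent, and a p-value far below 0.05. Interpreting output like that is exactly the skill the workshops aim to build.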

What are the most common pitfalls of reporting on statistics?

A favorite one is distinguishing between causation and correlation. People say, “Oh, that’s obvious. Of course there’s a difference between those two things.” But when you get into examples that target our belief system, it’s really hard to disassociate them. Part of the problem, I think, is that scientists themselves always want to know more than they can with the tools they have. And they don’t always make clear that the questions they’re answering aren’t necessarily the ones you might think they’re answering.

What do you mean?

Like, you might be interested in knowing whether taking hormones is helpful or harmful to women who are postmenopausal. So you start out with a question that’s really well-defined: Does it help or hurt? But you can’t necessarily answer that question. What you can answer is the question of whether women who take hormones whom you enroll in your study — those specific women — have an increase or decrease in, say, heart disease rates or breast cancer rates or stroke rates compared to a control group or to the general population. But that may not answer your initial question, which is: “Is that going to be the case for me? Or people like me? Or the population as a whole?”

What do you hope STATS will achieve?

Partly our goal is to help change the culture of journalism so that people recognize the importance of using quantitative arguments and thinking about quantitative issues before they come to conclusions. That way, they’re coming to conclusions that are supported by science rather than using a study to further their own agenda — which is something scientists do too; they may push a certain interpretation of something. We want to arm journalists with a certain amount of rigor in their thinking so they can challenge a scientist who might say, “Well, you just don’t understand my sophisticated statistic.” There’s a lot of value in giving reporters the tools to develop their sense of quantitative skepticism so that they’re not just bullied.

You argue that statistical literacy gives citizens a kind of power. What do you mean?

What I mean is that if we don’t have the ability to process quantitative information, we can often make decisions that are more based on our beliefs and our fears than based on reality. On an individual level, if we have the ability to think quantitatively, we can make better decisions about our own health, about our own choices with regard to risk, about our own lifestyles. It’s very empowering to not be scared or bullied into doing things one way or another.

On a collective level, the impact of being educated in general is huge. Think about what democracy would be if most of us couldn’t read. We aspire to a literate society because it allows for public engagement, and I think this is also true for quantitative literacy. The more we can get people to understand how to view the world in a quantitative way, the more successful we can be at getting past biases and beliefs and prejudices.

You’ve also said that getting people to understand statistics requires more than reciting numbers. Why do you think storytelling is important for conveying statistical concepts?

As human beings, we live in stories. It doesn’t matter how quantitative you are, we’re all influenced by stories. They become like statistics in our mind. So if you report the statistics without the story, you don’t get nearly the level of interest or emotion or willingness to engage with the ideas.

How has the media’s use of data changed in the 13 years you’ve been with STATS?

With the internet, we see a tremendous growth in data produced by search engines. Journalists are becoming much more adept at collecting these kinds of data and using them in media articles. I think that the current president is also causing a lot of reflection on what we mean by facts, and in that sense journalists maybe think of it as more important in general to get the facts right.

That’s interesting. So you think the public’s awareness of “fake” news and “alternative” facts is motivating journalists to be more rigorous about fact checking?

I do think it’s very motivating. Of course sometimes information gets spun. But ultimately a very small percentage of journalists do that. I think 95 percent of both journalists and scientists are really working hard to get it right.

I’m surprised you’re not more jaded about the media.

Ha! This is maybe more a life view. I think there are people who are pessimistic about humankind and people who are optimistic.

    More Quanta

  • Kevin Hartnett

    The Mathematician Who Will Make You Fall in Love With Numbers

  • Kevin Hartnett

    A Math Genius Blooms Late and Conquers His Field

  • Kevin Hartnett

    Mathematicians Are Building a Unified Theory of Geometric Randomness

You also volunteer with math clubs for kids. What ideas about math and math culture do you try to get across?

I try to bring in problems that are really different and fun and curious and weird. For example, I’ve done an activity with kids where I’ve brought in a bunch of ribbons, and I had them learn a little bit about a field called knot theory. There are two things I’m trying to get across to them. One is that math in school is not the whole story—there’s this whole other world that is logical but also beautiful and creative. The second message is a certain emotional framework that I have to offer: that math is a joyous experience.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Related Video

Science

The Fascinating Math Behind Why You Won't Win The Powerball

The Powerball jackpot is over a billion dollars but what are your chances?
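
For a sense of the arithmetic behind that claim, assuming the Powerball format in use since 2015 (five white balls drawn from 69 and one red Powerball from 26), the jackpot odds work out to

    P(\text{jackpot}) = \frac{1}{\binom{69}{5} \times 26} = \frac{1}{11{,}238{,}513 \times 26} = \frac{1}{292{,}201{,}338},

or roughly one chance in 292 million per ticket.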

How to Solve the Biggest Mystery in Physics

March 20, 2019 | Story | No Comments

Suppose aliens land on our planet and want to learn our current scientific knowledge. I would start with the 40-year-old documentary Powers of Ten. Granted, it’s a bit out of date, but this short film, written and directed by the famous designer couple Charles and Ray Eames, captures in less than 10 minutes a comprehensive view of the cosmos.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

The script is simple and elegant. When the film begins, we see a couple picnicking in a Chicago park. Then the camera zooms out. Every 10 seconds the field of vision gains a power of 10—from 10 meters across, to 100, to 1,000 and onward. Slowly the big picture reveals itself to us. We see the city, the continent, Earth, the solar system, neighboring stars, the Milky Way, all the way to the largest structures of the universe. Then in the second half of the film, the camera zooms in and delves into the smallest structures, uncovering more and more microscopic details. We travel into a human hand and discover cells, the double helix of the DNA molecule, atoms, nuclei and finally the elementary quarks vibrating inside a proton.

The movie captures the astonishing beauty of the macrocosm and microcosm, and it provides the perfect cliffhanger endings for conveying the challenges of fundamental science. As our then-8-year-old son asked when he first saw it, “How does it continue?” Exactly! Comprehending the next sequence is the aim of scientists who are pushing the frontiers of our understanding of the largest and smallest structures of the universe. Finally, I could explain what Daddy does at work!

Powers of Ten also teaches us that, while we traverse the various scales of length, time and energy, we also travel through different realms of knowledge. Psychology studies human behavior, evolutionary biology examines ecosystems, astrophysics investigates planets and stars, and cosmology concentrates on the universe as a whole. Similarly, moving inward, we navigate the subjects of biology, biochemistry, and atomic, nuclear and particle physics. It is as if the scientific disciplines are formed in strata, like the geological layers on display in the Grand Canyon.

Moving from one layer to another, we see examples of emergence and reductionism, these two overarching organizing principles of modern science. Zooming out, we see new patterns “emerge” from the complex behavior of individual building blocks. Biochemical reactions give rise to sentient beings. Individual organisms gather into ecosystems. Hundreds of billions of stars come together to make majestic swirls of galaxies.

As we reverse and take a microscopic view, we see reductionism at work. Complicated patterns dissolve into underlying simple bits. Life reduces to the reactions among DNA, RNA, proteins and other organic molecules. The complexity of chemistry flattens into the elegant beauty of the quantum mechanical atom. And, finally, the Standard Model of particle physics captures all known components of matter and radiation in just three forces and 17 elementary particles.

Which of these two scientific principles, reductionism or emergence, is more powerful? Traditional particle physicists would argue for reductionism; condensed-matter physicists, who study complex materials, for emergence. As articulated by the Nobel laureate (and particle physicist) David Gross: Where in nature do you find beauty, and where do you find garbage?

Take a look at the complexity of reality around us. Traditionally, particle physicists explain nature using a handful of particles and their interactions. But condensed-matter physicists ask: What about an everyday glass of water? Describing its surface ripples in terms of the motions of the roughly 10²⁴ individual water molecules—let alone their elementary particles—would be foolish. Instead of the impenetrable complexities at small scales (the “garbage”) faced by traditional particle physicists, condensed-matter physicists use the emergent laws, the “beauty” of hydrodynamics and thermodynamics. In fact, when we take the number of molecules to infinity (the equivalent of maximal garbage from a reductionist point of view), these laws of nature become crisp mathematical statements.

While many scientists praise the phenomenally successful reductionist approach of the past centuries, John Wheeler, the influential Princeton University physicist whose work touched on topics from nuclear physics to black holes, expressed an interesting alternative. “Every law of physics, pushed to the extreme, will be found to be statistical and approximate, not mathematically perfect and precise,” he said. Wheeler pointed out an important feature of emergent laws: Their approximate nature allows for a certain flexibility that can accommodate future evolution.

In many ways, thermodynamics is the gold standard of an emergent law, describing the collective behavior of a large number of particles, irrespective of many microscopic details. It captures an astonishingly wide class of phenomena in succinct mathematical formulas. The laws hold in great universality—indeed, they were discovered before the atomic basis of matter was even established. And there are no loopholes. For example, the second law of thermodynamics states that a system’s entropy—a measure of the amount of hidden microscopic information—will always grow in time.
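
In symbols (a standard textbook statement, added here for reference rather than quoted from the article), Boltzmann's entropy counts those hidden microscopic arrangements, and the second law says that for an isolated system the count never goes down:

    S = k_B \ln W, \qquad \Delta S \ge 0,

where W is the number of microscopic configurations compatible with what we can observe macroscopically, and k_B is Boltzmann's constant.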

Modern physics provides a precise language to capture the way things scale: the so-called renormalization group. This mathematical formalism allows us to go systematically from the small to the large. The essential step is taking averages. For example, instead of looking at the behavior of individual atoms that make up matter, we can take little cubes, say 10 atoms wide on each side, and take these cubes as our new building blocks. One can then repeat this averaging procedure. It is as if for each physical system one makes an individual Powers of Ten movie.
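
To make that averaging step concrete, here is a toy sketch in Python (an illustration added here, not a calculation from the article): coarse-grain a row of measured values by replacing each block of neighbors with its average, then repeat. Each pass plays one frame of that imaginary Powers of Ten movie; the text's example uses cubes ten atoms on a side, while this sketch uses blocks of two for brevity.

    # Toy block-averaging in the spirit of the renormalization step described above.
    # This only illustrates the averaging; a real RG calculation also tracks how the
    # system's parameters (couplings) change from one level to the next.

    def coarse_grain(values, block=2):
        """Replace each non-overlapping block of `block` values with its average."""
        return [sum(values[i:i + block]) / block
                for i in range(0, len(values) - block + 1, block)]

    chain = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]  # microscopic "atoms"
    level1 = coarse_grain(chain)    # blocks of two become the new building blocks
    level2 = coarse_grain(level1)   # zoom out by another factor of two
    print(chain, level1, level2, sep="\n")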

Renormalization theory describes in detail how the properties of a physical system change if one increases the length scale on which the observations are made. A famous example is the electric charge of particles that can increase or decrease depending on quantum interactions. A sociological example is understanding the behavior of groups of various sizes starting from individual behavior. Is there wisdom in crowds, or do the masses behave less responsibly?

Most interesting are the two endpoints of the renormalization process: the infinitely large and the infinitely small. Here things will typically simplify because either all details are washed away, or the environment disappears. We see something like this with the two cliffhanger endings in Powers of Ten. Both the largest and the smallest structures of the universe are astonishingly simple. It is here that we find the two “standard models,” of particle physics and cosmology.

Remarkably, modern insights about the most formidable challenge in theoretical physics—the push to develop a quantum theory of gravity—employ both the reductionist and emergent perspectives. Traditional approaches to quantum gravity, such as perturbative string theory, try to find a fully consistent microscopic description of all particles and forces. Such a “final theory” necessarily includes a theory of gravitons, the elementary particles of the gravitational field. For example, in string theory, the graviton is formed from a string that vibrates in a particular way. One of the early successes of string theory was a scheme to compute the behavior of such gravitons.

However, this is only a partial answer. Einstein taught us that gravity has a much wider scope: It addresses the structure of space and time. In a quantum-mechanical description, space and time would lose their meaning at ultrashort distances and time scales, raising the question of what replaces those fundamental concepts.

A complementary approach to combining gravity and quantum theory started with the groundbreaking ideas of Jacob Bekenstein and Stephen Hawking on the information content of black holes in the 1970s, and came into being with the seminal work of Juan Maldacena in the late 1990s. In this formulation, quantum space-time, including all the particles and forces in it, emerges from a completely different “holographic” description. The holographic system is quantum mechanical, but doesn’t have any explicit form of gravity in it. Furthermore, it typically has fewer spatial dimensions. The system is, however, governed by a number that measures how large the system is. If one increases that number, the approximation to a classical gravitational system becomes more precise. In the end, space and time, together with Einstein’s equations of general relativity, emerge out of the holographic system. The process is akin to the way that the laws of thermodynamics emerge out of the motions of individual molecules.

    More Quanta

  • Natalie Wolchover

    Mathematicians Bridge the Divide Between Infinity and the Physical World

  • Natalie Wolchover

    The Man Who's Trying to Kill Dark Matter

  • Natalie Wolchover

    Quantum Gravity Research Could Unearth the True Nature of Time

In some sense, this exercise is exactly the opposite of what Einstein tried to achieve. His aim was to build all of the laws of nature out of the dynamics of space and time, reducing physics to pure geometry. For him, space-time was the natural “ground level” in the infinite hierarchy of scientific objects—the bottom of the Grand Canyon. The present point of view thinks of space-time not as a starting point, but as an end point, as a natural structure that emerges out of the complexity of quantum information, much like the thermodynamics that rules our glass of water. Perhaps, in retrospect, it was not an accident that the two physical laws that Einstein liked best, thermodynamics and general relativity, have a common origin as emergent phenomena.

In some ways, this surprising marriage of emergence and reductionism allows one to enjoy the best of both worlds. For physicists, beauty is found at both ends of the spectrum.

How Color Vision Came to the Animals

March 20, 2019 | Story | No Comments

Animals are living color. Wasps buzz with painted warnings. Birds shimmer their iridescent desires. Fish hide from predators with body colors that dapple like light across a rippling pond. And all this color on all these creatures happened because other creatures could see it.

The natural world is so showy, it’s no wonder scientists have been fascinated with animal color for centuries. Even today, the questions of how animals see, create, and use color are among the most compelling in biology.

Until the last few years, they were also at least partially unanswerable—because color researchers are only human, which means they can’t see the rich, vivid colors that other animals do. But now new technologies, like portable hyperspectral scanners and cameras small enough to fit on a bird’s head, are helping biologists see the unseen. And as described in a new Science paper, it's a whole new world.

Visions of Life

The basics: Photons strike a surface—a rock, a plant, another animal—and that surface absorbs some photons, reflects others, refracts still others, all according to the molecular arrangement of pigments and structures. Some of those photons find their way into an animal’s eye, where specialized cells transmit the signals of those photons to the animal’s brain, which decodes them as colors and shapes.

It's the brain that determines whether the colorful thing is a distinct and interesting form, separate from the photons arriving at the same time from the trees, sand, sky, lake, and so on. If it succeeds, it then has to decide whether this colorful thing is food, a potential mate, or maybe a predator. “The biology of color is all about these complex cascades of events,” says Richard Prum, an ornithologist at Yale University and co-author of the paper.

In the beginning, there was light and there was dark. That is, basic greyscale vision most likely evolved first, because animals that could anticipate the dawn or skitter away from a shadow are animals that live to breed. And the first eye-like structures—flat patches of photosensitive cells—probably didn't resolve much more than that. It wasn't enough. “The problem with using just light and dark is that the information is quite noisy, and one problem that comes up is determining where one object stops and another one starts,” says Innes Cuthill, a behavioral ecologist at the University of Bristol and coauthor of the new review.

Color adds context. And context on a scene is an evolutionary advantage. So, just like with smartphones, better resolution and brighter colors became competitive enterprises. For the resolution bit, the patch of light-sensing cells evolved over millions of years into a proper eye—first by recessing into a cup, then a cavity, and eventually a fluid-filled spheroid capped with a lens. For color, look deeper at those light-sensing cells. Wedged into their surfaces are proteins called opsins. Every time they get hit with a photon—a quantum piece of light itself—they transduce that signal into an electrical zap to the rudimentary animal's rudimentary brain. The original light/dark opsin mutated into spin-offs that could detect specific ranges of wavelengths. Color vision was so important that it evolved independently multiple times in the animal kingdom—in mollusks, arthropods, and vertebrates.

In fact, primitive fish had four different opsins, to sense four spectra—red, green, blue, and ultraviolet light. That four-fold ability is called tetrachromacy, and the dinosaurs probably had it. Since dinosaurs are the ancestors of today’s birds, many birds are tetrachromats, too.

But modern mammals don't see things that way. That's probably because early mammals were small, nocturnal things that spent their first 100 million years running around in the dark, trying to keep from being eaten by tetrachromatic dinosaurs. “During that period the complicated visual system they inherited from their ancestors degraded,” says Prum. “We have a clumsy, retrofitted version of color vision. Fishes, and birds, and many lizards see a much richer world than we do."

In fact, most mammals are dichromats, and see the world as greyish and slightly red-hued. Scientists believe that early primates regained three-color vision because spotting fresh fruit and immature leaves led to a more nutritious diet. But no matter how much you enjoy springtime or fall colors, the wildly varicolored world we humans live in now isn't putting on a show for us. It's mostly for bugs and birds. “Flowering plants of course have evolved to signal pollinators,” says Prum. “The fact that we find them beautiful is incidental, and the fact that we can see them at all is because of an overlap in the spectrums insects and birds can see and the ones we can see.”

Covered in Color

And as animals gained the ability to sense color, evolution kickstarted an arms race in displays—hues and patterns that aided in survival became signifiers of ace baby-making skills. Almost every expression of color in the natural world came about to signal, or obscure, a creature to something else.

For instance, "aposematism" is color used as a warning—the butterfly’s bright colors say “don’t eat me, you'll get sick.” "Crypsis" is color used as camouflage. Color serves social purposes, too. Like, in mating. Did you know that female lions prefer brunets? Or that paper wasps can recognize each others’ faces? “Some wasps even have little black spots that act like karate belts, telling other wasps not to try and fight them,” says Elizabeth Tibbetts, an entomologist at the University of Michigan.

    Related Stories

  • Adam Rogers

    The Science of Why No One Agrees on the Color of This Dress

  • Neel Patel

    3-D Map Shows the Colors You See But Can't Name

  • Neel Patel

    No, These Biohackers Can't Give Themselves Infrared Vision

But animals display colors using two very different methods. The first is with pigments, colored substances created by cells called chromatophores (in reptiles, fish, and cephalopods) and melanocytes (in mammals and birds). They absorb most wavelengths of light and reflect just a few, limiting both their range and brilliance. For instance, most animals cannot naturally produce red; they synthesize it from plant chemicals called carotenoids.

The other way animals make color is with nanoscale structures. Insects, and, to a lesser degree, birds, are the masters of structural color. And compared to pigment, structure is fabulous. Structural coloration scatters light into vibrant, shimmering colors, like the iridescent bib on a broad-tailed hummingbird or the metallic carapace of a golden scarab beetle. And scientists aren't quite sure why iridescence evolved. Probably to signal mates, but still: Why?
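
One simple model of how a nanoscale structure can produce color (an illustration added here; real butterfly scales and feather barbules are more elaborate multilayer stacks) is thin-film interference. A transparent film of thickness d and refractive index n, surrounded by air, strongly reflects light of wavelength λ when the reflections from its top and bottom surfaces reinforce:

    2 n d \cos\theta_t = \left(m + \tfrac{1}{2}\right)\lambda, \qquad m = 0, 1, 2, \dots

where θ_t is the angle of the light inside the film. Because that condition depends on the angle, the reflected color shifts as the viewer moves, which is exactly what iridescence is.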

Decoding the Rainbow of Life

The question of iridescence is similar to most questions scientists have about animal coloration. They understand what the colors do in broad strokes, but there's still a lot of nuance to tease out. This is mostly because, until recently, they were limited to seeing the natural world through human eyes. “If you ask the question, what’s this color for, you should approach it the way animals see those colors,” says Tim Caro, a wildlife biologist at UC Davis and the organizing force behind the new paper. (Speaking of mysteries, Caro recently figured out why zebras have stripes.)

Take the peacock. “The male’s tail is beautiful, and it evolved to impress the female. But the female may be impressed in a different way than you or I,” Caro says. Humans tend to gaze at the shimmering eyes at the tip of each tail feather; peahens typically look at the base of the feathers, where they attach to the peacock’s rump. Why does the peahen find the base of the feathers sexy? No one knows. But until scientists strapped tiny cameras, spun off from the mobile phone industry, onto the birds’ heads, they couldn’t even track the peahens’ gaze.

Another new tech: Advanced nanomaterials give scientists the ability to recreate the structures animals use to bend light into iridescent displays. By recreating those structures, scientists can figure out how genetically expensive they are to make.

Likewise, new magnification techniques have allowed scientists to look into an animal’s eye structure. You might have read about how mantis shrimp have not three or four but a whopping 12 different color receptors, and how they see the world in psychedelic hyperspectral saturation. This isn’t quite true. Those color channels aren’t linked together—not like they are in other animals. The shrimp probably aren’t seeing 12 different, overlapping color spectra. “We are thinking maybe those color receptors are being turned on or off by some other, non-color, signal,” says Caro.

But perhaps the most important modern innovation in biological color research is getting all the different people from different disciplines together. “There are a lot of different sorts of people working on color,” says Caro. “Some behavioral biologists, some neurophysiologists, some anthropologists, some structural biologists, and so on.”

And these scientists are scattered all over the globe. He says the reason he brought everyone to Berlin is so they could finally synthesize all these sub-disciplines and move toward a broader understanding of color in the world. The most important technology in understanding animal color vision isn't a camera or a nanotech surface. It's an airplane. Or the internet.

Related Video

Science

How the Morpho Butterfly Can Be Blue But Also Not Really Blue

The morpho butterfly appears blue but it isn't actually. It looks blue not because of pigment but because of some very fancy scales on its wings.