The Thomas Fire spread through the hills above Ventura, at the northern edge of the greater Los Angeles megalopolis, with the speed of a hurricane. Driven by 50 mph Santa Ana winds—bone-dry katabatic air moving at freeway speeds out of the Mojave Desert—the fire transformed overnight from a 5,000-acre burn in a charming chaparral-lined canyon to an inferno the size of Orlando, Florida, that only stopped spreading because it reached the Pacific. Tens of thousands of people evacuated their homes in Ventura; 150 buildings burned, and thousands more along the hillside and into downtown were threatened.

That isn’t the only part of Southern California on fire. The hills above Valencia, where Interstate 5 drops down out of the hills into the city, are burning. Same for a hillside of the San Gabriel Mountains, overlooking the San Fernando Valley. And the same, too, near the Mount Wilson Observatory, and on a hillside overlooking Interstate 405—the flames in view of the Getty Center and destroying homes in the rich-people neighborhoods of Bel-Air and Holmby Hills.

And it’s all horribly normal.

Southern California’s transverse ranges—the mostly east-west mountains that slice up and define the greater Los Angeles region—were fire-prone long before there was a Los Angeles. They’re a broken fragment of tectonic plate, squeezed up out of the ground by the Pacific Plate on one side and the North American on the other, shaped into the San Gabriels, the Santa Monica Mountains, the San Bernardino Mountains. Even the Channel Islands off Ventura’s coast are the tippy-tops of a transverse range.

Santa Anas notwithstanding, the transverse ranges usually keep cool coastal air in and arid desert out. Famously, they’re part of why the great California writer Carey McWilliams called the region “an island on the land.” The hills provided hiding places for cowboy crooks, hiking for the naturalist John Muir, and passes both hidden and mapped for natives and explorers coming from the north and east.

With the growth and spread of Los Angeles, fire became even more part of Southern California life. “It’s almost textbook. It’s the end of the summer drought, there has not been a lot of rain this year, and we’ve got Santa Ana winds blowing,” says Alexandra Syphard, an ecologist at the Conservation Biology Institute. “Every single year, we have ideal conditions for the types of wildfires we’re experiencing. What we don’t have every single year is an ignition during a wind event. And we’ve had several.”

Before humans, wildfires happened maybe once or twice a century, enough time between burns for fire-adapted plants like chaparral to build up a bank of seeds that could come back after a fire. Now, with fires more frequent, native plants can’t keep up. Exotic weeds take root. “A lot of Ventura County has burned way too frequently,” says Jon Keeley, a research ecologist with the US Geological Survey at the Sequoia and Kings Canyon Field Station. “We’ve lost a lot of our natural heritage.”

Fires don’t burn like this in Northern California. That’s one of the things that makes the island on the land an island. Most wildfires in the Sierra Nevada and northern boreal forests are slower, smaller, and more easily put out than fires in the south. (The Napa and Sonoma fires this year were more like southern fires—wind-driven, outside the forests, and near or amid buildings.) Trees buffer the wind and burn less easily than undergrowth. Keeley says northern mountains and forests are “flammability-limited ecosystems,” where fires only get big if the climate allows it—higher temperatures and drier conditions priming more fuel to burn. Climate change makes fires there more frequent and more severe.

Southern California, on the other hand, is an “ignition-limited ecosystem.” It’s always a tinderbox. The canyons that cut through the transverse ranges align pretty well with the direction of the Santa Ana winds; they turn into funnels. “Whether or not you get a big fire event depends on whether humans ignite a fire,” he says.

And there are just a lot more humans in Southern California these days. In 1969 Ventura County’s population was 369,811. In 2016 it was 849,738—a faster gain than the state as a whole. In 1970 Los Angeles County had 7,032,000 people; in 2015 it was 9,827,000. “If you look historically at Southern California, the frequency of fire has risen along with population growth,” Keeley says. Though even that has a saturation point. The number of fires—though not necessarily their severity—started declining in the 1980s, maybe because of better fire fighting, and maybe because with more people and more buildings and roads and concrete, there’s less to burn.

As Syphard told me back at the beginning of this year’s fire season, “The problem is not fire. The problem is people in the wrong places.”

Like most fresh-faced young actors in Southern California, dense development is a relatively recent arrival. Most of the buildings on the island on the land are low, metastasizing in a stellate wave across the landscape, over the flats, up the canyons, and along the hillsides. In 1960 Santa Paula, where the Thomas Fire in Ventura started, was a little town where Santa Paula Canyon hit the Santa Clara River. Today it’s part of greater Ventura, stretching up the canyon, reaching past farms along the river toward Saticoy.

So the canyons are perfect places for fires. They’re at the wildland-urban interface, developed but not too developed. Wall-to-wall hardscape leaves nothing to burn; no buildings at all means no people to provide an ignition source. But the hills of Ventura or Bel-Air? Firestarty.

As the transverse ranges defined Southern California before Los Angeles and during its spasmodic growth, today it’s defined by freeways. The mountains shape the roads—I-5 coming over the Grapevine through Tejon Pass in the Tehachapis, the 101 skirting the north side of the Santa Monica Mountains, and the 405 tucking through them via the Sepulveda Pass. The freeways, names spoken as a number with a "the" in front, frame time and space in SoCal. For an Angeleno like me, reports of fires closing the 101, the 210, and the 405 are code for the end of the world. Forget Carey McWilliams; that’s some Nathanael West stuff right there—the burning of Los Angeles from The Day of the Locust, the apocalypse that Hollywood always promises.

It won’t be the end end, of course. Southern California zoning and development are flirting, for now at least, with density, accommodating more people, dealing with the state’s broad crisis in housing, and incidentally minimizing the size of the wildland interface. No one can unbuild what makes the place an island on the land, but better building on the island might help stop the next fires before they can start.

Most mornings when I step out of my San Francisco apartment, I hear the waves, the seagulls, and occasionally kids yelling out the window across the street. But over the past few weeks, the murmur of Ocean Beach has been cut with a low mechanical rumble. Walk a few blocks and pop your head over the sand dunes and you’ll find the culprits: orange-yellow tractors piling sand into dump trucks, which caravan three miles south and spit the sand—50,000 cubic yards, or 75,000 tons, of it in total—back onto the beach.

That sandy exodus is part of San Francisco’s campaign to fight severe erosion at the southern end of the beach that faces the Pacific Ocean. During a big storm, the bluffs can lose 25 to 40 feet—which might be fine, if the city’s wastewater infrastructure didn’t run right alongside the beach. Specifically, a 14-foot-wide pipe that ferries both stormwater and sewage. If the sea steals the earth that supports it, the pipe could well snap.

The problem at Ocean Beach will only get worse, because the sea has nothing to do but rise in this era of rapid climate change. So will San Francisco spend the rest of its days shoveling sand in a quixotic battle against inevitability? Far from it—it’s all part of a plan to adapt to inevitability, which could set precedent for how this and other coastal cities fight rising seas.

Climate change modeling is complicated: It takes burly supercomputers crunching a galaxy of variables to understand, say, how a warming arctic might be mucking with weather in the United States. But sea level rise? “It's the one area in climate change that's probably more understood than others,” says Anna Roche, project manager at the San Francisco Public Utilities Commission, which is overseeing the digging. “We actually have calculations for how much sea level rise we're anticipating, and you can start making decisions on actual numbers, versus more just pie-in-the-sky discussions.”

Which is not to say those decisions come easy at Ocean Beach. “There's the whole jurisdictional puzzle,” says Ben Grant, urban design policy director at the San Francisco Bay Area Planning and Urban Research Association. The National Park Service runs the beach, the San Francisco Recreation and Park Department owns the road that runs along it, and it’s all under the regulatory purview of the California Coastal Commission. “There's just all these different components that need to be considered and balanced.”

What Grant and his colleagues helped craft is a plan that has buy-in from all those stakeholders and the public. The idea is to replace that beachfront road’s two southbound lanes—that's the Great Highway extension—with a trail as early as this winter. This is known as managed retreat: triaging the infrastructure you’re willing to lose. “By backing up a few hundred feet, you lessen the erosive pressure on the beach and you get a more stable beach,” says Grant.

Meanwhile, engineers will continue to bring in outside sand. But now it’ll come from the Army Corps of Engineers’ regular dredging of shipping channels in the San Francisco Bay—perhaps 10 times the amount the city is currently trucking in from up the beach. This is known as sacrificial protection: dumping sand you know full well will wash away, but will in the interim act as a buffer.

“Now, after that, depending on how sea level rise occurs, we'll see,” says Grant. “We may end up having to make more difficult choices. In fact, I guarantee you we'll have to make more difficult choices all up and down the coast.”

One choice that’s definitely not on the table: doing nothing and letting the sea roll in unabated. “It would be upwards of $75 billion in just replacement costs,” says Diana Sokolove, principal planner at the San Francisco Planning Department, which helped craft the city’s Sea Level Rise Action Plan. “That doesn't even include lost tax revenue or the emotional costs of relocation or lost jobs.”

While it’s relatively easy to map where rising seas are going to inundate land (elevation, elevation, elevation), it’s harder to determine what problems those rising seas are going to cause. The northern coast of San Francisco is particularly low-lying, so its potential for unpredictable flooding—especially during storm surges—is high. And much of the San Francisco Bay Area is built on landfill that’s sinking as seas are rising, exposing some areas more rapidly than others.

“We don't want to be retreating too soon, we don't want to be building walls too soon, we don't want to be spending a ton of money when we don't know exactly what's going to happen,” says Sokolove.

What you do want is what they’re doing at Ocean Beach—keep stakeholders happy, keep the public happy, and figure out how to protect critical infrastructure. We know sea level rise will cause trouble, but it will also unfold over decades, giving engineers and city planners time to perfect what works, and abandon to the sea what doesn’t.

Hopefully the former for Ocean Beach. “This will get us out quite some distance—three, four, five decades,” says Grant. “And long before we get to the end of the lifespan of this set of interventions, we will have to be having another set of conversations based on what we learn.”

What begins with boys and girls playing in the sand with big machines ideally ends with the salvation of Ocean Beach.

Are superheroes real? Maybe. In this recently released video, a firefighter in Latvia catches a man falling past a window. Let me tell you something. I have a fairly reasonable understanding of physics and this catch looks close to being impossible—but it's real.

Here is the situation (as far as I can tell). A dude is hanging from a window (actually, the falling human is only rumored to be male) and then he falls. The firefighters were setting up a proper way to catch him, but it wasn't ready. Of course the only solution is then to catch him as he falls. It seems the victim fell from one level above the firefighter. At least that's what I'm going to assume. Now for some questions and answers.

How fast was the human moving?

This is a classic physics problem (I hope my students are paying attention). An object (or human) starts from rest and then begins to fall under the influence of the gravitational force. If the gravitational force is the only significant interaction on the human, then that person will fall with a constant acceleration of 9.8 m/s². That means that for every second of free fall, the human's speed will increase by 9.8 m/s (hint: 9.8 m/s is fairly fast—about 22 mph).

If I knew the time the human was falling, I could easily determine the speed, since it increases a set amount every second. However, I can only approximate the distance the person falls. Of course that is only a small stumbling block for physics. In fact there is a kinematic equation that gives the speed of an object with a constant acceleration after a certain distance (you can also easily derive this with the definition of average velocity and acceleration). If the object starts from rest and moves a distance y, then the final speed will be:

v = √(2gy)

Yes, the greater the fall, the greater the speed. In this case, I'm just going to guess the distance at about 3 meters (it's just a guess). That would put the speed of the faller (is that a real word?) at about 7.7 m/s. Maybe it's a little bit shorter fall at 2 meters—that would give a window-level speed of 6.3 m/s. Either way, it's fast.
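
If you want to check my numbers, here’s the same kinematic calculation as a few lines of Python (the 2- and 3-meter fall distances are just my guesses from the video, as above):

```python
import math

g = 9.8  # gravitational acceleration, m/s^2

def fall_speed(distance):
    """Speed after falling from rest through `distance` meters, ignoring air resistance."""
    return math.sqrt(2 * g * distance)  # v = sqrt(2*g*y)

for y in (2.0, 3.0):
    v = fall_speed(y)
    print(f"{y:.0f} m fall -> {v:.1f} m/s (about {v * 2.237:.0f} mph)")
```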

How hard would it be to catch this human?

It doesn't take a superhuman to fall but it might take superhuman strength to stop someone during a fall. The key here is the nature of forces. A net force on an object changes the motion of that object. In this case, there will be two forces acting on the falling human. First, there is the gravitational force pulling down. This force depends on the gravitational field (g = 9.8 Newtons per kilogram) and the mass of the human (which I don't actually know). The second force is that of the firefighter pushing up during the catch. The total force (sum of these two forces) must be in the upward direction so that the change in motion is also up. This means the human (during the catch) will be slowing down. That's what we want.

I can estimate the human's mass, but what about that firefighter force? There are two basic ideas that deal with force and motion. First is the momentum principle. This is a relationship between force, momentum (product of mass and velocity) and time. The second is the work energy principle. This deals with forces, energy, and displacement. So it comes down to this. Do I want to estimate the time it takes to catch the human or do I want to estimate the distance over which the human was caught? I think I'll go with distance and the work-energy principle.

Here is your super short intro to the work-energy principle. First, let's look at work. Work is a way to add or take away energy from a system. The work depends on both the magnitude of the force and the direction the object is moving. Let's say that the human travels a distance d during this catch. In that case, the gravitational force will do positive work (since it is pulling in the same direction as the displacement) and the firefighter will do negative work (pushing up in the opposite direction as the motion).

But what about the energy? For this system (of just the falling human), there is only one type of energy—kinetic energy. The kinetic energy depends on both the mass and the speed of the faller. The idea is to have the total work done on the human decrease the kinetic energy to zero (so that the human stops). Now to put it all together, it looks like this (yes, I skip a bunch of details):

F = mg + mv²/(2d)

Here F is the force from the firefighter, m is the human’s mass, v is the speed at the start of the catch, and d is the stopping distance.

I already have the estimated speed (from above) so I just need the human mass and stopping distance. Let's say this is a human that isn't super big—maybe 50 kilograms. For the stopping distance, it looks like the firefighter grabs the falling human and moves about 1.5 meters before coming to a stop. With these values, the force the firefighter needs to exert on the human would be 1,478 Newtons. For you imperials, that is about 330 pounds. It's a large force, but not impossible. Still very impressive for just one hand.
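
Here’s that estimate in code form. The mass and stopping distance are the rough guesses from above, so treat the output as a ballpark figure, not a measurement:

```python
g = 9.8    # gravitational field, N/kg
m = 50.0   # estimated mass of the falling human, kg
v = 7.7    # estimated speed at the start of the catch, m/s
d = 1.5    # estimated stopping distance during the catch, m

# Work-energy principle: (m*g - F) * d = 0 - 0.5 * m * v**2,
# which solves to F = m*g + m*v**2 / (2*d).
F = m * g + m * v**2 / (2 * d)
print(f"catch force ~ {F:.0f} newtons ({F * 0.2248:.0f} pounds)")
```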

Oh, and don't forget that if the firefighter pulls on the human with almost 1,500 Newtons, the person pulls on the firefighter with the same force in the opposite direction. This means that the hero has to hold onto the window sill in order to not get pulled out of the building and fall along with the victim. Yes, there does appear to be a harness on the firefighter but it doesn't look like it has tension. Still a superhero in my mind.

I have one final comment. Since I used the work-energy principle to estimate this force, it seems like this is a good time to add an important note about energy. Remember—energy isn't a real thing. It's just something that we can calculate that happens to be conserved in many situations. There. I said it.

This past year, 2017, was the worst fire season in American history. Over 9.5 million acres burned across North America. Firefighting efforts cost $2 billion.

This past year, 2017, was the seventh-worst Atlantic hurricane season on record and the worst since 2005. There were six major storms. Early estimates put the costs at more than $180 billion.

As the preventable disease hepatitis A spread through homeless populations in California cities in 2017, 1 million Yemenis contracted cholera amid a famine. Diphtheria killed 21 Rohingya refugees in Bangladesh, on the run from a genocide.

Disaster, Pestilence, War, and Famine are riding as horsemen of a particular apocalypse. In 2016, the amount of carbon dioxide in Earth’s atmosphere reached 403 parts per million, higher than it has been since at least the last ice age. By the end of 2017, the United States was on track to have the most billion-dollar weather- and climate-related disasters since the government started counting in 1980. We did that.

Transnational corporations and the most powerful militaries on Earth are already building to prepare for higher sea levels and more extreme weather. The FIRE complex—finance, insurance, and real estate—knows exactly what 2017 cost them (natural and human-made disasters: $306 billion and 11,000 lives), and can calculate more of the same in 2018. They know that the radical alteration of Earth’s climate isn’t just something that’s going to happen in 100 years if we’re not careful, or in 50 years if we don’t change our economy and moonshot the crap out of science and technology. It’s here. Now. It happened. Look behind you.

Let me rephrase: Absent any changes, by 2050 Earth will be a couple degrees hotter overall. Sea levels will be a foot higher. Now, 2050 seems as impossibly far away to me as 2017 did when I was 12 years old. I live in the future! And I like a lot of it. I like the magic glass slab in my pocket and the gene therapy and the robots. I mention this because in 2050, my oldest child will be the same age I am today, and I have given him a broken world.

I don’t want that.

So 2017 taught a lesson, at last, that scientists and futurists have been screaming about. Humanity has to reduce the amount of carbon it’s pumping into the air. Radically. Or every year will be worse from here on out.

But 2017 also made plain the shape of the overall disaster. All those fires and floods and outbreaks are symptoms of the same problem, and it’s time to start dealing with that in a clear-eyed way. It’s also time to start building differently—to start making policies that understand that the American coastline is going to be redrawn by the sea, and that people can’t keep building single-family homes anywhere they can grade a flat pad. The wildland-urban interface can’t keep spreading at will. People can’t keep pumping fresh water out of aquifers without restoring them. Infrastructure for water and power has to be hardened against more frequent, more intense storms, backed up and reinforced so hundreds of thousands of people don’t go without electricity as they are in post-hurricane Puerto Rico.

In short: Change, but also adapt. Fire season in the West is now a permanent condition; don’t build buildings that burn so easily in places that burn every year. Hurricanes and storm surges are going to continue to walk up the Caribbean and onto the Gulf Coast, or maybe along the seaboard. Don’t put houses on top of the wetlands that absorb those storms. Don’t insure the people who do. Build ways for people to get around without cars. Create a power grid that pulls everything it can from renewable sources like wind and solar. Keep funding public health research, surveillance, and ways to deal with mosquito-borne diseases that thrive in a hotter world.

And the next time someone in a city planning meeting says that new housing shouldn’t get built in a residential area because it’s not in keeping with the sense of the community and might disrupt parking, tell them what that means: that they want young people to have lesser lives, that they don’t want poor people and people of color to have the same opportunities they did, and that they’d rather the planet’s environment get crushed by letting bad buildings spread to inhospitable places than increasing density in cities.

This apocalypse doesn't hurt everyone. Some people benefit. It’s not a coincidence that the FIRE industries also donate the most money to federal political campaigns. Rich people living behind walls they think can’t be breached by any rising tide, literal or metaphoric, made this disaster. And then they gaslighted the vulnerable into distrusting anyone raising the alarm. The people who benefit have made it seem as if this dark timeline was all perfectly fine.

It isn’t. And that’s why it’ll change.

In 1957 Charles Fritz and Harry Williams, the research associate and technical director, respectively, of the National Academy of Sciences’ Disaster Studies Committee, wrote a paper that sparked the field of disaster sociology. Their findings were counterintuitive then, and somehow remain so. People in disasters, they said, don’t loot and riot. They help each other. “The net result of most disasters is a dramatic increase in social solidarity among the affected populace during the emergency and immediate post-emergency periods,” they wrote. “The sharing of a common threat to survival and the common suffering produced by the disaster tend to produce a breakdown of pre-existing social distinctions and a great outpouring of love, generosity, and altruism.”

In a disaster, we help each other. The trick is recognizing the disaster. Through that lens, fixing the problem and protecting one another against its consequences isn’t merely inarguable. It’s human nature. We’re all in this together.

Today, during a World Cup game between Morocco and Iran, Moroccan winger Nordin Amrabat suffered a wicked head injury when he collided with an opponent. After he went down, a team trainer tried to revive him by slapping his face—a move decried by athletes and followers online.

But despite the frequency of those kinds of injuries in soccer, you won’t see many international pros wearing gear that might prevent a concussion: reinforced headbands. Recent tests show that some brands can reduce the impact of a concussive blow by more than 70 percent. Unlike sweatbands, these headbands are made with hardened polyurethane foam, like that found inside military helmets, and they don’t block a player’s view of the action.

Still, soccer pros are loath to slip them on. The combination of peer pressure (“Does it make me look weak?”) and institutional inertia (some soccer officials don’t think they help) means that soccer is sort of backwards when it comes to preventing head injuries.

“It’s not normal to wear them,” says Steve Rowson, an assistant professor of biomedical engineering at Virginia Tech who just completed tests of 22 commercially available models. “The players that do either have a history of head injury or were just hit.” Head injuries in soccer usually result from a collision between two players, often when one or both is trying to head the ball. To mitigate the risk, padded headbands have been on the market for nearly two decades, and FIFA, the sport’s international governing organization, allowed them for play in 2004. But Rowson and colleagues wanted to find out whether the headbands really work or are just expensive bits of padding. They cost about $15 to $90, which for most players is less than a pair of primo soccer shoes.

Rowson connected sensors to the soccer headbands and slipped them on a pair of crash test dummies at Virginia Tech’s helmet lab, which has tested football helmets for pro and collegiate teams. His team slammed the two dummy heads together, with and without headgear, and the embedded sensors measured linear and rotational acceleration at three different speeds and two locations on the heads. Those values were used to calculate a score representing how much the headband reduced a player’s risk of concussion for a given impact, according to Rowson.

While direct head-to-head hits generated accelerations of 150 g's (150 times the acceleration of gravity), compared with an average of 100 g's during football hits, the headbands could reduce that acceleration. The three best headband models received a five-star rating in a system devised by Rowson's team at Virginia Tech; five stars translates to a reduction in concussion risk of at least 70 percent for the impacts tested.

Superstars like England’s Wayne Rooney and the USA’s Ali Krieger have worn headbands after injuries but took them off after a while. A few goalkeepers, like former Czech Republic captain Petr Čech, wear them religiously. But the push for protection isn’t trickling down from highly paid and idolized professionals; it’s coming from soccer parents who don’t want their kids facing a lifetime of concussion-related health problems.

The problem is especially acute for girls, who suffer high rates of concussions in soccer. A 2017 study by Northwestern University researchers presented to the Society of Orthopaedic Surgeons showed that concussions occur at a higher rate among female soccer players than among male athletes in any sport, and that the rate is increasing. Some researchers believe that boys suffer fewer concussions because they have stronger neck muscles than girls; others say that boys hide their symptoms, while girls are more likely to report them.

In 2014, a group of parents sued US Soccer, the sport’s governing body in America, to restrict heading the ball because of the risk of head injury. That lawsuit was dismissed in 2015, but officials did agree to ban heading for both boys and girls under 12 years old.

In May, parents of two Pennsylvania players sued the US Soccer Federation and USA Youth Soccer claiming officials were negligent and failed to require headbands despite scientific evidence that they work. “We would like to protect these girls,” says Joe Murphy, a Pittsburgh attorney who filed the class action.

In a 2015 statement after the earlier lawsuit, US soccer officials stated that headbands don’t provide any protection and may increase the risk of concussions because they give players a false sense of security. But advocates of the gear say that times have changed. “The use of outdated studies and outdated ideas that have been invalidated is reckless,” Murphy says. “It’s like issuing leather football helmets.”

WIRED reached US Soccer team physician George Chiampas by phone, but he said he was unable to answer questions about head injuries and headbands without clearance from the organization’s media department. Attempts to reach US Soccer spokesman Neil Buethe were not successful.

As those lawsuits progress, new science will hopefully inform best practices. Tim McGuine, professor of sports medicine at the University of Wisconsin School of Medicine, is wrapping up a two-year clinical trial of 3,000 male and female high school soccer players in Wisconsin, Minnesota, and Ohio. He distributed headbands to half the group, while the others played without them. He is still processing the data, but he said an initial analysis shows that the headbands do make a difference for some groups of athletes, and that there’s no indication that using them increases the risk of head injury.

Still, McGuine says many soccer coaches are stuck in the past. “It’s like football coaching culture was 30 years ago,” McGuine says about the attitude of coaches and league officials toward protective headbands. “The coaches say we don’t want to change the game, or the girls are just faking it. I thought it was just a Wisconsin phenomenon, but it's everywhere. It’s just bizarre.”

It's likely that more than one World Cup player will get a head injury during the month-long tournament that just kicked off. Some will shake it off and return to play (just like Morocco's Amrabat, who rejoined his teammates), while others will get a serious concussion that could lead to health issues down the road. But by the time the US hosts the 2026 World Cup, perhaps we'll be seeing more soccer players deciding that headbands are worth wearing before they get hit.

Kelly Brunt won’t be home for the holidays, nor will she be ringing in the New Year at a fabulous party or watching Ryan Seacrest schmooze B-list celebrities on TV. Instead, between December 21 and January 11, she’ll be leading a four-person expedition around the South Pole, sleeping in a small tent mounted on a plastic sled that is pulled by a snowcat. But that doesn't mean she won't celebrate—it'll just be a demure affair, with her crew, a cozy fleece, and a carefully prepared cup of her favorite gourmet coffee.

“We are being anal about the kind of coffee and the pour over,” says Brunt, a climate scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “We bring down our own filters and we will buy whole beans before we arrive.”

Brunt is making her tenth trip to Antarctica since 2000. She's celebrated Christmas and New Year’s there five times, as well as five Thanksgivings and eight birthdays. As a glaciologist, she’s worked at the main US base at McMurdo Station, camped on massive floating icebergs in the Ross Sea, and in 2009 spent three months with the Australian Antarctic Program on the Amery Ice Shelf.

And this year, Brunt is leading the ground-based team for NASA’s ICESat-2 mission, which is studying changes in the Antarctic ice sheet and how they contribute to global sea level changes.

The expedition will cross 500 miles of crunchy, chunky ice at the bottom of the planet. The massive ice cap that covers the South Pole is more than 10,000 feet in elevation, so the team will have to acclimate for several days at the Amundsen-Scott base before heading out into the "deep field," which is Antarctic-speak for any place beyond the comfort of a permanent station. Yes, it will be cold, ranging from -20 to -40 degrees Fahrenheit (plus wind chill), but the crew has a combination of extreme polar gear issued by the National Science Foundation and personal favorites from home. For Brunt, it's a lucky brown fleece that accompanies her on every polar trip.

Over two weeks, the crew will take turns sleeping and working—taking precise measurements of the ice sheet thickness and comparing it to satellite-based measurements to make sure the two agree down to the centimeter scale. At the same time, a NASA aircraft will be flying over the ground crew, using a laser altimeter to triple check the data. Afterward, the pilots get to land at McMurdo and Amundsen-Scott South Pole base for a hot meal, while Brunt and her crew trudge on to the next ground station.

While December is the holiday season in the Northern Hemisphere, it’s a time of intense work for the 1,200 or so American scientists and support personnel on the frozen continent. That’s because December falls in the middle of the austral summer, when the Antarctic sun rarely sets below the horizon (the season starts in November and lasts until late February). It’s also calm enough to actually move about without getting knocked down by the fierce winds that howl during winter months.

Yes, it means they won't be home for the holidays. But they're used to it. On Christmas Day, Brunt and her colleagues will probably put in a modest day of work. “I don’t miss the commercialization of Christmas,” Brunt says. “I don’t know how we will celebrate, but it’s hard to do nothing on the field. We will just make an acknowledgement and celebrate where we are without being sentimental that we aren’t with family.”

Brunt and her colleagues began their journey in late November, flying from various points around the US to Christchurch, New Zealand. From there, they boarded a C-17 transport plane for a five-hour flight to McMurdo Station, a town of about 1,100 people on Ross Island, Antarctica. The next leg was a three-hour flight to the South Pole via a ski-equipped C-130.

And there, waiting for the crew at the Amundsen-Scott South Pole Station, was their home for the next two weeks: two PistenBully snowcats pulling specially designed sleds. The sleds are made of a plastic that dramatically reduces the coefficient of friction, so 10,000 pounds tows like 1,000. The sleds will haul essential stuff, like fuel—but they will also carry fully set-up tents, so the researchers don't have to establish camp every day.
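
For a rough sense of what the low-friction plastic buys you, here's a toy calculation; the two friction coefficients are my illustrative assumptions, not measured properties of these sleds:

```python
g = 9.8          # m/s^2
load_kg = 4536   # roughly 10,000 pounds of cargo

# Towing force on level snow is approximately F = mu * m * g.
for label, mu in (("ordinary runner (assumed mu = 0.3)", 0.3),
                  ("low-friction plastic (assumed mu = 0.03)", 0.03)):
    force = mu * load_kg * g
    print(f"{label}: ~{force:,.0f} N to keep the load moving")
```

Cut the coefficient of friction by a factor of 10 and the towing force drops by the same factor, which is the sense in which 10,000 pounds tows like 1,000.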

The trip isn't just a laborious one; it's potentially dangerous, too. The crew will likely avoid the cracks and crevasses that are often found around the edges of the Antarctic continent—this time, their job is to measure the thickest point of the Antarctic ice sheet. It’s also less windy at the South Pole than other places, according to Forrest McCarthy, a mountaineer and safety guide assigned to the ICESat-2 team. But there are still plenty of equipment hazards. “When you think about fuels in the cold temperature, the fuel we use is at minus 40 degrees and could lead to instant frostbite," McCarthy says. "If you spill it on yourself you will be in a bad way.” Any accident will be days away from medical help or evacuation. That means he’s got to keep everyone attentive, focused on their work, and able to get along when things get tough.

“Group dynamics is really important,” says McCarthy, who works as a fishing and mountain guide in Wyoming and Alaska during the rest of the year. McCarthy is a self-described Grinch, but he clearly gets along with Brunt, who he's worked with since 2000. “When people get along, you end up being more productive. She’s got a good sense of humor and highly competent. That is one reason I signed up for the mission.”

On Christmas, McCarthy will call his wife back home on a satphone—but he doesn't miss being home. Each year, he feels the magnetic draw of Antarctica’s stark beauty. “I love Antarctica and the culture of exploration and science,” McCarthy says. “It is one of the last great wildernesses on Earth.” Although even the globe's most remote locations can use a bit of comfort. Along with his safety gear, extra clothes, and food, McCarthy packed an Italian Moka Express pot to make everyone a holiday latte.

Next month, in a laboratory an hour outside of London, scientists will begin stitching bits of DNA together and inserting them into hundreds of tiny, cucumber-shaped insect eggs. It’s the first step toward engineering a new kind of mosquito—the kind that could help eradicate malaria on this side of the Prime Meridian.

The mosquito is a species called Anopheles albimanus, the primary transmitter of the deadly disease in Central America and the Caribbean. The scientists work for Oxitec, the UK-based subsidiary of global GMO giant Intrexon, whose portfolio also sports transgenic salmon and non-browning apples. Oxitec has made a name for itself in the pest-prevention business by making mosquitoes and other insects that can’t produce offspring.

Now, with a new $4.1 million investment from the Gates Foundation, Oxitec is putting its patented Friendly tech inside malaria’s main host in the Western Hemisphere. The company intends to have “self-limiting” skeeters ready for field trials by 2020.

The timing isn’t coincidental. Five years ago, health ministers from ten countries in Central America and the Caribbean got together in the capital of Costa Rica and committed to eliminating malaria in the region by 2020. It seemed reasonable at the time; cases of the deadly disease had been declining steeply since 2005. But starting in 2015, as the Zika crisis began to unfold, those numbers began to tick back up. The World Health Organization’s 2017 malaria report warned that progress in fighting the disease had stalled and was in danger of reversing.

So in January of this year, the Bill & Melinda Gates Foundation—which has become one of the leaders in the recent explosion of malaria funding—joined the fight. Along with the Inter-American Development Bank, it announced a $180 million initiative to help Central America meet its malaria elimination goals. The financing is meant to help those countries continue to invest in anti-malarial drugs, insecticide-laced bed nets, and better clinical diagnostics, even as Zika and dengue have become the bigger public health bogeymen. But the Gates Foundation, true to its tech founder’s roots, is betting that won’t be enough.

“We’re not going to bed net our way out of malaria,” a foundation spokesperson said in an interview with WIRED. “Investments like the one with Oxitec will help bring other tools online, that in combination with existing ones will really get transmission down to zero.”

In recent years, the Gates Foundation has become one of the most prolific proponents of harnessing genetic eco-technologies to combat public health threats. It has supported experiments releasing Aedes aegypti mosquitoes infected with the Wolbachia bacterium to prevent them from spreading diseases like Zika and dengue in Brazil. And in Africa it’s bankrolling an even more ambitious project called Target Malaria, which intends to use a Crispr-based gene drive to exterminate local populations of mosquitoes.

But neither of those approaches is expected to work very well on malaria in the Americas. Wolbachia doesn’t confer sterility in Anopheles albimanus. Gene drives—with all their attendant uncertainties—would be a hard risk to sell, especially when the more urban geography of the region makes more controlled technologies like Oxitec’s operationally feasible.

Oxitec has previously worked with local governments in Brazil, Panama, and the Cayman Islands to release its first-generation Aedes aegypti mosquito, developed back in 2002. But that technology—which involved inserting a gene to make the mosquitoes die unless fed a steady diet of the antibiotic tetracycline—is already old news. It required egg facility workers to painstakingly sort larvae by sex so they could release only the non-biting males into the wild, where they would mate and then die, along with all their offspring. Even with mechanical sorting machines, it was still an overly burdensome process.

So Oxitec has since developed second-generation insect sterility tech that does all the sex-sorting with genetics. It starts with the same basic parts: a gene that vastly overproduces a protein that turns deadly in the absence of tetracycline, and a fluorescent marker to let field scientists keep track of the insects in the wild. But then Oxitec scientists put those parts somewhere interesting.

Unlike humans, mosquitoes don’t have X and Y chromosomes. Instead, they have identical regions of DNA whose transcripts get spliced and translated into different proteins—a regulated process called differential splicing—and those proteins determine whether the mosquito grows up to be a male or a female. Oxitec scientists piggybacked off this natural mechanism by sticking their antibiotic-or-death construct onto that region, where it also got spliced into two different forms: one that worked like it should, in females, and one that was broken, in males. Which means that only the males survive.

It also means Oxitec can release far fewer mosquitoes, because those surviving male offspring go on to mate as well, further reducing the pest population. Unlike a gene drive, though, the modification is still inherited in Mendelian fashion, so it eventually disappears from the environment, about 10 generations after the last release. In May, the company launched its first open field trial of this second-generation Aedes aegypti mosquito in Indaiatuba, Brazil.
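
To see why the trait fades on its own, consider a deliberately crude sketch (my toy model, not Oxitec’s published math): a carrier male has a single copy of the construct, so about half his sons inherit it, while any daughters who inherit it die before reproducing.

```python
# Toy model of a self-limiting, Mendelian-inherited construct.
# The starting fraction is an assumption chosen for illustration.
carrier_fraction = 0.5  # fraction of wild males carrying the construct at the last release

for generation in range(1, 11):
    carrier_fraction *= 0.5  # half of a carrier's sons inherit it; carrier females die
    print(f"generation {generation:2d}: ~{carrier_fraction:.3%} of males carry the construct")
```

In this simplified picture the construct is effectively gone after about 10 generations, consistent with the timescale the company describes.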

That’s the tech that Oxitec plans on developing for the malaria-carrying Anopheles albimanus. But it’s not a simple interspecies plug and play. Their scientists don’t really know where the American insect keeps its sex determination machinery. Or how best to turn it to their own advantage. “The mosquito family never ceases to surprise me,” says Oxitec’s chief scientific officer, Simon Warner. “They’re very ancient animals and their diversity is huge. So we’re relying on nature to actually tell us the answer.”

Warner’s team of about 15 will start by randomly inserting their self-limiting gene construct into Anopheles albimanus embryos raised on tetracycline. Then they’ll take them off the antibiotic diet and select for the lines where only the females die. They’ll do that a few hundred times until they find ones that work. Then they’ll sequence their DNA to see where the gene inserted and run tests for multiple generations to see how the trait gets passed down. In two and a half years they hope to have a line ready to be released into the open. Then it will be up to the countries in Central America and the Caribbean to decide if they want them.

Over the past century, scientists have become adept at plotting the ecological interactions of the diverse organisms that populate the planet’s forests, plains and seas. They have established powerful mathematical techniques to describe systems ranging from the carbon cycles driven by plants to the predator-prey dynamics that dictate the behavior of lions and gazelles. Understanding the inner workings of microbial communities that can involve hundreds or thousands of microscopic species, however, poses a far greater challenge.

Microbes nourish each other and engage in chemical warfare; their behavior shifts with their spatial arrangements and with the identities of their neighbors; they function as populations of separate species but also as a cohesive whole that can at times resemble a single organism. Data collected from these communities reveal incredible diversity but also hint at an underlying, unifying structure.

Scientists want to tease out what that structure might be—not least because they hope one day to be able to manipulate it. Microbial communities help to define ecosystems of all shapes and sizes: in oceans and soil, in plants and animals. Some health conditions correlate with the balance of microbes in a person’s gut, and for a few conditions, such as Crohn’s disease, there are known causal links to onset and severity. Controlling the balance of microbes in different settings might provide new ways to treat or prevent various illnesses, improve crop productivity or make biofuels.

But to reach that level of control, scientists first have to work out all the ways in which the members of any microbial community interact—a challenge that can become incredibly complicated. In a paper published in Nature Communications last month, a team of researchers led by Yang-Yu Liu, a statistical physicist at Harvard Medical School, presented an approach that gets around some of the formidable obstacles and could enable scientists to analyze a lot of data they haven’t been able to work with.

The paper joins a growing body of work seeking to make sense of how microbes interact, and to illuminate one of the field’s biggest unknowns: whether the main drivers of change in a microbial community are the microbes themselves or the environment around them.

Gleaning More From Snapshots

“We understand so little about the mechanisms underlying how microbes interact with each other,” said Joao Xavier, a computational biologist at Memorial Sloan Kettering Cancer Center, “so trying to understand this problem using methods that come from data analysis is really important at this stage.”

But current strategies for gaining such insights cannot make use of a wealth of data that have already been collected. Existing approaches require time-series data: measurements taken repeatedly from the same hosts or communities over long stretches of time. Starting with an established model of population dynamics for one species, scientists can use those measurements to test assumptions about how certain species affect others over time, and based on what they find out, they then adjust the model to fit the data.

Such time-series data are difficult to obtain, and a lot is needed to get results. Moreover, the samples are not always informative enough to yield reliable inferences, particularly in relatively stable microbial communities. Scientists can get more informative data by adding or removing microbial species to perturb the systems—but doing so poses ethical and practical issues, for example, when studying the gut microbiota of people. And if the underlying model for a system isn’t a good fit, the subsequent analysis can go very far astray.

Because gathering and working with time-series data are so difficult, most measurements of microbes—including the information collected by the Human Microbiome Project, which characterized the microbial communities of hundreds of individuals—tend to fall into a different category: cross-sectional data. Those measurements serve as snapshots of separate populations of microbes during a defined interval, from which a chronology of changes can be inferred. The trade-off is that although cross-sectional data are much more readily available, inferring interactions from them has been difficult. The networks of modeled behaviors they yield are based on correlations rather than direct effects, which limits their usefulness.

Imagine two types of microbes, A and B: When the abundance of A is high, the abundance of B is low. That negative correlation doesn’t necessarily mean that A is directly detrimental to B. It could be that A and B thrive under the opposite environmental conditions, or that a third microbe, C, is responsible for the observed effects on their populations.
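
A tiny simulation makes the point concrete. In this hypothetical, a hidden environmental variable pushes A up and B down; the two species never interact directly, yet cross-sectional samples show a strong negative correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of cross-sectional samples

env = rng.normal(size=n)                        # hidden environmental driver
a = 10 + 3 * env + rng.normal(scale=1, size=n)  # species A thrives when env is high
b = 10 - 3 * env + rng.normal(scale=1, size=n)  # species B thrives when env is low

r = np.corrcoef(a, b)[0, 1]
print(f"correlation(A, B) = {r:.2f}")  # strongly negative, with no direct interaction
```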

But now, Liu and his colleagues claim that cross-sectional data can say something about direct ecological interactions after all. “A method that doesn’t need time-series data would create a lot of possibilities,” Xavier said. “If such a method works, it would open up a bunch of data that’s already out there.”

A Simpler Framework

Liu’s team sifts through those mountains of data by taking a simpler, more fundamental approach: Rather than getting caught up in measuring the specific, finely calibrated effects of one microbial species on another, Liu and his colleagues characterize those interactions with broad, qualitative labels. The researchers simply infer whether the interactions between two species are positive (species A promotes the growth of species B), negative (A inhibits the growth of B) or neutral. They determine those relationships in both directions for every pair of species found in the community.

Liu’s work builds on prior research that used cross-sectional data from communities that differ by only a single species. For instance, if species A grows alone until it reaches an equilibrium, and then B is introduced, it is easy to observe whether B is beneficial, harmful or unrelated to A.

The great advantage of Liu’s technique is that it allows relevant samples to differ by more than one species, heading off what would otherwise be an explosion in the number of samples needed. In fact, according to his study’s findings, the number of required samples scales linearly with the number of microbial species in the system. (By comparison, with some popular modeling-based approaches, the number of samples needed increases with the square of the number of species in the system.) “I consider this really encouraging for when we talk about the network reconstruction of very large, complex ecosystems,” Liu said. “If we collect enough samples, we can map the ecological network of something like the human gut microbiota.”

Those samples allow scientists to constrain the combination of signs (positive, negative, zero) that broadly define the interactions between any two microbial strains in the network. Without such constraints, the possible combinations are astronomical: “If you have 170 species, there are more possibilities than there are atoms in the visible universe,” said Stefano Allesina, an ecologist at the University of Chicago. “The typical human microbiome has more than 10,000 species.” Liu’s work represents “an algorithm that, instead of exhaustively searching among all possibilities, pre-computes the most informative ones and proceeds in a much quicker way,” Allesina said.
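
The combinatorics behind that comparison are easy to reproduce: each ordered pair of species gets one of three signs (positive, negative, or zero), so the number of candidate networks is 3 raised to the number of ordered pairs, versus the common ~10^80 estimate for atoms in the visible universe:

```python
import math

n_species = 170
ordered_pairs = n_species * (n_species - 1)  # each A -> B direction gets its own sign
log10_patterns = ordered_pairs * math.log10(3)

print(f"possible sign patterns: ~10^{log10_patterns:.0f}")  # about 10^13708
print("atoms in the visible universe: ~10^80")
```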

Perhaps most important, with Liu’s method, researchers don’t need to presuppose a model of what the interactions among microbes might be. “Those decisions can often be quite subjective and open to conjecture,” said Karna Gowda, a postdoctoral fellow studying complex systems at the University of Illinois, Urbana-Champaign. “The strength of this study [is that] it gets information out of the data without resorting to any particular model.”

Instead, scientists can use the method to verify when a certain community’s interactions follow the equations of classical population dynamics. In those cases, the technique allows them to infer the information their usual methods sacrifice: the specific strengths of those interactions and the growth rates of species. “We can get the real number, not just the sign pattern,” Liu said.

In tests, when given data from microbial communities of eight species, Liu’s technique generated networks of inferred interactions that included 78 percent of those that Jonathan Friedman, a systems biologist at the Hebrew University of Jerusalem and one of Liu’s co-authors, had identified in a previous experiment. “It was better than I expected,” Friedman said. “The mistakes it made were when the real interactions I had measured were weak.”

Liu hopes to eventually use the method to make inferences about communities like those in the human microbiome. For example, he and some of his colleagues posted a preprint on biorxiv.org in June that detailed how one could identify the minimum number of “driver species” needed to push a community toward a desired microbial composition.

A Greater Question

Realistically, Liu’s goal of fine-tuning microbiomes lies far in the future. Aside from the technical difficulties of getting enough of the right data for Liu’s approach to work, some scientists have more fundamental conceptual reservations—ones that tap into a much larger question: Are changes in the composition of a microbial community mainly due to the interactions between the microbes themselves, or to the perturbations in their environment?

Some scientists think it’s impossible to gain valuable information without taking environmental factors into account, which Liu’s method does not. “I’m a bit skeptical,” said Pankaj Mehta, a biophysicist at Boston University. He is doubtful because the method assumes that the relationship between two microbial strains does not change as their shared environment does. If that’s indeed the case, Mehta said, then the method would be applicable. “It would be really exciting if what they’re saying is true,” he said. But he questions whether such cases will be widespread, pointing out that microbes might compete under one set of conditions but help each other in a different environment. And they constantly modify their own surroundings by means of their metabolic pathways, he added. “I’m not sure how you can talk about microbial interactions independent of their environment.”

A more sweeping criticism was raised by Alvaro Sanchez, an ecologist at Yale University who has collaborated with Mehta on mechanistic, resource-based models. He emphasized that the environment overwhelmingly determines the composition of microbial communities. In one experiment, he and his colleagues began with 96 completely different communities. When all were exposed to the same environment, Sanchez said, over time they tended to converge on having the same families of microbes in roughly the same proportions, even though the abundance of each species within the families varied greatly from sample to sample. And when the researchers began with a dozen identical communities, they found that changing the availability of even one sugar as a resource created entirely divergent populations. “The new composition was defined by the carbon [sugar] source,” Sanchez said.

The effects of the microbes’ interactions were drowned out by the environmental influences. “The structure of the community is determined not by what’s there but by the resources that are put in … and what [the microbes] themselves produce,” Mehta said.

That’s why he’s unsure how well Liu’s work will translate into studies of microbiomes outside the laboratory. Any cross-sectional data taken for the human microbiome, he said, would be influenced by the subjects’ different diets.

Liu, however, says this wouldn’t necessarily be the case. In a study published in Nature in 2016, he and his team found that human gut and mouth microbiomes exhibit universal dynamics. “It was a surprising result,” he said, “to have strong evidence of healthy individuals having a similar universal ecological network, despite different diet patterns and lifestyles.”

His new method may help bring researchers closer to unpacking the processes that shape the microbiome—and learning how much of them depends on the species’ relationships rather than the environment.

Researchers in both camps can also work together to provide new insights into microbial communities. The network approach taken by Liu and others, and the more detailed metabolic understanding of microbial interactions, “represent different scales,” said Daniel Segrè, a professor of bioinformatics at Boston University. “It’s essential to see how those scales relate to each other.” Although Segrè himself focuses on molecular, metabolism-based mappings, he finds value in gaining an understanding of more global information. “It’s like, if you know a factory is producing cars, then you also know it has to produce engines and wheels in certain fixed proportions,” he said.

Such a collaboration could have practical applications, too. Xavier and his colleagues have found that the microbiome diversity of cancer patients is a huge predictor of their survival after a bone marrow transplant. The medical treatments that precede transplant—acute chemotherapy, prophylactic antibiotics, irradiation—can leave patients with microbiomes in which one microbe overwhelmingly dominates the composition. Such low diversity is often a predictor of low patient survival: According to Xavier, his colleagues at Sloan Kettering have found that the lowest microbial diversity can leave patients with five times the mortality rate seen in patients with high diversity.

Xavier wants to understand the ecological basis for that loss of microbial diversity, in the hopes of designing preventive measures to maintain the needed variability or interventions to reconstitute it. But to do that, he also needs the information Liu’s method provides about microbial interactions. For example, if a patient takes a narrow-spectrum antibiotic, might that affect a broader spectrum of microbes because of ecological dependencies among them? Knowing how an antibiotic’s effects could propagate throughout a microbial network could help physicians determine whether the drug could cause a huge loss to a patient’s microbiome diversity.

“So both the extrinsic perturbation and the intrinsic properties of the system are important to know,” Xavier said.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

San Francisco Mayor Ed Lee died in December of 2017; the election to replace him was Tuesday. No one knows who won. Partly that’s because the votes are still trickling in. Mail-in ballots merely had to be postmarked by election day, and as I write the city is reporting 87,000 votes yet to be processed. But that’s not the only roadblock. The other problem is math.

See, the San Francisco mayoral election isn’t just another whoever-gets-the-most-votes-wins sort of deal. No, this race was another example of the kind of cultural innovation that California occasionally looses upon an unsuspecting America, like smartphones and fancy toast. Surprise, you guys! We don’t even vote like y’all out here.

The way it worked is called ranked choice voting, also known as an instant runoff. Voters rank three choices in order of preference. The counting process drops the person with the fewest first-choice votes, reallocates that candidate’s votes to all his or her voters’ second choices, and then repeats. Does this sound insane? Actually, it’s genius. It is also insane.
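
Mechanically, the count is a short loop. Here is a minimal sketch of an instant-runoff tally in Python; the candidate names are real, but the ballot counts are invented for illustration, not actual San Francisco returns:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant runoff: repeatedly eliminate the candidate with the
    fewest first-choice votes, transferring those ballots to each
    voter's next surviving choice, until someone has a majority."""
    ballots = [list(b) for b in ballots]
    alive = {c for b in ballots for c in b}
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(tallies) == 1:
            return leader
        loser = min(tallies, key=tallies.get)  # ties broken arbitrarily
        alive.discard(loser)
        ballots = [[c for c in b if c in alive] for b in ballots]

# Invented counts: Breed leads on first choices, but Kim's voters
# break for Leno once Kim is eliminated.
ballots = ([['Breed', 'Kim', 'Leno']] * 8 +
           [['Leno', 'Kim', 'Breed']] * 7 +
           [['Kim', 'Leno', 'Breed']] * 5)
print(irv_winner(ballots))  # -> 'Leno'
```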

The mayoral ballot had eight candidates, including unlikely winners like a lawyer who’d run three times before, a holistic health practitioner, and a Republican. San Franciscans coalesced around three: London Breed, Jane Kim, and Mark Leno, all local elected officials with the kinds of intertwined histories that you could only get from two-fisted municipal politics in a region with astronomical amounts of tech money (mostly out of government reach thanks to sweetheart corporate tax deals and a history of failing to tax homeowners on the real value of their property). Breed has the most first-place votes so far—10 percentage points up on Leno, in second—but the second choices reallocated from third-place Kim have given Leno a lead so narrow it’d disappear if you looked at it on-end.

What’s the point of complexifying a straightforward election? The thing is, elections aren’t straightforward. Social choice theory lays out a bunch of different ways a group might make a decision, and “plurality”—whoever gets the most votes wins—is just one. It works great if you have a ballot with only two choices on it. But add more choices, and you have problems.

Daniel Ullman, George Washington University

When Reform Party candidate Jesse Ventura defeated the Republican Norm Coleman and the Democrat Skip Humphrey for governor of Minnesota in 1998, political pundits saw voter disgust with The System at work. Ventura got 37 percent of the vote; Coleman, 35; and Humphrey, 28. But as Emory mathematician Victoria Powers wrote in a 2015 paper, exit polls said that almost everyone who voted for Coleman had Humphrey as a second choice, and Coleman was the second choice of almost everyone who voted for Humphrey. “The voters preferred Coleman to both of the other candidates, and yet he lost the election,” Powers wrote.

That’s plurality. The same problems come up with “antiplurality,” in which everyone says who they hate, and the person with the least votes wins. Both potentially violate the Condorcet criterion—named for the philosopher-mathematician the Marquis de Condorcet, who argued in 1785 that an election should be won by a candidate who’d beat all the other candidates head-to-head. (Sequential pairwise voting, in which you eliminate the losers March Madness style, always picks that Condorcet winner when one exists … but when there isn’t one, the result can change with the order you run the matchups.)
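
Checking for a Condorcet winner is mechanical: run every head-to-head matchup and see whether one candidate wins them all. The sketch below treats the Minnesota race as if the exit polls were complete ranked ballots, with percentages standing in for voters; the second choices of Ventura's own voters are an arbitrary assumption, though they don't change the outcome:

```python
from itertools import combinations

def condorcet_winner(ballots):
    """Return the candidate who beats every rival head-to-head, or
    None if no such candidate exists. Assumes complete rankings."""
    candidates = {c for b in ballots for c in b}
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(ballot.index(a) < ballot.index(b) for ballot in ballots)
        if 2 * a_over_b > len(ballots):
            wins[a] += 1
        elif 2 * a_over_b < len(ballots):
            wins[b] += 1
    for c in candidates:
        if wins[c] == len(candidates) - 1:
            return c
    return None

# Stylized 1998 Minnesota profile built from the shares above.
ballots = ([['Ventura', 'Coleman', 'Humphrey']] * 37 +
           [['Coleman', 'Humphrey', 'Ventura']] * 35 +
           [['Humphrey', 'Coleman', 'Ventura']] * 28)
print(condorcet_winner(ballots))  # -> 'Coleman', who lost under plurality
```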

So, yeah, plurality: bad. “It’s very restrictive on voters,” says Daniel Ullman, a mathematician at George Washington University and the co-author of The Mathematics of Politics. “If you allow voters to say who their top two candidates are, or rank all 10 in order, or give approval to those they like or don’t like, or all sorts of other ballots, then things get interesting.”

They do indeed. The other systems let voters express more choice, but they also introduce what mathematicians call paradoxes. Here’s an example: ranked choice voting lacks “monotonicity.” That is to say, people sometimes have to vote against the candidate they’re actually supporting to make a win more likely. “That’s disturbing, because when you go into the ballot box you’re not sure if you should reveal what your true wish is,” Ullman says.
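
Here is that failure in action, reusing the irv_winner sketch from above; the ballot counts are invented specifically to trigger the paradox:

```python
# 100 voters, three candidates. A wins the instant runoff...
before = ([['A', 'B', 'C']] * 39 +
          [['B', 'C', 'A']] * 35 +
          [['C', 'A', 'B']] * 26)
print(irv_winner(before))  # -> 'A'

# ...but if 10 of the B-first voters warm to A and move A to the top,
# B gets eliminated first, B's ballots flow to C, and A now loses.
after = ([['A', 'B', 'C']] * 49 +
         [['B', 'C', 'A']] * 25 +
         [['C', 'A', 'B']] * 26)
print(irv_winner(after))   # -> 'C'
```

Extra first-place support for A changes the elimination order, and that's all it takes.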

And indeed, some of the campaigning leading up to election day involved telling people which two candidates to vote for, regardless of order—basically, please vote against the other corner of the triangle. Flip side, imagine how different American history might be if the 2000 presidential election (Al Gore virtually tied with George W. Bush, Ralph Nader and Pat Buchanan as spoilers) had been ranked choice.

Ranked-choice and sequential pairwise aren’t even the weirdest possibilities out there. You could assign everyone a score, with some points for top choice, fewer for second, fewer for third, etc. Whoever has the most points at the end wins. That’s a “Borda Count.” Fun problem: In the same election with the same vote counts, plurality, antiplurality, and Borda count could all yield different winners. And Borda violates Condorcet, too. Yiiiiikes.
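
Plurality, antiplurality, and Borda are all “positional” rules, the same algorithm with different scoring weights, which makes the divergence easy to demonstrate. In this sketch, a 13-voter profile invented for illustration crowns three different winners from the very same ballots:

```python
from collections import Counter

def positional_winner(ballots, weights):
    """Score each candidate by ballot position: weights[i] points for
    being ranked in position i; the highest total wins."""
    scores = Counter()
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] += weights[position]
    return scores.most_common(1)[0][0]

ballots = ([['A', 'B', 'C']] * 2 + [['A', 'C', 'B']] * 4 +
           [['B', 'A', 'C']] * 1 + [['B', 'C', 'A']] * 4 +
           [['C', 'B', 'A']] * 2)

print(positional_winner(ballots, (1, 0, 0)))  # plurality      -> 'A'
print(positional_winner(ballots, (2, 1, 0)))  # Borda count    -> 'B'
print(positional_winner(ballots, (1, 1, 0)))  # antiplurality  -> 'C'
```

Antiplurality is scored here as a point for every non-last ranking, which is the same as winning by collecting the fewest last-place votes.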

“There was a meeting of voting system experts a number of years ago, and they voted on which method they liked best. Apparently the plurality method got zero votes,” Ullman says. “One of the favorites was approval, where your ballot is a yes-no choice for each candidate, and whoever gets the most yeses wins.”

Yes, I asked how they chose. “They actually used approval voting,” Ullman says.

So do a lot of professional societies, including mathematicians. You might think that this would yield only the most anodyne, least objectionable choice, but you actually get winners—Condorcet winners!—with broad support. (Engineers don’t like it as much; the Institute of Electrical and Electronics Engineers abandoned the practice.) You can even go harder and hybridize various options, or add ranking to simple yes/no approvals. One downside might be that voters have to have an opinion about everyone on the ballot. “If someone said, ‘you have to submit a preference ballot and you have to rank all 20,’ there’d be a lot of people who would know their first and second choice, and maybe their third, but then say, ‘I’ve never heard of the rest of these people.’”
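
Approval is also the easiest of these to implement, since a ballot is just the set of candidates a voter can live with. A minimal sketch, with invented ballots:

```python
from collections import Counter

def approval_winner(ballots):
    """Each ballot is a set of approved candidates; whoever collects
    the most approvals wins."""
    approvals = Counter(c for ballot in ballots for c in ballot)
    return approvals.most_common(1)[0][0]

# Invented ballots: B is almost nobody's lone favorite, but nearly
# everyone finds B acceptable.
ballots = [{'A', 'B'}, {'A', 'B'}, {'B', 'C'}, {'C'}, {'B'}]
print(approval_winner(ballots))  # -> 'B', approved on 4 of 5 ballots
```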

The mystery of who the next mayor of San Francisco will be wasn’t even primary day’s only drama. Instead of splitting up other races by party, in California everyone goes onto the same ballot, and the top two vote-getters advance to the general election in November. If they’re from the same party? So be it. Except this year Democratic enthusiasm was so high because of, like, everything, that the slates in those races filled with rarin’-to-go Dems representing every wavelength of blue from desaturated purple to deep indigo. That freaked out the national party, which worried that everyone would peel votes from everyone else, locking Democrats out of both slots in four districts when the party is hoping to take control of the House of Representatives. They didn’t get locked out, but the top-two primary, like ranked choice, springs from the same spirit of electoral innovation.

California has long been willing to perform surgery on democracy to correct flaws both cosmetic and life-threatening. Gilded Age California politics was so corrupt that progressive reformers instituted the initiative process, for example, letting anyone with enough signatures put legislation on a ballot. The top-two primary, also used in Washington and Nebraska, comes in part as a tool in the fight against gerrymandering. Like a lot of Californian ideals, the voting system is a little crazy-making and a little noble all at once.

It’s also doomed. In the 1950s the economist Kenneth Arrow set out to find the one best voting method, one election to rule them all. He ended up proving that there wasn’t one. Arrow’s Impossibility Theorem, for which he won the Nobel Prize in 1972, says that once a ballot offers more than two choices, no ranked voting method can satisfy every reasonable fairness criterion at once.

But that’s democracy for you. We’re not here to make the union perfect—just more perfect.

On July 1, 2013, Amos Joseph Wells III went to his pregnant girlfriend's home in Fort Worth, Texas, and shot her multiple times in the head and stomach. He then killed her mother and her 10-year-old brother. Wells surrendered voluntarily within hours, and in a tearful jailhouse interview told reporters, "There's no explanation that I could give anyone, or anybody could give anyone, to try to make it seem right, or make it seem rational, to make everybody understand."

Heinous crimes tend to defy comprehension, but some researchers believe neuroscience and genetics could help explain why certain people commit such atrocities. Meanwhile, lawyers are introducing so-called neurobiological evidence into court more than ever.

Take Wells, for instance. His lawyers called on Pietro Pietrini—director of the IMT School for Advanced Studies in Lucca, Italy, and an expert on the neurobiological correlates of antisocial behavior—to testify at their client's trial last year. “Wells had several abnormalities in the frontal regions of his brain, plus a very bad genetic profile," Pietrini says. Scans of the defendant's brain showed abnormally low neuronal activity in his frontal lobe, a condition associated with increased risk of reactive, aggressive, and violent behavior. In Pietrini's estimation, that "bad genetic profile" consisted of low MAOA gene activity—a trait long associated with aggression in people raised in abusive environments—and five other notable genetic variations. To differing degrees, they're linked with a susceptibility to violent behavior, impulsivity, risk-taking, and impaired decision-making.

"What we tried to sustain was that he had some evidence of a neurobiological impairment that would affect his brain function, decision making, and impulse control," Pietrini says. "And this, we hoped, would spare him from the death penalty."

It did not. On November 3, 2016, a Tarrant County jury found Wells guilty of capital murder. Two weeks later, the same jury deliberated Wells' fate for just four hours before sentencing him to die. The decision, as mandated by Texas law, was unanimous.

In front of a different judge or another jury, Wells might have avoided the death penalty. In 2010, lawyers used a brain-mapping technology called quantitative electroencephalography to try to convince a Miami-Dade County, Florida, jury that defendant Grady Nelson was predisposed to impulsiveness and violence when he stabbed his wife 61 times before raping and stabbing her 11-year-old daughter. The evidence's sway over at least two jurors locked the jury in a 6-6 split over whether Nelson should be executed, resulting in a recommendation of life without parole.

Nelson's was one of nearly 1,600 court cases examined in a recent analysis of neurobiological evidence in the US criminal justice system. The study, by Duke University bioethicist Nita Farahany, found that the number of judicial opinions mentioning neuroscience or behavioral genetics more than doubled between 2005 and 2012, and that roughly 25 percent of death penalty trials employ neurobiological data in pursuit of a lighter sentence.

Farahany's findings also suggest defense attorneys are applying neuroscientific findings to more than capital murder cases; lawyers are increasingly introducing neuroscientific evidence in cases ranging from burglary and robbery to kidnapping and rape.

"Neuro cases without a doubt are increasing, and they're likely to continue increasing over time" says Farahany, who adds that people appear to be particularly enamored of brain-based explanations. "It’s a much simpler sell to jurors. They seem to believe that it’s much more individualized than population genetics. Also, they can see it, right? You can show somebody a brain scan and say: There. See that? That big thing, in this person’s brain? You don’t have that. I don’t have that. And it affects how this person behaves.”

And courts seem to be buying it. Farahany found that between 20 and 30 percent of defendants who invoke neuroscientific evidence get some kind of break on appeal—a higher success rate than one sees in criminal appeals, in general. (A 2010 analysis of nearly 70,000 US criminal appeals found that only about 12 percent of cases wound up being reversed, remanded, or modified.) At least in the instances Farahany investigated (a small sample, she notes, of criminal cases, 90 percent of which never go to trial), neurobiological evidence seemed to have a small but positive impact on defendants' outcomes.

The looming question—scientifically, legally, philosophically—is whether it should.

Many scientists and legal experts question whether neurobiological evidence belongs in court in the first place. "Most of the time, the science isn’t strong enough," says Stephen Morse, professor of law and psychiatry at the University of Pennsylvania.

Morse calls this the "clear cut" problem: Where the defendant's mental and behavioral state is obvious, you don’t need neurobiological evidence to support it. But in cases where the behavioral evidence is unclear, the brain data or genetic data aren't exact enough to serve as diagnostic markers. "So where we need the help most—where it’s a gray area case, and we’re simply not sure whether the behavioral impairment is sufficient—the scientific data can help us least," says Morse. "Maybe this will change over time, but that’s where we are now.”

You don't have to look hard to see his point. To date, no brain abnormality or genetic variation has been shown to have a deterministic effect on a person's behavior, and it's reasonable to assume that one never will. Medicine, after all, is not physics; your neurobiological state cannot predict that you will engage in violent, criminal, or otherwise antisocial activity, as any researcher will tell you.

But some scientific arguments appear to be more persuasive than others. Brain scans, for example, seem to hold greater sway over the legal system than behavioral genetic analyses. "Most of the evidence right now suggests that genetic evidence, alone, isn’t having much influence on judges and juries," says Columbia psychiatrist Paul Appelbaum, co-author of a recent review, published in Nature Human Behaviour, that examines the use of such evidence in criminal court. Juries, he says, might not understand the technical intricacies of genetic evidence. Conversely, juries may simply believe genetic predispositions are irrelevant in determining someone's guilt or punishment.

Still another explanation could be what legal researchers call the double-edged sword phenomenon. "The genetic evidence might indicate a reduced degree of responsibility for my behavior, because I have a genetic variant that you don’t, but at the same time suggest that I'm more dangerous than you are. That if I really can't control my behavior, maybe I'm exactly the kind of person who should be locked up for a longer period of time," Appelbaum says. Whatever the reason for genetic evidence's weak impact, Appelbaum predicts its use in court—absent complementary neurological evidence—will decrease.

That's not necessarily a bad thing. There's considerable disagreement within the scientific community over the influence of so-called gene-environment interactions on human behavior, including ones believed to affect people like Amos Wells.

In their 2014 meta-analysis of the two most commonly studied genetic variants linked to aggression and antisocial behavior (both of which Wells possesses), Emory University psychologists Courtney Ficks and Irwin Waldman concluded that the variants appear to play a "modest" role in antisocial behavior. But they also identified numerous examples of studies bedeviled by methodological and interpretive flaws, susceptibility to error, loose standards for replication, and evidence of publication bias. "Notwithstanding the excitement that many researchers have felt at the prospect of [gene-environment] interactions in the development of complex traits, there is growing evidence that we must be wary of these findings," the researchers wrote.

So then. What should a jury consider in the case of someone like Amos Wells? In his expert report, Pietrini cited Ficks and Waldman's analysis—and more than 80 other papers—to emphasize the modest role of genetic variation in antisocial behavior. And in their cross-examination, the prosecution went through several of Pietrini's citations line by line, calling for circumspection. They pointed to the Ficks paper, for instance. They also quoted excerpts that cast behavioral genetics findings in an uncertain light. Lines like this one, from a 2003 paper in Nature about the association of gene variants with anger-related traits: "Nevertheless, our findings warrant further replication to avoid any spurious associations for the example due to the ethnic stratification effects and sampling errors."

Pietrini chuckles when I recount the prosecution's criticisms. "You look at the discussion section of any medical study, and you'll find sentences like that: Needs more research. Needs a larger sample size. Needs to be replicated. Warrants caution. But it doesn't mean that what's been observed is wrong. It means that, as scientists, we're always cautious. Medical science is only ever proven true by history, but Amos Wells, from my point of view, had many genetic and neurological factors that impaired his mental ability. I say that not because I was a consultant to the defense, but in absolute terms."

Pietrini's point gets to the heart of a question still tackled by researchers and legal scholars: When do scientific findings become worthy of legal consideration?

The general assumption is that the same standards that guide the scientific community should guide the law, says Drexel University legal professor Adam Benforado, author of Unfair: The New Science of Criminal Injustice. "But I think that probably shouldn't be the case," he says. "I think when someone is facing the death penalty, they ought to have a right to present neuroscientific or genetic research findings that may not be entirely settled but are sound enough to be published in peer reviewed literature. Because at the end of the day, when someone's life is at stake, to wait for things to be absolutely settled is dangerous. The consequences of inaction are too grave."

That's basically the Supreme Court's stance, too. In the US, the bar for admissibility on mitigating evidence in death penalty proceedings is very low, owing to the Supreme Court's 1978 ruling in Lockett v. Ohio. "Essentially, the kitchen sink comes in. And in very few death penalty proceedings will the judge make a searching inquiry into relevance," says Morse, who begrudgingly agrees that neurobiological evidence should be admissible in capital cases, because so much is at stake. "I'd rather it wasn't, because I think it debases the legal process," he says, adding that most neuroscientific and genetic evidence introduced at capital proceedings has more rhetorical relevance than legal relevance.

"What they’re doing is making what I call the fundamental psycho-legal error. This is the belief that once you have found a partially causal explanation for a behavior, then the behavior must be excused altogether. All behavior has causes, including causes at the biological, psychological, and sociological level. But causation is not an excusing condition." If it were, Morse says, no one would be responsible for any behavior.

But that is not the world we live in. Today, in most cases, the law holds people responsible for their actions, not their predispositions. As Wells told his relatives in the courtroom after his sentence was handed down: "I did this. I'm an adult. Don't bear this burden. This burden is mine."
